7dcc7d255a6a-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_S3LOG.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3LOG.md
e5da3efe446a-0
The following example queries SVCS\_S3LOG for the last query that ran\.

```
select *
from svcs_s3log
where query = pg_last_query_id()
order by query, segment;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3LOG.md
f9c81e85d345-0
Following are some common issues that affect query performance, with instructions on ways to diagnose and resolve them\.

**Topics**
+ [Table statistics missing or out of date](#table-statistics-missing-or-out-of-date)
+ [Nested loop](#nested-loop)
+ [Hash join](#hash-join)
+ [Ghost rows or uncommitted rows](#ghost-rows-or-uncommitted-rows)
+ [Unsorted or missorted rows](#unsorted-or-mis-sorted-rows)
+ [Suboptimal data distribution](#suboptimal-data-distribution)
+ [Insufficient memory allocated to the query](#insufficient-memory-allocated-to-the-query)
+ [Suboptimal WHERE clause](#suboptimal-WHERE-clause)
+ [Insufficiently restrictive predicate](#insufficiently-restrictive-predicate)
+ [Very large result set](#very-large-result-set)
+ [Large SELECT list](#large-SELECT-list)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
7cc7d83841d4-0
If table statistics are missing or out of date, you might see the following:
+ A warning message in EXPLAIN command results\.
+ A missing statistics alert event in STL\_ALERT\_EVENT\_LOG\. For more information, see [Reviewing query alerts](c-reviewing-query-alerts.md)\.

To fix this issue, run [ANALYZE](r_ANALYZE.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
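A minimal sketch of the fix above, assuming a hypothetical table named `sales`; `svv_table_info` is a system view whose `stats_off` column indicates how stale statistics are (0 means current)\.

```
-- Refresh planner statistics for one table (table name is hypothetical)
analyze sales;

-- Or analyze only the columns used in joins and predicates
analyze sales (saletime, sellerid);

-- Check which tables have the most stale statistics
select "table", stats_off
from svv_table_info
order by stats_off desc;
```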
0fccbddbb5ba-0
If a nested loop is present, you might see a nested loop alert event in STL\_ALERT\_EVENT\_LOG\. You can also identify this type of event by running the query at [Identifying queries with nested loops](diagnostic-queries-for-query-tuning.md#identify-queries-with-nested-loops)\. For more information, see [Reviewing query alerts](c-reviewing-query-alerts.md)\. To fix this, review your query for cross\-joins and remove them if possible\. Cross\-joins are joins without a join condition that result in the Cartesian product of two tables\. They are typically executed as nested loop joins, which are the slowest of the possible join types\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
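For illustration, the rewrite described above might look like the following; the tables and the `catid` join column are hypothetical\.

```
-- Implicit cross-join: no join condition, typically executed as a nested loop
select e.eventname, c.catname
from event e, category c;

-- Rewritten with an explicit join condition, so the planner
-- can choose a hash or merge join instead
select e.eventname, c.catname
from event e
join category c on e.catid = c.catid;
```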
dcae904501ea-0
If a hash join is present, you might see the following:
+ Hash and hash join operations in the query plan\. For more information, see [Analyzing the query plan](c-analyzing-the-query-plan.md)\.
+ An HJOIN step in the segment with the highest maxtime value in SVL\_QUERY\_SUMMARY\. For more information, see [Using the SVL\_QUERY\_SUMMARY view](using-SVL-Query-Summary.md)\.

To fix this issue, you can take a couple of approaches:
+ Rewrite the query to use a merge join if possible\. You can do this by specifying join columns that are both distribution keys and sort keys\.
+ If the HJOIN step in SVL\_QUERY\_SUMMARY has a very high value in the rows field compared to the rows value in the final RETURN step in the query, check whether you can rewrite the query to join on a unique column\. When a query does not join on a unique column, such as a primary key, that increases the number of rows involved in the join\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
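One way to make a merge join possible, per the first approach above, is to declare the join column as both the distribution key and the sort key on both tables\. A sketch with hypothetical tables and columns:

```
-- Both tables are distributed and sorted on the join column
create table orders (
  orderid   int,
  custid    int,
  ordertime timestamp
)
distkey(custid)
sortkey(custid);

create table customers (
  custid int,
  name   varchar(100)
)
distkey(custid)
sortkey(custid);

-- A join on custid can now be executed as a merge join
select o.orderid, c.name
from orders o
join customers c on o.custid = c.custid;
```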
191907a6e58e-0
If ghost rows or uncommitted rows are present, you might see an alert event in STL\_ALERT\_EVENT\_LOG that indicates excessive ghost rows\. For more information, see [Reviewing query alerts](c-reviewing-query-alerts.md)\.

To fix this issue, you can take a couple of approaches:
+ Check the **Loads** tab of your Amazon Redshift console for active load operations on any of the query tables\. If you see active load operations, wait for those to complete before taking action\.
+ If there are no active load operations, run [VACUUM](r_VACUUM_command.md) on the query tables to remove deleted rows\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
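The VACUUM step above can target only the deleted rows; the table name here is hypothetical\.

```
-- Reclaim space from deleted (ghost) rows without re-sorting the table
vacuum delete only sales;
```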
c600dd71e0c0-0
If unsorted or missorted rows are present, you might see a very selective filter alert event in STL\_ALERT\_EVENT\_LOG\. For more information, see [Reviewing query alerts](c-reviewing-query-alerts.md)\. You can also check to see if any of the tables in your query have large unsorted areas by running the query in [Identifying tables with data skew or unsorted rows](diagnostic-queries-for-query-tuning.md#identify-tables-with-data-skew-or-unsorted-rows)\.

To fix this issue, you can take a couple of approaches:
+ Run [VACUUM](r_VACUUM_command.md) on the query tables to re\-sort the rows\.
+ Review the sort keys on the query tables to see if any improvements can be made\. Remember to weigh the performance of this query against the performance of other important queries and the system overall before making any changes\. For more information, see [Choosing sort keys](t_Sorting_data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
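A quick way to spot the unsorted areas mentioned above is the `svv_table_info` system view, whose `unsorted` column gives the percent of unsorted rows; the `vacuum sort only` target table is hypothetical\.

```
-- Find tables with large unsorted regions
select "table", unsorted, vacuum_sort_benefit
from svv_table_info
order by unsorted desc nulls last;

-- Re-sort the rows of one table without reclaiming deleted rows
vacuum sort only sales;
```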
2163b08e0dd0-0
If data distribution is suboptimal, you might see the following:
+ A serial execution, large broadcast, or large distribution alert event appears in STL\_ALERT\_EVENT\_LOG\. For more information, see [Reviewing query alerts](c-reviewing-query-alerts.md)\.
+ Slices are not processing approximately the same number of rows for a given step\. For more information, see [Using the SVL\_QUERY\_REPORT view](using-SVL-Query-Report.md)\.
+ Slices are not taking approximately the same amount of time for a given step\. For more information, see [Using the SVL\_QUERY\_REPORT view](using-SVL-Query-Report.md)\.

If none of the preceding is true, you can also see if any of the tables in your query have data skew by running the query in [Identifying tables with data skew or unsorted rows](diagnostic-queries-for-query-tuning.md#identify-tables-with-data-skew-or-unsorted-rows)\.

To fix this issue, take another look at the distribution styles for the tables in the query and see if any improvements can be made\. Remember to weigh the performance of this query against the performance of other important queries and the system overall before making any changes\. For more information, see [Choosing a data distribution style](t_Distributing_data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
2b24c04f28c5-0
If insufficient memory is allocated to your query, you might see a step in SVL\_QUERY\_SUMMARY that has an `is_diskbased` value of true\. For more information, see [Using the SVL\_QUERY\_SUMMARY view](using-SVL-Query-Summary.md)\. To fix this issue, allocate more memory to the query by temporarily increasing the number of query slots it uses\. Workload Management \(WLM\) reserves slots in a query queue equivalent to the concurrency level set for the queue\. For example, a queue with a concurrency level of 5 has 5 slots\. Memory assigned to the queue is allocated equally to each slot\. Assigning several slots to one query gives that query access to the memory for all of those slots\. For more information on how to temporarily increase the slots for a query, see [wlm\_query\_slot\_count](r_wlm_query_slot_count.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
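The temporary slot increase described above can be sketched as follows:

```
-- Claim 3 of the queue's slots (and their memory) for the current session
set wlm_query_slot_count to 3;

-- run the memory-intensive query here

-- Return to the default of 1 slot
set wlm_query_slot_count to 1;
```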
8283e55e6bf2-0
If your WHERE clause causes excessive table scans, you might see a SCAN step in the segment with the highest `maxtime` value in SVL\_QUERY\_SUMMARY\. For more information, see [Using the SVL\_QUERY\_SUMMARY view](using-SVL-Query-Summary.md)\. To fix this issue, add a WHERE clause to the query based on the primary sort column of the largest table\. This approach helps minimize scanning time\. For more information, see [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
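For example, if `saletime` were the primary sort column of a hypothetical `sales` table, a range predicate on it would let the scan skip blocks:

```
-- Predicate on the leading sort column limits the blocks scanned
select listid, sellerid, qtysold
from sales
where saletime >= '2008-01-01'
  and saletime <  '2008-02-01';
```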
fde4dc7169ad-0
If your query has an insufficiently restrictive predicate, you might see a SCAN step in the segment with the highest `maxtime` value in SVL\_QUERY\_SUMMARY that has a very high `rows` value compared to the `rows` value in the final RETURN step in the query\. For more information, see [Using the SVL\_QUERY\_SUMMARY view](using-SVL-Query-Summary.md)\. To fix this issue, try adding a predicate to the query or making the existing predicate more restrictive to narrow the output\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
f6e8b3cca693-0
If your query returns a very large result set, consider rewriting the query to use [UNLOAD](r_UNLOAD.md) to write the results to Amazon S3\. This approach improves the performance of the RETURN step by taking advantage of parallel processing\. For more information on checking for a very large result set, see [Using the SVL\_QUERY\_SUMMARY view](using-SVL-Query-Summary.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
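A hedged sketch of the UNLOAD rewrite described above; the table, bucket path, and role ARN are placeholders\.

```
-- Write the result set to Amazon S3 in parallel instead of
-- returning it through the leader node
unload ('select * from sales')
to 's3://mybucket/unload/sales_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```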
0d06250f7bea-0
If your query has an unusually large SELECT list, you might see a `bytes` value that is high relative to the `rows` value for any step \(in comparison to other steps\) in SVL\_QUERY\_SUMMARY\. This high `bytes` value can be an indicator that you are selecting a lot of columns\. For more information, see [Using the SVL\_QUERY\_SUMMARY view](using-SVL-Query-Summary.md)\. To fix this issue, review the columns you are selecting and see if any can be removed\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-performance-improvement-opportunities.md
8ab512a298db-0
Records compile time and location for each query segment of queries, including queries run on a scaling cluster as well as queries run on the main cluster\.

**Note**
System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters\. The views are similar to the views with the prefix SVL except that the SVL views provide information only for queries run on the main cluster\.

SVCS\_COMPILE is visible to all users\. For information about SVL\_COMPILE, see [SVL\_COMPILE](r_SVL_COMPILE.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_COMPILE.md
11e360a09b0b-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_COMPILE.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_COMPILE.md
6115e5c36904-0
In this example, queries 35878 and 35879 executed the same SQL statement\. The compile column for query 35878 shows `1` for four query segments, which indicates that the segments were compiled\. Query 35879 shows `0` in the compile column for every segment, indicating that the segments did not need to be compiled again\.

```
select userid, xid, pid, query, segment, locus,
datediff(ms, starttime, endtime) as duration, compile
from svcs_compile
where query = 35878 or query = 35879
order by query, segment;

 userid |  xid   |  pid  | query | segment | locus | duration | compile
--------+--------+-------+-------+---------+-------+----------+---------
    100 | 112780 | 23028 | 35878 |       0 |     1 |        0 |       0
    100 | 112780 | 23028 | 35878 |       1 |     1 |        0 |       0
    100 | 112780 | 23028 | 35878 |       2 |     1 |        0 |       0
    100 | 112780 | 23028 | 35878 |       3 |     1 |        0 |       0
    100 | 112780 | 23028 | 35878 |       4 |     1 |        0 |       0
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_COMPILE.md
6115e5c36904-1
    100 | 112780 | 23028 | 35878 |       5 |     1 |        0 |       0
    100 | 112780 | 23028 | 35878 |       6 |     1 |     1380 |       1
    100 | 112780 | 23028 | 35878 |       7 |     1 |     1085 |       1
    100 | 112780 | 23028 | 35878 |       8 |     1 |     1197 |       1
    100 | 112780 | 23028 | 35878 |       9 |     2 |      905 |       1
    100 | 112782 | 23028 | 35879 |       0 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       1 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       2 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       3 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       4 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       5 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       6 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       7 |     1 |        0 |       0
    100 | 112782 | 23028 | 35879 |       8 |     1 |        0 |       0
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_COMPILE.md
6115e5c36904-2
    100 | 112782 | 23028 | 35879 |       9 |     2 |        0 |       0
(20 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_COMPILE.md
24e298088276-0
The WLM configuration properties are either dynamic or static\. You can apply dynamic properties to the database without a cluster reboot, but static properties require a cluster reboot for changes to take effect\. However, if you change dynamic and static properties at the same time, then you must reboot the cluster for all the property changes to take effect\. This is true whether the changed properties are dynamic or static\. While dynamic properties are being applied, your cluster status is `modifying`\. Switching between automatic WLM and manual WLM is a static change and requires a cluster reboot to take effect\.

The following table indicates which WLM properties are dynamic or static when using automatic WLM or manual WLM\.

| WLM Property | Automatic WLM | Manual WLM |
| --- | --- | --- |
| Query groups | Dynamic | Static |
| Query group wildcard | Dynamic | Static |
| User groups | Dynamic | Static |
| User group wildcard | Dynamic | Static |
| Concurrency on main | Not applicable | Dynamic |
| Concurrency Scaling mode | Dynamic | Dynamic |
| Enable short query acceleration | Not applicable | Dynamic |
| Maximum runtime for short queries | Dynamic | Dynamic |
| Percent of memory to use | Not applicable | Dynamic |
| Timeout | Not applicable | Dynamic |
| Priority | Dynamic | Not applicable |
| Adding or removing queues | Dynamic | Static |

**Note**
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-dynamic-properties.md
24e298088276-1
When using manual WLM, if the timeout value is changed, the new value is applied to any query that begins running after the value is changed\. If the concurrency or percent of memory to use are changed, Amazon Redshift changes to the new configuration dynamically\. Thus, currently running queries aren't affected by the change\. For more information, see [WLM Dynamic Memory Allocation](https://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-dynamic-memory-allocation.html)\. When using automatic WLM, timeout is ignored\.

**Topics**
+ [WLM dynamic memory allocation](cm-c-wlm-dynamic-memory-allocation.md)
+ [Dynamic WLM example](cm-c-wlm-dynamic-example.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-dynamic-properties.md
84597a57270d-0
You add the cluster public key to each host's authorized keys file for all of the Amazon EMR cluster nodes so that the hosts will recognize Amazon Redshift and accept the SSH connection\.

**To add the Amazon Redshift cluster public key to the host's authorized keys file**

1. Access the host using an SSH connection\. For information about connecting to an instance using SSH, see [Connect to Your Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-connect-to-instance-linux.html) in the *Amazon EC2 User Guide*\.

1. Copy the Amazon Redshift public key from the console or from the CLI response text\.

1. Copy and paste the contents of the public key into the `/home/<ssh_username>/.ssh/authorized_keys` file on the host\. Include the complete string, including the prefix "`ssh-rsa` " and suffix "`Amazon-Redshift`"\. For example:

   ```
   ssh-rsa AAAACTP3isxgGzVWoIWpbVvRCOzYdVifMrh… uA70BnMHCaMiRdmvsDOedZDOedZ Amazon-Redshift
   ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-emr-steps-add-key-to-host.md
83f90c8fe160-0
To load data from files located in one or more S3 buckets, use the FROM clause to indicate how COPY locates the files in Amazon S3\. You can provide the object path to the data files as part of the FROM clause, or you can provide the location of a manifest file that contains a list of Amazon S3 object paths\. COPY from Amazon S3 uses an HTTPS connection\.

**Important**
If the Amazon S3 buckets that hold the data files don't reside in the same AWS Region as your cluster, you must use the [REGION](#copy-region) parameter to specify the Region in which the data is located\.

**Topics**
+ [Syntax](#copy-parameters-data-source-s3-syntax)
+ [Examples](#copy-parameters-data-source-s3-examples)
+ [Optional parameters](#copy-parameters-data-source-s3-optional-parms)
+ [Unsupported parameters](#copy-parameters-data-source-s3-unsupported-parms)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
f0f153408294-0
```
FROM { 's3://objectpath' | 's3://manifest_file' }
authorization
| MANIFEST
| ENCRYPTED
| REGION [AS] 'aws-region'
| optional-parameters
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
2488f0f19a6a-0
The following example uses an object path to load data from Amazon S3\.

```
copy customer
from 's3://mybucket/customer'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```

The following example uses a manifest file to load data from Amazon S3\.

```
copy customer
from 's3://mybucket/cust.manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
f39fe883b9c5-0
FROM <a name="copy-parameters-from"></a>
The source of the data to be loaded\. For more information about the encoding of the Amazon S3 file, see [Data conversion parameters](copy-parameters-data-conversion.md)\.

's3://*copy\_from\_s3\_objectpath*' <a name="copy-s3-objectpath"></a>
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
f39fe883b9c5-1
Specifies the path to the Amazon S3 objects that contain the data—for example, `'s3://mybucket/custdata.txt'`\. The *s3://copy\_from\_s3\_objectpath* parameter can reference a single file or a set of objects or folders that have the same key prefix\. For example, the name `custdata.txt` is a key prefix that refers to a number of physical files: `custdata.txt`, `custdata.txt.1`, `custdata.txt.2`, `custdata.txt.bak`, and so on\. The key prefix can also reference a number of folders\. For example, `'s3://mybucket/custfolder'` refers to the folders `custfolder`, `custfolder_1`, `custfolder_2`, and so on\. If a key prefix references multiple folders, all of the files in the folders are loaded\. If a key prefix matches a file as well as a folder, such as `custfolder.log`, COPY attempts to load the file also\. If a key prefix might result in COPY attempting to load unwanted files, use a manifest file\. For more information, see [copy_from_s3_manifest_file](#copy-manifest-file), following\.

If the S3 bucket that holds the data files doesn't reside in the same AWS Region as your cluster, you must use the [REGION](#copy-region) parameter to specify the Region in which the data is located\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
f39fe883b9c5-2
For more information, see [Loading data from Amazon S3](t_Loading-data-from-S3.md)\.

's3://*copy\_from\_s3\_manifest\_file*' <a name="copy-manifest-file"></a>
Specifies the Amazon S3 object key for a manifest file that lists the data files to be loaded\. The 's3://*copy\_from\_s3\_manifest\_file*' argument must explicitly reference a single file—for example, `'s3://mybucket/manifest.txt'`\. It cannot reference a key prefix\.

The manifest is a text file in JSON format that lists the URL of each file that is to be loaded from Amazon S3\. The URL includes the bucket name and full object path for the file\. The files that are specified in the manifest can be in different buckets, but all the buckets must be in the same AWS Region as the Amazon Redshift cluster\. If a file is listed twice, the file is loaded twice\. The following example shows the JSON for a manifest that loads three files\.

```
{
  "entries": [
    {"url":"s3://mybucket-alpha/custdata.1","mandatory":true},
    {"url":"s3://mybucket-alpha/custdata.2","mandatory":true},
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
f39fe883b9c5-3
    {"url":"s3://mybucket-beta/custdata.1","mandatory":false}
  ]
}
```

The double quote characters are required, and must be simple quotation marks \(0x22\), not slanted or "smart" quotes\.

Each entry in the manifest can optionally include a `mandatory` flag\. If `mandatory` is set to `true`, COPY terminates if it doesn't find the file for that entry; otherwise, COPY will continue\. The default value for `mandatory` is `false`\.

When loading from data files in ORC or Parquet format, a `meta` field is required, as shown in the following example\.

```
{
  "entries":[
    {
      "url":"s3://mybucket-alpha/orc/2013-10-04-custdata",
      "mandatory":true,
      "meta":{ "content_length":99 }
    },
    {
      "url":"s3://mybucket-beta/orc/2013-10-05-custdata",
      "mandatory":true,
      "meta":{ "content_length":99 }
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
f39fe883b9c5-4
    }
  ]
}
```

The manifest file must not be encrypted or compressed, even if the ENCRYPTED, GZIP, LZOP, BZIP2, or ZSTD options are specified\. COPY returns an error if the specified manifest file isn't found or the manifest file isn't properly formed\.

If a manifest file is used, the MANIFEST parameter must be specified with the COPY command\. If the MANIFEST parameter isn't specified, COPY assumes that the file specified with FROM is a data file\.

For more information, see [Loading data from Amazon S3](t_Loading-data-from-S3.md)\.

*authorization*
The COPY command needs authorization to access data in another AWS resource, including in Amazon S3, Amazon EMR, Amazon DynamoDB, and Amazon EC2\. You can provide that authorization by referencing an AWS Identity and Access Management \(IAM\) role that is attached to your cluster \(role\-based access control\) or by providing the access credentials for an IAM user \(key\-based access control\)\. For increased security and flexibility, we recommend using IAM role\-based access control\. For more information, see [Authorization parameters](copy-parameters-authorization.md)\.

MANIFEST <a name="copy-manifest"></a>
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
f39fe883b9c5-5
Specifies that a manifest is used to identify the data files to be loaded from Amazon S3\. If the MANIFEST parameter is used, COPY loads data from the files listed in the manifest referenced by *'s3://copy\_from\_s3\_manifest\_file'*\. If the manifest file isn't found, or isn't properly formed, COPY fails\. For more information, see [Using a manifest to specify data files](loading-data-files-using-manifest.md)\.

ENCRYPTED <a name="copy-encrypted"></a>
A clause that specifies that the input files on Amazon S3 are encrypted using client\-side encryption with customer\-managed symmetric keys \(CSE\-CMK\)\. For more information, see [Loading encrypted data files from Amazon S3](c_loading-encrypted-files.md)\. Don't specify ENCRYPTED if the input files are encrypted using Amazon S3 server\-side encryption \(SSE\-KMS or SSE\-S3\)\. COPY reads server\-side encrypted files automatically\.

If you specify the ENCRYPTED parameter, you must also specify the [MASTER_SYMMETRIC_KEY](#copy-master-symmetric-key) parameter or include the **master\_symmetric\_key** value in the [CREDENTIALS](copy-parameters-authorization.md#copy-credentials) string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
f39fe883b9c5-6
If the encrypted files are in compressed format, add the GZIP, LZOP, BZIP2, or ZSTD parameter\. Manifest files and JSONPaths files must not be encrypted, even if the ENCRYPTED option is specified\.

MASTER\_SYMMETRIC\_KEY '*master\_key*' <a name="copy-master-symmetric-key"></a>
The master symmetric key that was used to encrypt data files on Amazon S3\. If MASTER\_SYMMETRIC\_KEY is specified, the [ENCRYPTED](#copy-encrypted) parameter must also be specified\. MASTER\_SYMMETRIC\_KEY can't be used with the CREDENTIALS parameter\. For more information, see [Loading encrypted data files from Amazon S3](c_loading-encrypted-files.md)\.

If the encrypted files are in compressed format, add the GZIP, LZOP, BZIP2, or ZSTD parameter\.

REGION \[AS\] '*aws\-region*' <a name="copy-region"></a>
Specifies the AWS Region where the source data is located\. REGION is required for COPY from an Amazon S3 bucket or a DynamoDB table when the AWS resource that contains the data isn't in the same Region as the Amazon Redshift cluster\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
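Putting ENCRYPTED and MASTER\_SYMMETRIC\_KEY together, a hedged sketch; the table, bucket path, role ARN, and `<master_key>` value are placeholders\.

```
-- Load CSE-CMK client-side encrypted files
copy customer
from 's3://mybucket/encrypted/customer'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
master_symmetric_key '<master_key>'
encrypted;
```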
f39fe883b9c5-7
The value for *aws\_region* must match a Region listed in the [Amazon Redshift regions and endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#redshift_region) table\. If the REGION parameter is specified, all resources, including a manifest file or multiple Amazon S3 buckets, must be located in the specified Region\. Transferring data across Regions incurs additional charges against the Amazon S3 bucket or the DynamoDB table that contains the data\. For more information about pricing, see **Data Transfer OUT From Amazon S3 To Another AWS Region** on the [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/) page and **Data Transfer OUT** on the [Amazon DynamoDB Pricing](https://aws.amazon.com/dynamodb/pricing/) page\. By default, COPY assumes that the data is located in the same Region as the Amazon Redshift cluster\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
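A sketch of the REGION parameter in use; the table, bucket, role ARN, and Region are placeholders\.

```
-- The bucket is in a different Region than the cluster,
-- so the Region must be named explicitly
copy sales
from 's3://mybucket/data/sales'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
region 'us-west-2';
```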
111ac705c740-0
You can optionally specify the following parameters with COPY from Amazon S3:
+ [Column mapping options](copy-parameters-column-mapping.md)
+ [Data format parameters](copy-parameters-data-format.md#copy-data-format-parameters)
+ [Data conversion parameters](copy-parameters-data-conversion.md)
+ [Data load operations](copy-parameters-data-load.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
d76feb3db2fc-0
You cannot use the following parameters with COPY from Amazon S3:
+ SSH
+ READRATIO
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-s3.md
d591e1a341c4-0
To move data between your cluster and another AWS resource, such as Amazon S3, Amazon DynamoDB, Amazon EMR, or Amazon EC2, your cluster must have permission to access the resource and perform the necessary actions\. For example, to load data from Amazon S3, COPY must have LIST access to the bucket and GET access for the bucket objects\. For information about minimum permissions, see [IAM permissions for COPY, UNLOAD, and CREATE LIBRARY](#copy-usage_notes-iam-permissions)\.

To get authorization to access the resource, your cluster must be authenticated\. You can choose either of the following authentication methods:
+ [Role\-based access control](#copy-usage_notes-access-role-based) – For role\-based access control, you specify an AWS Identity and Access Management \(IAM\) role that your cluster uses for authentication and authorization\. To safeguard your AWS credentials and sensitive data, we strongly recommend using role\-based authentication\.
+ [Key\-based access control](#copy-usage_notes-access-key-based) – For key\-based access control, you provide the AWS access credentials \(access key ID and secret access key\) for an IAM user as plain text\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
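The minimum Amazon S3 permissions described above (LIST access to the bucket, GET access for its objects) might look like the following IAM policy sketch; the bucket name is a placeholder\.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
```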
9fad3fa70fd7-0
With <a name="copy-usage_notes-access-role-based.phrase"></a>role\-based access control, your cluster temporarily assumes an IAM role on your behalf\. Then, based on the authorizations granted to the role, your cluster can access the required AWS resources\.

An IAM *role* is similar to an IAM user, in that it is an AWS identity with permissions policies that determine what the identity can and cannot do in AWS\. However, instead of being uniquely associated with one user, a role can be assumed by any entity that needs it\. Also, a role doesn't have any credentials \(a password or access keys\) associated with it\. Instead, if a role is associated with a cluster, access keys are created dynamically and provided to the cluster\.

We recommend using role\-based access control because it provides more secure, fine\-grained control of access to AWS resources and sensitive user data, in addition to safeguarding your AWS credentials\. Role\-based authentication delivers the following benefits:
+ You can use AWS standard IAM tools to define an IAM role and associate the role with multiple clusters\. When you modify the access policy for a role, the changes are applied automatically to all clusters that use the role\.
+ You can define fine\-grained IAM policies that grant permissions for specific clusters and database users to access specific AWS resources and actions\.
+ Your cluster obtains temporary session credentials at run time and refreshes the credentials as needed until the operation completes\. If you use key\-based temporary credentials, the operation fails if the temporary credentials expire before it completes\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
9fad3fa70fd7-1
+ Your access key ID and secret access key aren't stored or transmitted in your SQL code\. To use role\-based access control, you must first create an IAM role using the Amazon Redshift service role type, and then attach the role to your cluster\. The role must have, at a minimum, the permissions listed in [IAM permissions for COPY, UNLOAD, and CREATE LIBRARY](#copy-usage_notes-iam-permissions)\. For steps to create an IAM role and attach it to your cluster, see [Authorizing Amazon Redshift to Access Other AWS Services On Your Behalf](https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html) in the *Amazon Redshift Cluster Management Guide*\. You can add a role to a cluster or view the roles associated with a cluster by using the Amazon Redshift Management Console, CLI, or API\. For more information, see [Associating an IAM Role With a Cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/copy-unload-iam-role.html) in the *Amazon Redshift Cluster Management Guide*\. When you create an IAM role, IAM returns an Amazon Resource Name \(ARN\) for the role\. To specify an IAM role, provide the role ARN with either the [IAM_ROLE](copy-parameters-authorization.md#copy-iam-role) parameter or the [CREDENTIALS](copy-parameters-authorization.md#copy-credentials) parameter\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
9fad3fa70fd7-2
For example, suppose the following role is attached to the cluster\. ``` "IamRoleArn": "arn:aws:iam::0123456789012:role/MyRedshiftRole" ``` The following COPY command example uses the IAM\_ROLE parameter with the ARN in the previous example for authentication and access to Amazon S3\. ``` copy customer from 's3://mybucket/mydata' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'; ``` The following COPY command example uses the CREDENTIALS parameter to specify the IAM role\. ``` copy customer from 's3://mybucket/mydata' credentials 'aws_iam_role=arn:aws:iam::0123456789012:role/MyRedshiftRole'; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
0f1cb6e6f217-0
With <a name="copy-usage_notes-access-key-based.phrase"></a>key\-based access control, you provide the access key ID and secret access key for an IAM user that is authorized to access the AWS resources that contain the data\. You can use either the [ACCESS_KEY_ID and SECRET_ACCESS_KEY](copy-parameters-authorization.md#copy-access-key-id) parameters together or the [CREDENTIALS](copy-parameters-authorization.md#copy-credentials) parameter\. To authenticate using ACCESS\_KEY\_ID and SECRET\_ACCESS\_KEY, replace *<access\-key\-id>* and *<secret\-access\-key>* with an authorized user's access key ID and full secret access key as shown following\. ``` ACCESS_KEY_ID '<access-key-id>' SECRET_ACCESS_KEY '<secret-access-key>'; ``` To authenticate using the CREDENTIALS parameter, replace *<access\-key\-id>* and *<secret\-access\-key>* with an authorized user's access key ID and full secret access key as shown following\. ``` CREDENTIALS 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'; ``` **Note**
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
0f1cb6e6f217-1
We strongly recommend using an IAM role for authentication instead of supplying a plain\-text access key ID and secret access key\. If you choose key\-based access control, never use your AWS account \(root\) credentials\. Always create an IAM user and provide that user's access key ID and secret access key\. For steps to create an IAM user, see [Creating an IAM User in Your AWS Account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html)\. The IAM user must have, at a minimum, the permissions listed in [IAM permissions for COPY, UNLOAD, and CREATE LIBRARY](#copy-usage_notes-iam-permissions)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
50e3e840fb99-0
If you are using key\-based access control, you can further limit the access users have to your data by using temporary security credentials\. Role\-based authentication automatically uses temporary credentials\. **Note** We strongly recommend using [role-based access control](#copy-usage_notes-access-role-based.phrase) instead of creating temporary credentials and providing the access key ID and secret access key as plain text\. Role\-based access control automatically uses temporary credentials\. Temporary security credentials provide enhanced security because they have short lifespans and cannot be reused after they expire\. The access key ID and secret access key generated with the token cannot be used without the token, and a user who has these temporary security credentials can access your resources only until the credentials expire\. To grant users temporary access to your resources, you call AWS Security Token Service \(AWS STS\) API operations\. The AWS STS API operations return temporary security credentials consisting of a security token, an access key ID, and a secret access key\. You issue the temporary security credentials to the users who need temporary access to your resources\. These users can be existing IAM users, or they can be non\-AWS users\. For more information about creating temporary security credentials, see [Using Temporary Security Credentials](https://docs.aws.amazon.com/STS/latest/UsingSTS/Welcome.html) in the *IAM User Guide*\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
50e3e840fb99-1
You can use either the [ACCESS_KEY_ID and SECRET_ACCESS_KEY](copy-parameters-authorization.md#copy-access-key-id) parameters together with the [SESSION_TOKEN](copy-parameters-authorization.md#copy-token) parameter or the [CREDENTIALS](copy-parameters-authorization.md#copy-credentials) parameter\. You must also supply the access key ID and secret access key that were provided with the token\. To authenticate using ACCESS\_KEY\_ID, SECRET\_ACCESS\_KEY, and SESSION\_TOKEN, replace *<temporary\-access\-key\-id>*, *<temporary\-secret\-access\-key>*, and *<temporary\-token>* as shown following\. ``` ACCESS_KEY_ID '<temporary-access-key-id>' SECRET_ACCESS_KEY '<temporary-secret-access-key>' SESSION_TOKEN '<temporary-token>'; ``` To authenticate using CREDENTIALS, include `token=<temporary-token>` in the credentials string as shown following\. ``` CREDENTIALS 'aws_access_key_id=<temporary-access-key-id>;aws_secret_access_key=<temporary-secret-access-key>;token=<temporary-token>'; ``` The following example shows a COPY command with temporary security credentials\. ``` copy table-name from 's3://objectpath'
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
50e3e840fb99-2
access_key_id '<temporary-access-key-id>' secret_access_key '<temporary-secret-access-key>' token '<temporary-token>'; ``` The following example loads the LISTING table with temporary credentials and file encryption\. ``` copy listing from 's3://mybucket/data/listings_pipe.txt' access_key_id '<temporary-access-key-id>' secret_access_key '<temporary-secret-access-key>' token '<temporary-token>' master_symmetric_key '<master-key>' encrypted; ``` The following example loads the LISTING table using the CREDENTIALS parameter with temporary credentials and file encryption\. ``` copy listing from 's3://mybucket/data/listings_pipe.txt' credentials 'aws_access_key_id=<temporary-access-key-id>;aws_secret_access_key=<temporary-secret-access-key>;token=<temporary-token>;master_symmetric_key=<master-key>' encrypted; ``` **Important**
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
50e3e840fb99-3
The temporary security credentials must be valid for the entire duration of the COPY or UNLOAD operation\. If the temporary security credentials expire during the operation, the command fails and the transaction is rolled back\. For example, if temporary security credentials expire after 15 minutes and the COPY operation requires one hour, the COPY operation fails before it completes\. If you use role\-based access, the temporary security credentials are automatically refreshed until the operation completes\.
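The CREDENTIALS string shown in the preceding examples is a semicolon-separated list of `key=value` pairs. As a minimal illustration, it can be assembled programmatically; the helper name `build_credentials` below is hypothetical, not part of any AWS SDK.

```python
# Sketch: assemble a Redshift COPY CREDENTIALS string from temporary
# credentials. The helper name is hypothetical; the key names follow the
# 'aws_access_key_id=...;aws_secret_access_key=...;token=...' format
# shown in the examples above.
def build_credentials(access_key_id, secret_access_key, token=None,
                      master_symmetric_key=None):
    parts = [
        f"aws_access_key_id={access_key_id}",
        f"aws_secret_access_key={secret_access_key}",
    ]
    if token:                     # present only for temporary credentials
        parts.append(f"token={token}")
    if master_symmetric_key:      # present only for client-side encryption
        parts.append(f"master_symmetric_key={master_symmetric_key}")
    return ";".join(parts)

creds = build_credentials("AKIDEXAMPLE", "SECRETEXAMPLE", token="TOKENEXAMPLE")
print(creds)
# aws_access_key_id=AKIDEXAMPLE;aws_secret_access_key=SECRETEXAMPLE;token=TOKENEXAMPLE
```

Remember that the assembled string contains secrets, so it should never be logged or stored in SQL scripts.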
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
37b8f6b045e4-0
The IAM role or IAM user referenced by the CREDENTIALS parameter must have, at a minimum, the following permissions: + For COPY from Amazon S3, permission to LIST the Amazon S3 bucket and GET the Amazon S3 objects that are being loaded, and the manifest file, if one is used\. + For COPY from Amazon S3, Amazon EMR, and remote hosts \(SSH\) with JSON\-formatted data, permission to LIST and GET the JSONPaths file on Amazon S3, if one is used\. + For COPY from DynamoDB, permission to SCAN and DESCRIBE the DynamoDB table that is being loaded\. + For COPY from an Amazon EMR cluster, permission for the `ListInstances` action on the Amazon EMR cluster\. + For UNLOAD to Amazon S3, GET, LIST, and PUT permissions for the Amazon S3 bucket to which the data files are being unloaded\. + For CREATE LIBRARY from Amazon S3, permission to LIST the Amazon S3 bucket and GET the Amazon S3 objects being imported\. **Note** If you receive the error message `S3ServiceException: Access Denied` when running a COPY, UNLOAD, or CREATE LIBRARY command, your cluster doesn’t have proper access permissions for Amazon S3\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
37b8f6b045e4-1
You can manage IAM permissions by attaching an IAM policy to an IAM role that is attached to your cluster, to your IAM user, or to the group to which your IAM user belongs\. For example, the `AmazonS3ReadOnlyAccess` managed policy grants LIST and GET permissions to Amazon S3 resources\. For more information about IAM policies, see [Managing IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*\.
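For example, the minimum Amazon S3 permissions for COPY (LIST on the bucket, GET on its objects) correspond to the `s3:ListBucket` and `s3:GetObject` actions in an IAM policy document. The following Python sketch builds such a document as a dictionary; the bucket name `mybucket` is a placeholder.

```python
import json

# Sketch of a minimal IAM policy document granting the LIST and GET access
# that COPY from Amazon S3 requires. "mybucket" is a placeholder bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # LIST access to the bucket itself
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::mybucket",
        },
        {   # GET access to the objects being loaded
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

Note that `s3:ListBucket` applies to the bucket ARN while `s3:GetObject` applies to the object ARNs (`/*`); attaching both statements to the cluster's IAM role covers the COPY-from-S3 minimum described above.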
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-access-permissions.md
cc0fbd74131f-0
The BTRIM function trims a string by removing leading and trailing blanks or by removing characters that match an optional specified string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BTRIM.md
8dda1a1423e3-0
``` BTRIM(string [, matching_string ] ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BTRIM.md
8a6dece54d2c-0
*string* The first input parameter is a VARCHAR string\. *matching\_string* The second parameter, if present, is a VARCHAR string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BTRIM.md
75e1385f6a2e-0
The BTRIM function returns a VARCHAR string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BTRIM.md
eddacb38fbe6-0
The following example trims leading and trailing blanks from the string `' abc '`: ``` select ' abc ' as untrim, btrim(' abc ') as trim; untrim | trim ----------+------ abc | abc (1 row) ``` The following example removes the leading and trailing `'xyz'` strings from the string `'xyzaxyzbxyzcxyz'` ``` select 'xyzaxyzbxyzcxyz' as untrim, btrim('xyzaxyzbxyzcxyz', 'xyz') as trim; untrim | trim -----------------+----------- xyzaxyzbxyzcxyz | axyzbxyzc (1 row) ``` Note that the leading and trailing occurrences of `'xyz'` were removed, but that occurrences that were internal within the string were not removed\.
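BTRIM's behavior is analogous to Python's `str.strip`, which also treats its argument as a set of characters to remove from both ends. This analogy (not Redshift itself) reproduces the results above:

```python
# Python's str.strip mirrors BTRIM: with no argument it removes leading and
# trailing whitespace; with an argument it removes any leading or trailing
# characters drawn from that set, leaving interior occurrences intact.
print('  abc  '.strip())               # 'abc'
print('xyzaxyzbxyzcxyz'.strip('xyz'))  # 'axyzbxyzc'
```

As in the SQL example, the interior `xyz` runs survive because stripping stops at the first character not in the set.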
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BTRIM.md
0c41d00aecdb-0
Analyzes query steps that execute window functions\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WINDOW.md
bf03f3dff382-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_WINDOW.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WINDOW.md
f7318b391b0b-0
The following example returns window function results for slice 0 and segment 3\. ``` select query, tasknum, rows, is_diskbased, workmem from stl_window where slice=0 and segment=3; ``` ``` query | tasknum | rows | is_diskbased | workmem -------+---------+------+--------------+---------- 86326 | 36 | 1857 | f | 95256616 705 | 15 | 1857 | f | 95256616 86399 | 27 | 1857 | f | 95256616 649 | 10 | 0 | f | 95256616 (4 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WINDOW.md
17186ee49dfd-0
Redshift Spectrum supports querying `array`, `map`, and `struct` complex types through extensions to the Amazon Redshift SQL syntax\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-query-nested-data-sqlextensions.md
d5a18e76c416-0
You can extract data from `struct` columns using a dot notation that concatenates field names into paths\. For example, the following query returns given and family names for customers\. The given name is accessed by the long path `c.name.given`\. The family name is accessed by the long path `c.name.family`\. ``` SELECT c.id, c.name.given, c.name.family FROM spectrum.customers c; ``` The preceding query returns the following data\. ``` id | given | family ---|-------|------- 1 | John | Smith 2 | Jenny | Doe 3 | Andy | Jones (3 rows) ``` A `struct` can be a column of another `struct`, which can be a column of another `struct`, at any level\. The paths that access columns in such deeply nested `struct`s can be arbitrarily long\. For example, see the definition for the column `x` in the following example\. ``` x struct<a: string, b: struct<c: integer, d: struct<e: string> > > ``` You can access the data in `e` as `x.b.d.e`\. **Note**
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-query-nested-data-sqlextensions.md
d5a18e76c416-1
You use `struct`s only to describe the path to the fields that they contain\. You can't access them directly in a query or return them from a query\.
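The dot-notation path through nested `struct`s behaves like successive member lookups. A Python dictionary analogy (illustrative only, with made-up values) for the column `x` defined above:

```python
# The column x struct<a: string, b: struct<c: integer, d: struct<e: string>>>
# modeled as nested dictionaries; the SQL path x.b.d.e corresponds to
# successive key lookups. The values here are placeholders.
x = {"a": "alpha", "b": {"c": 7, "d": {"e": "deep value"}}}
print(x["b"]["d"]["e"])  # 'deep value'
```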
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-query-nested-data-sqlextensions.md
c611d5fa9d1d-0
You can extract data from `array` columns \(and, by extension, `map` columns\) by specifying the `array` columns in a `FROM` clause in place of table names\. The extension applies to the `FROM` clause of the main query, and also the `FROM` clauses of subqueries\. You can't reference `array` elements by position, such as `c.orders[0]`\. By combining ranging over `arrays` with joins, you can achieve various kinds of unnesting, as explained in the following use cases\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-query-nested-data-sqlextensions.md
42f727cabd29-0
The following query selects customer IDs and order ship dates for customers that have orders\. The SQL extension in the FROM clause `c.orders o` depends on the alias `c`\. ``` SELECT c.id, o.shipdate FROM spectrum.customers c, c.orders o ``` For each customer `c` that has orders, the `FROM` clause returns one row for each order `o` of the customer `c`\. That row combines the customer row `c` and the order row `o`\. Then the `SELECT` clause keeps only the `c.id` and `o.shipdate`\. The result is the following\. ``` id| shipdate --|---------------------- 1 |2018-03-01 11:59:59 1 |2018-03-01 09:10:00 3 |2018-03-02 08:02:15 (3 rows) ``` The alias `c` provides access to the customer fields, and the alias `o` provides access to the order fields\. The semantics are similar to standard SQL\. You can think of the `FROM` clause as running the following nested loop, which is followed by `SELECT` choosing the fields to output\. ``` for each customer c in spectrum.customers for each order o in c.orders output c.id and o.shipdate ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-query-nested-data-sqlextensions.md
42f727cabd29-1
Therefore, if a customer doesn't have an order, the customer doesn't appear in the result\. You can also think of this as the `FROM` clause performing a `JOIN` with the `customers` table and the `orders` array\. In fact, you can also write the query as shown in the following example\. ``` SELECT c.id, o.shipdate FROM spectrum.customers c INNER JOIN c.orders o ON true ``` **Note** If a schema named `c` exists with a table named `orders`, then `c.orders` refers to the table `orders`, and not the array column of `customers`\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-query-nested-data-sqlextensions.md
2af0ae1c1be5-0
The following query outputs all customer names and their orders\. If a customer hasn't placed an order, the customer's name is still returned\. However, in this case the order columns are NULL, as shown in the following example for Jenny Doe\. ``` SELECT c.id, c.name.given, c.name.family, o.shipdate, o.price FROM spectrum.customers c LEFT JOIN c.orders o ON true ``` The preceding query returns the following data\. ``` id | given | family | shipdate | price ----|---------|---------|----------------------|-------- 1 | John | Smith | 2018-03-01 11:59:59 | 100.5 1 | John | Smith | 2018-03-01 09:10:00 | 99.12 2 | Jenny | Doe | | 3 | Andy | Jones | 2018-03-02 08:02:15 | 13.5 (4 rows) ```
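The LEFT JOIN behavior can be sketched the same way: a customer with an empty orders array still yields one row, with the order fields set to None (SQL NULL). The sample data is again an illustrative stand-in:

```python
# LEFT JOIN over a nested array: customers without orders still appear once,
# with the order side filled with None (NULL in SQL).
customers = [
    {"id": 1, "orders": [{"shipdate": "2018-03-01 11:59:59", "price": 100.5}]},
    {"id": 2, "orders": []},   # no orders, but the customer row is preserved
]

rows = []
for c in customers:
    if c["orders"]:
        for o in c["orders"]:
            rows.append((c["id"], o["shipdate"], o["price"]))
    else:
        rows.append((c["id"], None, None))   # preserve the customer row
print(rows)
```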
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-query-nested-data-sqlextensions.md
8084cc8ea29f-0
When an alias `p` in a `FROM` clause ranges over an array of scalars, the query refers to the values of `p` simply as `p`\. For example, the following query produces pairs of customer names and phone numbers\. ``` SELECT c.name.given, c.name.family, p AS phone FROM spectrum.customers c LEFT JOIN c.phones p ON true ``` The preceding query returns the following data\. ``` given | family | phone -------|----------|----------- John | Smith | 123-4577891 Jenny | Doe | 858-8675309 Jenny | Doe | 415-9876543 Andy | Jones | (4 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-query-nested-data-sqlextensions.md
d60d203b3338-0
Redshift Spectrum treats the `map` data type as an `array` type that contains `struct` types with a `key` column and a `value` column\. The `key` must be a `scalar`; the value can be any data type\. For example, the following code creates an external table with a `map` for storing phone numbers\. ``` CREATE EXTERNAL TABLE spectrum.customers ( id int, name struct<given:varchar(20), family:varchar(20)>, phones map<varchar(20), varchar(20)>, orders array<struct<shipdate:timestamp, price:double precision>> ) ``` Because a `map` type behaves like an `array` type with columns `key` and `value`, you can think of the preceding schemas as if they were the following\. ``` CREATE EXTERNAL TABLE spectrum.customers ( id int, name struct<given:varchar(20), family:varchar(20)>, phones array<struct<key:varchar(20), value:varchar(20)>>, orders array<struct<shipdate:timestamp, price:double precision>> ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-query-nested-data-sqlextensions.md
d60d203b3338-1
The following query returns the names of customers with a mobile phone number and returns the number for each name\. The map query is treated as the equivalent of querying a nested `array` of `struct` types\. The following query only returns data if you have created the external table as described previously\. ``` SELECT c.name.given, c.name.family, p.value FROM spectrum.customers c, c.phones p WHERE p.key = 'mobile' ``` **Note** The `key` for a `map` is a `string` for Ion and JSON file types\.
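Because the map is treated as an array of key/value structs, the mobile-phone query amounts to ranging over that array and filtering on the key. An illustrative Python sketch with stand-in data:

```python
# A map column modeled as an array of {key, value} structs, which is how
# Redshift Spectrum treats maps. Filtering p.key = 'mobile' selects matching
# entries. The names and numbers are placeholders.
customers = [
    {"name": "John Smith", "phones": [{"key": "mobile", "value": "123-457789"}]},
    {"name": "Andy Jones", "phones": [{"key": "home", "value": "858-8675309"}]},
]

mobile = [(c["name"], p["value"])
          for c in customers
          for p in c["phones"]
          if p["key"] == "mobile"]
print(mobile)  # [('John Smith', '123-457789')]
```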
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-query-nested-data-sqlextensions.md
7bce716adfac-0
Records the usage periods for concurrency scaling\. Each usage period is a consecutive duration where an individual concurrency scaling cluster is actively processing queries\. By default, this view is visible only to superusers\. The database superuser can choose to open it up to all users\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_CONCURRENCY_SCALING_USAGE.md
716dbb5b1c66-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_CONCURRENCY_SCALING_USAGE.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_CONCURRENCY_SCALING_USAGE.md
a05966b675f7-0
To view the usage duration in seconds for a specific period, enter the following query: ``` select * from svcs_concurrency_scaling_usage order by start_time; start_time | end_time | queries | usage_in_seconds ----------------------------+----------------------------+---------+------------------ 2019-02-14 18:43:53.01063 | 2019-02-14 19:16:49.781649 | 48 | 1977 ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_CONCURRENCY_SCALING_USAGE.md
05efc5243320-0
**Topics** + [Step 1: Create a database](t_creating_database.md) + [Step 2: Create a database user](t_adding_redshift_user_cmd.md) + [Step 3: Create a database table](t_creating_table.md) + [Step 4: Load sample data](cm-dev-t-load-sample-data.md) + [Step 5: Query the system tables](t_querying_redshift_system_tables.md) + [Step 6: Cancel a query](cancel_query.md) + [Step 7: Clean up your resources](cm-dev-t-clean-up-resources.md) This section describes the basic steps to begin using the Amazon Redshift database\. The examples in this section assume you have signed up for the Amazon Redshift data warehouse service, created a cluster, and established a connection to the cluster from your SQL client tool, such as the Amazon Redshift console query editor\. For information about these tasks, see [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/)\. **Important** The cluster that you deployed for this exercise will be running in a live environment\. As long as it is running, it will accrue charges to your AWS account\. For more pricing information, go to [the Amazon Redshift pricing page](https://aws.amazon.com/redshift/pricing/)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_to_admin.md
05efc5243320-1
To avoid unnecessary charges, you should delete your cluster when you are done with it\. The final step of the exercise explains how to do so\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_to_admin.md
2691217a377c-0
ST\_Envelope returns the minimum bounding box of the input geometry, as follows: + If the input geometry is empty, the returned geometry is a copy of the input geometry\. + If the minimum bounding box of the input geometry degenerates to a point, the returned geometry is a point\. + If the minimum bounding box of the input geometry is one\-dimensional, a two\-point linestring is returned\. + If none of the preceding is true, the function returns a clockwise\-oriented polygon whose vertices are the corners of the minimum bounding box\. The spatial reference system identifier \(SRID\) of the returned geometry is the same as that of the input geometry\.
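For the common two-dimensional case, the corners of the returned polygon can be computed from the coordinate extremes. The following is a minimal Python sketch of that computation, not Redshift's implementation:

```python
# Sketch: compute the corners of the minimum bounding box for a set of
# 2-D points, listed clockwise starting from the lower-left corner.
def envelope(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
    # clockwise: lower-left, upper-left, upper-right, lower-right
    return [(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin)]

# Vertices drawn from the GEOMETRYCOLLECTION example on this page
pts = [(0, 0), (10, 0), (0, 10), (20, 10), (20, 0)]
print(envelope(pts))  # [(0, 0), (0, 10), (20, 10), (20, 0)]
```

The corner order matches the clockwise-oriented polygon `POLYGON((0 0,0 10,20 10,20 0,0 0))` in the example below.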
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Envelope-function.md
9bad2cf577d8-0
``` ST_Envelope(geom) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Envelope-function.md
5ac15b069ce0-0
*geom* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Envelope-function.md
cbf727be11f7-0
`GEOMETRY` If *geom* is null, then null is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Envelope-function.md
0a35a261019a-0
The following SQL converts a well\-known text \(WKT\) representation of a geometry collection to a `GEOMETRY` object and returns the polygon whose vertices are the corners of its minimum bounding box\. ``` SELECT ST_AsText(ST_Envelope(ST_GeomFromText('GEOMETRYCOLLECTION(POLYGON((0 0,10 0,0 10,0 0)),LINESTRING(20 10,20 0,10 0))'))); ``` ``` st_astext ------------------------------------ POLYGON((0 0,0 10,20 10,20 0,0 0)) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Envelope-function.md
f9aec1d540a1-0
An alternative to the methods demonstrated in this tutorial is to query top\-level nested collection columns as serialized JSON\. You can use the serialization to inspect, convert, and ingest nested data as JSON with Redshift Spectrum\. This method is supported for ORC, JSON, Ion, and Parquet formats\. Use the session configuration parameter `json_serialization_enable` to configure the serialization behavior\. When set, complex JSON data types are serialized to VARCHAR\(65535\)\. The nested JSON can be accessed with [JSON functions](json-functions.md)\. For more information, see [json\_serialization\_enable](r_json_serialization_enable.md)\. For example, without setting `json_serialization_enable`, the following queries that access nested columns directly fail\. ``` SELECT * FROM spectrum.customers LIMIT 1; => ERROR: Nested tables do not support '*' in the SELECT clause. SELECT name FROM spectrum.customers LIMIT 1; => ERROR: column "name" does not exist in customers ``` Setting `json_serialization_enable` enables querying top\-level collections directly\. ``` SET json_serialization_enable TO true; SELECT * FROM spectrum.customers order by id LIMIT 1; id | name | phones | orders
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/serializing-complex-JSON.md
f9aec1d540a1-1
---+--------------------------------------+----------------+---------------------------------------------------------------------------------------------------------------------- 1 | {"given": "John", "family": "Smith"} | ["123-457789"] | [{"shipdate": "2018-03-01T11:59:59.000Z", "price": 100.50}, {"shipdate": "2018-03-01T09:10:00.000Z", "price": 99.12}] SELECT name FROM spectrum.customers order by id LIMIT 1; name --------- {"given": "John", "family": "Smith"} ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/serializing-complex-JSON.md
f9aec1d540a1-2
Consider the following items when serializing nested JSON\. + When collection columns are serialized as VARCHAR\(65535\), their nested subfields can't be accessed directly as part of the query syntax \(for example, in the filter clause\)\. However, JSON functions can be used to access nested JSON\. + The following specialized representations are not supported: + ORC unions + ORC maps with complex type keys + Ion datagrams + Ion SEXP + Timestamps are returned as ISO serialized strings\. + Primitive map keys are promoted to string \(for example, `1` to `"1"`\)\. + Top\-level null values are serialized as NULLs\. + If the serialization overflows the maximum VARCHAR size of 65535, the cell is set to NULL\.
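Two of these rules, the VARCHAR(65535) overflow producing NULL and primitive map keys being promoted to strings, can be illustrated with a small Python sketch. This mimics the documented behavior; it is not Redshift code.

```python
import json

VARCHAR_MAX = 65535  # maximum VARCHAR size used for serialized collections

def serialize_cell(value):
    """Serialize a collection value as JSON, mimicking the rules above:
    primitive map keys are promoted to strings (json.dumps does this for
    dict keys), and a result that overflows VARCHAR(65535) becomes NULL."""
    text = json.dumps(value)
    return None if len(text) > VARCHAR_MAX else text

print(serialize_cell({1: "a"}))       # '{"1": "a"}'  -- key 1 promoted to "1"
print(serialize_cell(["x" * 70000]))  # None -- overflows VARCHAR(65535)
```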
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/serializing-complex-JSON.md
9935e5ccf25b-0
By default, string values contained in nested collections are serialized as escaped JSON strings\. Escaping might be undesirable when the strings are valid JSON\. Instead, you might want to write nested subelements or fields that are VARCHAR directly as JSON\. Enable this behavior with the `json_serialization_parse_nested_strings` session\-level configuration\. When both `json_serialization_enable` and `json_serialization_parse_nested_strings` are set, valid JSON values are serialized inline without escape characters\. When the value is not valid JSON, it is escaped as if the `json_serialization_parse_nested_strings` configuration value was not set\. For more information, see [json\_serialization\_parse\_nested\_strings](r_json_serialization_parse_nested_strings.md)\. For example, assume the data from the previous example contained serialized JSON in the `given` VARCHAR\(20\) field of the `name` struct: ``` name --------- {"given": "{\"first\":\"John\",\"middle\":\"James\"}", "family": "Smith"} ``` When `json_serialization_parse_nested_strings` is set, the `name` column is serialized as follows: ``` SET json_serialization_enable TO true;
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/serializing-complex-JSON.md
9935e5ccf25b-1
SET json_serialization_parse_nested_strings TO true; SELECT name FROM spectrum.customers order by id LIMIT 1; name --------- {"given": {"first":"John","middle":"James"}, "family": "Smith"} ``` Instead of being escaped like this: ``` SET json_serialization_enable TO true; SELECT name FROM spectrum.customers order by id LIMIT 1; name --------- {"given": "{\"first\":\"John\",\"middle\":\"James\"}", "family": "Smith"} ```
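The effect of `json_serialization_parse_nested_strings` can be mimicked in Python: if a nested string parses as valid JSON, embed the parsed value inline; otherwise fall back to the escaped string. This is an illustrative sketch, not Redshift's implementation.

```python
import json

def serialize_field(value, parse_nested_strings):
    """Mimic json_serialization_parse_nested_strings: embed a nested string
    inline when it is valid JSON, otherwise keep it as an escaped string."""
    if parse_nested_strings and isinstance(value, str):
        try:
            return json.dumps(json.loads(value))  # valid JSON: inline, unescaped
        except ValueError:
            pass                                  # not valid JSON: escape below
    return json.dumps(value)

nested = '{"first":"John","middle":"James"}'
print(serialize_field(nested, parse_nested_strings=False))
# "{\"first\":\"John\",\"middle\":\"James\"}"   (escaped string)
print(serialize_field(nested, parse_nested_strings=True))
# {"first": "John", "middle": "James"}          (embedded inline)
```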
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/serializing-complex-JSON.md
2b1367516a0b-0
The CEILING or CEIL function is used to round a number up to the next whole number\. \(The [FLOOR function](r_FLOOR.md) rounds a number down to the next whole number\.\)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CEILING_FLOOR.md
bd5b266538bd-0
``` CEIL | CEILING(number) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CEILING_FLOOR.md
e254f2f2a038-0
*number* DOUBLE PRECISION number to be rounded\.
CEILING and CEIL return an integer\.
Calculate the ceiling of the commission paid for a given sales transaction:

```
select ceiling(commission) from sales
where salesid=10000;

ceiling
---------
29
(1 row)
```
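For comparison, Python's `math.ceil` and `math.floor` exhibit the same rounding directions\. This is a sketch of the rounding semantics only; the commission value 28\.05 is a hypothetical input consistent with the example's result of 29\.

```python
import math

commission = 28.05  # hypothetical value consistent with the example's result

print(math.ceil(commission))   # rounds up to the next whole number: 29
print(math.floor(commission))  # rounds down to the previous whole number: 28

# For negative numbers, ceil rounds toward zero and floor rounds away from it.
print(math.ceil(-28.05))       # -28
print(math.floor(-28.05))      # -29
```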
Define primary key and foreign key constraints between tables wherever appropriate\. Even though they are informational only, the query optimizer uses those constraints to generate more efficient query plans\. Do not define primary key and foreign key constraints unless your application enforces the constraints\. Amazon Redshift does not enforce unique, primary\-key, and foreign\-key constraints\. See [Defining constraints](t_Defining_constraints.md) for additional information about how Amazon Redshift uses constraints\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-defining-constraints.md
For a given expression, replaces all occurrences of specified characters with specified substitutes\. Existing characters are mapped to replacement characters by their positions in the *characters\_to\_replace* and *characters\_to\_substitute* arguments\. If more characters are specified in the *characters\_to\_replace* argument than in the *characters\_to\_substitute* argument, the extra characters from the *characters\_to\_replace* argument are omitted from the return value\.

TRANSLATE is similar to the [REPLACE function](r_REPLACE.md) and the [REGEXP\_REPLACE function](REGEXP_REPLACE.md), except that REPLACE substitutes one entire string with another string and REGEXP\_REPLACE lets you search a string for a regular expression pattern, while TRANSLATE makes multiple single\-character substitutions\.

If any argument is null, the return is NULL\.
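The positional mapping, including the rule that unmatched replace characters are deleted, can be sketched in Python\. This is an illustration of the semantics; `translate_sketch` is a hypothetical helper, not a Redshift API\.

```python
def translate_sketch(expression, chars_to_replace, chars_to_substitute):
    """Sketch of TRANSLATE semantics: map each character in chars_to_replace to
    the character at the same position in chars_to_substitute; replace characters
    with no positional match are deleted from the result. NULL in, NULL out."""
    if expression is None or chars_to_replace is None or chars_to_substitute is None:
        return None
    mapping = {}
    for i, ch in enumerate(chars_to_replace):
        if ch not in mapping:  # the first occurrence of a character wins
            mapping[ch] = chars_to_substitute[i] if i < len(chars_to_substitute) else ""
    return "".join(mapping.get(ch, ch) for ch in expression)

print(translate_sketch('mint tea', 'inea', 'osin'))  # most tin
print(translate_sketch('St. George', ' .', '_'))     # St_George (the period is deleted)
```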
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRANSLATE.md
```
TRANSLATE ( expression, characters_to_replace, characters_to_substitute )
```
*expression*
The expression to be translated\.

*characters\_to\_replace*
A string containing the characters to be replaced\.

*characters\_to\_substitute*
A string containing the characters to substitute\.
VARCHAR
The following example replaces several characters in a string:

```
select translate('mint tea', 'inea', 'osin');

translate
-----------
most tin
```

The following example replaces the at sign \(@\) with a period for all values in a column:

```
select email, translate(email, '@', '.') as obfuscated_email
from users limit 10;

email                                          obfuscated_email
-------------------------------------------------------------------------------------------
Etiam.laoreet.libero@sodalesMaurisblandit.edu  Etiam.laoreet.libero.sodalesMaurisblandit.edu
amet.faucibus.ut@condimentumegetvolutpat.ca    amet.faucibus.ut.condimentumegetvolutpat.ca
turpis@accumsanlaoreet.org                     turpis.accumsanlaoreet.org
ullamcorper.nisl@Cras.edu                      ullamcorper.nisl.Cras.edu
arcu.Curabitur@senectusetnetus.com             arcu.Curabitur.senectusetnetus.com
ac@velit.ca                                    ac.velit.ca
Aliquam.vulputate.ullamcorper@amalesuada.org   Aliquam.vulputate.ullamcorper.amalesuada.org
vel.est@velitegestas.edu                       vel.est.velitegestas.edu
dolor.nonummy@ipsumdolorsit.ca                 dolor.nonummy.ipsumdolorsit.ca
et@Nunclaoreet.ca                              et.Nunclaoreet.ca
```

The following example replaces spaces with underscores and strips out periods for all values in a column:

```
select city, translate(city, ' .', '_') from users
where city like 'Sain%' or city like 'St%'
group by city
order by city;

city            translate
--------------+------------------
Saint Albans    Saint_Albans
Saint Cloud     Saint_Cloud
Saint Joseph    Saint_Joseph
Saint Louis     Saint_Louis
Saint Paul      Saint_Paul
St. George      St_George
St. Marys       St_Marys
St. Petersburg  St_Petersburg
Stafford        Stafford
Stamford        Stamford
Stanton         Stanton
Starkville      Starkville
Statesboro      Statesboro
Staunton        Staunton
Steubenville    Steubenville
Stevens Point   Stevens_Point
Stillwater      Stillwater
Stockton        Stockton
Sturgis         Sturgis
```
Creates a new stored procedure or replaces an existing procedure for the current database\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_PROCEDURE.md
```
CREATE [ OR REPLACE ] PROCEDURE sp_procedure_name
  ( [ [ argname ] [ argmode ] argtype [, ...] ] )
AS $$
  procedure_body
$$ LANGUAGE plpgsql
[ { SECURITY INVOKER | SECURITY DEFINER } ]
[ SET configuration_parameter { TO value | = value } ]
```
OR REPLACE
A clause that specifies that if a procedure with the same name and input argument data types, or signature, as this one already exists, the existing procedure is replaced\. You can only replace a procedure with a new procedure that defines an identical set of data types\. You must be a superuser or the owner to replace a procedure\.
If you define a procedure with the same name as an existing procedure, but a different signature, you create a new procedure\. In other words, the procedure name is overloaded\. For more information, see [Overloading procedure names](stored-procedure-naming.md#stored-procedure-overloading-name)\.

*sp\_procedure\_name*
The name of the procedure\. If you specify a schema name \(such as **myschema\.myprocedure**\), the procedure is created in the specified schema\. Otherwise, the procedure is created in the current schema\. For more information about valid names, see [Names and identifiers](r_names.md)\.
We recommend that you prefix all stored procedure names with `sp_`\. Amazon Redshift reserves the `sp_` prefix for stored procedure names\. By using the `sp_` prefix, you ensure that your stored procedure name doesn't conflict with any existing or future Amazon Redshift built\-in stored procedure or function names\. For more information, see [Naming stored procedures](stored-procedure-naming.md)\.
You can define more than one procedure with the same name if the data types for the input arguments, or signatures, are different\. In other words, in this case the procedure name is overloaded\. For more information, see [Overloading procedure names](stored-procedure-naming.md#stored-procedure-overloading-name)\.

*\[argname\] \[argmode\] argtype*
A list of argument names, argument modes, and data types\. Only the data type is required\. Name and mode are optional, and their positions can be switched\. The argument mode can be IN, OUT, or INOUT\. The default is IN\.
You can use OUT and INOUT arguments to return one or more values from a procedure call\. When there are OUT or INOUT arguments, the procedure call returns one result row containing *n* columns, where *n* is the total number of OUT or INOUT arguments\.
INOUT arguments are input and output arguments at the same time\. *Input arguments* include both IN and INOUT arguments, and *output arguments* include both OUT and INOUT arguments\.
OUT arguments aren't specified as part of the CALL statement\. Specify INOUT arguments in the stored procedure CALL statement\. INOUT arguments can be useful when passing and returning values from a nested call, and also when returning a `refcursor`\. For more information on `refcursor` types, see [Cursors](c_PLpgSQL-statements.md#r_PLpgSQL-cursors)\.
The argument data types can be any standard Amazon Redshift data type\. In addition, an argument data type can be `refcursor`\. You can specify a maximum of 32 input arguments and 32 output arguments\.

AS $$ *procedure\_body* $$
A construct that encloses the procedure to be executed\. The literal keywords AS $$ and $$ are required\.
Amazon Redshift requires you to enclose the statement in your procedure by using a format called dollar quoting\. Anything within the enclosure is passed exactly as is\. You don't need to escape any special characters because the contents of the string are written literally\.
With *dollar quoting*, you use a pair of dollar signs \($$\) to signify the start and the end of the statement to run, as shown in the following example\.

```
$$ my statement $$
```

Optionally, between the dollar signs in each pair, you can specify a string to help identify the statement\. The string that you use must be the same in both the start and the end of the enclosure pairs\. This string is case\-sensitive, and it follows the same constraints as an unquoted identifier except that it can't contain dollar signs\. The following example uses the string test\.

```
$test$ my statement $test$
```
This syntax is also useful for nested dollar quoting\. For more information about dollar quoting, see "Dollar\-quoted String Constants" under [Lexical Structure](https://www.postgresql.org/docs/9.0/sql-syntax-lexical.html) in the PostgreSQL documentation\.

*procedure\_body*
A set of valid PL/pgSQL statements\. PL/pgSQL statements augment SQL commands with procedural constructs, including looping and conditional expressions, to control logical flow\. Most SQL commands can be used in the procedure body, including data modification language \(DML\) such as COPY, UNLOAD, and INSERT, and data definition language \(DDL\) such as CREATE TABLE\. For more information, see [PL/pgSQL language reference](c_pl_pgSQL_reference.md)\.

LANGUAGE *plpgsql*
A language value\. Specify `plpgsql`\. You must have permission for usage on language to use `plpgsql`\. For more information, see [GRANT](r_GRANT.md)\.

SECURITY INVOKER \| SECURITY DEFINER
The security mode for the procedure determines the procedure's access privileges at runtime\. The procedure must have permission to access the underlying database objects\.
For SECURITY INVOKER mode, the procedure uses the privileges of the user calling the procedure\. The user must have explicit permissions on the underlying database objects\. The default is SECURITY INVOKER\.
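The dollar\-quoting rules above can be sketched as a small helper that builds such an enclosure in a client program\. This is a hypothetical illustration, not a Redshift or driver API; Redshift itself parses the quotes, and clients simply send the text\.

```python
def dollar_quote(statement, tag=""):
    """Wrap a SQL statement in a dollar-quoted enclosure, e.g. $tag$ ... $tag$.
    The tag is case-sensitive and can't contain dollar signs, and the resulting
    delimiter must not occur inside the statement itself."""
    if "$" in tag:
        raise ValueError("the tag can't contain dollar signs")
    delimiter = f"${tag}$"
    if delimiter in statement:
        raise ValueError(f"delimiter {delimiter} occurs inside the statement")
    return f"{delimiter}{statement}{delimiter}"

# Single quotes inside the body need no escaping at all.
print(dollar_quote("SELECT 'it''s dollar quoted'", tag="test"))
# $test$SELECT 'it''s dollar quoted'$test$
```

Choosing a distinct tag, as the documentation notes, is what makes nested dollar quoting possible: an inner `$$ ... $$` block can sit safely inside an outer `$test$ ... $test$` enclosure\.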
For SECURITY DEFINER mode, the procedure is run using the database privileges of the procedure's owner\. The user calling the procedure needs execute privilege on the procedure, but doesn't need any privileges on the underlying objects\.

SET configuration\_parameter \{ TO value \| = value \}
The SET clause causes the specified `configuration_parameter` to be set to the specified value when the procedure is entered\. This clause then restores `configuration_parameter` to its earlier value when the procedure exits\.