2876a1187dea-0
You can use the COPY command to load data in parallel from one or more remote hosts, such as Amazon EC2 instances or other computers\. COPY connects to the remote hosts using SSH and executes commands on the remote hosts to generate text output\. The remote host can be an Amazon EC2 Linux instance or another Unix or Linux computer configured to accept SSH connections\. This guide assumes your remote host is an Amazon EC2 instance\. Where the procedure is different for another computer, the guide will point out the difference\. Amazon Redshift can connect to multiple hosts, and can open multiple SSH connections to each host\. Amazon Redshift sends a unique command through each connection to generate text output to the host's standard output, which Amazon Redshift then reads as it would a text file\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-from-remote-hosts.md
f944caa029cd-0
Before you begin, you should have the following in place: + One or more host machines, such as Amazon EC2 instances, that you can connect to using SSH\. + Data sources on the hosts\. You will provide commands that the Amazon Redshift cluster will run on the hosts to generate the text output\. After the cluster connects to a host, the COPY command runs the commands, reads the text from the hosts' standard output, and loads the data in parallel into an Amazon Redshift table\. The text output must be in a form that the COPY command can ingest\. For more information, see [Preparing your input data](t_preparing-input-data.md)\. + Access to the hosts from your computer\. For an Amazon EC2 instance, you will use an SSH connection to access the host\. You will need to access the host to add the Amazon Redshift cluster's public key to the host's authorized keys file\. + A running Amazon Redshift cluster\. For information about how to launch a cluster, see [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-from-remote-hosts.md
ad8e9d2e9bf6-0
This section walks you through the process of loading data from remote hosts\. The following sections provide the details you need to accomplish each step\. + **[Step 1: Retrieve the cluster public key and cluster node IP addresses](load-from-host-steps-retrieve-key-and-ips.md)** The public key enables the Amazon Redshift cluster nodes to establish SSH connections to the remote hosts\. You will use the IP address for each cluster node to configure the host security groups or firewall to permit access from your Amazon Redshift cluster using these IP addresses\. + **[Step 2: Add the Amazon Redshift cluster public key to the host's authorized keys file](load-from-host-steps-add-key-to-host.md)** You add the Amazon Redshift cluster public key to the host's authorized keys file so that the host will recognize the Amazon Redshift cluster and accept the SSH connection\. + **[Step 3: Configure the host to accept all of the Amazon Redshift cluster's IP addresses](load-from-host-steps-configure-security-groups.md)** For an Amazon EC2 instance, modify the instance's security groups to add ingress rules to accept the Amazon Redshift IP addresses\. For other hosts, modify the firewall so that your Amazon Redshift nodes are able to establish SSH connections to the remote host\. + **[Step 4: Get the public key for the host](load-from-host-steps-get-the-host-key.md)**
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-from-remote-hosts.md
ad8e9d2e9bf6-1
+ **[Step 4: Get the public key for the host](load-from-host-steps-get-the-host-key.md)** You can optionally specify that Amazon Redshift should use the public key to identify the host\. You will need to locate the public key and copy the text into your manifest file\. + **[Step 5: Create a manifest file](load-from-host-steps-create-manifest.md)** The manifest is a JSON\-formatted text file with the details Amazon Redshift needs to connect to the hosts and fetch the data\. + **[Step 6: Upload the manifest file to an Amazon S3 bucket](load-from-host-steps-upload-manifest.md)** Amazon Redshift reads the manifest and uses that information to connect to the remote host\. If the Amazon S3 bucket does not reside in the same Region as your Amazon Redshift cluster, you must use the [REGION](copy-parameters-data-source-s3.md#copy-region) option to specify the Region in which the data is located\. + **[Step 7: Run the COPY command to load the data](load-from-host-steps-run-copy.md)** From an Amazon Redshift database, run the COPY command to load the data into an Amazon Redshift table\.
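To make Steps 5 and 7 concrete, the following is a minimal sketch rather than a definitive example: it assumes a hypothetical target table named `sales`, a manifest already uploaded to `s3://mybucket/ssh_manifest` \(Step 6\), and a placeholder IAM role ARN\. The SSH parameter tells COPY that the manifest describes remote hosts to connect to rather than data files to read\.
```
copy sales
from 's3://mybucket/ssh_manifest'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
delimiter '|'
ssh;
```
The manifest itself is the JSON file from Step 5; it lists each host endpoint and the command to run there, and it can optionally include the host public key and the user name to connect as\.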
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-from-remote-hosts.md
11f813b7b553-0
Calculates the cumulative distribution of a value within a window or partition\. Assuming ascending ordering, the cumulative distribution is determined using this formula: `count of rows with values <= x / count of rows in the window or partition` where *x* equals the value in the current row of the column specified in the ORDER BY clause\. The following dataset illustrates use of this formula: ``` Row# Value Calculation CUME_DIST 1 2500 (1)/(5) 0.2 2 2600 (2)/(5) 0.4 3 2800 (3)/(5) 0.6 4 2900 (4)/(5) 0.8 5 3100 (5)/(5) 1.0 ``` The return value range is >0 to 1, inclusive\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_CUME_DIST.md
7b86cbc695dc-0
``` CUME_DIST () OVER ( [ PARTITION BY partition_expression ] [ ORDER BY order_list ] ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_CUME_DIST.md
fdc9710339df-0
OVER A clause that specifies the window partitioning\. The OVER clause cannot contain a window frame specification\. PARTITION BY *partition\_expression* Optional\. An expression that sets the range of records for each group in the OVER clause\. ORDER BY *order\_list* The expression on which to calculate cumulative distribution\. The expression must have either a numeric data type or be implicitly convertible to one\. If ORDER BY is omitted, the return value is 1 for all rows\. If ORDER BY doesn't produce a unique ordering, the order of the rows is nondeterministic\. For more information, see [Unique ordering of data for window functions](r_Examples_order_by_WF.md)\.
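As a small illustration of the ORDER BY behavior described above, the following sketch \(assuming the same WINSALES table used in the examples\) omits ORDER BY from the OVER clause, so CUME\_DIST returns 1 for every row\.
```
select sellerid, qty,
       cume_dist() over (partition by sellerid) as cd
from winsales;
-- Without ORDER BY in the OVER clause, cd is 1 for all rows.
```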
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_CUME_DIST.md
42670ab736b7-0
FLOAT8
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_CUME_DIST.md
7b7e1cc2e7d0-0
The following example calculates the cumulative distribution of the quantity for each seller: ``` select sellerid, qty, cume_dist() over (partition by sellerid order by qty) from winsales; sellerid qty cume_dist -------------------------------------------------- 1 10.00 0.33 1 10.64 0.67 1 30.37 1 3 10.04 0.25 3 15.15 0.5 3 20.75 0.75 3 30.55 1 2 20.09 0.5 2 20.12 1 4 10.12 0.5 4 40.23 1 ``` For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_CUME_DIST.md
2deadc05cc2e-0
Records system\-defined error and warning messages generated during user\-defined function \(UDF\) execution\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_UDF_LOG.md
86808fb05bcb-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_UDF_LOG.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_UDF_LOG.md
b3bd88f5afa7-0
The following example shows how UDFs handle system\-defined errors\. The first block shows the definition for a UDF that returns the inverse of an argument\. When you run the function and provide a 0 argument, as the second block shows, the function returns an error\. The third statement reads the error message that is logged in SVL\_UDF\_LOG\. ``` -- Create a function to find the inverse of a number CREATE OR REPLACE FUNCTION f_udf_inv(a int) RETURNS float IMMUTABLE AS $$ return 1/a $$ LANGUAGE plpythonu; -- Run the function with a 0 argument to create an error Select f_udf_inv(0) from sales; -- Query SVL_UDF_LOG to view the message Select query, created, message::varchar from svl_udf_log; query | created | message -------+----------------------------+--------------------------------------------------------- 2211 | 2015-08-22 00:11:12.04819 | ZeroDivisionError: long division or modulo by zero\nNone ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_UDF_LOG.md
b3bd88f5afa7-1
``` The following example adds logging and a warning message to the UDF so that a divide by zero operation results in a warning message instead of stopping with an error message\. ``` -- Create a function to find the inverse of a number and log a warning CREATE OR REPLACE FUNCTION f_udf_inv_log(a int) RETURNS float IMMUTABLE AS $$ import logging logger = logging.getLogger() #get root logger if a==0: logger.warning('You attempted to divide by zero.\nReturning zero instead of error.\n') return 0 else: return 1/a $$ LANGUAGE plpythonu; ``` The following example runs the function, then queries SVL\_UDF\_LOG to view the message\. ``` -- Run the function with a 0 argument to trigger the warning Select f_udf_inv_log(0) from sales; -- Query SVL_UDF_LOG to view the message Select query, created, message::varchar from svl_udf_log; query | created | message
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_UDF_LOG.md
b3bd88f5afa7-2
Select query, created, message::varchar from svl_udf_log; query | created | message ------+----------------------------+---------------------------------- 0 | 2015-08-22 00:11:12.04819 | You attempted to divide by zero. Returning zero instead of error. ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_UDF_LOG.md
5edfce9cfe36-0
The VAR\_SAMP and VAR\_POP window functions return the sample and population variance of a set of numeric values \(integer, decimal, or floating\-point\)\. See also [VAR\_SAMP and VAR\_POP functions](r_VARIANCE_functions.md)\. VAR\_SAMP and VARIANCE are synonyms for the same function\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_VARIANCE.md
0587f16574da-0
``` VAR_SAMP | VARIANCE | VAR_POP ( [ ALL ] expression ) OVER ( [ PARTITION BY expr_list ] [ ORDER BY order_list frame_clause ] ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_VARIANCE.md
423dae30072d-0
*expression * The target column or expression that the function operates on\. ALL With the argument ALL, the function retains all duplicate values from the expression\. ALL is the default\. DISTINCT is not supported\. OVER Specifies the window clauses for the aggregation functions\. The OVER clause distinguishes window aggregation functions from normal set aggregation functions\. PARTITION BY *expr\_list* Defines the window for the function in terms of one or more expressions\. ORDER BY *order\_list* Sorts the rows within each partition\. If no PARTITION BY is specified, ORDER BY uses the entire table\. *frame\_clause* If an ORDER BY clause is used for an aggregate function, an explicit frame clause is required\. The frame clause refines the set of rows in a function's window, including or excluding sets of rows within the ordered result\. The frame clause consists of the ROWS keyword and associated specifiers\. See [Window function syntax summary](r_Window_function_synopsis.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_VARIANCE.md
6833a288f128-0
The argument types supported by the VARIANCE functions are SMALLINT, INTEGER, BIGINT, NUMERIC, DECIMAL, REAL, and DOUBLE PRECISION\. Regardless of the data type of the expression, the return type of a VARIANCE function is a double precision number\.
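This excerpt doesn't include an example, so the following is a minimal sketch, assuming the WINSALES table used for the other window function examples\. Because ORDER BY is present, the aggregate form requires an explicit frame clause, as noted in the arguments above\.
```
select sellerid, qty,
       var_samp(qty) over (partition by sellerid
                           order by qty
                           rows unbounded preceding) as var_samp_qty
from winsales;
```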
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_VARIANCE.md
3074d6f0436e-0
An error encountered during the execution of a stored procedure ends the execution flow and ends the transaction\. You can trap errors using an EXCEPTION block\. The only supported condition is OTHERS, which matches every error type except query cancellation\. ``` [ <<label>> ] [ DECLARE declarations ] BEGIN statements EXCEPTION WHEN OTHERS THEN handler_statements END; ``` In an Amazon Redshift stored procedure, the only supported *handler\_statement* is RAISE\. Any error encountered during the execution automatically ends the entire stored procedure call and rolls back the transaction\. This occurs because subtransactions are not supported\. If an error occurs in the exception handling block, it is propagated out and can be caught by an outer exception handling block, if one exists\.
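A minimal sketch of this pattern follows\. The procedure and table names are placeholders; the division by zero raises an error that transfers control to the EXCEPTION block, where RAISE reports it before the transaction is rolled back\.
```
CREATE OR REPLACE PROCEDURE sp_exception_demo()
AS $$
BEGIN
    -- Placeholder statement that raises a division-by-zero error
    UPDATE demo_table SET val = val / 0;
EXCEPTION
    WHEN OTHERS THEN
        -- RAISE is the only supported handler statement
        RAISE INFO 'An error occurred; the transaction is rolled back.';
END;
$$ LANGUAGE plpgsql;
```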
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-trapping-errors.md
ab4fd8e8a9e7-0
Contains rows for query steps that are used to evaluate expressions\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PROJECT.md
53f3ec35f63f-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_PROJECT.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PROJECT.md
2f154497f854-0
The following example returns all rows for query steps that were used to evaluate expressions for slice 0 and segment 1\. ``` select query, step, starttime, endtime, tasknum, rows from stl_project where slice=0 and segment=1; ``` ``` query | step | starttime | endtime | tasknum | rows --------+------+---------------------+---------------------+---------+------ 86399 | 2 | 2013-08-29 22:01:21 | 2013-08-29 22:01:21 | 25 | -1 86399 | 3 | 2013-08-29 22:01:21 | 2013-08-29 22:01:21 | 25 | -1 719 | 1 | 2013-08-12 22:38:33 | 2013-08-12 22:38:33 | 7 | -1 86383 | 1 | 2013-08-29 21:58:35 | 2013-08-29 21:58:35 | 7 | -1 714 | 1 | 2013-08-12 22:38:17 | 2013-08-12 22:38:17 | 2 | -1 86375 | 1 | 2013-08-29 21:57:59 | 2013-08-29 21:57:59 | 2 | -1
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PROJECT.md
2f154497f854-1
86375 | 1 | 2013-08-29 21:57:59 | 2013-08-29 21:57:59 | 2 | -1 86397 | 2 | 2013-08-29 22:01:20 | 2013-08-29 22:01:20 | 19 | -1 627 | 1 | 2013-08-12 22:34:13 | 2013-08-12 22:34:13 | 34 | -1 86326 | 2 | 2013-08-29 21:45:28 | 2013-08-29 21:45:28 | 34 | -1 86326 | 3 | 2013-08-29 21:45:28 | 2013-08-29 21:45:28 | 34 | -1 86325 | 2 | 2013-08-29 21:45:27 | 2013-08-29 21:45:27 | 28 | -1 86371 | 1 | 2013-08-29 21:57:42 | 2013-08-29 21:57:42 | 4 | -1 111100 | 2 | 2013-09-03 19:04:45 | 2013-09-03 19:04:45 | 12 | -1 704 | 2 | 2013-08-12 22:36:34 | 2013-08-12 22:36:34 | 37 | -1 649 | 2 | 2013-08-12 22:34:47 | 2013-08-12 22:34:47 | 38 | -1 649 | 3 | 2013-08-12 22:34:47 | 2013-08-12 22:34:47 | 38 | -1
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PROJECT.md
2f154497f854-2
649 | 3 | 2013-08-12 22:34:47 | 2013-08-12 22:34:47 | 38 | -1 632 | 2 | 2013-08-12 22:34:22 | 2013-08-12 22:34:22 | 13 | -1 705 | 2 | 2013-08-12 22:36:48 | 2013-08-12 22:36:49 | 13 | -1 705 | 3 | 2013-08-12 22:36:48 | 2013-08-12 22:36:49 | 13 | -1 3 | 1 | 2013-08-12 20:07:40 | 2013-08-12 20:07:40 | 3 | -1 86373 | 1 | 2013-08-29 21:57:58 | 2013-08-29 21:57:58 | 3 | -1 107976 | 1 | 2013-09-03 04:05:12 | 2013-09-03 04:05:12 | 3 | -1 86381 | 1 | 2013-08-29 21:58:35 | 2013-08-29 21:58:35 | 8 | -1 86396 | 1 | 2013-08-29 22:01:20 | 2013-08-29 22:01:20 | 15 | -1 711 | 1 | 2013-08-12 22:37:10 | 2013-08-12 22:37:10 | 20 | -1 86324 | 1 | 2013-08-29 21:45:27 | 2013-08-29 21:45:27 | 24 | -1 (26 rows)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PROJECT.md
2f154497f854-3
(26 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PROJECT.md
afad1783ce96-0
GeometryType returns the subtype of an input geometry as a string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/GeometryType-function.md
a4cb29b34648-0
``` GeometryType(geom) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/GeometryType-function.md
f63145a1716a-0
*geom* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/GeometryType-function.md
a880575e2fe8-0
`VARCHAR` representing the subtype of *geom*\. If *geom* is null, then null is returned\. The values returned are as follows\. | Returned string value | Geometry subtype | | --- | --- | | `POINT` | Returned if *geom* is a `POINT` subtype | | `LINESTRING` | Returned if *geom* is a `LINESTRING` subtype | | `POLYGON` | Returned if *geom* is a `POLYGON` subtype | | `MULTIPOINT` | Returned if *geom* is a `MULTIPOINT` subtype | | `MULTILINESTRING` | Returned if *geom* is a `MULTILINESTRING` subtype | | `MULTIPOLYGON` | Returned if *geom* is a `MULTIPOLYGON` subtype | | `GEOMETRYCOLLECTION` | Returned if *geom* is a `GEOMETRYCOLLECTION` subtype |
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/GeometryType-function.md
0071bf3a1c45-0
The following SQL converts a well\-known text \(WKT\) representation of a polygon to a `GEOMETRY` object and returns the subtype as a string\. ``` SELECT GeometryType(ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))')); ``` ``` geometrytype ------------- POLYGON ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/GeometryType-function.md
1a6e4e5fed1a-0
Following, you can find a quick reference that identifies and addresses some common issues you might encounter with Amazon Redshift Spectrum queries\. To view errors generated by Redshift Spectrum queries, query the [SVL\_S3LOG](r_SVL_S3LOG.md) system table\. **Topics** + [Retries exceeded](#spectrum-troubleshooting-retries-exceeded) + [Access throttled](#spectrum-troubleshooting-access-throttled) + [Resource limit exceeded](#spectrum-troubleshooting-resource-limit-exceeded) + [No rows returned for a partitioned table](#spectrum-troubleshooting-no-rows-partitioned-table) + [Not authorized error](#spectrum-troubleshooting-not-authorized-error) + [Incompatible data formats](#spectrum-troubleshooting-incompatible-data-format) + [Syntax error when using Hive DDL in Amazon Redshift](#spectrum-troubleshooting-syntax-error-using-hive-ddl) + [Permission to create temporary tables](#spectrum-troubleshooting-permission-to-create-temp-tables)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
8d0565f3e405-0
If an Amazon Redshift Spectrum request times out, the request is canceled and resubmitted\. After five failed retries, the query fails with the following error\. ``` error: Spectrum Scan Error: Retries exceeded ``` Possible causes include the following: + Large file sizes \(greater than 1 GB\)\. Check your file sizes in Amazon S3 and look for large files and file size skew\. Break up large files into smaller files, between 100 MB and 1 GB\. Try to make files about the same size\. + Slow network throughput\. Try your query later\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
30473951281f-0
Amazon Redshift Spectrum is subject to the service quotas of other AWS services\. Under high usage, Redshift Spectrum requests might be throttled, resulting in the following error\. ``` error: Spectrum Scan Error: Access throttled ``` Two types of throttling can happen: + Access throttled by Amazon S3\. + Access throttled by AWS KMS\. The error context provides more details about the type of throttling\. Following, you can find causes and possible resolutions for this throttling\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
a74d170043a1-0
Amazon S3 might throttle a Redshift Spectrum request if the read request rate on a [prefix](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix) is too high\. For information about the GET/HEAD request rate that you can achieve in Amazon S3, see [Optimizing Amazon S3 Performance](https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html) in *Amazon Simple Storage Service Developer Guide\.* The Amazon S3 GET/HEAD request rate takes into account all GET/HEAD requests on a prefix, so different applications accessing the same prefix share the total request rate\. If your Redshift Spectrum requests frequently get throttled by Amazon S3, reduce the number of Amazon S3 GET/HEAD requests that Redshift Spectrum makes to Amazon S3\. To do this, try merging small files into larger files\. We recommend using file sizes of 64 MB or larger\. Also consider partitioning your Redshift Spectrum tables to benefit from early filtering and to reduce the number of files accessed in Amazon S3\. For more information, see [Partitioning Redshift Spectrum external tables](c-spectrum-external-tables.md#c-spectrum-external-tables-partitioning)\.
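For reference, a partitioned Redshift Spectrum table might be declared as in the following sketch; the schema, columns, and Amazon S3 location are placeholders, not values from this guide\.
```
create external table spectrum.sales_part(
  salesid integer,
  qtysold smallint,
  pricepaid decimal(8,2))
partitioned by (saledate date)
row format delimited
fields terminated by '|'
stored as textfile
location 's3://mybucket/tickit/spectrum/sales_partition/';
```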
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
1969944400ca-0
If you store your data in Amazon S3 using server\-side encryption \(SSE\-S3 or SSE\-KMS\), Amazon S3 calls an API operation to AWS KMS for each file that Redshift Spectrum accesses\. These requests count toward your cryptographic operations quota; for more information, see [AWS KMS Request Quotas](https://docs.aws.amazon.com/kms/latest/developerguide/requests-per-second.html)\. For more information on SSE\-S3 and SSE\-KMS, see [Protecting Data Using Server\-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html) and [Protecting Data Using Server\-Side Encryption with CMKs Stored in AWS KMS](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html) in *Amazon Simple Storage Service Developer Guide\.* A first step to reduce the number of requests that Redshift Spectrum makes to AWS KMS is to reduce the number of files accessed\. To do this, try merging small files into larger files\. We recommend using file sizes of 64 MB or larger\. If your Redshift Spectrum requests frequently get throttled by AWS KMS, consider requesting a quota increase for your AWS KMS request rate for cryptographic operations\. To request a quota increase, see [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in the *Amazon Web Services General Reference*\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
e40768f9fca1-0
Redshift Spectrum enforces an upper bound on the amount of memory a request can use\. A Redshift Spectrum request that requires more memory fails, resulting in the following error\. ``` error: Spectrum Scan Error: Resource limit exceeded ``` There are two common reasons that a Redshift Spectrum request might overrun its memory allowance: + Redshift Spectrum processes a large chunk of data that can't be split into smaller chunks\. + A large aggregation step is processed by Redshift Spectrum\. We recommend using a file format that supports parallel reads with split sizes of 128 MB or less\. See [Creating data files for queries in Amazon Redshift Spectrum](c-spectrum-data-files.md) for supported file formats and generic guidelines for data file creation\. When using file formats or compression algorithms that don't support parallel reads, we recommend keeping file sizes between 64 MB and 128 MB\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
c9d2e44ee201-0
If your query returns zero rows from a partitioned external table, check whether a partition has been added for this external table\. Redshift Spectrum only scans files in an Amazon S3 location that has been explicitly added using `ALTER TABLE … ADD PARTITION`\. Query the [SVV\_EXTERNAL\_PARTITIONS](r_SVV_EXTERNAL_PARTITIONS.md) view to find existing partitions\. Run `ALTER TABLE … ADD PARTITION` for each missing partition\.
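For example, assuming a partitioned external table named `spectrum.sales_part` with a `saledate` partition column \(placeholder names\), you can list the registered partitions and add one that is missing as in the following sketch\.
```
-- List the partitions Redshift Spectrum knows about
select * from svv_external_partitions
where tablename = 'sales_part';

-- Register a missing partition and point it at its Amazon S3 location
alter table spectrum.sales_part
add partition (saledate='2008-01-01')
location 's3://mybucket/tickit/spectrum/sales_partition/saledate=2008-01-01/';
```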
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
7095f10fc0fc-0
Verify that the IAM role for the cluster allows access to the Amazon S3 file objects\. If your external database is on Amazon Athena, verify that the IAM role allows access to Athena resources\. For more information, see [IAM policies for Amazon Redshift Spectrum](c-spectrum-iam-policies.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
1c8e90ff3ac7-0
For a columnar file format, such as Apache Parquet, the column type is embedded with the data\. The column type in the CREATE EXTERNAL TABLE definition must match the column type of the data file\. If there is a mismatch, you receive an error similar to the following: ``` File 'https://s3bucket/location/file has an incompatible Parquet schema for column 's3://s3bucket/location.col1'. Column type: VARCHAR, Par ``` The error message might be truncated due to the limit on message length\. To retrieve the complete error message, including column name and column type, query the [SVL\_S3LOG](r_SVL_S3LOG.md) system view\. The following example queries SVL\_S3LOG for the last query executed\. ``` select message from svl_s3log where query = pg_last_query_id() order by query,segment,slice; ``` The following is an example of a result that shows the full error message\. ``` message –––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––-
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
1c8e90ff3ac7-1
Spectrum Scan Error. File 'https://s3bucket/location/file has an incompatible Parquet schema for column ' s3bucket/location.col1'. Column type: VARCHAR, Parquet schema:\noptional int64 l_orderkey [i:0 d:1 r:0]\n ``` To correct the error, alter the external table to match the column type of the Parquet file\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
83d6436a370d-0
Amazon Redshift supports data definition language \(DDL\) for CREATE EXTERNAL TABLE that is similar to Hive DDL\. However, the two types of DDL aren't always exactly the same\. If you copy Hive DDL to create or alter Amazon Redshift external tables, you might encounter syntax errors\. The following are examples of differences between Amazon Redshift and Hive DDL: + Amazon Redshift requires single quotation marks \('\) where Hive DDL supports double quotation marks \("\)\. + Amazon Redshift doesn't support the STRING data type\. Use VARCHAR instead\.
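As an illustration of those differences, an Amazon Redshift external table definition might look like the following sketch \(names and the Amazon S3 location are placeholders\): single quotation marks around literals, and VARCHAR where Hive DDL would use STRING\.
```
create external table spectrum.event(
  eventid integer,
  eventname varchar(200),   -- Hive STRING becomes VARCHAR in Amazon Redshift
  starttime timestamp)
row format delimited
fields terminated by '\t'
stored as textfile
location 's3://mybucket/tickit/spectrum/event/';
```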
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
fd3298f066aa-0
To run Redshift Spectrum queries, the database user must have permission to create temporary tables in the database\. The following example grants temporary permission on the database `spectrumdb` to the `spectrumusers` user group\. ``` grant temp on database spectrumdb to group spectrumusers; ``` For more information, see [GRANT](r_GRANT.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-troubleshooting.md
5360477807f8-0
Deletes a schema\. For an external schema, you can also drop the external database associated with the schema\. This command isn't reversible\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_SCHEMA.md
e7a3aa57d75a-0
``` DROP SCHEMA [ IF EXISTS ] name [, ...] [ DROP EXTERNAL DATABASE ] [ CASCADE | RESTRICT ] ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_SCHEMA.md
81b5db94fbe8-0
IF EXISTS Clause that indicates that if the specified schema doesn’t exist, the command should make no changes and return a message that the schema doesn't exist, rather than terminating with an error\. This clause is useful when scripting, so the script doesn’t fail if DROP SCHEMA runs against a nonexistent schema\. *name* Names of the schemas to drop\. You can specify multiple schema names separated by commas\. DROP EXTERNAL DATABASE Clause that indicates that if an external schema is dropped, drop the external database associated with the external schema, if one exists\. If no external database exists, the command returns a message stating that no external database exists\. If multiple external schemas are dropped, all databases associated with the specified schemas are dropped\. If an external database contains dependent objects such as tables, include the CASCADE option to drop the dependent objects as well\. When you drop an external database, the database is also dropped for any other external schemas associated with the database\. Tables defined in other external schemas using the database are also dropped\. DROP EXTERNAL DATABASE doesn't support external databases stored in a HIVE metastore\. CASCADE Keyword that indicates to automatically drop all objects in the schema\. If DROP EXTERNAL DATABASE is specified, all objects in the external database are also dropped\. RESTRICT Keyword that indicates not to drop a schema or external database if it contains any objects\. This action is the default\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_SCHEMA.md
011fe618f9e0-0
The following example deletes a schema named S\_SALES\. This example uses RESTRICT as a safety mechanism so that the schema isn't deleted if it contains any objects\. In this case, you need to delete the schema objects before deleting the schema\. ``` drop schema s_sales restrict; ``` The following example deletes a schema named S\_SALES and all objects that depend on that schema\. ``` drop schema s_sales cascade; ``` The following example either drops the S\_SALES schema if it exists, or does nothing and returns a message if it doesn't\. ``` drop schema if exists s_sales; ``` The following example deletes an external schema named S\_SPECTRUM and the external database associated with it\. This example uses RESTRICT so that the schema and database aren't deleted if they contain any objects\. In this case, you need to delete the dependent objects before deleting the schema and the database\. ``` drop schema s_spectrum drop external database restrict; ``` The following example deletes multiple schemas and the external databases associated with them, along with any dependent objects\. ``` drop schema s_sales, s_profit, s_revenue drop external database cascade; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_SCHEMA.md
ea7289bd9231-0
Repeats a string the specified number of times\. If the input parameter is numeric, REPEAT treats it as a string\. Synonym for [REPLICATE function](r_REPLICATE.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REPEAT.md
9122266c5aaa-0
``` REPEAT(string, integer) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REPEAT.md
c62663f4b718-0
*string* The first input parameter is the string to be repeated\. *integer* The second parameter is an integer indicating the number of times to repeat the string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REPEAT.md
51b50f05503b-0
The REPEAT function returns a string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REPEAT.md
7cad07d9cc6e-0
The following example repeats the value of the CATID column in the CATEGORY table three times: ``` select catid, repeat(catid,3) from category order by 1,2; catid | repeat -------+-------- 1 | 111 2 | 222 3 | 333 4 | 444 5 | 555 6 | 666 7 | 777 8 | 888 9 | 999 10 | 101010 11 | 111111 (11 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REPEAT.md
db218c8dd20f-0
PERCENTILE\_DISC is an inverse distribution function that assumes a discrete distribution model\. It takes a percentile value and a sort specification and returns an element from the given set\. For a given percentile value P, PERCENTILE\_DISC sorts the values of the expression in the ORDER BY clause and returns the value with the smallest cumulative distribution value \(with respect to the same sort specification\) that is greater than or equal to P\. You can specify only the PARTITION clause in the OVER clause\. PERCENTILE\_DISC is a compute\-node only function\. The function returns an error if the query doesn't reference a user\-defined table or Amazon Redshift system table\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_DISC.md
b7dd801bdc06-0
``` PERCENTILE_DISC ( percentile ) WITHIN GROUP (ORDER BY expr) OVER ( [ PARTITION BY expr_list ] ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_DISC.md
dde29c97e905-0
*percentile* Numeric constant between 0 and 1\. Nulls are ignored in the calculation\. WITHIN GROUP \( ORDER BY *expr*\) Specifies numeric or date/time values to sort and compute the percentile over\. OVER Specifies the window partitioning\. The OVER clause cannot contain a window ordering or window frame specification\. PARTITION BY *expr* Optional argument that sets the range of records for each group in the OVER clause\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_DISC.md
897a8fe97939-0
The same data type as the ORDER BY expression in the WITHIN GROUP clause\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_DISC.md
2c8398ea1e61-0
The following examples use the WINSALES table\. For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\. ``` select sellerid, qty, percentile_disc(0.5) within group (order by qty) over() as median from winsales; sellerid | qty | median ----------+-----+-------- 1 | 10 | 20 3 | 10 | 20 1 | 10 | 20 4 | 10 | 20 3 | 15 | 20 2 | 20 | 20 2 | 20 | 20 3 | 20 | 20 1 | 30 | 20 3 | 30 | 20 4 | 40 | 20 (11 rows) ``` ``` select sellerid, qty, percentile_disc(0.5) within group (order by qty) over(partition by sellerid) as median from winsales; sellerid | qty | median ----------+-----+-------- 2 | 20 | 20 2 | 20 | 20 4 | 10 | 10 4 | 40 | 10 1 | 10 | 10
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_DISC.md
2c8398ea1e61-1
2 | 20 | 20 4 | 10 | 10 4 | 40 | 10 1 | 10 | 10 1 | 10 | 10 1 | 30 | 10 3 | 10 | 15 3 | 15 | 15 3 | 20 | 15 3 | 30 | 15 (11 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_DISC.md
bce674a67cb6-0
TIMESTAMPTZ\_CMP\_DATE compares the value of a time stamp and a date\. If the time stamp and date values are identical, the function returns 0\. If the time stamp is greater, the function returns 1\. If the date is greater, the function returns –1\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP_DATE.md
dd2741339185-0
``` TIMESTAMPTZ_CMP_DATE(timestamptz, date) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP_DATE.md
8e322882e7b6-0
*timestamptz* A TIMESTAMPTZ column or an expression that implicitly converts to a time stamp with a time zone\. *date* A date column or an expression that implicitly converts to a date\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP_DATE.md
bf6bb9403ca5-0
INTEGER
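No example appears in this excerpt, so here is a minimal sketch using literal values; per the rules above, it should return –1 because the date falls after the time stamp\.
```
select timestamptz_cmp_date('2008-01-01 00:00:00+00', '2008-06-30');
-- Expected result: -1, because the date is greater than the time stamp.
```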
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP_DATE.md
b0bee7480498-0
Depending on the nature of your data, we recommend following the practices in this section to minimize vacuum times\. **Topics** + [Deciding whether to reindex](r_vacuum-decide-whether-to-reindex.md) + [Managing the size of the unsorted region](r_vacuum_diskspacereqs.md) + [Managing the volume of merged rows](vacuum-managing-volume-of-unmerged-rows.md) + [Loading your data in sort key order](vacuum-load-in-sort-key-order.md) + [Using time series tables](vacuum-time-series-tables.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/vacuum-managing-vacuum-times.md
4fb030a4dd78-0
WLM configures query queues according to WLM service classes, which are internally defined\. Amazon Redshift creates several internal queues according to these service classes along with the queues defined in the WLM configuration\. The terms *queue* and *service class* are often used interchangeably in the system tables\. The superuser queue uses service class 5\. User\-defined queues use service class 6 and greater\. You can view the status of queries, queues, and service classes by using WLM\-specific system tables\. Query the following system tables to do the following: + View which queries are being tracked and what resources are allocated by the workload manager\. + See which queue a query has been assigned to\. + View the status of a query that is currently being tracked by the workload manager\. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-system-tables-and-views.html) You use the task ID to track a query in the system tables\. The following example shows how to obtain the task ID of the most recently submitted user query: ``` select task from stl_wlm_query where exec_start_time =(select max(exec_start_time) from stl_wlm_query); task ------ 137 (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-system-tables-and-views.md
4fb030a4dd78-1
task ------ 137 (1 row) ``` The following example displays queries that are currently executing or waiting in various service classes \(queues\)\. This query is useful in tracking the overall concurrent workload for Amazon Redshift: ``` select * from stv_wlm_query_state order by query; xid |task|query|service_| wlm_start_ | state |queue_ | exec_ | | |class | time | |time | time ----+----+-----+--------+-------------+---------+-------+-------- 2645| 84 | 98 | 3 | 2010-10-... |Returning| 0 | 3438369 2650| 85 | 100 | 3 | 2010-10-... |Waiting | 0 | 1645879 2660| 87 | 101 | 2 | 2010-10-... |Executing| 0 | 916046 2661| 88 | 102 | 1 | 2010-10-... |Executing| 0 | 13291 (4 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-system-tables-and-views.md
e783a1953221-0
The following table lists the IDs assigned to service classes\. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-system-tables-and-views.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-system-tables-and-views.md
42c127b87138-0
You can create a custom scalar user\-defined function \(UDF\) using either a SQL SELECT clause or a Python program\. The new function is stored in the database and is available for any user with sufficient privileges to run, in much the same way as you run existing Amazon Redshift functions\. For Python UDFs, in addition to using the standard Python functionality, you can import your own custom Python modules\. For more information, see [Python language support for UDFs](udf-python-language-support.md)\. By default, all users can execute UDFs\. For more information about privileges, see [UDF security and privileges](udf-security-and-privileges.md)\. **Topics** + [UDF security and privileges](udf-security-and-privileges.md) + [Creating a scalar SQL UDF](udf-creating-a-scalar-sql-udf.md) + [Creating a scalar Python UDF](udf-creating-a-scalar-udf.md) + [Naming UDFs](udf-naming-udfs.md) + [Logging errors and warnings in UDFs](udf-logging-messages.md)
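For instance, a scalar SQL UDF can be as simple as the following sketch \(the function name is a placeholder\), which returns the larger of two numbers and can then be called like any built\-in scalar function\.
```
create function f_sql_greater (float, float)
  returns float
stable
as $$
  select case when $1 > $2 then $1
         else $2
         end
$$ language sql;

-- Example call against the sample SALES table:
-- select f_sql_greater(commission, pricepaid * 0.20) from sales;
```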
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/user-defined-functions.md
bd73a11224b2-0
ST\_StartPoint returns the first point of an input linestring\. The spatial reference system identifier \(SRID\) value of the result is the same as that of the input geometry\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_StartPoint-function.md
9616317e7644-0
``` ST_StartPoint(geom) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_StartPoint-function.md
502bcac34c17-0
*geom* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. The subtype must be `LINESTRING`\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_StartPoint-function.md
321174b8dfce-0
`GEOMETRY` If *geom* is null, then null is returned\. If *geom* is empty, then null is returned\. If *geom* isn't a `LINESTRING`, then null is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_StartPoint-function.md
af29a1e6934c-0
The following SQL converts a well\-known text \(WKT\) representation of a five\-point `LINESTRING` to a `GEOMETRY` object and returns the start point of the linestring as extended well\-known text \(EWKT\)\. ``` SELECT ST_AsEWKT(ST_StartPoint(ST_GeomFromText('LINESTRING(0 0,10 0,10 10,5 5,0 5)',4326))); ``` ``` st_asewkt ------------- SRID=4326;POINT(0 0) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_StartPoint-function.md
7f54c57f9abf-0
Your cluster needs authorization to access your external Data Catalog in AWS Glue or Amazon Athena and your data files in Amazon S3\. You provide that authorization by referencing an AWS Identity and Access Management \(IAM\) role that is attached to your cluster\. For more information about using roles with Amazon Redshift, see [Authorizing COPY and UNLOAD Operations Using IAM Roles](https://docs.aws.amazon.com/redshift/latest/mgmt/copy-unload-iam-role.html)\. **Note** In certain cases, you can migrate your Athena Data Catalog to an AWS Glue Data Catalog\. You can do this if your cluster is in an AWS Region where AWS Glue is supported and you have Redshift Spectrum external tables in the Athena Data Catalog\. To use the AWS Glue Data Catalog with Redshift Spectrum, you might need to change your IAM policies\. For more information, see [Upgrading to the AWS Glue Data Catalog](https://docs.aws.amazon.com/athena/latest/ug/glue-athena.html#glue-upgrade) in the *Athena User Guide*\. When you create a role for Amazon Redshift, choose one of the following approaches: + If you are using Redshift Spectrum with either an Athena Data Catalog or AWS Glue Data Catalog, follow the steps outlined in [To create an IAM role for Amazon Redshift](#spectrum-get-started-create-role)\. + If you are using Redshift Spectrum with an AWS Glue Data Catalog that is enabled for AWS Lake Formation, follow the steps outlined in these procedures:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-create-role.md
7f54c57f9abf-1
+ If you are using Redshift Spectrum with an AWS Glue Data Catalog that is enabled for AWS Lake Formation, follow the steps outlined in these procedures: + [To create an IAM role for Amazon Redshift using an AWS Glue Data Catalog enabled for AWS Lake Formation ](#spectrum-get-started-create-role-lake-formation) + [To grant SELECT permissions on the table to query in the Lake Formation database](#spectrum-get-started-grant-lake-formation-table) <a name="spectrum-get-started-create-role"></a> **To create an IAM role for Amazon Redshift** 1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home)\. 1. In the navigation pane, choose **Roles**\. 1. Choose **Create role**\. 1. Choose **AWS service**, and then choose **Redshift**\. 1. Under **Select your use case**, choose **Redshift \- Customizable** and then choose **Next: Permissions**\. 1. The **Attach permissions policy** page appears\. Choose `AmazonS3ReadOnlyAccess` and `AWSGlueConsoleFullAccess`, if you're using the AWS Glue Data Catalog\. Or choose `AmazonAthenaFullAccess` if you're using the Athena Data Catalog\. Choose **Next: Review**\. **Note**
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-create-role.md
7f54c57f9abf-2
**Note** The `AmazonS3ReadOnlyAccess` policy gives your cluster read\-only access to all Amazon S3 buckets\. To grant access to only the AWS sample data bucket, create a new policy and add the following permissions\. ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:Get*", "s3:List*" ], "Resource": "arn:aws:s3:::awssampledbuswest2/*" } ] } ``` 1. For **Role name**, enter a name for your role, for example **mySpectrumRole**\. 1. Review the information, and then choose **Create role**\. 1. In the navigation pane, choose **Roles**\. Choose the name of your new role to view the summary, and then copy the **Role ARN** to your clipboard\. This value is the Amazon Resource Name \(ARN\) for the role that you just created\. You use that value when you create external tables to reference your data files on Amazon S3\.<a name="spectrum-get-started-create-role-lake-formation"></a>
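One common place you use the role ARN afterward is when you create an external schema for Redshift Spectrum; the following is a sketch with placeholder schema and database names\.
```
create external schema spectrum_schema
from data catalog
database 'spectrumdb'
iam_role 'arn:aws:iam::123456789012:role/mySpectrumRole'
create external database if not exists;
```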
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-create-role.md
7f54c57f9abf-3
**To create an IAM role for Amazon Redshift using an AWS Glue Data Catalog enabled for AWS Lake Formation** 1. Open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\. 1. In the navigation pane, choose **Policies**\. If this is your first time choosing **Policies**, the **Welcome to Managed Policies** page appears\. Choose **Get Started**\. 1. Choose **Create policy**\. 1. Choose to create the policy on the **JSON** tab\. 1. Paste in the following JSON policy document, which grants access to the Data Catalog but denies the administrator permissions for Lake Formation\. ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "RedshiftPolicyForLF", "Effect": "Allow", "Action": [ "glue:*", "lakeformation:GetDataAccess" ], "Resource": "*" } ] } ``` 1. When you are finished, choose **Review** to review the policy\. The policy validator reports any syntax errors\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-create-role.md
7f54c57f9abf-4
``` 1. When you are finished, choose **Review** to review the policy\. The policy validator reports any syntax errors\. 1. On the **Review policy** page, for **Name** enter **mySpectrumPolicy** to name the policy that you are creating\. Enter a **Description** \(optional\)\. Review the policy **Summary** to see the permissions that are granted by your policy\. Then choose **Create policy** to save your work\. After you create a policy, you can create a role and apply the policy\. 1. In the navigation pane of the IAM console, choose **Roles**, and then choose **Create role**\. 1. For **Select type of trusted entity**, choose **AWS service**\. 1. Choose the Amazon Redshift service to assume this role\. 1. Choose the **Redshift Customizable** use case for your service\. Then choose **Next: Permissions**\. 1. Choose the permissions policy that you created, `mySpectrumPolicy`, to attach to the role\. 1. Choose **Next: Tagging**\. 1. Choose **Next: Review**\. 1. For **Role name**, enter the name **mySpectrumRole**\. 1. \(Optional\) For **Role description**, enter a description for the new role\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-create-role.md
7f54c57f9abf-5
1. \(Optional\) For **Role description**, enter a description for the new role\. 1. Review the role, and then choose **Create role**\.<a name="spectrum-get-started-grant-lake-formation-table"></a> **To grant SELECT permissions on the table to query in the Lake Formation database** 1. Open the Lake Formation console at [https://console\.aws\.amazon\.com/lakeformation/](https://console.aws.amazon.com/lakeformation/)\. 1. In the navigation pane, choose **Permissions**, and then choose **Grant**\. 1. Provide the following information: + For **IAM role**, choose the IAM role you created, `mySpectrumRole`\. When you run the Amazon Redshift Query Editor, it uses this IAM role for permission to the data\. **Note** To grant SELECT permission on the table in a Lake Formation–enabled Data Catalog to query, do the following: Register the path for the data in Lake Formation\. Grant users permission to that path in Lake Formation\. Created tables can be found in the path registered in Lake Formation\. + For **Database**, choose your Lake Formation database\. + For **Table**, choose a table within the database to query\. + For **Columns**, choose **All Columns**\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-create-role.md
7f54c57f9abf-6
+ For **Columns**, choose **All Columns**\. + Choose the **Select** permission\. 1. Choose **Save**\. **Important** As a best practice, allow access only to the underlying Amazon S3 objects through Lake Formation permissions\. To prevent unapproved access, remove any permission granted to Amazon S3 objects outside of Lake Formation\. If you previously accessed Amazon S3 objects before setting up Lake Formation, remove any IAM policies or bucket permissions that previously were set up\. For more information, see [Upgrading AWS Glue Data Permissions to the AWS Lake Formation Model](https://docs.aws.amazon.com/lake-formation/latest/dg/upgrade-glue-lake-formation.html) and [Lake Formation Permissions](https://docs.aws.amazon.com/lake-formation/latest/dg/lake-formation-permissions.html)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-create-role.md
1e45b616d2a2-0
Returns the query ID of the most recently executed COPY command in the current session\. If no COPY commands have been executed in the current session, PG\_LAST\_COPY\_ID returns \-1\. The value for PG\_LAST\_COPY\_ID is updated when the COPY command begins the load process\. If the COPY fails because of invalid load data, the COPY ID is updated, so you can use PG\_LAST\_COPY\_ID when you query the STL\_LOAD\_ERRORS table\. If the COPY transaction is rolled back, the COPY ID is not updated\. The COPY ID is not updated if the COPY command fails because of an error that occurs before the load process begins, such as a syntax error, access error, invalid credentials, or insufficient privileges\. The COPY ID is not updated if the COPY fails during the analyze compression step, which begins after a successful connection, but before the data load\. COPY performs compression analysis when the COMPUPDATE parameter is set to ON or when the target table is empty and all the table columns either have RAW encoding or no encoding\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_COPY_ID.md
cab9381b2ff1-0
``` pg_last_copy_id() ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_COPY_ID.md
ef795b1c57f3-0
Returns an integer\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_COPY_ID.md
d90ab5353ba8-0
The following query returns the query ID of the latest COPY command in the current session\. ``` select pg_last_copy_id(); pg_last_copy_id --------------- 5437 (1 row) ``` The following query joins STL\_LOAD\_ERRORS to STL\_LOADERROR\_DETAIL to view the details of errors that occurred during the most recent load in the current session: ``` select d.query, substring(d.filename,14,20), d.line_number as line, substring(d.value,1,16) as value, substring(le.err_reason,1,48) as err_reason from stl_loaderror_detail d, stl_load_errors le where d.query = le.query and d.query = pg_last_copy_id(); query | substring | line | value | err_reason -------+-------------------+------+----------+---------------------------- 558| allusers_pipe.txt | 251 | 251 | String contains invalid or
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_COPY_ID.md
d90ab5353ba8-1
558| allusers_pipe.txt | 251 | 251 | String contains invalid or unsupported UTF8 code 558| allusers_pipe.txt | 251 | ZRU29FGR | String contains invalid or unsupported UTF8 code 558| allusers_pipe.txt | 251 | Kaitlin | String contains invalid or unsupported UTF8 code 558| allusers_pipe.txt | 251 | Walter | String contains invalid or unsupported UTF8 code ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_COPY_ID.md
93ac8af6d22c-0
In this section, you can find information about the date and time scalar functions that Amazon Redshift supports\. **Topics** + [Summary of date and time functions](#date-functions-summary) + [Date and time functions in transactions](#date-functions-transactions) + [Deprecated leader node\-only functions](#date-functions-deprecated) + [ADD\_MONTHS function](r_ADD_MONTHS.md) + [AT TIME ZONE function](r_AT_TIME_ZONE.md) + [CONVERT\_TIMEZONE function](CONVERT_TIMEZONE.md) + [CURRENT\_DATE function](r_CURRENT_DATE_function.md) + [DATE\_CMP function](r_DATE_CMP.md) + [DATE\_CMP\_TIMESTAMP function](r_DATE_CMP_TIMESTAMP.md) + [DATE\_CMP\_TIMESTAMPTZ function](r_DATE_CMP_TIMESTAMPTZ.md) + [DATE\_PART\_YEAR function](r_DATE_PART_YEAR.md) + [DATEADD function](r_DATEADD_function.md) + [DATEDIFF function](r_DATEDIFF_function.md) + [DATE\_PART function](r_DATE_PART_function.md) + [DATE\_TRUNC function](r_DATE_TRUNC.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/Date_functions_header.md
93ac8af6d22c-1
+ [EXTRACT function](r_EXTRACT_function.md)
+ [GETDATE function](r_GETDATE.md)
+ [INTERVAL\_CMP function](r_INTERVAL_CMP.md)
+ [LAST\_DAY function](r_LAST_DAY.md)
+ [MONTHS\_BETWEEN function](r_MONTHS_BETWEEN_function.md)
+ [NEXT\_DAY function](r_NEXT_DAY.md)
+ [SYSDATE function](r_SYSDATE.md)
+ [TIMEOFDAY function](r_TIMEOFDAY_function.md)
+ [TIMESTAMP\_CMP function](r_TIMESTAMP_CMP.md)
+ [TIMESTAMP\_CMP\_DATE function](r_TIMESTAMP_CMP_DATE.md)
+ [TIMESTAMP\_CMP\_TIMESTAMPTZ function](r_TIMESTAMP_CMP_TIMESTAMPTZ.md)
+ [TIMESTAMPTZ\_CMP function](r_TIMESTAMPTZ_CMP.md)
+ [TIMESTAMPTZ\_CMP\_DATE function](r_TIMESTAMPTZ_CMP_DATE.md)
+ [TIMESTAMPTZ\_CMP\_TIMESTAMP function](r_TIMESTAMPTZ_CMP_TIMESTAMP.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/Date_functions_header.md
93ac8af6d22c-2
+ [TIMEZONE function](r_TIMEZONE.md)
+ [TO\_TIMESTAMP function](r_TO_TIMESTAMP.md)
+ [TRUNC Date function](r_TRUNC_date.md)
+ [Dateparts for Date or Time Stamp functions](r_Dateparts_for_datetime_functions.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/Date_functions_header.md
f035522f5f16-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/Date_functions_header.html)

**Note**
Leap seconds are not considered in elapsed\-time calculations\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/Date_functions_header.md
3e096cbbb8ee-0
When you execute the following functions within a transaction block \(BEGIN … END\), the function returns the start date or time of the current transaction, not the start of the current statement\.
+ SYSDATE
+ TIMESTAMP
+ CURRENT\_DATE

The following functions always return the start date or time of the current statement, even when they are within a transaction block\.
+ GETDATE
+ TIMEOFDAY
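To illustrate the difference, the following psql sketch \(output omitted\) compares SYSDATE and GETDATE inside a transaction block; SYSDATE stays fixed at the transaction start time, while GETDATE advances with each statement\.

```
begin;

select sysdate, getdate();   -- both reflect roughly the same moment here

-- ... run other statements in the same transaction ...

select sysdate, getdate();   -- sysdate is unchanged; getdate() shows the new statement's start time

end;
```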
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/Date_functions_header.md
67a4cce3233b-0
The following date functions are deprecated because they execute only on the leader node\. For more information, see [Leader node–only functions](c_SQL_functions_leader_node_only.md)\.
+ AGE\. Use [DATEDIFF function](r_DATEDIFF_function.md) instead\.
+ CURRENT\_TIME\. Use [GETDATE function](r_GETDATE.md) or [SYSDATE](r_SYSDATE.md) instead\.
+ CURRENT\_TIMESTAMP\. Use [GETDATE function](r_GETDATE.md) or [SYSDATE](r_SYSDATE.md) instead\.
+ LOCALTIME\. Use [GETDATE function](r_GETDATE.md) or [SYSDATE](r_SYSDATE.md) instead\.
+ LOCALTIMESTAMP\. Use [GETDATE function](r_GETDATE.md) or [SYSDATE](r_SYSDATE.md) instead\.
+ ISFINITE
+ NOW\. Use [GETDATE function](r_GETDATE.md) or [SYSDATE](r_SYSDATE.md) instead\.
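For example, the following sketch shows replacements for two of the deprecated functions; the literal dates are illustrative only\.

```
-- Instead of age(timestamp1, timestamp2), which runs only on the leader node:
select datediff(day, '2008-01-01'::timestamp, '2008-06-30'::timestamp);

-- Instead of now() or current_timestamp:
select getdate();
```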
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/Date_functions_header.md
4cecf16b7364-0
Now that your cluster is associated with a new parameter group and you've configured WLM, run some queries to see how Amazon Redshift routes queries into queues for processing\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
532804308a93-0
First, verify that the database has the WLM configuration that you expect\.
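One way to do this, in addition to the view query in the next step, is to inspect the STV\_WLM\_SERVICE\_CLASS\_CONFIG system table\. The following is a sketch; filtering on service classes greater than 4 is an assumption intended to skip the system\-reserved classes\.

```
select service_class, num_query_tasks, query_working_mem, max_execution_time
from stv_wlm_service_class_config
where service_class > 4;
```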
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
759d2da10746-0
1. Open psql and run the following query\. The query uses the WLM\_QUEUE\_STATE\_VW view you created in [Step 1: Create the WLM\_QUEUE\_STATE\_VW view](tutorial-wlm-understanding-default-processing.md#tutorial-wlm-create-queue-state-view)\. If you already had a session connected to the database prior to the cluster reboot, you need to reconnect\.

   ```
   select * from wlm_queue_state_vw;
   ```

   The following is an example result\.

   ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_060.png)

   Compare these results to the results you received in [Step 1: Create the WLM\_QUEUE\_STATE\_VW view](tutorial-wlm-understanding-default-processing.md#tutorial-wlm-create-queue-state-view)\. Notice that there are now two additional queues\. Queue 1 is now the queue for the test query group, and queue 2 is the queue for the admin user group\. Queue 3 is now the default queue\. The last queue in the list is always the default queue\. That's the queue to which queries are routed by default if no user group or query group is specified in a query\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
759d2da10746-1
1. Run the following query to confirm that your query now runs in queue 3\.

   ```
   select * from wlm_query_state_vw;
   ```

   The following is an example result\.

   ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_070.png)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
baacc304413e-0
1. Run the following query to route it to the `test` query group\.

   ```
   set query_group to test;
   select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;
   ```

1. From the other psql window, run the following query\.

   ```
   select * from wlm_query_state_vw;
   ```

   The following is an example result\.

   ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_080.png)

   The query was routed to the test query group, which is queue 1 now\.

1. Select all from the queue state view\.

   ```
   select * from wlm_queue_state_vw;
   ```

   You see a result similar to the following\.

   ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_090.png)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
baacc304413e-1
1. Now, reset the query group and run the long query again:

   ```
   reset query_group;
   select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;
   ```

1. Run the queries against the views to see the results\.

   ```
   select * from wlm_queue_state_vw;
   select * from wlm_query_state_vw;
   ```

   The following are example results\.

   ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_100.png)

   ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_110.png)

   The result should be that the query is now running in queue 3 again\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
2695b37073ad-0
In [Step 1: Create a parameter group](tutorial-wlm-modifying-wlm-configuration.md#tutorial-wlm-create-parameter-group), you configured one of your query queues with a user group named `admin`\. Before you can run any queries in this queue, you need to create the user group in the database and add a user to the group\. Then you log on with psql using the new user’s credentials and run queries\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
2695b37073ad-1
You need to run queries as a superuser, such as the masteruser, to create database users\.
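If you aren't sure whether the account you're connected with has superuser privileges, a quick check such as the following sketch \(using the PG\_USER catalog table that Amazon Redshift exposes\) can confirm it before you proceed\.

```
select usename, usesuper
from pg_user
where usename = current_user;
```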
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
549d6183c1ed-0
1. In the database, create a new database user named `adminwlm` by running the following command in a psql window\.

   ```
   create user adminwlm createuser password '123Admin';
   ```

1. Then, run the following commands to create the new user group and add your new `adminwlm` user to it\.

   ```
   create group admin;
   alter group admin add user adminwlm;
   ```
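Optionally, you can verify that the group exists and that `adminwlm` is a member of it\. The following is a sketch that queries the PG\_GROUP and PG\_USER catalog tables; it isn't part of the tutorial steps\.

```
select groname, grolist from pg_group where groname = 'admin';
select usename, usesysid from pg_user where usename = 'adminwlm';
```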
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
8b8d61ebd792-0
Next, you run a query and route it to the user group queue\. You do this when you want to route your query to a queue that is configured to handle the type of query you want to run\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md
84058a852814-0
1. In psql window 2, run the following queries to switch to the `adminwlm` account and run a query as that user\.

   ```
   set session authorization 'adminwlm';
   select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;
   ```

1. In psql window 1, run the following query to see the query queue that the queries are routed to\.

   ```
   select * from wlm_query_state_vw;
   select * from wlm_queue_state_vw;
   ```

   The following are example results\.

   ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_120.png)

   ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/psql_tutorial_wlm_130.png)

   The queue that this query ran in is queue 2, the `admin` user queue\. Anytime you run queries logged in as this user, they run in queue 2 unless you specify a different query group to use\. The chosen queue depends on the queue assignment rules\. For more information, see [WLM queue assignment rules](cm-c-wlm-queue-assignment-rules.md)\.
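After a query finishes, you can also confirm which WLM service class \(queue\) handled it from the STL\_WLM\_QUERY system table\. The following is a sketch, not part of the tutorial; service classes 1–4 are reserved for system use, so user\-defined queues appear as higher\-numbered classes\.

```
select query, service_class, total_queue_time, total_exec_time
from stl_wlm_query
order by service_class_start_time desc
limit 5;
```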
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-routing-queries-to-queues.md