571ce2562217-0
`INTEGER` If *geom* is null, then null is returned\. If *geom* is not of subtype `LINESTRING`, then null is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumPoints-function.md
54d91dfcea0b-0
The following SQL returns the number of points in the input linestring\.

```
SELECT ST_NumPoints(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'));
```

```
st_numpoints
-------------
4
```

The following SQL returns null because the input *geom* is not of subtype `LINESTRING`\.

```
SELECT ST_NumPoints(ST_GeomFromText('MULTIPOINT(1 2,3 4)'));
```

```
st_numpoints
-------------

```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumPoints-function.md
c494e2718024-0
\(Optional\) Closes all of the free resources that are associated with an open cursor\. [COMMIT](r_COMMIT.md), [END](r_END.md), and [ROLLBACK](r_ROLLBACK.md) automatically close the cursor, so it isn't necessary to use the CLOSE command to explicitly close the cursor\. For more information, see [DECLARE](declare.md) and [FETCH](fetch.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/close.md
41ff43ed986c-0
```
CLOSE cursor
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/close.md
b9c6e3968ac8-0
*cursor* Name of the cursor to close\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/close.md
537c62536acf-0
The following commands close the cursor and perform a commit, which ends the transaction:

```
close movie_cursor;
commit;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/close.md
ede7a0d4ba9e-0
If you already have a cluster that you want to use, you can skip this step\. For the exercises in this tutorial, use a four\-node cluster\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-launch-cluster.md
6838a3c05170-0
**To create a cluster**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.

   **Important**
   If you use IAM user credentials, make sure that you have the necessary permissions to perform the cluster operations\. For more information, see [Controlling access to IAM users](https://docs.aws.amazon.com/redshift/latest/mgmt/iam-redshift-user-mgmt.html) in the *Amazon Redshift Cluster Management Guide*\.

1. At top right, choose the AWS Region in which you want to create the cluster\. For the purposes of this tutorial, choose **US West \(Oregon\)**\.

1. On the navigation menu, choose **CLUSTERS**, then choose **Create cluster**\. The **Create cluster** page appears\.

1. Choose **dc2\.large** for the node type in the **Compute optimized** section\. Then choose **4** for the **Nodes**\.

1. In the **Cluster details** section, specify values for **Cluster identifier**, **Database port**, **Master user name**, and **Master user password**\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-launch-cluster.md
6838a3c05170-1
1. In the **Cluster permissions** section, choose an IAM role from **Available IAM roles**\. This role should be one that you previously created and that has access to Amazon S3\. Then choose **Add IAM role** to add it to the list of **Attached IAM roles** for the cluster\.

1. Choose **Create cluster**\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-launch-cluster.md
b155212d3eef-0
Follow the steps in [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/), but choose **Multi Node** for **Cluster Type** and set **Number of Compute Nodes** to **4**\. Follow the [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/) steps to connect to your cluster from a SQL client and test a connection\. You don't need to complete the remaining Getting Started steps to create tables, upload data, and try example queries\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-launch-cluster.md
d4c49857dad5-0
[Step 2: Download the data files](tutorial-loading-data-download-files.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-launch-cluster.md
2dda34c08c3c-0
Shows the definition of a given stored procedure, including its signature\. You can use the output of SHOW PROCEDURE to recreate the stored procedure\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SHOW_PROCEDURE.md
321792b250f7-0
```
SHOW PROCEDURE sp_name [( [ [ argname ] [ argmode ] argtype [, ...] ] )]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SHOW_PROCEDURE.md
68479b8248ef-0
*sp\_name* The name of the procedure to show\. *\[argname\] \[argmode\] argtype* Input argument types to identify the stored procedure\. Optionally, you can include the full argument data types, including OUT arguments\. This part is optional if the name of the stored procedure is unique \(that is, not overloaded\)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SHOW_PROCEDURE.md
af627295af12-0
The following example shows the definition of the procedure `test_sp2`\.

```
show procedure test_sp2(int, varchar);

                                         Stored Procedure Definition
------------------------------------------------------------------------------------------------------------
 CREATE OR REPLACE PROCEDURE public.test_sp2(f1 integer, INOUT f2 character varying, OUT character varying)
 LANGUAGE plpgsql
 AS $_$
 DECLARE
   out_var alias for $3;
   loop_var int;
 BEGIN
   IF f1 is null OR f2 is null THEN
     RAISE EXCEPTION 'input cannot be null';
   END IF;
   CREATE TEMP TABLE etl(a int, b varchar);
   FOR loop_var IN 1..f1 LOOP
     insert into etl values (loop_var, f2);
     f2 := f2 || '+' || f2;
   END LOOP;
   SELECT INTO out_var count(*) from etl;
 END;
 $_$
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SHOW_PROCEDURE.md
bf8ec0a875d8-0
TIMESTAMPTZ\_CMP compares two time stamp with time zone \(TIMESTAMPTZ\) values and returns an integer\. If the time stamps are identical, the function returns 0\. If the first time stamp is later, the function returns 1\. If the second time stamp is later, the function returns –1\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP.md
867ebdd8b194-0
```
TIMESTAMPTZ_CMP(timestamptz1, timestamptz2)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP.md
d9f993d7208e-0
*timestamptz1* A TIMESTAMPTZ column or an expression that implicitly converts to a time stamp with time zone\. *timestamptz2* A TIMESTAMPTZ column or an expression that implicitly converts to a time stamp with time zone\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP.md
0c6f519ec2e5-0
INTEGER
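These rules can be sketched with a short query \(the literals are hypothetical examples, not from the AWS documentation\)\. Because the first time stamp is earlier than the second, the function should return –1\.

```
SELECT TIMESTAMPTZ_CMP('2008-01-24 06:43:29+00', '2008-02-18 02:36:48+00');
```

```
timestamptz_cmp
-----------------
             -1
```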
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP.md
e360f9758b1d-0
You can do runtime conversions between compatible data types by using the CAST and CONVERT functions\. Certain data types require an explicit conversion to other data types using the CAST or CONVERT function\. Other data types can be converted implicitly, as part of another command, without using the CAST or CONVERT function\. See [Type compatibility and conversion](r_Type_conversion.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
9489963bd3b2-0
You can use two equivalent syntax forms to cast expressions from one data type to another:

```
CAST ( expression AS type )
expression :: type
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
18846c76c465-0
*expression* An expression that evaluates to one or more values, such as a column name or a literal\. Converting null values returns nulls\. The expression cannot contain blank or empty strings\. *type* One of the supported [Data types](c_Supported_data_types.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
88bb151288f3-0
CAST returns the data type specified by the *type* argument\.

**Note**
Amazon Redshift returns an error if you try to perform a problematic conversion, such as the following DECIMAL conversion that loses precision:

```
select 123.456::decimal(2,1);
```

or an INTEGER conversion that causes an overflow:

```
select 12345678::smallint;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
35fcfb7b982f-0
You can also use the CONVERT function to convert values from one data type to another:

```
CONVERT ( type, expression )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
6d30cf545713-0
*type* One of the supported [Data types](c_Supported_data_types.md)\. *expression* An expression that evaluates to one or more values, such as a column name or a literal\. Converting null values returns nulls\. The expression cannot contain blank or empty strings\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
3d61ea2da289-0
CONVERT returns the data type specified by the *type* argument\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
5da4bbe6a616-0
The following two queries are equivalent\. They both cast a decimal value to an integer:

```
select cast(pricepaid as integer)
from sales where salesid=100;

pricepaid
-----------
       162
(1 row)
```

```
select pricepaid::integer
from sales where salesid=100;

pricepaid
-----------
       162
(1 row)
```

The following query uses the CONVERT function to return the same result:

```
select convert(integer, pricepaid)
from sales where salesid=100;

pricepaid
-----------
       162
(1 row)
```

In this example, the values in a time stamp column are cast as dates:

```
select cast(saletime as date), salesid
from sales order by salesid limit 10;

 saletime  | salesid
-----------+---------
2008-02-18 |       1
2008-06-06 |       2
2008-06-06 |       3
2008-06-09 |       4
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
5da4bbe6a616-1
2008-08-31 |       5
2008-07-16 |       6
2008-06-26 |       7
2008-07-10 |       8
2008-07-22 |       9
2008-08-06 |      10
(10 rows)
```

In this example, the values in a date column are cast as time stamps:

```
select cast(caldate as timestamp), dateid
from date order by dateid limit 10;

      caldate       | dateid
--------------------+--------
2008-01-01 00:00:00 |   1827
2008-01-02 00:00:00 |   1828
2008-01-03 00:00:00 |   1829
2008-01-04 00:00:00 |   1830
2008-01-05 00:00:00 |   1831
2008-01-06 00:00:00 |   1832
2008-01-07 00:00:00 |   1833
2008-01-08 00:00:00 |   1834
2008-01-09 00:00:00 |   1835
2008-01-10 00:00:00 |   1836
(10 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
5da4bbe6a616-2
In this example, an integer is cast as a character string:

```
select cast(2008 as char(4));

bpchar
--------
2008
```

In this example, a DECIMAL\(6,3\) value is cast as a DECIMAL\(4,1\) value:

```
select cast(109.652 as decimal(4,1));

numeric
---------
109.7
```

In this example, the PRICEPAID column \(a DECIMAL\(8,2\) column\) in the SALES table is converted to a DECIMAL\(38,2\) column and the values are multiplied by 100000000000000000000\.

```
select salesid, pricepaid::decimal(38,2)*100000000000000000000
as value from sales where salesid<10 order by salesid;

 salesid |           value
---------+----------------------------
       1 | 72800000000000000000000.00
       2 |  7600000000000000000000.00
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
5da4bbe6a616-3
       3 | 35000000000000000000000.00
       4 | 17500000000000000000000.00
       5 | 15400000000000000000000.00
       6 | 39400000000000000000000.00
       7 | 78800000000000000000000.00
       8 | 19700000000000000000000.00
       9 | 59100000000000000000000.00
(9 rows)
```

**Note**
You can't perform a CAST or CONVERT operation on the `GEOMETRY` data type to change it to another data type\. However, you can provide a hexadecimal representation of a string literal in extended well\-known binary \(EWKB\) format as input to functions that accept a `GEOMETRY` argument\. For example, the following `ST_AsText` function expects a `GEOMETRY` data type\.

```
SELECT ST_AsText('01010000000000000000001C400000000000002040');
```

```
st_astext
------------
 POINT(7 8)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
5da4bbe6a616-4
You can also explicitly specify the `GEOMETRY` data type\.

```
SELECT ST_AsText('010100000000000000000014400000000000001840'::geometry);
```

```
st_astext
------------
 POINT(5 6)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CAST_function.md
da888a0a806e-0
Insert dates that have different formats and display the output:

```
create table datetable (start_date date, end_date date);
```

```
insert into datetable values ('2008-06-01','2008-12-31');
insert into datetable values ('Jun 1,2008','20081231');
```

```
select * from datetable order by 1;

start_date |  end_date
-----------+-----------
2008-06-01 | 2008-12-31
2008-06-01 | 2008-12-31
```

If you insert a time stamp value into a DATE column, the time portion is ignored and only the date is loaded\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_datetime_types.md
1017ba1af355-0
If you insert a date into a TIMESTAMP or TIMESTAMPTZ column, the time defaults to midnight\. For example, if you insert the literal `20081231`, the stored value is `2008-12-31 00:00:00`\.

To change the time zone for the current session, use the [SET](r_SET.md) command to set the [timezone](r_timezone_config.md) configuration parameter\.

Insert timestamps that have different formats and display the output:

```
create table tstamp(timeofday timestamp, timeofdaytz timestamptz);

insert into tstamp values('Jun 1,2008 09:59:59', 'Jun 1,2008 09:59:59 EST' );
insert into tstamp values('Dec 31,2008 18:20','Dec 31,2008 18:20');
insert into tstamp values('Jun 1,2008 09:59:59 EST', 'Jun 1,2008 09:59:59');

timeofday
---------------------
2008-06-01 09:59:59
2008-12-31 18:20:00
(2 rows)
```
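For example, a session time zone change with SET might look like the following \(the zone name here is an arbitrary illustration\):

```
SET timezone = 'America/New_York';
```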
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_datetime_types.md
686500d59bc3-0
The following list contains all of the valid time zone abbreviations that can be specified with the [CONVERT\_TIMEZONE function](CONVERT_TIMEZONE.md)\. For a current, complete list of time zone abbreviations, execute the following command\.

```
select pg_timezone_abbrevs();
```

```
ACSST ACST ACT ADT AESST AEST AFT AKDT AKST ALMST ALMT
AMST AMT ANAST ANAT ARST ART AST AWSST AWST AZOST AZOT
AZST AZT BDST BDT BNT BORT BOT BRA BRST BRT BST BTT
CADT CAST CCT CDT CEST CET CETDST CHADT CHAST CHUT CKT
CLST CLT COT CST CXT
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/time-zone-abbrevs.md
686500d59bc3-1
DAVT DDUT EASST EAST EAT EDT EEST EET EETDST EGST EGT
EST FET FJST FJT FKST FKT FNST FNT GALT GAMT GEST GET
GFT GILT GMT GYT HKT HST ICT IDT IOT IRKST IRKT IRT IST
JAYT JST KDT KGST KGT KOST KRAST KRAT KST LHDT LHST
LIGT LINT LKT MAGST MAGT MART MAWT MDT MEST MET METDST MEZ
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/time-zone-abbrevs.md
686500d59bc3-2
MHT MMT MPT MSD MSK MST MUST MUT MVT MYT NDT NFT NOVST
NOVT NPT NST NUT NZDT NZST NZT OMSST OMST PDT PET PETST
PETT PGT PHOT PHT PKST PKT PMDT PMST PONT PST PWT PYST
PYT RET SADT SAST SCT SGT TAHT TFT TJT TKT TMT TOT TRUT
TVT UCT ULAST ULAT UT UTC UYST UYT UZST UZT
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/time-zone-abbrevs.md
686500d59bc3-3
VET VLAST VLAT VOLT VUT WADT WAKT WAST WAT WDT WET
WETDST WFT WGST WGT YAKST YAKT YAPT YEKST YEKT Z ZULU
```
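As an illustrative sketch \(the abbreviation and time stamp are arbitrary examples, not from the AWS documentation\), an abbreviation from this list can be passed as the target time zone of CONVERT\_TIMEZONE, which here converts a UTC time stamp to Pacific Standard Time \(UTC–8\):

```
SELECT CONVERT_TIMEZONE('PST', '2008-08-21 07:23:54');
```

```
convert_timezone
---------------------
2008-08-20 23:23:54
```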
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/time-zone-abbrevs.md
1a5cd0463a01-0
**Topics**
+ [Data sources](copy-parameters-data-source.md)
+ [Authorization parameters](copy-parameters-authorization.md)
+ [Column mapping options](copy-parameters-column-mapping.md)
+ [Data format parameters](copy-parameters-data-format.md)
+ [Data load operations](copy-parameters-data-load.md)
+ [Alphabetical parameter list](r_COPY-alphabetical-parm-list.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY-parameters.md
8830060609f5-0
ST\_SetSRID returns a geometry that is the same as the input geometry, except that it is updated with the input value for the spatial reference system identifier \(SRID\)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_SetSRID-function.md
72d4285cd552-0
```
ST_SetSRID(geom, srid)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_SetSRID-function.md
2b09431f5af7-0
*geom* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. *srid* A value of data type `INTEGER` that represents an SRID\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_SetSRID-function.md
c45f8c745521-0
`GEOMETRY` The SRID value of the returned geometry is set to *srid*\. If *geom* or *srid* is null, then null is returned\. If *srid* is negative, then an error is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_SetSRID-function.md
46b2b53438d9-0
The following SQL sets the SRID value of a linestring\.

```
SELECT ST_AsEWKT(ST_SetSRID(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'),50));
```

```
st_asewkt
-------------
 SRID=50;LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_SetSRID-function.md
6d9f5c0f5193-0
Use the STV\_LOAD\_STATE table to find information about the current state of ongoing COPY statements\. The COPY command updates this table after every million records are loaded\. STV\_LOAD\_STATE is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_LOAD_STATE.md
b21938b2d043-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_LOAD_STATE.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_LOAD_STATE.md
b7d8483b273f-0
To view the progress of each slice for a COPY command, type the following query\. This example uses the PG\_LAST\_COPY\_ID\(\) function to retrieve information for the last COPY command\.

```
select slice, bytes_loaded, bytes_to_load, pct_complete
from stv_load_state
where query = pg_last_copy_id();

 slice | bytes_loaded | bytes_to_load | pct_complete
-------+--------------+---------------+--------------
     2 |            0 |             0 |            0
     3 |     12840898 |      39104640 |           32
(2 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_LOAD_STATE.md
1ca7bedae061-0
The following CREATE TABLE statement demonstrates the declaration of different numeric data types:

```
create table film (
  film_id integer,
  language_id smallint,
  original_language_id smallint,
  rental_duration smallint default 3,
  rental_rate numeric(4,2) default 4.99,
  length smallint,
  replacement_cost real default 25.00);
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_numeric_types201.md
74fbb2bd6b6a-0
The following example attempts to insert the value 33000 into a SMALLINT column\.

```
insert into film(language_id) values(33000);
```

The range for SMALLINT is \-32768 to \+32767, so Amazon Redshift returns an error\.

```
An error occurred when executing the SQL command:
insert into film(language_id) values(33000)

ERROR: smallint out of range [SQL State=22003]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_numeric_types201.md
f6beb61898d1-0
The following example inserts a decimal value into a SMALLINT column\.

```
insert into film(language_id) values(1.5);
```

This value is inserted, but rounded to the integer value 2\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_numeric_types201.md
b54447e00dac-0
The following example inserts a decimal value that has a higher scale than the column\.

```
insert into film(rental_rate) values(35.512);
```

In this case, the value `35.51` is inserted into the column\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_numeric_types201.md
af28f43793ca-0
In this case, the value `350.10` is out of range\. The number of digits to the left of the decimal point in a DECIMAL column is equal to the column's precision minus its scale \(4 minus 2 for the RENTAL\_RATE column\)\. In other words, the allowed range for a `DECIMAL(4,2)` column is `-99.99` through `99.99`\.

```
insert into film(rental_rate) values (350.10);

ERROR:  numeric field overflow
DETAIL:  The absolute value is greater than or equal to 10^2 for field with precision 4, scale 2.
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_numeric_types201.md
0892a0732c1f-0
The following example inserts variable\-precision values into a REAL column\.

```
insert into film(replacement_cost) values(1999.99);
insert into film(replacement_cost) values(19999.99);

select replacement_cost from film;

replacement_cost
------------------
20000
1999.99
...
```

The value `19999.99` is converted to `20000` to meet the 6\-digit precision requirement for the column\. The value `1999.99` is loaded as is\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_numeric_types201.md
99a50220a792-0
Returns information to track or troubleshoot a data load\. This view records the progress of each data file as it is loaded into a database table\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_COMMITS.md
1eedfdd5d228-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_LOAD_COMMITS.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_COMMITS.md
7f54545ada4d-0
The following example returns details for the last COPY operation\.

```
select query, trim(filename) as file, curtime as updated
from stl_load_commits
where query = pg_last_copy_id();

 query |               file               |          updated
-------+----------------------------------+----------------------------
 28554 | s3://dw-tickit/category_pipe.txt | 2013-11-01 17:14:52.648486
(1 row)
```

The following query contains entries for a fresh load of the tables in the TICKIT database:

```
select query, trim(filename), curtime
from stl_load_commits
where filename like '%tickit%' order by query;
```

```
 query |           btrim           |          curtime
-------+---------------------------+----------------------------
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_COMMITS.md
7f54545ada4d-1
 22475 | tickit/allusers_pipe.txt  | 2013-02-08 20:58:23.274186
 22478 | tickit/venue_pipe.txt     | 2013-02-08 20:58:25.070604
 22480 | tickit/category_pipe.txt  | 2013-02-08 20:58:27.333472
 22482 | tickit/date2008_pipe.txt  | 2013-02-08 20:58:28.608305
 22485 | tickit/allevents_pipe.txt | 2013-02-08 20:58:29.99489
 22487 | tickit/listings_pipe.txt  | 2013-02-08 20:58:37.632939
 22593 | tickit/allusers_pipe.txt  | 2013-02-08 21:04:08.400491
 22596 | tickit/venue_pipe.txt     | 2013-02-08 21:04:10.056055
 22598 | tickit/category_pipe.txt  | 2013-02-08 21:04:11.465049
 22600 | tickit/date2008_pipe.txt  | 2013-02-08 21:04:12.461502
 22603 | tickit/allevents_pipe.txt | 2013-02-08 21:04:14.785124
 22605 | tickit/listings_pipe.txt  | 2013-02-08 21:04:20.170594
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_COMMITS.md
7f54545ada4d-2
(12 rows)
```

The fact that a record is written to the log file for this system view does not mean that the load committed successfully as part of its containing transaction\. To verify load commits, query the STL\_UTILITYTEXT view and look for the COMMIT record that corresponds with a COPY transaction\. For example, this query joins STL\_LOAD\_COMMITS and STL\_QUERY based on a subquery against STL\_UTILITYTEXT:

```
select l.query,rtrim(l.filename),q.xid
from stl_load_commits l, stl_query q
where l.query=q.query
and exists
(select xid from stl_utilitytext where xid=q.xid and rtrim("text")='COMMIT');

 query |           rtrim           |  xid
-------+---------------------------+-------
 22600 | tickit/date2008_pipe.txt  | 68311
 22480 | tickit/category_pipe.txt  | 68066
  7508 | allusers_pipe.txt         | 23365
  7552 | category_pipe.txt         | 23415
  7576 | allevents_pipe.txt        | 23429
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_COMMITS.md
7f54545ada4d-3
  7516 | venue_pipe.txt            | 23390
  7604 | listings_pipe.txt         | 23445
 22596 | tickit/venue_pipe.txt     | 68309
 22605 | tickit/listings_pipe.txt  | 68316
 22593 | tickit/allusers_pipe.txt  | 68305
 22485 | tickit/allevents_pipe.txt | 68071
  7561 | allevents_pipe.txt        | 23429
  7541 | category_pipe.txt         | 23415
  7558 | date2008_pipe.txt         | 23428
 22478 | tickit/venue_pipe.txt     | 68065
   526 | date2008_pipe.txt         | 2572
  7466 | allusers_pipe.txt         | 23365
 22482 | tickit/date2008_pipe.txt  | 68067
 22598 | tickit/category_pipe.txt  | 68310
 22603 | tickit/allevents_pipe.txt | 68315
 22475 | tickit/allusers_pipe.txt  | 68061
   547 | date2008_pipe.txt         | 2572
 22487 | tickit/listings_pipe.txt  | 68072
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_COMMITS.md
7f54545ada4d-4
  7531 | venue_pipe.txt            | 23390
  7583 | listings_pipe.txt         | 23445
(25 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_COMMITS.md
1619764d2c73-0
You can query the system view SVL\_STORED\_PROC\_CALL to get information about stored procedure calls, including start time, end time, and whether a call is aborted\. Each stored procedure call receives a query ID\. SVL\_STORED\_PROC\_CALL is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STORED_PROC_CALL.md
e04b2a19da89-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_STORED_PROC_CALL.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STORED_PROC_CALL.md
7754bbf35494-0
The following query returns the elapsed time in descending order and the completion status for stored procedure calls in the past day\.

```
select query, datediff(seconds, starttime, endtime) as elapsed_time, aborted, trim(querytxt) as call
from svl_stored_proc_call
where starttime >= current_timestamp - interval '1 day'
order by 2 desc;

 query | elapsed_time | aborted |                   call
-------+--------------+---------+-------------------------------------------
  4166 |            7 |       0 | call search_batch_status(35,'succeeded');
  2433 |            3 |       0 | call test_batch (123456)
  1810 |            1 |       0 | call prod_benchmark (123456)
  1836 |            1 |       0 | call prod_testing (123456)
  1808 |            1 |       0 | call prod_portfolio ('N', 123456)
  1816 |            1 |       1 | call prod_portfolio ('Y', 123456)
(6 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STORED_PROC_CALL.md
8fc772dee2fc-0
**Topics**
+ [Take the loading data tutorial](c_best-practices-loading-take-loading-data-tutorial.md)
+ [Take the tuning table design tutorial](c_best-practices-loading-take-table-design-tutorial.md)
+ [Use a COPY command to load data](c_best-practices-use-copy.md)
+ [Use a single COPY command to load from multiple files](c_best-practices-single-copy-command.md)
+ [Split your load data into multiple files](c_best-practices-use-multiple-files.md)
+ [Compress your data files](c_best-practices-compress-data-files.md)
+ [Use a manifest file](best-practices-preventing-load-data-errors.md)
+ [Verify data files before and after a load](c_best-practices-verifying-data-files.md)
+ [Use a multi\-row insert](c_best-practices-multi-row-inserts.md)
+ [Use a bulk insert](c_best-practices-bulk-inserts.md)
+ [Load data in sort key order](c_best-practices-sort-key-order.md)
+ [Load data in sequential blocks](c_best-practices-load-data-in-sequential-blocks.md)
+ [Use time\-series tables](c_best-practices-time-series-tables.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_loading-data-best-practices.md
8fc772dee2fc-1
+ [Use a staging table to perform a merge \(upsert\)](c_best-practices-upsert.md) + [Schedule around maintenance windows](c_best-practices-avoid-maintenance.md) Loading very large datasets can take a long time and consume a lot of computing resources\. How your data is loaded can also affect query performance\. This section presents best practices for loading data efficiently using COPY commands, bulk inserts, and staging tables\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_loading-data-best-practices.md
85a64373a892-0
ST\_Equals returns true if the input geometries are geometrically equal\. Geometries are considered geometrically equal if they have equal point sets and their interiors have a nonempty intersection\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Equals-function.md
5801e77d636a-0
``` ST_Equals(geom1, geom2) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Equals-function.md
83b6e8c91c46-0
*geom1* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. *geom2* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. This value is compared with *geom1* to determine if it is equal to *geom1*\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Equals-function.md
ae5c6c080d50-0
`BOOLEAN` If *geom1* or *geom2* is null, then an error is returned\. If *geom1* and *geom2* don't have the same value for the spatial reference system identifier \(SRID\), then an error is returned\. If *geom1* or *geom2* is a geometry collection, then an error is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Equals-function.md
8100c537de1d-0
The following SQL checks if the two polygons are geometrically equal\. ``` SELECT ST_Equals(ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))'), ST_GeomFromText('POLYGON((-1 3,2 1,0 -3,-1 3))')); ``` ``` st_equals ----------- false ``` The following SQL checks if the two linestrings are geometrically equal\. ``` SELECT ST_Equals(ST_GeomFromText('LINESTRING(1 0,10 0)'), ST_GeomFromText('LINESTRING(1 0,5 0,10 0)')); ``` ``` st_equals ----------- true ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Equals-function.md
f8d33914121a-0
Increments a date or time stamp value by a specified interval\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEADD_function.md
d49d6e653c0e-0
``` DATEADD( datepart, interval, {date|timestamp} ) ``` This function returns a time stamp data type\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEADD_function.md
af6fb6fde996-0
*datepart* The date part \(year, month, or day, for example\) that the function operates on\. See [Dateparts for Date or Time Stamp functions](r_Dateparts_for_datetime_functions.md)\. *interval* An integer that specifies the interval \(number of days, for example\) to add to the target expression\. A negative integer subtracts the interval\. *date*\|*timestamp* A date or timestamp column or an expression that implicitly converts to a date or time stamp\. The date or time stamp expression must contain the specified date part\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEADD_function.md
943aec82eb13-0
TIMESTAMP
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEADD_function.md
0d5160c7d50f-0
Add 30 days to each date in November that exists in the DATE table: ``` select dateadd(day,30,caldate) as novplus30 from date where month='NOV' order by dateid; novplus30 --------------------- 2008-12-01 00:00:00 2008-12-02 00:00:00 2008-12-03 00:00:00 ... (30 rows) ``` Add 18 months to a literal date value: ``` select dateadd(month,18,'2008-02-28'); date_add --------------------- 2009-08-28 00:00:00 (1 row) ``` The default column name for a DATEADD function is DATE\_ADD\. The default time stamp for a date value is `00:00:00`\. Add 30 minutes to a date value that does not specify a time stamp: ``` select dateadd(m,30,'2008-02-28'); date_add ---------------------
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEADD_function.md
0d5160c7d50f-1
2008-02-28 00:30:00 (1 row) ``` You can name dateparts in full or abbreviate them; in this case, *m* stands for minutes, not months\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEADD_function.md
b0a4be39573e-0
The DATEADD\(month, \.\.\.\) and ADD\_MONTHS functions handle dates that fall at the ends of months differently\. + ADD\_MONTHS: If the date you are adding to is the last day of the month, the result is always the last day of the result month, regardless of the length of the month\. For example, April 30th \+ 1 month is May 31st: ``` select add_months('2008-04-30',1); add_months --------------------- 2008-05-31 00:00:00 (1 row) ``` + DATEADD: If there are fewer days in the date you are adding to than in the result month, the result will be the corresponding day of the result month, not the last day of that month\. For example, April 30th \+ 1 month is May 30th: ``` select dateadd(month,1,'2008-04-30'); date_add --------------------- 2008-05-30 00:00:00 (1 row) ``` The DATEADD function handles the leap year date 02\-29 differently when using dateadd\(month, 12,…\) or dateadd\(year, 1, …\)\. ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEADD_function.md
b0a4be39573e-1
select dateadd(month,12,'2016-02-29'); date_add --------------------- 2017-02-28 00:00:00 select dateadd(year, 1, '2016-02-29'); date_add --------------------- 2017-03-01 00:00:00 ```
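The month\-end and leap\-year rules above can be sketched outside the database\. The following Python sketch is not part of Amazon Redshift; the helper names are illustrative, and the `dateadd(year, 1, '2016-02-29')` case, which Redshift rolls forward to March 1, is deliberately not modeled\. It reproduces the other behaviors with only the standard library:

```python
import calendar
from datetime import date

def dateadd_month(d, n):
    """Sketch of DATEADD(month, n, d): keep the day number, clamping it
    to the length of the result month (April 30 + 1 month -> May 30)."""
    months = d.month - 1 + n
    year, month = d.year + months // 12, months % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def add_months(d, n):
    """Sketch of ADD_MONTHS(d, n): if d is the last day of its month,
    return the last day of the result month (April 30 + 1 month -> May 31)."""
    result = dateadd_month(d, n)
    if d.day == calendar.monthrange(d.year, d.month)[1]:
        last = calendar.monthrange(result.year, result.month)[1]
        return result.replace(day=last)
    return result

print(dateadd_month(date(2008, 4, 30), 1))   # 2008-05-30
print(add_months(date(2008, 4, 30), 1))      # 2008-05-31
print(dateadd_month(date(2016, 2, 29), 12))  # 2017-02-28
```

The only difference between the two helpers is the extra last\-day\-of\-month check in `add_months`, which mirrors the distinction the documentation draws\.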
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEADD_function.md
a3d9fa2609ff-0
The WHERE clause contains conditions that either join tables or apply predicates to columns in tables\. Tables can be inner\-joined by using appropriate syntax in either the WHERE clause or the FROM clause\. Outer join criteria must be specified in the FROM clause\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_clause.md
93b8efa0a5c4-0
``` [ WHERE condition ] ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_clause.md
cc6809fd7dcb-0
Any search condition with a Boolean result, such as a join condition or a predicate on a table column\. The following examples are valid join conditions: ``` sales.listid=listing.listid sales.listid<>listing.listid ``` The following examples are valid conditions on columns in tables: ``` catgroup like 'S%' venueseats between 20000 and 50000 eventname in('Jersey Boys','Spamalot') year=2008 length(catdesc)>25 date_part(month, caldate)=6 ``` Conditions can be simple or complex; for complex conditions, you can use parentheses to isolate logical units\. In the following example, the join condition is enclosed by parentheses\. ``` where (category.catid=event.catid) and category.catid in(6,7,8) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_clause.md
6e6b143c7451-0
You can use aliases in the WHERE clause to reference select list expressions\. You can't restrict the results of aggregate functions in the WHERE clause; use the HAVING clause for this purpose\. Columns that are restricted in the WHERE clause must derive from table references in the FROM clause\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_clause.md
0ec806c28dce-0
The following query uses a combination of different WHERE clause restrictions, including a join condition for the SALES and EVENT tables, a predicate on the EVENTNAME column, and two predicates on the STARTTIME column\. ``` select eventname, starttime, pricepaid/qtysold as costperticket, qtysold from sales, event where sales.eventid = event.eventid and eventname='Hannah Montana' and date_part(quarter, starttime) in(1,2) and date_part(year, starttime) = 2008 order by 3 desc, 4, 2, 1 limit 10; eventname | starttime | costperticket | qtysold ----------------+---------------------+-------------------+--------- Hannah Montana | 2008-06-07 14:00:00 | 1706.00000000 | 2 Hannah Montana | 2008-05-01 19:00:00 | 1658.00000000 | 2 Hannah Montana | 2008-06-07 14:00:00 | 1479.00000000 | 1 Hannah Montana | 2008-06-07 14:00:00 | 1479.00000000 | 3 Hannah Montana | 2008-06-07 14:00:00 | 1163.00000000 | 1
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_clause.md
0ec806c28dce-1
Hannah Montana | 2008-06-07 14:00:00 | 1163.00000000 | 2 Hannah Montana | 2008-06-07 14:00:00 | 1163.00000000 | 4 Hannah Montana | 2008-05-01 19:00:00 | 497.00000000 | 1 Hannah Montana | 2008-05-01 19:00:00 | 497.00000000 | 2 Hannah Montana | 2008-05-01 19:00:00 | 497.00000000 | 4 (10 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_clause.md
f2570e0d13c8-0
Returns the length of the specified string as the number of bytes\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_OCTET_LENGTH.md
7798b3df6602-0
``` OCTET_LENGTH(expression) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_OCTET_LENGTH.md
c834125b98b7-0
*expression* The input parameter is a CHAR or VARCHAR text string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_OCTET_LENGTH.md
2ba5c2122328-0
The OCTET\_LENGTH function returns an integer indicating the number of bytes in the input string\. The [LEN](r_LEN.md) function returns the actual number of characters in multi\-byte strings, not the number of bytes\. For example, to store three four\-byte Chinese characters, you need a VARCHAR\(12\) column\. The LEN function returns 3 for that same string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_OCTET_LENGTH.md
9e1709968298-0
Length calculations do not count trailing spaces for fixed\-length character strings but do count them for variable\-length strings\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_OCTET_LENGTH.md
448d9e1ffb01-0
The following example returns the number of bytes and the number of characters in the string `français`\. ``` select octet_length('français'), len('français'); octet_length | len --------------+----- 9 | 8 (1 row) ```
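The same byte\-versus\-character counts can be reproduced outside the database\. A minimal Python sketch, assuming the string is encoded as UTF\-8 \(the encoding Amazon Redshift uses for multibyte VARCHAR data\):

```python
s = 'français'

# OCTET_LENGTH counts bytes: 'ç' takes two bytes in UTF-8, so 9 in total.
print(len(s.encode('utf-8')))  # 9

# LEN counts characters, regardless of how many bytes each one takes.
print(len(s))  # 8
```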
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_OCTET_LENGTH.md
1188dcb536ac-0
Displays sort execution steps for queries, such as steps that use ORDER BY processing\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SORT.md
3bae14d90ad2-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_SORT.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SORT.md
88cb6e0b4392-0
The following example returns sort results for slice 0 and segment 1\. ``` select query, bytes, tbl, is_diskbased, workmem from stl_sort where slice=0 and segment=1; ``` ``` query | bytes | tbl | is_diskbased | workmem -------+---------+-----+--------------+----------- 567 | 3126968 | 241 | f | 383385600 604 | 5292 | 242 | f | 383385600 675 | 104776 | 251 | f | 383385600 525 | 3126968 | 251 | f | 383385600 585 | 5068 | 241 | f | 383385600 630 | 204808 | 266 | f | 383385600 704 | 0 | 242 | f | 0 669 | 4606416 | 241 | f | 383385600 696 | 104776 | 241 | f | 383385600 651 | 4606416 | 254 | f | 383385600 632 | 0 | 256 | f | 0 599 | 396 | 241 | f | 383385600 86397 | 0 | 242 | f | 0
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SORT.md
88cb6e0b4392-1
621 | 5292 | 241 | f | 383385600 86325 | 0 | 242 | f | 0 572 | 5068 | 242 | f | 383385600 645 | 204808 | 241 | f | 383385600 590 | 396 | 242 | f | 383385600 (18 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SORT.md
a31a1a3ab087-0
Stores information about user\-defined libraries\. PG\_LIBRARY is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_LIBRARY.md
007edaa886b3-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_PG_LIBRARY.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_LIBRARY.md
577c666f473c-0
The following example returns information for user\-installed libraries\. ``` select * from pg_library; name | language_oid | file_store_id | owner -----------+--------------+---------------+------ f_urlparse | 108254 | 2000 | 100 ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_LIBRARY.md
adfdecb59448-0
The data files that you use for queries in Amazon Redshift Spectrum are commonly the same types of files that you use for other applications\. For example, the same types of files are used with Amazon Athena, Amazon EMR, and Amazon QuickSight\. You can query the data in its original format directly from Amazon S3\. To do this, the data files must be in a format that Redshift Spectrum supports and be located in an Amazon S3 bucket that your cluster can access\. The Amazon S3 bucket with the data files and the Amazon Redshift cluster must be in the same AWS Region\. For information about supported AWS Regions, see [Amazon Redshift Spectrum Regions](c-using-spectrum.md#c-spectrum-regions)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-data-files.md
61ad941b0ae5-0
Redshift Spectrum supports the following structured and semistructured data formats\. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-data-files.html) In the preceding table, the headings indicate the following: + **Columnar** – Whether the file format physically stores data in a column\-oriented structure as opposed to a row\-oriented one\. + **Supports parallel reads** – Whether the file format supports reading individual blocks within the file\. Reading individual blocks enables the distributed processing of a file across multiple independent Redshift Spectrum requests instead of having to read the full file in a single request\. + **Split unit** – For file formats that can be read in parallel, the split unit is the smallest chunk of data that a single Redshift Spectrum request can process\. **Note** Timestamp values in text files must be in the format `yyyy-MM-dd HH:mm:ss.SSSSSS`, as the following timestamp value shows: `2017-05-01 11:30:59.000000`\. We recommend using a columnar storage file format, such as Apache Parquet\. With a columnar storage file format, you can minimize data transfer out of Amazon S3 by selecting only the columns that you need\.
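The required text\-file timestamp layout corresponds to a standard strftime pattern\. A minimal Python sketch \(illustrative only, not part of the documentation\) that emits a conforming value:

```python
from datetime import datetime

# %f always produces six fractional digits, matching the required
# yyyy-MM-dd HH:mm:ss.SSSSSS layout.
ts = datetime(2017, 5, 1, 11, 30, 59)
print(ts.strftime('%Y-%m-%d %H:%M:%S.%f'))  # 2017-05-01 11:30:59.000000
```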
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-data-files.md
34dabdcefa95-0
To reduce storage space, improve performance, and minimize costs, we strongly recommend that you compress your data files\. Redshift Spectrum recognizes file compression types based on the file extension\. Redshift Spectrum supports the following compression types and extensions\. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-data-files.html) You can apply compression at different levels\. Most commonly, you compress a whole file or compress individual blocks within a file\. For Redshift Spectrum to be able to read a file in parallel, the following must be true: + The file format supports parallel reads\. + The file\-level compression, if any, supports parallel reads\. It doesn't matter whether the individual split units within a file are compressed using a compression algorithm that can be read in parallel, because each split unit is processed by a single Redshift Spectrum request\. An example of this is Snappy\-compressed Parquet files\. Individual row groups within the Parquet file are compressed using Snappy, but the top\-level structure of the file remains uncompressed\. In this case, the file can be read in parallel because each Redshift Spectrum request can read and process individual row groups from Amazon S3\.
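Because the codec is inferred from the file extension, a loader can sanity\-check extensions before registering data\. The following Python sketch uses an illustrative extension\-to\-codec mapping; treat the mapping as an assumption and confirm the actual supported codecs and extensions against the table linked above:

```python
import os

# Illustrative mapping of common extensions to codecs; verify the
# supported set against the Redshift Spectrum documentation.
COMPRESSION_BY_EXTENSION = {
    '.gz': 'gzip',
    '.bz2': 'bzip2',
    '.snappy': 'snappy',
}

def detect_compression(filename):
    """Return the codec implied by the file extension, or None."""
    _, ext = os.path.splitext(filename)
    return COMPRESSION_BY_EXTENSION.get(ext.lower())

print(detect_compression('sales/2020/part-0001.csv.gz'))  # gzip
print(detect_compression('sales/2020/part-0001.csv'))     # None
```

`os.path.splitext` returns only the final extension, so `part-0001.csv.gz` resolves to `.gz`, matching how whole\-file compression is typically layered over the data format\.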
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-data-files.md
c5123539ed29-0
Redshift Spectrum transparently decrypts data files that are encrypted using the following encryption options: + Server\-side encryption \(SSE\-S3\) using an AES\-256 encryption key managed by Amazon S3\. + Server\-side encryption with keys managed by AWS Key Management Service \(SSE\-KMS\)\. Redshift Spectrum doesn't support Amazon S3 client\-side encryption\. For more information on server\-side encryption, see [Protecting Data Using Server\-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html) in the *Amazon Simple Storage Service Developer Guide*\. Amazon Redshift uses massively parallel processing \(MPP\) to achieve fast execution of complex queries operating on large amounts of data\. Redshift Spectrum extends the same principle to query external data, using multiple Redshift Spectrum instances as needed to scan files\. Place the files in a separate folder for each table\. You can optimize your data for parallel processing by doing the following: + If your file format or compression doesn't support reading in parallel, break large files into many smaller files\. We recommend using file sizes between 64 MB and 1 GB\. + Keep all the files about the same size\. If some files are much larger than others, Redshift Spectrum can't distribute the workload evenly\.
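The file\-sizing guidance above can be applied with a simple splitter\. A hedged Python sketch \(the chunk size and naming scheme are illustrative\) that breaks a large newline\-delimited file into roughly equal parts without splitting records:

```python
def split_file(path, target_bytes=128 * 1024 * 1024):
    """Split a newline-delimited file into parts of roughly target_bytes,
    keeping each record (line) intact so every part stays parseable.
    Returns the number of parts written."""
    part, written, out = 0, 0, None
    with open(path, 'rb') as src:
        for line in src:
            # Start a new part file once the current one reaches the target.
            if out is None or written >= target_bytes:
                if out is not None:
                    out.close()
                part += 1
                out = open(f"{path}.part{part:04d}", 'wb')
                written = 0
            out.write(line)
            written += len(line)
    if out is not None:
        out.close()
    return part
```

Writing all parts to the same folder, with a roughly equal `target_bytes` for each, follows the recommendation to keep files about the same size so the scan can be distributed evenly\.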
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-data-files.md