524f9b43117a-0
*expression*
The target column or expression that the function operates on.

DISTINCT | ALL
With the argument DISTINCT, the function eliminates all duplicate values from the specified expression before calculating the maximum. With the argument ALL, the function retains all duplicate values from the expression for calculating the maximum. ALL is the default.
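As a quick illustration, a minimal sketch assuming the TICKIT SALES table used elsewhere in this guide: DISTINCT and ALL produce the same maximum, since removing duplicates never changes the largest value.

```
-- DISTINCT and ALL return the same result for MAX;
-- deduplication only affects how many values are scanned.
select max(all pricepaid)      as max_all,
       max(distinct pricepaid) as max_distinct
from sales;
```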
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MAX.md
020682f09720-0
Accepts any data type except Boolean as input. Returns the same data type as *expression*. The Boolean equivalent of the MIN function is the [BOOL_AND function](r_BOOL_AND.md), and the Boolean equivalent of MAX is the [BOOL_OR function](r_BOOL_OR.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MAX.md
14f55699d901-0
Find the highest price paid from all sales:

```
select max(pricepaid) from sales;

max
----------
12624.00
(1 row)
```

Find the highest price paid per ticket from all sales:

```
select max(pricepaid/qtysold) as max_ticket_price
from sales;

max_ticket_price
-----------------
2500.00000000
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MAX.md
f92165f6918f-0
Many Amazon Redshift SQL language elements have different performance characteristics and use syntax and semantics that are quite different from the equivalent PostgreSQL implementation.

**Important**
Do not assume that the semantics of elements that Amazon Redshift and PostgreSQL have in common are identical. Make sure to consult the *Amazon Redshift Developer Guide* [SQL commands](c_SQL_commands.md) to understand the often subtle differences.

One example in particular is the [VACUUM](r_VACUUM_command.md) command, which is used to clean up and reorganize tables. VACUUM functions differently and uses a different set of parameters than the PostgreSQL version. See [Vacuuming tables](t_Reclaiming_storage_space202.md) for more information about using VACUUM in Amazon Redshift.

Often, database management and administration features and tools are different as well. For example, Amazon Redshift maintains a set of system tables and views that provide information about how the system is functioning. See [System tables and views](c_intro_system_tables.md) for more information.

The following list includes some examples of SQL features that are implemented differently in Amazon Redshift.
+ [CREATE TABLE](r_CREATE_TABLE_NEW.md)

  Amazon Redshift does not support tablespaces, table partitioning, inheritance, and certain constraints. The Amazon Redshift implementation of CREATE TABLE enables you to define the sort and distribution algorithms for tables to optimize parallel processing (a brief sketch follows this list). Amazon Redshift Spectrum supports table partitioning using the [CREATE EXTERNAL TABLE](r_CREATE_EXTERNAL_TABLE.md) command.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_redshift-sql-implementated-differently.md
f92165f6918f-1
+ [ALTER TABLE](r_ALTER_TABLE.md)

  Only a subset of ALTER COLUMN actions are supported. ADD COLUMN supports adding only one column in each ALTER TABLE statement.
+ [COPY](r_COPY.md)

  The Amazon Redshift COPY command is highly specialized to enable the loading of data from Amazon S3 buckets and Amazon DynamoDB tables and to facilitate automatic compression. See the [Loading data](t_Loading_data.md) section and the COPY command reference for details.
+ [INSERT](r_INSERT_30.md), [UPDATE](r_UPDATE.md), and [DELETE](r_DELETE.md)

  WITH is not supported.
+ [VACUUM](r_VACUUM_command.md)

  The parameters for VACUUM are entirely different. For example, the default VACUUM operation in PostgreSQL simply reclaims space and makes it available for reuse; however, the default VACUUM operation in Amazon Redshift is VACUUM FULL, which reclaims disk space and re-sorts all rows.
+ Trailing spaces in VARCHAR values are ignored when string values are compared. For more information, see [Significance of trailing blanks](r_Character_types.md#r_Character_types-significance-of-trailing-blanks).
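To make the CREATE TABLE difference concrete, here is a minimal sketch (the table and column names are hypothetical) showing the Redshift-specific distribution and sort clauses that have no PostgreSQL equivalent:

```
-- DISTKEY and SORTKEY are Amazon Redshift extensions;
-- PostgreSQL tablespace and inheritance clauses are not accepted.
create table region_sales (
    region_id  integer not null,
    sale_date  date    not null,
    amount     decimal(8,2)
)
distkey (region_id)    -- rows are distributed across slices by region_id
sortkey (sale_date);   -- rows are stored sorted by sale_date
```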
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_redshift-sql-implementated-differently.md
b630cd5bfea5-0
The following examples perform a merge to update the SALES table. The first example uses the simpler method of deleting from the target table and then inserting all of the rows from the staging table. The second example requires updating on select columns in the target table, so it includes an extra update step.

**Sample merge data source**

The examples in this section need a sample data source that includes both updates and inserts. For the examples, we will create a sample table named SALES_UPDATE that uses data from the SALES table. We'll populate the new table with random data that represents new sales activity for December. We will use the SALES_UPDATE sample table to create the staging table in the examples that follow.

```
-- Create a sample table as a copy of the SALES table
create table sales_update as
select * from sales;

-- Change every fifth row so we have updates
update sales_update
set qtysold = qtysold*2,
    pricepaid = pricepaid*0.8,
    commission = commission*1.1
where saletime > '2008-11-30'
and mod(sellerid, 5) = 0;

-- Add some new rows so we have insert examples
-- This example creates a duplicate of every fourth row
insert into sales_update
select (salesid + 172456) as salesid, listid, sellerid, buyerid, eventid,
dateid, qtysold, pricepaid, commission, getdate() as saletime
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/merge-examples.md
b630cd5bfea5-1
```
from sales_update
where saletime > '2008-11-30'
and mod(sellerid, 4) = 0;
```

**Example of a merge that replaces existing rows**

The following script uses the SALES_UPDATE table to perform a merge operation on the SALES table with new data for December sales activity. This example deletes rows in the SALES table that have updates so they can be replaced with the updated rows in the staging table. The staging table should contain only rows that will participate in the merge, so the CREATE TABLE statement includes a filter to exclude rows that have not changed.

```
-- Create a staging table and populate it with updated rows from SALES_UPDATE
create temp table stagesales as
select * from sales_update
where sales_update.saletime > '2008-11-30'
and sales_update.salesid = (select sales.salesid from sales
    where sales.salesid = sales_update.salesid
    and sales.listid = sales_update.listid
    and (sales_update.qtysold != sales.qtysold
    or sales_update.pricepaid != sales.pricepaid));

-- Start a new transaction
begin transaction;

-- Delete any rows from SALES that exist in STAGESALES, because they are updates
-- The join includes a redundant predicate to collocate on the distribution key
-- A filter on saletime enables a range-restricted scan on SALES
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/merge-examples.md
b630cd5bfea5-2
```
delete from sales
using stagesales
where sales.salesid = stagesales.salesid
and sales.listid = stagesales.listid
and sales.saletime > '2008-11-30';

-- Insert all the rows from the staging table into the target table
insert into sales
select * from stagesales;

-- End transaction and commit
end transaction;

-- Drop the staging table
drop table stagesales;
```

**Example of a merge that specifies a column list**

The following example performs a merge operation to update SALES with new data for December sales activity. We need sample data that includes both updates and inserts, along with rows that have not changed. For this example, we want to update the QTYSOLD and PRICEPAID columns but leave COMMISSION and SALETIME unchanged. The following script uses the SALES_UPDATE table to perform a merge operation on the SALES table.

```
-- Create a staging table and populate it with rows from SALES_UPDATE for Dec
create temp table stagesales as
select * from sales_update
where saletime > '2008-11-30';

-- Start a new transaction
begin transaction;

-- Update the target table using an inner join with the staging table
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/merge-examples.md
b630cd5bfea5-3
```
-- The join includes a redundant predicate to collocate on the distribution key
-- A filter on saletime enables a range-restricted scan on SALES
update sales
set qtysold = stagesales.qtysold,
pricepaid = stagesales.pricepaid
from stagesales
where sales.salesid = stagesales.salesid
and sales.listid = stagesales.listid
and stagesales.saletime > '2008-11-30'
and (sales.qtysold != stagesales.qtysold
or sales.pricepaid != stagesales.pricepaid);

-- Delete matching rows from the staging table
-- using an inner join with the target table
delete from stagesales
using sales
where sales.salesid = stagesales.salesid
and sales.listid = stagesales.listid;

-- Insert the remaining rows from the staging table into the target table
insert into sales
select * from stagesales;

-- End transaction and commit
end transaction;

-- Drop the staging table
drop table stagesales;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/merge-examples.md
2e7c805b9070-0
The following list provides links to each COPY command parameter description, sorted alphabetically.
+ [ACCEPTANYDATE](copy-parameters-data-conversion.md#copy-acceptanydate)
+ [ACCEPTINVCHARS](copy-parameters-data-conversion.md#copy-acceptinvchars)
+ [ACCESS_KEY_ID and SECRET_ACCESS_KEY](copy-parameters-authorization.md#copy-access-key-id)
+ [AVRO](copy-parameters-data-format.md#copy-avro)
+ [BLANKSASNULL](copy-parameters-data-conversion.md#copy-blanksasnull)
+ [BZIP2](copy-parameters-file-compression.md#copy-bzip2)
+ [COMPROWS](copy-parameters-data-load.md#copy-comprows)
+ [COMPUPDATE](copy-parameters-data-load.md#copy-compupdate)
+ [CREDENTIALS](copy-parameters-authorization.md#copy-credentials)
+ [CSV](copy-parameters-data-format.md#copy-csv)
+ [DATEFORMAT](copy-parameters-data-conversion.md#copy-dateformat)
+ [DELIMITER](copy-parameters-data-format.md#copy-delimiter)
+ [EMPTYASNULL](copy-parameters-data-conversion.md#copy-emptyasnull)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY-alphabetical-parm-list.md
2e7c805b9070-1
+ [ENCODING](copy-parameters-data-conversion.md#copy-encoding)
+ [ENCRYPTED](copy-parameters-data-source-s3.md#copy-encrypted)
+ [ESCAPE](copy-parameters-data-conversion.md#copy-escape)
+ [EXPLICIT_IDS](copy-parameters-data-conversion.md#copy-explicit-ids)
+ [FILLRECORD](copy-parameters-data-conversion.md#copy-fillrecord)
+ [FIXEDWIDTH](copy-parameters-data-format.md#copy-fixedwidth)
+ [FORMAT](copy-parameters-data-format.md#copy-format)
+ [FROM](copy-parameters-data-source-s3.md#copy-parameters-from)
+ [GZIP](copy-parameters-file-compression.md#copy-gzip)
+ [IAM_ROLE](copy-parameters-authorization.md#copy-iam-role)
+ [IGNOREBLANKLINES](copy-parameters-data-conversion.md#copy-ignoreblanklines)
+ [IGNOREHEADER](copy-parameters-data-conversion.md#copy-ignoreheader)
+ [JSON](copy-parameters-data-format.md#copy-json)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY-alphabetical-parm-list.md
2e7c805b9070-2
+ [LZOP](copy-parameters-file-compression.md#copy-lzop)
+ [MANIFEST](copy-parameters-data-source-s3.md#copy-manifest)
+ [MASTER_SYMMETRIC_KEY](copy-parameters-data-source-s3.md#copy-master-symmetric-key)
+ [MAXERROR](copy-parameters-data-load.md#copy-maxerror)
+ [NOLOAD](copy-parameters-data-load.md#copy-noload)
+ [NULL AS](copy-parameters-data-conversion.md#copy-null-as)
+ [READRATIO](copy-parameters-data-source-dynamodb.md#copy-readratio)
+ [REGION](copy-parameters-data-source-s3.md#copy-region)
+ [REMOVEQUOTES](copy-parameters-data-conversion.md#copy-removequotes)
+ [ROUNDEC](copy-parameters-data-conversion.md#copy-roundec)
+ [SSH](copy-parameters-data-source-ssh.md#copy-ssh)
+ [STATUPDATE](copy-parameters-data-load.md#copy-statupdate)
+ [TIMEFORMAT](copy-parameters-data-conversion.md#copy-timeformat)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY-alphabetical-parm-list.md
2e7c805b9070-3
+ [SESSION_TOKEN](copy-parameters-authorization.md#copy-token)
+ [TRIMBLANKS](copy-parameters-data-conversion.md#copy-trimblanks)
+ [TRUNCATECOLUMNS](copy-parameters-data-conversion.md#copy-truncatecolumns)
+ [ZSTD](copy-parameters-file-compression.md#copy-zstd)
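To show how several of these parameters combine in practice, here is a minimal sketch; the table name, bucket path, and role ARN are placeholders, not values from this guide. It loads a gzipped CSV file, skipping its header row:

```
-- CSV, IGNOREHEADER, GZIP, REGION, and IAM_ROLE are all parameters
-- linked in the list above; the S3 path and role are hypothetical.
copy mytable
from 's3://mybucket/data/load.csv.gz'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
csv
ignoreheader 1
gzip
region 'us-west-2';
```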
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY-alphabetical-parm-list.md
a3bd6f56c6a9-0
The BIT_AND function runs bit-wise AND operations on all of the values in a single integer column or expression. This function aggregates each bit of each binary value that corresponds to each integer value in the expression.

The BIT_AND function returns a result of `0` if none of the bits is set to 1 across all of the values. If one or more bits is set to 1 across all values, the function returns an integer value. This integer is the number that corresponds to the binary value for those bits.

For example, a table contains four integer values in a column: 3, 7, 10, and 22. These integers are represented in binary form as follows:

| Integer value | Binary value |
| --- | --- |
| 3 | 00000011 |
| 7 | 00000111 |
| 10 | 00001010 |
| 22 | 00010110 |

A BIT_AND operation on this dataset finds that all bits are set to `1` in the second-to-last position only. The result is a binary value of `00000010`, which represents the integer value `2`. Therefore, the BIT_AND function returns `2`.
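The arithmetic above can be reproduced directly; this is a minimal sketch using a hypothetical one-column table, not one from the TICKIT sample database:

```
-- 3 & 7 & 10 & 22 = binary 00000010 = 2
create temp table bits(v int);
insert into bits values (3), (7), (10), (22);

select bit_and(v) from bits;

 bit_and
---------
       2
(1 row)
```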
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BIT_AND.md
30b826ecd163-0
```
BIT_AND ( [DISTINCT | ALL] expression )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BIT_AND.md
70d06301535d-0
*expression*
The target column or expression that the function operates on. This expression must have an INT, INT2, or INT8 data type. The function returns an equivalent INT, INT2, or INT8 data type.

DISTINCT | ALL
With the argument DISTINCT, the function eliminates all duplicate values for the specified expression before calculating the result. With the argument ALL, the function retains all duplicate values. ALL is the default. For more information, see [DISTINCT support for bit-wise aggregations](c_bitwise_aggregate_functions.md#distinct-support-for-bit-wise-aggregations).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BIT_AND.md
4a8b11d60fb6-0
Given that meaningful business information is stored in integer columns, you can use bit-wise functions to extract and aggregate that information. The following query applies the BIT_AND function to the LIKES column in a table called USERLIKES and groups the results by the CITY column.

```
select city, bit_and(likes) from userlikes
group by city
order by city;

city          | bit_and
--------------+---------
Los Angeles   |       0
Sacramento    |       0
San Francisco |       0
San Jose      |      64
Santa Barbara |     192
(5 rows)
```

You can interpret these results as follows:
+ The integer value `192` for Santa Barbara translates to the binary value `11000000`. In other words, all users in this city like sports and theatre, but not all users like any other type of event.
+ The integer `64` translates to `01000000`. So, for users in San Jose, the only type of event that they all like is theatre.
+ The values of `0` for the other three cities indicate that no "likes" are shared by all users in those cities.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BIT_AND.md
ca70931cbeaa-0
The following table identifies the datepart and timepart names and abbreviations that are accepted as arguments to the following functions:
+ DATEADD
+ DATEDIFF
+ DATE_PART
+ DATE_TRUNC
+ EXTRACT

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_Dateparts_for_datetime_functions.html)
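As a quick orientation before the notes below, a minimal sketch (assuming the full datepart table at the link above): the same datepart names plug into each of these functions.

```
-- 'month' and 'week' are standard datepart names accepted by these functions
select dateadd(month, 18, '2008-02-28');                        -- adds 18 months to a date
select datediff(week, '2009-01-01', '2009-12-31') as numweeks;  -- weeks between two dates
```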
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Dateparts_for_datetime_functions.md
57cb2f329cd0-0
Minor differences in query results occur when different date functions specify seconds, milliseconds, or microseconds as dateparts:
+ The EXTRACT function returns integers for the specified datepart only, ignoring higher- and lower-level dateparts. If the specified datepart is seconds, milliseconds and microseconds are not included in the result. If the specified datepart is milliseconds, seconds and microseconds are not included. If the specified datepart is microseconds, seconds and milliseconds are not included.
+ The DATE_PART function returns the complete seconds portion of the time stamp, regardless of the specified datepart, returning either a decimal value or an integer as required.

For example, compare the results of the following queries:

```
create table seconds(micro timestamp);

insert into seconds values('2009-09-21 11:10:03.189717');

select extract(sec from micro) from seconds;

date_part
-----------
3
(1 row)

select date_part(sec, micro) from seconds;

pgdate_part
-------------
3.189717
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Dateparts_for_datetime_functions.md
4deb981f024e-0
CENTURY or CENTURIES
Amazon Redshift interprets a CENTURY to start with year `###1` and end with year `###0`:

```
select extract (century from timestamp '2000-12-16 12:21:13');
date_part
-----------
20
(1 row)

select extract (century from timestamp '2001-12-16 12:21:13');
date_part
-----------
21
(1 row)
```

EPOCH
The Amazon Redshift implementation of EPOCH is relative to 1970-01-01 00:00:00.000000 independent of the time zone where the cluster resides. You might need to offset the results by the difference in hours depending on the time zone where the cluster is located.

The following example demonstrates the following:

1. Creates a table called EVENT_EXAMPLE based on the EVENT table. This CREATE AS command uses the DATE_PART function to create a date column (called PGDATE_PART by default) to store the epoch value for each event.
1. Selects the column and data type of EVENT_EXAMPLE from PG_TABLE_DEF.
1. Selects EVENTNAME, STARTTIME, and PGDATE_PART from the EVENT_EXAMPLE table to view the different date and time formats.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Dateparts_for_datetime_functions.md
4deb981f024e-1
1. Selects EVENTNAME and STARTTIME from EVENT_EXAMPLE as is. Converts epoch values in PGDATE_PART using a 1 second interval to a timestamp without time zone, and returns the results in a column called CONVERTED_TIMESTAMP.

```
create table event_example
as select eventname, starttime, date_part(epoch, starttime) from event;

select "column", type from pg_table_def where tablename='event_example';

column      |            type
------------+-----------------------------
eventname   | character varying(200)
starttime   | timestamp without time zone
pgdate_part | double precision
(3 rows)
```

```
select eventname, starttime, pgdate_part from event_example;

eventname            |      starttime      | pgdate_part
---------------------+---------------------+-------------
Mamma Mia!           | 2008-01-01 20:00:00 |  1199217600
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Dateparts_for_datetime_functions.md
4deb981f024e-2
```
Spring Awakening     | 2008-01-01 15:00:00 |  1199199600
Nas                  | 2008-01-01 14:30:00 |  1199197800
Hannah Montana       | 2008-01-01 19:30:00 |  1199215800
K.D. Lang            | 2008-01-01 15:00:00 |  1199199600
Spamalot             | 2008-01-02 20:00:00 |  1199304000
Macbeth              | 2008-01-02 15:00:00 |  1199286000
The Cherry Orchard   | 2008-01-02 14:30:00 |  1199284200
Macbeth              | 2008-01-02 19:30:00 |  1199302200
Demi Lovato          | 2008-01-02 19:30:00 |  1199302200

select eventname, starttime, timestamp with time zone 'epoch'
+ pgdate_part * interval '1 second' AS converted_timestamp
from event_example;

eventname            |      starttime      | converted_timestamp
---------------------+---------------------+---------------------
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Dateparts_for_datetime_functions.md
4deb981f024e-3
```
Mamma Mia!           | 2008-01-01 20:00:00 | 2008-01-01 20:00:00
Spring Awakening     | 2008-01-01 15:00:00 | 2008-01-01 15:00:00
Nas                  | 2008-01-01 14:30:00 | 2008-01-01 14:30:00
Hannah Montana       | 2008-01-01 19:30:00 | 2008-01-01 19:30:00
K.D. Lang            | 2008-01-01 15:00:00 | 2008-01-01 15:00:00
Spamalot             | 2008-01-02 20:00:00 | 2008-01-02 20:00:00
Macbeth              | 2008-01-02 15:00:00 | 2008-01-02 15:00:00
The Cherry Orchard   | 2008-01-02 14:30:00 | 2008-01-02 14:30:00
Macbeth              | 2008-01-02 19:30:00 | 2008-01-02 19:30:00
Demi Lovato          | 2008-01-02 19:30:00 | 2008-01-02 19:30:00
...
```

DECADE or DECADES
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Dateparts_for_datetime_functions.md
4deb981f024e-4
Amazon Redshift interprets the DECADE or DECADES DATEPART based on the common calendar. For example, because the common calendar starts from the year 1, the first decade (decade 1) is 0001-01-01 through 0009-12-31, and the second decade (decade 2) is 0010-01-01 through 0019-12-31. For example, decade 201 spans from 2000-01-01 to 2009-12-31:

```
select extract(decade from timestamp '1999-02-16 20:38:40');
date_part
-----------
200
(1 row)

select extract(decade from timestamp '2000-02-16 20:38:40');
date_part
-----------
201
(1 row)

select extract(decade from timestamp '2010-02-16 20:38:40');
date_part
-----------
202
(1 row)
```

MIL or MILS
Amazon Redshift interprets a MIL to start with the first day of year `#001` and end with the last day of year `#000`:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Dateparts_for_datetime_functions.md
4deb981f024e-5
```
select extract (mil from timestamp '2000-12-16 12:21:13');
date_part
-----------
2
(1 row)

select extract (mil from timestamp '2001-12-16 12:21:13');
date_part
-----------
3
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Dateparts_for_datetime_functions.md
e69aa449c919-0
Some Amazon Redshift features don't support access to federated data. You can find related limitations and considerations following.

The following are considerations for transactions when working with federated queries:
+ If a query consists of federated tables, the leader node starts a READ ONLY REPEATABLE READ transaction on the remote database. This transaction remains for the duration of the Amazon Redshift transaction.
+ The leader node creates a snapshot of the remote database by calling `pg_export_snapshot()` and makes a read lock on the affected tables.
+ A compute node starts a transaction and uses the snapshot created at the leader node to issue queries to the remote database.

The following are limitations and considerations when using federated queries with Amazon Redshift:
+ Federated queries support read access to external data sources. You can't write or create database objects in the external data source.
+ In some cases, you might access an Amazon RDS or Aurora database in a different AWS Region than Amazon Redshift. In these cases, you typically incur network latency and billing charges for transferring data across AWS Regions. We recommend using an Aurora global database with a local endpoint in the same AWS Region as your Amazon Redshift cluster. Aurora global databases use dedicated infrastructure for storage-based replication across any two AWS Regions with typical latency of less than 1 second.
+ Consider the cost of accessing Amazon RDS or Aurora. For example, when using this feature to access Aurora, Aurora charges are based on IOPS.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/federated-limitations.md
e69aa449c919-1
+ Federated queries don't enable access to Amazon Redshift from RDS PostgreSQL or Aurora PostgreSQL.
+ Federated queries currently don't support access through materialized views.
+ Federated queries are only available in AWS Regions where both Amazon Redshift and Amazon RDS or Aurora are available.
+ Federated queries currently don't support `ALTER SCHEMA`. To change a schema, use `DROP` and then `CREATE EXTERNAL SCHEMA` (see the sketch after this list).
+ Federated queries don't work with concurrency scaling.

An Amazon Redshift external schema references a database in an external RDS PostgreSQL or Aurora PostgreSQL. When it does, these limitations apply:
+ When creating an external schema referencing Aurora, the Aurora PostgreSQL database must be at version 9.6 or later.
+ When creating an external schema referencing Amazon RDS, the Amazon RDS PostgreSQL database must be at version 9.6 or later.
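To make the `ALTER SCHEMA` workaround concrete, here is a minimal sketch of dropping and re-creating a federated external schema; the schema name, database, endpoint, and ARNs are placeholders:

```
-- Hypothetical names; replace the endpoint, role, and secret with your own.
drop schema apg_schema;

create external schema apg_schema
from postgres
database 'federated_db' schema 'public'
uri 'my-cluster.cluster-abc123.us-west-2.rds.amazonaws.com' port 5432
iam_role 'arn:aws:iam::123456789012:role/MyFederatedRole'
secret_arn 'arn:aws:secretsmanager:us-west-2:123456789012:secret:my-rds-secret';
```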
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/federated-limitations.md
b9c2d2ad6865-0
ST_Area returns the Cartesian area of an input geometry. The area units are the same as the units in which the coordinates of the input geometry are expressed. For points, linestrings, multipoints, and multilinestrings, the function returns 0. For geometry collections, it returns the sum of the areas of the geometries in the collection.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Area-function.md
22ed09e1d68d-0
```
ST_Area(geom)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Area-function.md
4e08a5e82776-0
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Area-function.md
60c1150b0829-0
`DOUBLE PRECISION`

If *geom* is null, then null is returned.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Area-function.md
77fca7904e67-0
The following SQL returns the Cartesian area of a multipolygon.

```
SELECT ST_Area(ST_GeomFromText('MULTIPOLYGON(((0 0,10 0,0 10,0 0)),((10 0,20 0,20 10,10 0)))'));
```

```
st_area
---------
100
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Area-function.md
5b13c192f4db-0
Aborts the current transaction and discards all updates made by that transaction. This command performs the same function as the [ABORT](r_ABORT.md) command.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ROLLBACK.md
4499132402b5-0
```
ROLLBACK [ WORK | TRANSACTION ]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ROLLBACK.md
f05675e8388b-0
WORK
Optional keyword. This keyword isn't supported within a stored procedure.

TRANSACTION
Optional keyword. WORK and TRANSACTION are synonyms. Neither is supported within a stored procedure.

For information about using ROLLBACK within a stored procedure, see [Managing transactions](stored-procedure-transaction-management.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ROLLBACK.md
df8dd946fcf6-0
The following example creates a table then starts a transaction where data is inserted into the table. The ROLLBACK command then rolls back the data insertion to leave the table empty.

The following command creates an example table called MOVIE_GROSS:

```
create table movie_gross( name varchar(30), gross bigint );
```

The next set of commands starts a transaction that inserts two data rows into the table:

```
begin;

insert into movie_gross values ( 'Raiders of the Lost Ark', 23400000);

insert into movie_gross values ( 'Star Wars', 10000000 );
```

Next, the following command selects the data from the table to show that it was successfully inserted:

```
select * from movie_gross;
```

The command output shows that both rows successfully inserted:

```
name                    |  gross
------------------------+----------
Raiders of the Lost Ark | 23400000
Star Wars               | 10000000
(2 rows)
```

This command now rolls back the data changes to where the transaction began:

```
rollback;
```

Selecting data from the table now shows an empty table:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ROLLBACK.md
df8dd946fcf6-1
```
select * from movie_gross;

name | gross
------+-------
(0 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ROLLBACK.md
d828b617c3f4-0
ST_Point returns a point geometry from the input coordinate values.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Point-function.md
043afcef70b7-0
```
ST_Point(x, y)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Point-function.md
8cfcf764cc36-0
*x*
A value of data type `DOUBLE PRECISION` that represents a first coordinate.

*y*
A value of data type `DOUBLE PRECISION` that represents a second coordinate.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Point-function.md
36a41c0a36cf-0
`GEOMETRY` of subtype `POINT`.

The spatial reference system identifier (SRID) value of the returned geometry is set to 0.

If *x* or *y* is null, then null is returned.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Point-function.md
cfdfff1101f5-0
The following SQL constructs a point geometry from the input coordinates.

```
SELECT ST_AsText(ST_Point(5.0, 7.0));
```

```
st_astext
-------------
POINT(5 7)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Point-function.md
bbabdf3e7736-0
ST_GeometryType returns the subtype of an input geometry as a string.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeometryType-function.md
07530259968a-0
```
ST_GeometryType(geom)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeometryType-function.md
f06b8813afb5-0
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeometryType-function.md
0eb44bf8d7ba-0
`VARCHAR` representing the subtype of *geom*.

If *geom* is null, then null is returned.

The values returned are as follows.

| Returned string value | Geometry subtype |
| --- | --- |
| `ST_Point` | Returned if *geom* is a `POINT` subtype |
| `ST_LineString` | Returned if *geom* is a `LINESTRING` subtype |
| `ST_Polygon` | Returned if *geom* is a `POLYGON` subtype |
| `ST_MultiPoint` | Returned if *geom* is a `MULTIPOINT` subtype |
| `ST_MultiLineString` | Returned if *geom* is a `MULTILINESTRING` subtype |
| `ST_MultiPolygon` | Returned if *geom* is a `MULTIPOLYGON` subtype |
| `ST_GeometryCollection` | Returned if *geom* is a `GEOMETRYCOLLECTION` subtype |
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeometryType-function.md
ab595733759a-0
The following SQL returns the subtype of the input linestring geometry.

```
SELECT ST_GeometryType(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'));
```

```
st_geometrytype
-----------------
ST_LineString
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeometryType-function.md
25dfd53ed95d-0
ST_MakePoint returns a point geometry whose coordinate values are the input values.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakePoint-function.md
289cfe41212c-0
```
ST_MakePoint(x, y)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakePoint-function.md
41bfe407d028-0
*x*
A value of data type `DOUBLE PRECISION` representing the first coordinate.

*y*
A value of data type `DOUBLE PRECISION` representing the second coordinate.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakePoint-function.md
0f935efcf156-0
`GEOMETRY` of subtype `POINT`.

The spatial reference system identifier (SRID) value of the returned geometry is set to 0.

If *x* or *y* is null, then null is returned.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakePoint-function.md
daca6f74bf08-0
The following SQL returns a `GEOMETRY` type of subtype `POINT` with the provided coordinates.

```
SELECT ST_AsText(ST_MakePoint(1,3));
```

```
st_astext
-----------
POINT(1 3)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakePoint-function.md
88cb778d43ca-0
The STDDEV_SAMP and STDDEV_POP window functions return the sample and population standard deviation of a set of numeric values (integer, decimal, or floating-point). See also [STDDEV_SAMP and STDDEV_POP functions](r_STDDEV_functions.md).

STDDEV_SAMP and STDDEV are synonyms for the same function.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_STDDEV.md
4c3571288797-0
```
STDDEV_SAMP | STDDEV | STDDEV_POP
( [ ALL ] expression ) OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list
frame_clause ]
)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_STDDEV.md
6c3acf270260-0
*expression*
The target column or expression that the function operates on.

ALL
With the argument ALL, the function retains all duplicate values from the expression. ALL is the default. DISTINCT is not supported.

OVER
Specifies the window clauses for the aggregation functions. The OVER clause distinguishes window aggregation functions from normal set aggregation functions.

PARTITION BY *expr_list*
Defines the window for the function in terms of one or more expressions.

ORDER BY *order_list*
Sorts the rows within each partition. If no PARTITION BY is specified, ORDER BY uses the entire table.

*frame_clause*
If an ORDER BY clause is used for an aggregate function, an explicit frame clause is required. The frame clause refines the set of rows in a function's window, including or excluding sets of rows within the ordered result. The frame clause consists of the ROWS keyword and associated specifiers. See [Window function syntax summary](r_Window_function_synopsis.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_STDDEV.md
d9aee1130d2b-0
The argument types supported by the STDDEV functions are SMALLINT, INTEGER, BIGINT, NUMERIC, DECIMAL, REAL, and DOUBLE PRECISION.

Regardless of the data type of the expression, the return type of a STDDEV function is a double precision number.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_STDDEV.md
3f4e8d011695-0
The following example shows how to use STDDEV_POP and VAR_POP functions as window functions. The query computes the population variance and population standard deviation for PRICEPAID values in the SALES table.

```
select salesid, dateid, pricepaid,
round(stddev_pop(pricepaid) over
(order by dateid, salesid rows unbounded preceding)) as stddevpop,
round(var_pop(pricepaid) over
(order by dateid, salesid rows unbounded preceding)) as varpop
from sales
order by 2,1;

salesid | dateid | pricepaid | stddevpop | varpop
--------+--------+-----------+-----------+---------
  33095 |   1827 |    234.00 |         0 |       0
  65082 |   1827 |    472.00 |       119 |   14161
  88268 |   1827 |    836.00 |       248 |   61283
  97197 |   1827 |    708.00 |       230 |   53019
 110328 |   1827 |    347.00 |       223 |   49845
 110917 |   1827 |    337.00 |       215 |   46159
 150314 |   1827 |    688.00 |       211 |   44414
 157751 |   1827 |   1730.00 |       447 |  199679
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_STDDEV.md
3f4e8d011695-1
```
 165890 |   1827 |   4192.00 |      1185 | 1403323
...
```

The sample standard deviation and variance functions can be used in the same way.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_STDDEV.md
2da79dd14497-0
Returns the query ID of the most recently executed query in the current session. If no queries have been executed in the current session, PG_LAST_QUERY_ID returns -1.

PG_LAST_QUERY_ID does not return the query ID for queries that execute exclusively on the leader node. For more information, see [Leader node–only functions](c_SQL_functions_leader_node_only.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_QUERY_ID.md
2ce9c0f56916-0
```
pg_last_query_id()
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_QUERY_ID.md
8201cdaf8e4b-0
Returns an integer.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_QUERY_ID.md
4daae864a5b7-0
The following query returns the ID of the latest query executed in the current session.

```
select pg_last_query_id();

pg_last_query_id
----------------
5437
(1 row)
```

The following query returns the query ID and text of the most recently executed query in the current session.

```
select query, trim(querytxt) as sqlquery
from stl_query
where query = pg_last_query_id();

query | sqlquery
------+--------------------------------------------------
 5437 | select name, loadtime from stl_file_scan where loadtime > 1000000;
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_QUERY_ID.md
24685d4a4e48-0
After your external tables are created, you can query them using the same SELECT statements that you use to query other Amazon Redshift tables. These SELECT statement queries include joining tables, aggregating data, and filtering on predicates.<a name="spectrum-get-started-query-s3-data"></a>

**To query your data in Amazon S3**

1. Get the number of rows in the SPECTRUM.SALES table.

   ```
   select count(*) from spectrum.sales;
   ```

   ```
   count
   ------
   172462
   ```

1. Keep your larger fact tables in Amazon S3 and your smaller dimension tables in Amazon Redshift, as a best practice. If you loaded the sample data in [Getting Started with Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html), you have a table named EVENT in your database. If not, create the EVENT table by using the following command.

   ```
   create table event(
   eventid integer not null distkey,
   venueid smallint not null,
   catid smallint not null,
   dateid smallint not null sortkey,
   eventname varchar(200),
   starttime timestamp);
   ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-query-s3-data.md
24685d4a4e48-1
1. Load the EVENT table by replacing the IAM role ARN in the following COPY command with the role ARN you created in [Step 1. Create an IAM role for Amazon Redshift](c-getting-started-using-spectrum-create-role.md).

   ```
   copy event from 's3://awssampledbuswest2/tickit/allevents_pipe.txt'
   iam_role 'arn:aws:iam::123456789012:role/mySpectrumRole'
   delimiter '|' timeformat 'YYYY-MM-DD HH:MI:SS' region 'us-west-2';
   ```

   The following example joins the external table SPECTRUM.SALES with the local table EVENT to find the total sales for the top 10 events.

   ```
   select top 10 spectrum.sales.eventid, sum(spectrum.sales.pricepaid)
   from spectrum.sales, event
   where spectrum.sales.eventid = event.eventid
   and spectrum.sales.pricepaid > 30
   group by spectrum.sales.eventid
   order by 2 desc;
   ```

   ```
   eventid |   sum
   --------+---------
       289 | 51846.00
   ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-query-s3-data.md
24685d4a4e48-2
   ```
      7895 | 51049.00
      1602 | 50301.00
       851 | 49956.00
      7315 | 49823.00
      6471 | 47997.00
      2118 | 47863.00
       984 | 46780.00
      7851 | 46661.00
      5638 | 46280.00
   ```

1. View the query plan for the previous query. Note the `S3 Seq Scan`, `S3 HashAggregate`, and `S3 Query Scan` steps that were executed against the data on Amazon S3.

   ```
   explain
   select top 10 spectrum.sales.eventid, sum(spectrum.sales.pricepaid)
   from spectrum.sales, event
   where spectrum.sales.eventid = event.eventid
   and spectrum.sales.pricepaid > 30
   group by spectrum.sales.eventid
   order by 2 desc;
   ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-query-s3-data.md
24685d4a4e48-3
   ```
   QUERY PLAN
   -----------------------------------------------------------------------------
   XN Limit  (cost=1001055770628.63..1001055770628.65 rows=10 width=31)
     ->  XN Merge  (cost=1001055770628.63..1001055770629.13 rows=200 width=31)
           Merge Key: sum(sales.derived_col2)
           ->  XN Network  (cost=1001055770628.63..1001055770629.13 rows=200 width=31)
                 Send to leader
                 ->  XN Sort  (cost=1001055770628.63..1001055770629.13 rows=200 width=31)
                       Sort Key: sum(sales.derived_col2)
                       ->  XN HashAggregate  (cost=1055770620.49..1055770620.99 rows=200 width=31)
                             ->  XN Hash Join DS_BCAST_INNER  (cost=3119.97..1055769620.49 rows=200000 width=31)
   ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-query-s3-data.md
24685d4a4e48-4
   ```
                                   Hash Cond: ("outer".derived_col1 = "inner".eventid)
                                   ->  XN S3 Query Scan sales  (cost=3010.00..5010.50 rows=200000 width=31)
                                         ->  S3 HashAggregate  (cost=3010.00..3010.50 rows=200000 width=16)
                                               ->  S3 Seq Scan spectrum.sales location:"s3://awssampledbuswest2/tickit/spectrum/sales" format:TEXT  (cost=0.00..2150.00 rows=172000 width=16)
                                                     Filter: (pricepaid > 30.00)
                                   ->  XN Hash  (cost=87.98..87.98 rows=8798 width=4)
                                         ->  XN Seq Scan on event  (cost=0.00..87.98 rows=8798 width=4)
   ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-query-s3-data.md
561780693479-0
Drops a user from a database. Multiple users can be dropped with a single DROP USER command. You must be a database superuser to execute this command.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_USER.md
6cdd0a004885-0
```
DROP USER [ IF EXISTS ] name [, ... ]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_USER.md
34f36ea400bd-0
IF EXISTS
Clause that indicates that if the specified user account doesn't exist, the command should make no changes and return a message that the user account doesn't exist, rather than terminating with an error. This clause is useful when scripting, so the script doesn't fail if DROP USER runs against a nonexistent user account.

*name*
Name of the user account to remove. You can specify multiple user accounts, with a comma separating each account name from the next.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_USER.md
606c6ea82726-0
You can't drop a user if the user owns any database object, such as a schema, database, table, or view, or if the user has any privileges on a database, table, column, or group. If you attempt to drop such a user, you receive one of the following errors.

```
ERROR: user "username" can't be dropped because the user owns some object [SQL State=55006]

ERROR: user "username" can't be dropped because the user has a privilege on some object [SQL State=55006]
```

**Note**
Amazon Redshift checks only the current database before dropping a user. DROP USER doesn't return an error if the user owns database objects or has any privileges on objects in another database. If you drop a user that owns objects in another database, the owner for those objects is changed to 'unknown'.

If a user owns an object, first drop the object or change its ownership to another user before dropping the original user. If the user has privileges for an object, first revoke the privileges before dropping the user. The following example shows dropping an object, changing ownership, and revoking privileges before dropping the user.

```
drop database dwdatabase;
alter schema dw owner to dwadmin;
revoke all on table dwtable from dwuser;
drop user dwuser;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_USER.md
a5a8708f1f62-0
The following example drops a user account called danny:

```
drop user danny;
```

The following example drops two user accounts, danny and billybob:

```
drop user danny, billybob;
```

The following example drops the user account danny if it exists, or does nothing and returns a message if it doesn't:

```
drop user if exists danny;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_USER.md
6bdb64192623-0
The JSON data structure is made up of a set of *objects* or *arrays*. A JSON *object* begins and ends with braces, and contains an unordered collection of name/value pairs. Each name and value are separated by a colon, and the pairs are separated by commas. The name is a string in double quotation marks. The quote characters must be simple quotation marks (0x22), not slanted or "smart" quotes.

A JSON *array* begins and ends with brackets, and contains an ordered collection of values separated by commas. A value can be a string in double quotation marks, a number, a Boolean true or false, null, a JSON object, or an array.

JSON objects and arrays can be nested, enabling a hierarchical data structure. The following example shows a JSON data structure with two valid objects.

```
{
    "id": 1006410,
    "title": "Amazon Redshift Database Developer Guide"
}
{
    "id": 100540,
    "name": "Amazon Simple Storage Service Developer Guide"
}
```

The following shows the same data as two JSON arrays.

```
[
    1006410,
    "Amazon Redshift Database Developer Guide"
]
[
    100540,
    "Amazon Simple Storage Service Developer Guide"
]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-json.md
6bdb64192623-1
You can let COPY automatically load fields from the JSON file by specifying the 'auto' option, or you can specify a JSONPaths file that COPY uses to parse the JSON source data. A *JSONPaths file* is a text file that contains a single JSON object with the name `"jsonpaths"` paired with an array of JSONPath expressions. If the name is any string other than `"jsonpaths"`, COPY uses the `'auto'` argument instead of using the JSONPaths file.

In the Amazon Redshift COPY syntax, a JSONPath expression specifies the explicit path to a single name element in a JSON hierarchical data structure, using either bracket notation or dot notation. Amazon Redshift doesn't support any JSONPath elements, such as wildcard characters or filter expressions, that might resolve to an ambiguous path or multiple name elements. As a result, Amazon Redshift can't parse complex, multi-level data structures.

The following is an example of a JSONPaths file with JSONPath expressions using bracket notation. The dollar sign ($) represents the root-level structure.

```
{
    "jsonpaths": [
        "$['id']",
        "$['store']['book']['title']",
        "$['location'][0]"
    ]
}
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-json.md
6bdb64192623-2
"$['location'][0]" ] } ``` In the previous example, `$['location'][0]` references the first element in an array\. JSON uses zero\-based array indexing\. Array indices must be positive integers \(greater than or equal to zero\)\. The following example shows the previous JSONPaths file using dot notation\. ``` { "jsonpaths": [ "$.id", "$.store.book.title", "$.location[0]" ] } ``` You cannot mix bracket notation and dot notation in the `jsonpaths` array\. Brackets can be used in both bracket notation and dot notation to reference an array element\. When using dot notation, the JSONPath expressions must not contain the following characters: + Single straight quotation mark \( ' \) + Period, or dot \( \. \) + Brackets \( \[ \] \) unless used to reference an array element If the value in the name/value pair referenced by a JSONPath expression is an object or an array, the entire object or array is loaded as a string, including the braces or brackets\. For example, suppose your JSON data contains the following object\. ``` { "id": 0,
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-json.md
6bdb64192623-3
``` { "id": 0, "guid": "84512477-fa49-456b-b407-581d0d851c3c", "isActive": true, "tags": [ "nisi", "culpa", "ad", "amet", "voluptate", "reprehenderit", "veniam" ], "friends": [ { "id": 0, "name": "Carmella Gonzales" }, { "id": 1, "name": "Renaldo" } ] } ``` The JSONPath expression `$['tags']` then returns the following value\. ``` "["nisi","culpa","ad","amet","voluptate","reprehenderit","veniam"]" ``` The JSONPath expression `$['friends'][1]` then returns the following value\. ``` "{"id": 1,"name": "Renaldo"}" ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-json.md
6bdb64192623-4
``` "{"id": 1,"name": "Renaldo"}" ``` Each JSONPath expression in the `jsonpaths` array corresponds to one column in the Amazon Redshift target table\. The order of the `jsonpaths` array elements must match the order of the columns in the target table or the column list, if a column list is used\. For examples that show how to load data using either the `'auto'` argument or a JSONPaths file, and using either JSON objects or arrays, see [Copy from JSON examples](r_COPY_command_examples.md#r_COPY_command_examples-copy-from-json)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-json.md
54a703efa1c1-0
COPY loads `\n` as a newline character and loads `\t` as a tab character. To load a backslash, escape it with a backslash ( `\\` ).

For example, suppose you have the following JSON in a file named `escape.json` in the bucket `s3://mybucket/json/`.

```
{
    "backslash": "This is a backslash: \\",
    "newline": "This sentence\n is on two lines.",
    "tab": "This sentence \t contains a tab."
}
```

Execute the following commands to create the ESCAPES table and load the JSON.

```
create table escapes (backslash varchar(25), newline varchar(35), tab varchar(35));

copy escapes from 's3://mybucket/json/escape.json'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
format as json 'auto';
```

Query the ESCAPES table to view the results.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-json.md
54a703efa1c1-1
```
select * from escapes;

backslash               | newline           | tab
------------------------+-------------------+----------------------------------
This is a backslash: \  | This sentence     | This sentence     contains a tab.
                        : is on two lines.
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-json.md
0841b4cd258e-0
You might lose precision when loading numbers from data files in JSON format to a column that is defined as a numeric data type. Some floating point values aren't represented exactly in computer systems. As a result, data you copy from a JSON file might not be rounded as you expect. To avoid a loss of precision, we recommend using one of the following alternatives:
+ Represent the number as a string by enclosing the value in double quotation characters (see the sketch following this list).
+ Use [ROUNDEC](copy-parameters-data-conversion.md#copy-roundec) to round the number instead of truncating.
+ Instead of using JSON or Avro files, use CSV, character-delimited, or fixed-width text files.
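As an illustration of the first alternative, using a hypothetical record rather than the guide's sample data: quoting the numeric value makes COPY parse it as a string and convert it server-side, avoiding the floating-point round trip.

```
{
    "id": 1,
    "price_float": 123.456789,
    "price_string": "123.456789"
}
```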
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-json.md
f8e02f285586-0
Amazon Redshift does not support locale-specific or user-defined collation sequences. In general, the results of any predicate in any context could be affected by the lack of locale-specific rules for sorting and comparing data values. For example, ORDER BY expressions and functions such as MIN, MAX, and RANK return results based on binary UTF8 ordering of the data that does not take locale-specific characters into account.
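A short sketch of the consequence, using a hypothetical table: under binary UTF-8 ordering, any multi-byte character compares greater than every ASCII letter, so accented names sort after unaccented ones regardless of locale expectations.

```
-- Hypothetical data; 'Ávila' begins with U+00C1 (UTF-8 bytes C3 81),
-- which compares greater than ASCII 'Z' (0x5A) in binary ordering.
create temp table cities(name varchar(20));
insert into cities values ('Zurich'), ('Ávila'), ('Athens');

select name from cities order by name;

 name
--------
 Athens
 Zurich
 Ávila
(3 rows)
```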
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_collation_sequences.md
6740d2ba39b8-0
Use the SVL_MULTI_STATEMENT_VIOLATIONS view to get a complete record of all of the SQL commands run on the system that violate transaction block restrictions.

Violations occur when you run any of the following SQL commands that Amazon Redshift restricts inside a transaction block or multi-statement requests:
+ [CREATE DATABASE](r_CREATE_DATABASE.md)
+ [DROP DATABASE](r_DROP_DATABASE.md)
+ [ALTER TABLE APPEND](r_ALTER_TABLE_APPEND.md)
+ [CREATE EXTERNAL TABLE](r_CREATE_EXTERNAL_TABLE.md)
+ DROP EXTERNAL TABLE
+ RENAME EXTERNAL TABLE
+ ALTER EXTERNAL TABLE
+ CREATE TABLESPACE
+ DROP TABLESPACE
+ [CREATE LIBRARY](r_CREATE_LIBRARY.md)
+ [DROP LIBRARY](r_DROP_LIBRARY.md)
+ REBUILDCAT
+ INDEXCAT
+ REINDEX DATABASE
+ [VACUUM](r_VACUUM_command.md)
+ [GRANT](r_GRANT.md)
+ [COPY](r_COPY.md)

**Note**
If there are any entries in this view, then change your corresponding applications and SQL scripts. We recommend changing your application code to move the use of these restricted SQL commands outside of the transaction block. If you need further assistance, contact AWS Support.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_MULTI_STATEMENT_VIOLATIONS.md
6740d2ba39b8-1
SVL\_MULTI\_STATEMENT\_VIOLATIONS is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_MULTI_STATEMENT_VIOLATIONS.md
2bc8c466e969-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_MULTI_STATEMENT_VIOLATIONS.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_MULTI_STATEMENT_VIOLATIONS.md
ee00b9eb6b5b-0
The following query returns multiple statements that have violations\.

```
select * from svl_multi_statement_violations order by starttime asc;

userid | database | cmdname         | xid  | pid  | label  | starttime | endtime | sequence | type    | text
==============================================================================================================================
1      | dev      | CREATE DATABASE | 1034 | 5729 | label1 | ********* | ******* | 0        | DDL     | create table c(b int);
1      | dev      | CREATE DATABASE | 1034 | 5729 | label1 | ********* | ******* | 0        | UTILITY | create database b;
1      | dev      | CREATE DATABASE | 1034 | 5729 | label1 | ********* | ******* | 0        | UTILITY | COMMIT
...
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_MULTI_STATEMENT_VIOLATIONS.md
32843cd294b6-0
The BOOL\_AND function operates on a single Boolean or integer column or expression\. This function applies similar logic to the BIT\_AND and BIT\_OR functions\. For this function, the return type is a Boolean value \(`true` or `false`\)\. If all values in a set are true, the BOOL\_AND function returns `true` \(`t`\)\. If any value is false, the function returns `false` \(`f`\)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BOOL_AND.md
81dd4c905c8d-0
```
BOOL_AND ( [DISTINCT | ALL] expression )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BOOL_AND.md
1d783638fa41-0
*expression*
The target column or expression that the function operates on\. This expression must have a BOOLEAN or integer data type\. The return type of the function is BOOLEAN\.

DISTINCT \| ALL
With the argument DISTINCT, the function eliminates all duplicate values for the specified expression before calculating the result\. With the argument ALL, the function retains all duplicate values\. ALL is the default\. For more information, see [DISTINCT support for bit\-wise aggregations](c_bitwise_aggregate_functions.md#distinct-support-for-bit-wise-aggregations)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BOOL_AND.md
0f5cf67e87de-0
You can use the Boolean functions against either Boolean expressions or integer expressions\. For example, the following query returns results from the standard USERS table in the TICKIT database, which has several Boolean columns\.

The BOOL\_AND function returns `false` for all five rows\. Not all users in each of those states like sports\.

```
select state, bool_and(likesports) from users group by state order by state limit 5;

state | bool_and
------+---------
AB    | f
AK    | f
AL    | f
AZ    | f
BC    | f
(5 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BOOL_AND.md
572d98aafe9f-0
In this step, you create an Amazon S3 bucket and upload the data files to the bucket\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-upload-files.md
331b8bf1c19f-0
**To upload the files to an Amazon S3 bucket**

1. Create a bucket in Amazon S3\.

   1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console\.aws\.amazon\.com/s3/](https://console.aws.amazon.com/s3/)\.

   1. Click **Create Bucket**\.

   1. In the **Bucket Name** box of the **Create a Bucket** dialog box, type a bucket name\.

      The bucket name you choose must be unique among all existing bucket names in Amazon S3\. One way to help ensure uniqueness is to prefix your bucket names with the name of your organization\. Bucket names must comply with certain rules\. For more information, go to [Bucket restrictions and limitations](https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html) in the *Amazon Simple Storage Service Developer Guide*\.

   1. Select a region\.

      Create the bucket in the same region as your cluster\. If your cluster is in the Oregon region, click **Oregon**\.

   1. Click **Create**\.

      When Amazon S3 successfully creates your bucket, the console displays your empty bucket in the **Buckets** panel\.

1. Create a folder\.

   1. Click the name of the new bucket\.

   1. Click the **Actions** button, and click **Create Folder** in the drop\-down list\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-upload-files.md
331b8bf1c19f-1
   1. Name the new folder **load**\.

**Note**
The bucket that you created is not in a sandbox\. In this exercise, you add objects to a real bucket\. You're charged a nominal amount for the time that you store the objects in the bucket\. For more information about Amazon S3 pricing, go to the [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/) page\.

1. Upload the data files to the new Amazon S3 bucket\.

   1. Click the name of the data folder\.

   1. In the Upload \- Select Files wizard, click **Add Files**\.

      A file selection dialog box opens\.

   1. Select all of the files you downloaded and extracted, and then click **Open**\.

   1. Click **Start Upload**\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-upload-files.md
331b8bf1c19f-2
<a name="tutorial-loading-user-credentials"></a>
**User Credentials**

The Amazon Redshift COPY command must have access to read the file objects in the Amazon S3 bucket\. If you use the same user credentials to create the Amazon S3 bucket and to run the Amazon Redshift COPY command, the COPY command has all necessary permissions\. If you want to use different user credentials, you can grant access by using the Amazon S3 access controls\. The Amazon Redshift COPY command requires at least ListBucket and GetObject permissions to access the file objects in the Amazon S3 bucket\. For more information about controlling access to Amazon S3 resources, go to [Managing access permissions to your Amazon S3 resources](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html)\.
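As a minimal sketch of what the load step looks like once permissions are in place, a COPY such as the following reads the objects under the `load` folder\. The table name, bucket placeholder, and IAM role are hypothetical\.

```
copy loadtable
from 's3://<your-bucket-name>/load/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```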
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-upload-files.md
17c8e8bd9247-0
[Step 4: Create the sample tables](tutorial-loading-data-create-tables.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-upload-files.md
b708fbf19ede-0
If your input data is not compatible with the table columns that will receive it, the COPY command fails\. Use the following guidelines to help ensure that your input data is valid:
+ Your data can contain only UTF\-8 characters up to four bytes long\.
+ Verify that CHAR and VARCHAR strings are no longer than the lengths of the corresponding columns\. VARCHAR strings are measured in bytes, not characters, so, for example, a four\-character string of Chinese characters that occupy four bytes each requires a VARCHAR\(16\) column\.
+ Multibyte characters can be used only with VARCHAR columns\. Verify that multibyte characters are no more than four bytes long\.
+ Verify that data for CHAR columns contains only single\-byte characters\.
+ Do not include any special characters or syntax to indicate the last field in a record\. This field can be a delimiter\.
+ If your data includes null terminators, also referred to as NUL \(UTF\-8 0000\) or binary zero \(0x000\), you can load these characters as NULLS into CHAR or VARCHAR columns by using the NULL AS option in the COPY command: `null as '\0'` or `null as '\000'`\. If you do not use NULL AS, null terminators cause your COPY to fail\.
+ If your strings contain special characters, such as delimiters and embedded newlines, use the ESCAPE option with the [COPY](r_COPY.md) command, as in the sketch that follows this list\.
+ Verify that all single and double quotes are appropriately matched\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_preparing-input-data.md
b708fbf19ede-1
+ Verify that floating\-point strings are in either standard floating\-point format, such as 12\.123, or an exponential format, such as 1\.0E4\.
+ Verify that all timestamp and date strings follow the specifications for [DATEFORMAT and TIMEFORMAT strings](r_DATEFORMAT_and_TIMEFORMAT_strings.md)\. The default timestamp format is YYYY\-MM\-DD hh:mm:ss, and the default date format is YYYY\-MM\-DD\.
+ For more information about boundaries and limitations on individual data types, see [Data types](c_Supported_data_types.md)\.

For information about multibyte character errors, see [Multibyte character load errors](multi-byte-character-load-errors.md)\.
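Combining two of these guidelines, a hypothetical COPY for pipe\-delimited text that contains embedded special characters and null terminators might look like the following\. The table, bucket, file, and role names are illustrative\.

```
copy category
from 's3://mybucket/data/category_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
escape
null as '\0';
```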
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_preparing-input-data.md
d25b89d8c5ce-0
You can use the Python logging module to create user\-defined error and warning messages in your UDFs\. Following query execution, you can query the [SVL\_UDF\_LOG](r_SVL_UDF_LOG.md) system view to retrieve logged messages\.

**Note**
UDF logging consumes cluster resources and might affect system performance\. We recommend implementing logging only for development and troubleshooting\.

During query execution, the log handler writes messages to the SVL\_UDF\_LOG system view, along with the corresponding function name, node, and slice\. The log handler writes one row to SVL\_UDF\_LOG per message, per slice\. Messages are truncated to 4096 bytes\. The UDF log is limited to 500 rows per slice\. When the log is full, the log handler discards older messages and adds a warning message to SVL\_UDF\_LOG\.

**Note**
The Amazon Redshift UDF log handler escapes newlines \( `\n` \), pipe \( `|` \) characters, and backslash \( `\` \) characters with a backslash \( `\` \) character\.

By default, the UDF log level is set to WARNING\. Messages with a log level of WARNING, ERROR, and CRITICAL are logged\. Messages with a lower severity \(INFO, DEBUG, and NOTSET\) are ignored\. To set the UDF log level, use the Python logger method\. For example, the following sets the log level to INFO\.

```
logger.setLevel(logging.INFO)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-logging-messages.md
d25b89d8c5ce-1
For more information about using the Python logging module, see [Logging facility for Python](https://docs.python.org/2.7/library/logging.html) in the Python documentation\.

The following example creates a function named f\_pyerror that imports the Python logging module, instantiates the logger, and logs an informational message\.

```
CREATE OR REPLACE FUNCTION f_pyerror()
RETURNS INTEGER
VOLATILE AS
$$
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.info('Your info message here')
return 0
$$ language plpythonu;
```

The following example queries SVL\_UDF\_LOG to view the message logged in the previous example\.

```
select funcname, node, slice, trim(message) as message
from svl_udf_log;

 funcname  | node | slice | message
-----------+------+-------+------------------------
 f_pyerror |    1 |     1 | Your info message here
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-logging-messages.md
7e88384d5dc5-0
For the latest AWS terminology, see the [AWS Glossary](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html) in the *AWS General Reference*\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-workdocs-user-guide/doc_source/glossary.md
7da3cc8cd0a0-0
You can share a folder or file with other users and groups both within and outside your organization\. Share by sending a link, or share by sending an invite to a user's email address\. When you share by invite, you choose which permissions to grant to the users that you're sharing with\. For more information about permissions, see [Permissions](permissions.md)\. You can also revoke shares, and users can remove themselves from the share\.

To see a list of users that have access to a file or folder, select the file or folder, then choose **Share**, **Permissions**\. To see a list of all the changes that users have made to your files and folders, view your activity feed\. For more information, see [Tracking file activity](activity_feed.md)\.

**Note**
You can only share with directory groups, not email distribution lists\.

**Topics**
+ [Sharing by invite](share-invite.md)
+ [Sharing a link](web_share_link.md)
+ [Removing share permissions](revoke_share.md)
+ [Removing yourself from a share](unshare_yourself.md)
+ [Transferring document ownership](transfer_owner.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-workdocs-user-guide/doc_source/share-docs.md