5a371d6357a8-0
```
expression [ NOT ] BETWEEN expression AND expression
```

Expressions can be numeric, character, or datetime data types, but they must be compatible. The range is inclusive.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_range_condition.md
503262e12b5b-0
The first example counts how many transactions registered sales of either 2, 3, or 4 tickets:

```
select count(*) from sales
where qtysold between 2 and 4;

count
--------
104021
(1 row)
```

The range condition includes the begin and end values.

```
select min(dateid), max(dateid) from sales
where dateid between 1900 and 1910;

min  | max
-----+-----
1900 | 1910
```

The first expression in a range condition must be the lesser value and the second expression the greater value. The following example will always return zero rows due to the values of the expressions:

```
select count(*) from sales
where qtysold between 4 and 2;

count
-------
0
(1 row)
```

However, applying the NOT modifier will invert the logic and produce a count of all rows:

```
select count(*) from sales
where qtysold not between 4 and 2;

count
--------
172456
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_range_condition.md
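The inclusive-endpoint and NOT BETWEEN behavior above follows standard SQL semantics, so it can be illustrated with SQLite from Python (not Redshift; the table and values here are invented for the sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table sales (salesid integer, qtysold integer)")
con.executemany("insert into sales values (?, ?)",
                [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)])

# BETWEEN is inclusive of both endpoints: qtysold 2, 3, and 4 all match.
n = con.execute(
    "select count(*) from sales where qtysold between 2 and 4").fetchone()[0]
print(n)  # 3

# With the bounds reversed the range is empty...
empty = con.execute(
    "select count(*) from sales where qtysold between 4 and 2").fetchone()[0]
print(empty)  # 0

# ...and NOT BETWEEN then matches every row.
inverted = con.execute(
    "select count(*) from sales where qtysold not between 4 and 2").fetchone()[0]
print(inverted)  # 5
```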
503262e12b5b-1
The following query returns a list of venues with 20000 to 50000 seats:

```
select venueid, venuename, venueseats from venue
where venueseats between 20000 and 50000
order by venueseats desc;

venueid |           venuename           | venueseats
--------+-------------------------------+------------
    116 | Busch Stadium                 |      49660
    106 | Rangers BallPark in Arlington |      49115
     96 | Oriole Park at Camden Yards   |      48876
...
(22 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_range_condition.md
7b3669a6083a-0
CURRENT_DATE returns a date in the current session time zone (UTC by default) in the default format: YYYY-MM-DD.

**Note**
CURRENT_DATE returns the start date for the current transaction, not for the start of the current statement.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_DATE_function.md
e010ab9df2bd-0
```
CURRENT_DATE
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_DATE_function.md
7ad73c2b7e66-0
DATE
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_DATE_function.md
c52184be6339-0
Return the current date:

```
select current_date;

date
------------
2008-10-01
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_DATE_function.md
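The default YYYY-MM-DD output is the ISO 8601 date format. A quick illustration of that format in Python (not Redshift; only the formatting is being demonstrated):

```python
from datetime import datetime, timezone

# CURRENT_DATE's default format, YYYY-MM-DD, matches ISO 8601;
# the sample session above returned '2008-10-01'.
today = datetime.now(timezone.utc).date()
formatted = today.isoformat()
print(formatted)  # e.g. '2008-10-01'
```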
ff3ca53862bd-0
TIMESTAMP_CMP_TIMESTAMPTZ compares the value of a time stamp expression with a time stamp with time zone expression. If the time stamp and time stamp with time zone values are identical, the function returns 0. If the time stamp is greater, the function returns 1. If the time stamp with time zone is greater, the function returns –1.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP_TIMESTAMPTZ.md
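The three-way comparison can be sketched in Python. This is a hypothetical re-implementation, and the choice to interpret the plain time stamp as UTC for the comparison is an assumption of the sketch, not something the description above states:

```python
from datetime import datetime, timezone

def timestamp_cmp_timestamptz(ts, tstz):
    # Returns 0 if the values are identical, 1 if the time stamp is
    # greater, -1 if the time stamp with time zone is greater.
    # Assumption: the naive time stamp is treated as UTC.
    ts = ts.replace(tzinfo=timezone.utc)
    return (ts > tstz) - (ts < tstz)

noon = datetime(2008, 6, 1, 12, 0, 0)
equal = timestamp_cmp_timestamptz(noon, noon.replace(tzinfo=timezone.utc))
greater = timestamp_cmp_timestamptz(noon, datetime(2008, 6, 1, 11, 0, tzinfo=timezone.utc))
lesser = timestamp_cmp_timestamptz(noon, datetime(2008, 6, 1, 13, 0, tzinfo=timezone.utc))
print(equal, greater, lesser)  # 0 1 -1
```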
db05f71193d5-0
```
TIMESTAMP_CMP_TIMESTAMPTZ(timestamp, timestamptz)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP_TIMESTAMPTZ.md
6e03b945ac39-0
*timestamp*
A TIMESTAMP column or an expression that implicitly converts to a time stamp.

*timestamptz*
A TIMESTAMPTZ column or an expression that implicitly converts to a time stamp with a time zone.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP_TIMESTAMPTZ.md
9f37fb5715ba-0
INTEGER
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP_TIMESTAMPTZ.md
a1417c4082eb-0
Returns execution information about a database query.

**Note**
The STL_QUERY and STL_QUERYTEXT views only contain information about queries, not other utility and DDL commands. For a listing and information on all statements executed by Amazon Redshift, you can also query the STL_DDLTEXT and STL_UTILITYTEXT views. For a complete listing of all statements executed by Amazon Redshift, you can query the SVL_STATEMENTTEXT view.

To manage disk space, the STL log views only retain approximately two to five days of log history, depending on log usage and available disk space. If you want to retain the log data, you will need to periodically copy it to other tables or unload it to Amazon S3.

This view is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERY.md
ec6e7471c9fe-0
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_QUERY.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERY.md
97717e0ab40c-0
The following query lists the five most recent queries.

```
select query, trim(querytxt) as sqlquery
from stl_query
order by query desc limit 5;

query |                     sqlquery
------+--------------------------------------------------
  129 | select query, trim(querytxt) from stl_query order by query;
  128 | select node from stv_disk_read_speeds;
  127 | select system_status from stv_gui_status
  126 | select * from systable_topology order by slice
  125 | load global dict registry
(5 rows)
```

The following query returns the time elapsed in descending order for queries that ran on February 15, 2013.

```
select query, datediff(seconds, starttime, endtime) as date_diff,
trim(querytxt) as sqlquery
from stl_query
where starttime >= '2013-02-15 00:00' and endtime < '2013-02-15 23:59'
order by date_diff desc;

query | date_diff | sqlquery
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERY.md
97717e0ab40c-1
```
query | date_diff | sqlquery
------+-----------+-------------------------------------------
   55 |       119 | padb_fetch_sample: select count(*) from category
  121 |         9 | select * from svl_query_summary;
  181 |         6 | select * from svl_query_summary where query in(179,178);
  172 |         5 | select * from svl_query_summary where query=148;
...
(189 rows)
```

The following query shows the queue time and execution time for queries. Queries with `concurrency_scaling_status = 1` ran on a concurrency scaling cluster. All other queries ran on the main cluster.

```
SELECT w.service_class AS queue,
       q.concurrency_scaling_status,
       COUNT( * ) AS queries,
       SUM( q.aborted ) AS aborted,
       SUM( ROUND( total_queue_time::NUMERIC / 1000000, 2 ) ) AS queue_secs,
       SUM( ROUND( total_exec_time::NUMERIC / 1000000, 2 ) ) AS exec_secs
FROM stl_query q
JOIN stl_wlm_query w
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERY.md
97717e0ab40c-2
```
USING (userid, query)
WHERE q.userid > 1
  AND service_class > 5
  AND q.starttime > '2019-03-01 16:38:00'
  AND q.endtime < '2019-03-01 17:40:00'
GROUP BY 1, 2
ORDER BY 1, 2;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERY.md
2cac547049f2-0
ST_IsClosed returns true if the input geometry is closed. The following rules define a closed geometry:
+ The input geometry is a point or a multipoint.
+ The input geometry is a linestring, and the start and end points of the linestring coincide.
+ The input geometry is a nonempty multilinestring and all its linestrings are closed.
+ The input geometry is a nonempty polygon, all of the polygon's rings are nonempty, and the start and end points of all its rings coincide.
+ The input geometry is a nonempty multipolygon and all its polygons are closed.
+ The input geometry is a nonempty geometry collection and all its components are closed.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsClosed-function.md
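The linestring rule above (start and end points coincide) is the core check. A minimal sketch of it, using made-up coordinate lists rather than real `GEOMETRY` values:

```python
def is_closed_linestring(coords):
    # Closed when nonempty and the start and end points coincide.
    return len(coords) > 0 and coords[0] == coords[-1]

ring = [(0, 2), (1, 1), (0, -1), (0, 2)]   # same ring as the polygon example below
open_line = [(0, 0), (1, 1)]
print(is_closed_linestring(ring))       # True
print(is_closed_linestring(open_line))  # False
```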
9b70a567bc66-0
```
ST_IsClosed(geom)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsClosed-function.md
7e308ad4dd76-0
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsClosed-function.md
99bc19ce6aed-0
`BOOLEAN`

If *geom* is null, then null is returned.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsClosed-function.md
04343ee0542c-0
The following SQL checks if the polygon is closed.

```
SELECT ST_IsClosed(ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))'));

st_isclosed
-----------
true
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsClosed-function.md
58886ca70485-0
Returns the name of the user associated with the current session. This is the user who initiated the current database connection.

**Note**
Do not use trailing parentheses when calling SESSION_USER.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SESSION_USER.md
f62a36621362-0
```
session_user
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SESSION_USER.md
558b6b2019b3-0
Returns a CHAR or VARCHAR string.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SESSION_USER.md
948d785a8872-0
The following example returns the current session user:

```
select session_user;

session_user
--------------
dwuser
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SESSION_USER.md
dfa356d1dfd2-0
<a name="def_superusers"></a>Database superusers have the same privileges as database owners for all databases. The *masteruser*, which is the user you created when you launched the cluster, is a superuser. You must be a superuser to create a superuser.

Amazon Redshift system tables and system views are either visible only to superusers or visible to all users. Only superusers can query system tables and system views that are designated "visible to superusers." For information, see [System tables and views](c_intro_system_tables.md).

Superusers can view all PostgreSQL catalog tables. For information, see [System catalog tables](c_intro_catalog_views.md).

A database superuser bypasses all permission checks. Be very careful when using a superuser role. We recommend that you do most of your work as a role that is not a superuser. Superusers retain all privileges regardless of GRANT and REVOKE commands.

To create a new database superuser, log on to the database as a superuser and issue a CREATE USER command or an ALTER USER command with the CREATEUSER privilege.

```
create user adminuser createuser password '1234Admin';
alter user adminuser createuser;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_superusers.md
99a45e528987-0
Returns the number of rows that were unloaded by the last UNLOAD command executed in the current session. PG_LAST_UNLOAD_COUNT is updated with the query ID of the last UNLOAD, even if the operation failed. The query ID is updated when the UNLOAD is executed. If the UNLOAD fails because of a syntax error or because of insufficient privileges, PG_LAST_UNLOAD_COUNT returns the count for the previous UNLOAD.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_UNLOAD_COUNT.md
99a45e528987-1
If no UNLOAD commands were executed in the current session, or if the last UNLOAD failed during the unload operation, PG_LAST_UNLOAD_COUNT returns 0.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_UNLOAD_COUNT.md
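The bookkeeping rules above can be summarized with a toy state model. This is purely illustrative (the class and its flags are invented for the sketch, not how Redshift is implemented):

```python
class ToySession:
    """Toy model of PG_LAST_UNLOAD_COUNT bookkeeping per the rules above."""

    def __init__(self):
        self.last_unload_count = 0   # no UNLOAD yet in this session -> 0

    def unload(self, rows, rejected=False, failed_during=False):
        if rejected:            # syntax error or insufficient privileges:
            return              # the previous count is retained
        if failed_during:       # failure during the unload operation itself:
            self.last_unload_count = 0
            return
        self.last_unload_count = rows

s = ToySession()
s.unload(192497)
print(s.last_unload_count)   # 192497
s.unload(10, rejected=True)
print(s.last_unload_count)   # still 192497
s.unload(10, failed_during=True)
print(s.last_unload_count)   # 0
```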
f05fdd348e6e-0
```
pg_last_unload_count()
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_UNLOAD_COUNT.md
37af48ef72e2-0
Returns BIGINT.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_UNLOAD_COUNT.md
7cbd61c7d70e-0
The following query returns the number of rows unloaded by the latest UNLOAD command in the current session.

```
select pg_last_unload_count();

pg_last_unload_count
--------------------
192497
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_UNLOAD_COUNT.md
4031f15e1ae2-0
A compound expression is a series of simple expressions joined by arithmetic operators. A simple expression used in a compound expression must return a numeric value.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_compound_expressions.md
76e7fe12edf8-0
```
expression operator expression | (compound_expression)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_compound_expressions.md
1ad9e4f5e622-0
*expression*
A simple expression that evaluates to a value.

*operator*
A compound arithmetic expression can be constructed using the following operators, in this order of precedence:
+ ( ) : parentheses to control the order of evaluation
+ + , - : positive and negative sign/operator
+ ^ , |/ , ||/ : exponentiation, square root, cube root
+ * , / , % : multiplication, division, and modulo operators
+ @ : absolute value
+ + , - : addition and subtraction
+ & , |, #, ~, <<, >> : AND, OR, NOT, shift left, shift right bitwise operators
+ || : concatenation

*(compound_expression)*
Compound expressions may be nested using parentheses.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_compound_expressions.md
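For the arithmetic and shift operators, Python's precedence happens to agree with the ordering listed above, so the grouping can be demonstrated directly (the literal values are arbitrary):

```python
# Multiplication binds tighter than addition, which in turn binds
# tighter than the bitwise shift operators, so the groupings are:
a = 2 + 3 * 4       # evaluated as 2 + (3 * 4)
b = 1 << 2 + 1      # evaluated as 1 << (2 + 1)
print(a, b)         # 14 8

assert a == 2 + (3 * 4)
assert b == 1 << (2 + 1)
```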
daebe35f07d8-0
Examples of compound expressions include:

```
('SMITH' || 'JONES')
sum(x) / y
sqrt(256) * avg(column)
rank() over (order by qtysold) / 100
(select (pricepaid - commission) from sales where dateid = 1882) * (qtysold)
```

Some functions can also be nested within other functions. For example, any scalar function can nest within another scalar function. The following example returns the sum of the absolute values of a set of numbers:

```
sum(abs(qtysold))
```

Window functions cannot be used as arguments for aggregate functions or other window functions. The following expression would return an error:

```
avg(rank() over (order by qtysold))
```

Window functions can have a nested aggregate function. The following expression sums sets of values and then ranks them:

```
rank() over (order by sum(qtysold))
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_compound_expressions.md
0b4c5ae0aed8-0
To use the [STL_ALERT_EVENT_LOG](r_STL_ALERT_EVENT_LOG.md) system table to identify and correct potential performance issues with your query, follow these steps:

1. Run the following to determine your query ID:

   ```
   select query, elapsed, substring
   from svl_qlog
   order by query desc
   limit 5;
   ```

   Examine the truncated query text in the `substring` field to determine which `query` value to select. If you have run the query more than once, use the `query` value from the row with the lower `elapsed` value. That is the row for the compiled version. If you have been running many queries, you can raise the value used by the LIMIT clause to make sure that your query is included.

2. Select rows from STL_ALERT_EVENT_LOG for your query:

   ```
   Select * from stl_alert_event_log where query = MyQueryID;
   ```

   ![Image NOT FOUND](http://docs.aws.amazon.com/redshift/latest/dg/images/stl_alert_event_log_results.png)

3. Evaluate the results for your query. Use the following table to locate potential solutions for any issues that you have identified.

**Note**
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-reviewing-query-alerts.md
0b4c5ae0aed8-1
Not all queries have rows in STL_ALERT_EVENT_LOG, only those with identified issues.

[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/c-reviewing-query-alerts.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-reviewing-query-alerts.md
86e015712862-0
You can optionally provide the host's public key in the manifest file so that Amazon Redshift can identify the host. The COPY command does not require the host public key but, for security reasons, we strongly recommend using a public key to help prevent 'man-in-the-middle' attacks.

You can find the host's public key in the following location, where `<ssh_host_rsa_key_name>` is the unique name for the host's public key:

```
/etc/ssh/<ssh_host_rsa_key_name>.pub
```

**Note**
Amazon Redshift only supports RSA keys. We do not support DSA keys.

When you create your manifest file in Step 5, you will paste the text of the public key into the "Public Key" field in the manifest file entry.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-host-steps-get-the-host-key.md
634939d1ddf1-0
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_userstable.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_userstable.md
32cd9bba4b91-0
For each group in a query, the LISTAGG aggregate function orders the rows for that group according to the ORDER BY expression, then concatenates the values into a single string.

LISTAGG is a compute-node only function. The function returns an error if the query doesn't reference a user-defined table or Amazon Redshift system table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LISTAGG.md
7a0ddede62c2-0
```
LISTAGG( [DISTINCT] aggregate_expression [, 'delimiter' ] )
[ WITHIN GROUP (ORDER BY order_list) ]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LISTAGG.md
3bb62c5bcffb-0
DISTINCT
(Optional) A clause that eliminates duplicate values from the specified expression before concatenating. Trailing spaces are ignored, so the strings `'a'` and `'a '` are treated as duplicates. LISTAGG uses the first value encountered. For more information, see [Significance of trailing blanks](r_Character_types.md#r_Character_types-significance-of-trailing-blanks).

*aggregate_expression*
Any valid expression (such as a column name) that provides the values to aggregate. NULL values and empty strings are ignored.

*delimiter*
(Optional) The string constant to separate the concatenated values. The default is NULL.

*WITHIN GROUP (ORDER BY order_list)*
(Optional) A clause that specifies the sort order of the aggregated values.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LISTAGG.md
b16688041c2b-0
VARCHAR(MAX). If the result set is larger than the maximum VARCHAR size (64K – 1, or 65535), then LISTAGG returns the following error:

```
Invalid operation: Result size exceeds LISTAGG limit
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LISTAGG.md
afd1704f657b-0
If a statement includes multiple LISTAGG functions that use WITHIN GROUP clauses, each WITHIN GROUP clause must use the same ORDER BY values. For example, the following statement will return an error.

```
select listagg(sellerid)
within group (order by dateid) as sellers,
listagg(dateid)
within group (order by sellerid) as dates
from winsales;
```

The following statements will execute successfully.

```
select listagg(sellerid)
within group (order by dateid) as sellers,
listagg(dateid)
within group (order by dateid) as dates
from winsales;

select listagg(sellerid)
within group (order by dateid) as sellers,
listagg(dateid) as dates
from winsales;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LISTAGG.md
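The semantics described above (NULLs and empty strings ignored, optional DISTINCT keeping the first value encountered, a delimiter, and WITHIN GROUP ordering) can be emulated in Python. `listagg_py` is a hypothetical helper written for this sketch, not a Redshift or library API:

```python
def listagg_py(values, delimiter="", distinct=False, order=False):
    # Emulates LISTAGG: drop NULLs and empty strings, optionally
    # de-duplicate (keeping the first value encountered), optionally
    # sort (the WITHIN GROUP ordering), then join with the delimiter.
    vals = [v for v in values if v is not None and v != ""]
    if distinct:
        seen = set()
        vals = [v for v in vals if not (v in seen or seen.add(v))]
    if order:
        vals = sorted(vals)
    return delimiter.join(str(v) for v in vals)

sellers = [380, 380, 1178, 2731, None]
print(listagg_py(sellers, ", ", order=True))                 # 380, 380, 1178, 2731
print(listagg_py(sellers, ", ", distinct=True, order=True))  # 380, 1178, 2731
```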
24a5486224fa-0
The following example aggregates seller IDs, ordered by seller ID.

```
select listagg(sellerid, ', ')
within group (order by sellerid)
from sales
where eventid = 4337;

listagg
----------------------------------------------------------------------------------------------------------------------------------------
380, 380, 1178, 1178, 1178, 2731, 8117, 12905, 32043, 32043, 32043, 32432, 32432, 38669, 38750, 41498, 45676, 46324, 47188, 47188, 48294
```

The following example uses DISTINCT to return a list of unique seller IDs.

```
select listagg(distinct sellerid, ', ')
within group (order by sellerid)
from sales
where eventid = 4337;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LISTAGG.md
24a5486224fa-1
```
listagg
-------------------------------------------------------------------------------------------
380, 1178, 2731, 8117, 12905, 32043, 32432, 38669, 38750, 41498, 45676, 46324, 47188, 48294
```

The following example aggregates seller IDs in date order.

```
select listagg(sellerid)
within group (order by dateid)
from winsales;

listagg
-------------
31141242333
```

The following example returns a pipe-separated list of sales dates for buyer B.

```
select listagg(dateid,'|')
within group (order by sellerid desc, salesid asc)
from winsales
where buyerid = 'b';

listagg
---------------------------------------
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LISTAGG.md
24a5486224fa-2
```
2003-08-02|2004-04-18|2004-04-18|2004-02-12
```

The following example returns a comma-separated list of sales IDs for each buyer ID.

```
select buyerid,
listagg(salesid,',')
within group (order by salesid) as sales_id
from winsales
group by buyerid
order by buyerid;

buyerid  |        sales_id
---------+------------------------
a        | 10005,40001,40005
b        | 20001,30001,30004,30003
c        | 10001,20002,30007,10006
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LISTAGG.md
d170c6bcb90d-0
Amazon Redshift creates the SVL_QUERY_REPORT view from a UNION of a number of Amazon Redshift STL system tables to provide information about executed query steps. This view breaks down the information about executed queries by slice and by step, which can help with troubleshooting node and slice issues in the Amazon Redshift cluster.

SVL_QUERY_REPORT is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_REPORT.md
1b5794d39051-0
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_QUERY_REPORT.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_REPORT.md
a190586b3a30-0
The following query demonstrates the data skew of the returned rows for the query with query ID 279. Use this query to determine if database data is evenly distributed over the slices in the data warehouse cluster:

```
select query, segment, step, max(rows), min(rows),
case when sum(rows) > 0
then ((cast(max(rows) - min(rows) as float) * count(rows)) / sum(rows))
else 0 end
from svl_query_report
where query = 279
group by query, segment, step
order by segment, step;
```

This query should return data similar to the following sample output:

```
query | segment | step |   max    |   min    |         case
------+---------+------+----------+----------+----------------------
  279 |       0 |    0 | 19721687 | 19721687 |                    0
  279 |       0 |    1 | 19721687 | 19721687 |                    0
  279 |       1 |    0 |   986085 |   986084 | 1.01411202804304e-06
  279 |       1 |    1 |   986085 |   986084 | 1.01411202804304e-06
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_REPORT.md
a190586b3a30-1
```
  279 |       1 |    4 |   986085 |   986084 | 1.01411202804304e-06
  279 |       2 |    0 |  1775517 |   788460 |     1.00098637606408
  279 |       2 |    2 |  1775517 |   788460 |     1.00098637606408
  279 |       3 |    0 |  1775517 |   788460 |     1.00098637606408
  279 |       3 |    2 |  1775517 |   788460 |     1.00098637606408
  279 |       3 |    3 |  1775517 |   788460 |     1.00098637606408
  279 |       4 |    0 |  1775517 |   788460 |     1.00098637606408
  279 |       4 |    1 |  1775517 |   788460 |     1.00098637606408
  279 |       4 |    2 |        1 |        1 |                    0
  279 |       5 |    0 |        1 |        1 |                    0
  279 |       5 |    1 |        1 |        1 |                    0
  279 |       6 |    0 |       20 |       20 |                    0
  279 |       6 |    1 |        1 |        1 |                    0
  279 |       7 |    0 |        1 |        1 |                    0
  279 |       7 |    1 |        0 |        0 |                    0
(19 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_REPORT.md
9e48d2cd514e-0
TIMESTAMP_CMP_DATE compares the value of a time stamp and a date. If the time stamp and date values are identical, the function returns 0. If the time stamp is greater, the function returns 1. If the date is greater, the function returns –1.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP_DATE.md
f9b1ade4cdf9-0
```
TIMESTAMP_CMP_DATE(timestamp, date)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP_DATE.md
3fae48f9b9e9-0
*timestamp*
A TIMESTAMP column or an expression that implicitly converts to a time stamp.

*date*
A DATE column or an expression that implicitly converts to a date.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP_DATE.md
3096d753c9f2-0
INTEGER
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP_DATE.md
bad101b01a0e-0
The following example compares LISTTIME to the date `2008-06-18`. Listings made after this date return `1`; listings made before this date return `-1`.

```
select listid, listtime,
timestamp_cmp_date(listtime, '2008-06-18')
from listing
order by 1, 2, 3
limit 10;

listid |      listtime       | timestamp_cmp_date
-------+---------------------+--------------------
     1 | 2008-01-24 06:43:29 |                 -1
     2 | 2008-03-05 12:25:29 |                 -1
     3 | 2008-11-01 07:35:33 |                  1
     4 | 2008-05-24 01:18:37 |                 -1
     5 | 2008-05-17 02:29:11 |                 -1
     6 | 2008-08-15 02:08:13 |                  1
     7 | 2008-11-15 09:38:15 |                  1
     8 | 2008-11-09 05:07:30 |                  1
     9 | 2008-09-09 08:03:36 |                  1
    10 | 2008-06-17 09:44:54 |                 -1
(10 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP_DATE.md
88e753678844-0
Defines a new user group. Only a superuser can create a group.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_GROUP.md
6f1a62f6b72d-0
```
CREATE GROUP group_name
[ [ WITH ] [ USER username ] [, ...] ]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_GROUP.md
3c5726feba61-0
*group_name*
Name of the new user group. Group names beginning with two underscores are reserved for Amazon Redshift internal use. For more information about valid names, see [Names and identifiers](r_names.md).

WITH
Optional syntax to indicate additional parameters for CREATE GROUP.

USER
Add one or more users to the group.

*username*
Name of the user to add to the group.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_GROUP.md
cfb4e58e23c4-0
The following example creates a user group named ADMIN_GROUP with two users, ADMIN1 and ADMIN2.

```
create group admin_group
with user admin1, admin2;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_GROUP.md
c88517d96ae6-0
Commits the current transaction. Performs exactly the same function as the COMMIT command. See [COMMIT](r_COMMIT.md) for more detailed documentation.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_END.md
661cda6016d4-0
```
END [ WORK | TRANSACTION ]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_END.md
55456ea84d8e-0
WORK
Optional keyword.

TRANSACTION
Optional keyword; WORK and TRANSACTION are synonyms.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_END.md
e868fb99da3b-0
The following examples all end the transaction block and commit the transaction:

```
end;
```

```
end work;
```

```
end transaction;
```

After any of these commands, Amazon Redshift ends the transaction block and commits the changes.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_END.md
e6e8a4cc9651-0
ST_MakeLine creates a linestring from the input geometries.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakeLine-function.md
b40e03f219c9-0
```
ST_MakeLine(geom1, geom2)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakeLine-function.md
de973840bd2b-0
*geom1*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type. The subtype must be `POINT`, `LINESTRING`, or `MULTIPOINT`.

*geom2*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type. The subtype must be `POINT`, `LINESTRING`, or `MULTIPOINT`.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakeLine-function.md
0f9a3800e767-0
`GEOMETRY` of subtype `LINESTRING`.

The spatial reference system identifier (SRID) value of the returned geometry is the SRID value of the input geometries.

If *geom1* or *geom2* is null, then null is returned.

If *geom1* and *geom2* have different SRID values, then an error is returned.

If *geom1* or *geom2* is not a `POINT`, `LINESTRING`, or `MULTIPOINT`, then an error is returned.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakeLine-function.md
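The null-propagation and matching-SRID rules above can be sketched with plain `(srid, points)` tuples standing in for `GEOMETRY` values (the representation and the `make_line` helper are invented for this example, and the subtype check is omitted for brevity):

```python
def make_line(geom1, geom2):
    # geomN is (srid, [(x, y), ...]); mirrors the rules above:
    # null propagates, SRIDs must match, and the output linestring
    # is the concatenation of the input point sequences.
    if geom1 is None or geom2 is None:
        return None
    srid1, pts1 = geom1
    srid2, pts2 = geom2
    if srid1 != srid2:
        raise ValueError("input geometries have different SRID values")
    return (srid1, pts1 + pts2)

a = (4326, [(77.29, 29.07), (77.42, 29.26)])
b = (4326, [(88.29, 39.07), (88.42, 39.26)])
line = make_line(a, b)
print(line[0], len(line[1]))   # 4326 4
print(make_line(a, None))      # None
```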
7211ec730fdf-0
The following SQL constructs a linestring from two input linestrings.

```
SELECT ST_MakeLine(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'),
ST_GeomFromText('LINESTRING(88.29 39.07,88.42 39.26,88.27 39.31,88.29 39.07)'));

st_makeline
-----------
010200000008000000C3F5285C8F52534052B81E85EB113D407B14AE47E15A5340C3F5285C8F423D40E17A14AE475153408FC2F5285C4F3D40C3F5285C8F52534052B81E85EB113D40C3F5285C8F125640295C8FC2F58843407B14AE47E11A5640E17A14AE47A14340E17A14AE4711564048E17A14AEA74340C3F5285C8F125640295C8FC2F5884340
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakeLine-function.md
97df7e4885de-0
**Note**
These examples contain line breaks for readability\. Do not include line breaks or spaces in your *credentials\-args* string\.

**Topics**
+ [Load FAVORITEMOVIES from a DynamoDB table](#r_COPY_command_examples-load-favoritemovies-from-an-amazon-dynamodb-table)
+ [Load LISTING from an Amazon S3 bucket](#r_COPY_command_examples-load-listing-from-an-amazon-s3-bucket)
+ [Load LISTING from an Amazon EMR cluster](#copy-command-examples-emr)
+ [Example: COPY from Amazon S3 using a manifest](#copy-command-examples-manifest)
+ [Load LISTING from a pipe\-delimited file \(default delimiter\)](#r_COPY_command_examples-load-listing-from-a-pipe-delimited-file-default-delimiter)
+ [Load LISTING using columnar data in Parquet format](#r_COPY_command_examples-load-listing-from-parquet)
+ [Load LISTING using temporary credentials](#sub-example-load-favorite-movies)
+ [Load EVENT with options](#r_COPY_command_examples-load-event-with-options)
+ [Load VENUE from a fixed\-width data file](#r_COPY_command_examples-load-venue-from-a-fixed-width-data-file)
+ [Load CATEGORY from a CSV file](#load-from-csv)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
97df7e4885de-1
+ [Load VENUE with explicit values for an IDENTITY column](#r_COPY_command_examples-load-venue-with-explicit-values-for-an-identity-column)
+ [Load TIME from a pipe\-delimited GZIP file](#r_COPY_command_examples-load-time-from-a-pipe-delimited-gzip-file)
+ [Load a timestamp or datestamp](#r_COPY_command_examples-load-a-time-datestamp)
+ [Load data from a file with default values](#r_COPY_command_examples-load-data-from-a-file-with-default-values)
+ [COPY data with the ESCAPE option](#r_COPY_command_examples-copy-data-with-the-escape-option)
+ [Copy from JSON examples](#r_COPY_command_examples-copy-from-json)
+ [Copy from Avro examples](#r_COPY_command_examples-copy-from-avro)
+ [Preparing files for COPY with the ESCAPE option](#r_COPY_preparing_data)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
f121312e35d4-0
The AWS SDKs include a simple example of creating a DynamoDB table called *Movies*\. \(For this example, see [Getting Started with DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.html)\.\) The following example loads the Amazon Redshift MOVIES table with data from the DynamoDB table\. The Amazon Redshift table must already exist in the database\.

```
copy favoritemovies from 'dynamodb://Movies'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
readratio 50;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
8e5c28a08f76-0
The following example loads LISTING from an Amazon S3 bucket\. The COPY command loads all of the files in the `/data/listing/` folder\.

```
copy listing
from 's3://mybucket/data/listing/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
5a994565340d-0
The following example loads the SALES table with tab\-delimited data from lzop\-compressed files in an Amazon EMR cluster\. COPY loads every file in the `myoutput/` folder that begins with `part-`\.

```
copy sales
from 'emr://j-SAMPLE2B500FC/myoutput/part-*'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '\t' lzop;
```

The following example loads the SALES table with JSON formatted data in an Amazon EMR cluster\. COPY loads every file in the `myoutput/json/` folder\.

```
copy sales
from 'emr://j-SAMPLE2B500FC/myoutput/json/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
JSON 's3://mybucket/jsonpaths.txt';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
86ebb1bee0f3-0
You can use a manifest to ensure that your COPY command loads all of the required files, and only the required files, from Amazon S3\. You can also use a manifest when you need to load multiple files from different buckets or files that don't share the same prefix\.

For example, suppose that you need to load the following three files: `custdata1.txt`, `custdata2.txt`, and `custdata3.txt`\. You could use the following command to load all of the files in `mybucket` that begin with `custdata` by specifying a prefix:

```
copy category
from 's3://mybucket/custdata'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```

If only two of the files exist because of an error, COPY loads only those two files and finishes successfully, resulting in an incomplete data load\. If the bucket also contains an unwanted file that happens to use the same prefix, such as a file named `custdata.backup`, COPY loads that file as well, resulting in unwanted data being loaded\.

To ensure that all of the required files are loaded and to prevent unwanted files from being loaded, you can use a manifest file\. The manifest is a JSON\-formatted text file that lists the files to be processed by the COPY command\. For example, the following manifest loads the three files in the previous example\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
86ebb1bee0f3-1
```
{
  "entries":[
    {
      "url":"s3://mybucket/custdata.1",
      "mandatory":true
    },
    {
      "url":"s3://mybucket/custdata.2",
      "mandatory":true
    },
    {
      "url":"s3://mybucket/custdata.3",
      "mandatory":true
    }
  ]
}
```

The optional `mandatory` flag indicates whether COPY should terminate if the file doesn't exist\. The default is `false`\. Regardless of any mandatory settings, COPY terminates if no files are found\. In this example, COPY returns an error if any of the files isn't found\. Unwanted files that might have been picked up if you specified only a key prefix, such as `custdata.backup`, are ignored, because they aren't on the manifest\.

When loading from data files in ORC or Parquet format, a `meta` field is required, as shown in the following example\.

```
{
  "entries":[
    {
      "url":"s3://mybucket-alpha/orc/2013-10-04-custdata",
      "mandatory":true,
      "meta":{
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
86ebb1bee0f3-2
        "content_length":99
      }
    },
    {
      "url":"s3://mybucket-beta/orc/2013-10-05-custdata",
      "mandatory":true,
      "meta":{
        "content_length":99
      }
    }
  ]
}
```

The following example uses a manifest named `cust.manifest`\.

```
copy customer
from 's3://mybucket/cust.manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest;
```

You can use a manifest to load files from different buckets or files that don't share the same prefix\. The following example shows the JSON to load data with files whose names begin with a date stamp\.

```
{
  "entries": [
    {"url":"s3://mybucket/2013-10-04-custdata.txt","mandatory":true},
    {"url":"s3://mybucket/2013-10-05-custdata.txt","mandatory":true},
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
86ebb1bee0f3-3
    {"url":"s3://mybucket/2013-10-06-custdata.txt","mandatory":true},
    {"url":"s3://mybucket/2013-10-07-custdata.txt","mandatory":true}
  ]
}
```

The manifest can list files that are in different buckets, as long as the buckets are in the same AWS Region as the cluster\.

```
{
  "entries": [
    {"url":"s3://mybucket-alpha/custdata1.txt","mandatory":false},
    {"url":"s3://mybucket-beta/custdata1.txt","mandatory":false},
    {"url":"s3://mybucket-beta/custdata2.txt","mandatory":false}
  ]
}
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
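Because a manifest is plain JSON, it is often easier to generate than to hand\-edit\. The following sketch builds a manifest like the ones above; `build_manifest` is a hypothetical helper, not part of any AWS SDK\.

```python
import json

def build_manifest(urls, mandatory=True):
    # Hypothetical helper: list every file explicitly so COPY loads
    # exactly these objects. "mandatory": true makes COPY fail if a
    # listed file is missing, instead of loading an incomplete set.
    return {"entries": [{"url": u, "mandatory": mandatory} for u in urls]}

manifest = build_manifest([
    "s3://mybucket/custdata.1",
    "s3://mybucket/custdata.2",
    "s3://mybucket/custdata.3",
])
print(json.dumps(manifest, indent=2))
```

Writing `json.dumps(manifest)` to a file and uploading that file to Amazon S3 yields a manifest equivalent to the first example above\.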
7f5cbc891606-0
The following example is a very simple case in which no options are specified and the input file contains the default delimiter, a pipe character \('\|'\)\.

```
copy listing
from 's3://mybucket/data/listings_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
65a841fab80d-0
The following example loads data from a folder on Amazon S3 named parquet\.

```
copy listing
from 's3://mybucket/data/listings/parquet/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
format as parquet;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
5bf2b174f56e-0
The following example uses the SESSION\_TOKEN parameter to specify temporary session credentials:

```
copy listing
from 's3://mybucket/data/listings_pipe.txt'
access_key_id '<access-key-id>'
secret_access_key '<secret-access-key>'
session_token '<temporary-token>';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
40f275a69a64-0
The following example loads pipe\-delimited data into the EVENT table and applies the following rules:
+ If pairs of quotation marks are used to surround any character strings, they are removed\.
+ Both empty strings and strings that contain blanks are loaded as NULL values\.
+ The load fails if more than 5 errors are returned\.
+ Timestamp values must comply with the specified format; for example, a valid timestamp is `2008-09-26 05:43:12`\.

```
copy event
from 's3://mybucket/data/allevents_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
removequotes
emptyasnull
blanksasnull
maxerror 5
delimiter '|'
timeformat 'YYYY-MM-DD HH:MI:SS';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
b7d42325d599-0
```
copy venue
from 's3://mybucket/data/venue_fw.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6';
```

The preceding example assumes a data file formatted in the same way as the sample data shown\. In the sample following, spaces act as placeholders so that all of the columns are the same width as noted in the specification:

```
1  Toyota Park              Bridgeview  IL0
2  Columbus Crew Stadium    Columbus    OH0
3  RFK Stadium              Washington  DC0
4  CommunityAmerica BallparkKansas City KS0
5  Gillette Stadium         Foxborough  MA68756
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
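To see how a `fixedwidth` specification carves up each line, the following sketch applies the same `'name:width'` offsets in Python\. The `parse_fixedwidth` helper is illustrative only, not a Redshift API\.

```python
def parse_fixedwidth(line, spec):
    # Split a fixed-width row by the cumulative widths in a
    # Redshift-style 'name:width,...' specification.
    fields, offset = {}, 0
    for part in spec.split(","):
        name, width = part.split(":")
        width = int(width)
        fields[name] = line[offset:offset + width].strip()
        offset += width
    return fields

spec = "venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6"
# Build the Gillette Stadium sample row with explicit padding.
row = "5".ljust(3) + "Gillette Stadium".ljust(25) + "Foxborough".ljust(12) + "MA" + "68756"
print(parse_fixedwidth(row, spec))
```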
70d0e484d930-0
Suppose you want to load the CATEGORY table with the values shown in the following table\.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html)

The following example shows the contents of a text file with the field values separated by commas\.

```
12,Shows,Musicals,Musical theatre
13,Shows,Plays,All "non-musical" theatre
14,Shows,Opera,All opera, light, and "rock" opera
15,Concerts,Classical,All symphony, concerto, and choir concerts
```

If you load the file using the DELIMITER parameter to specify comma\-delimited input, the COPY command fails because some input fields contain commas\. You can avoid that problem by using the CSV parameter and enclosing the fields that contain commas in quote characters\. If the quote character appears within a quoted string, you need to escape it by doubling the quote character\. The default quote character is a double quotation mark, so you need to escape each double quotation mark with an additional double quotation mark\. Your new input file looks something like this\.

```
12,Shows,Musicals,Musical theatre
13,Shows,Plays,"All ""non-musical"" theatre"
14,Shows,Opera,"All opera, light, and ""rock"" opera"
15,Concerts,Classical,"All symphony, concerto, and choir concerts"
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
70d0e484d930-1
```

Assuming the file name is `category_csv.txt`, you can load the file by using the following COPY command:

```
copy category
from 's3://mybucket/data/category_csv.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
csv;
```

Alternatively, to avoid the need to escape the double quotation marks in your input, you can specify a different quote character by using the QUOTE AS parameter\. For example, the following version of `category_csv.txt` uses '`%`' as the quote character:

```
12,Shows,Musicals,Musical theatre
13,Shows,Plays,%All "non-musical" theatre%
14,Shows,Opera,%All opera, light, and "rock" opera%
15,Concerts,Classical,%All symphony, concerto, and choir concerts%
```

The following COPY command uses QUOTE AS to load `category_csv.txt`:

```
copy category
from 's3://mybucket/data/category_csv.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
70d0e484d930-2
csv quote as '%';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
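The quote\-doubling convention that the CSV parameter expects is the same one most CSV writers produce by default\. As a sketch, Python's standard `csv` module emits exactly this form:

```python
import csv
import io

rows = [
    [13, "Shows", "Plays", 'All "non-musical" theatre'],
    [14, "Shows", "Opera", 'All opera, light, and "rock" opera'],
]

buf = io.StringIO()
# QUOTE_MINIMAL quotes only fields containing the delimiter or the quote
# character, and doubles embedded quotes -- the form COPY's CSV parameter reads.
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerows(rows)
print(buf.getvalue())
```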
2f42999c4abd-0
The following example assumes that, when the VENUE table was created, at least one column \(such as the `venueid` column\) was specified to be an IDENTITY column\. This command overrides the default IDENTITY behavior of autogenerating values for an IDENTITY column and instead loads the explicit values from the venue\.txt file\.

```
copy venue
from 's3://mybucket/data/venue.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
explicit_ids;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
63c85a12a296-0
The following example loads the TIME table from a pipe\-delimited GZIP file:

```
copy time
from 's3://mybucket/data/timerows.gz'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
gzip
delimiter '|';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
924be4b9a14a-0
The following example loads data with a formatted timestamp\.

**Note**
The TIMEFORMAT of `HH:MI:SS` can also support fractional seconds beyond the `SS` to a microsecond level of detail\. The file `time.txt` used in this example contains one row, `2009-01-12 14:15:57.119568`\.

```
copy timestamp1
from 's3://mybucket/data/time.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
timeformat 'YYYY-MM-DD HH:MI:SS';
```

The result of this copy is as follows:

```
select * from timestamp1;
             c1
----------------------------
2009-01-12 14:15:57.119568
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
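As a rough cross\-check of the format string, Redshift's `'YYYY-MM-DD HH:MI:SS'` corresponds to `strptime`'s `'%Y-%m-%d %H:%M:%S'`; capturing the fractional seconds from the sample row in Python requires an explicit `%f`:

```python
from datetime import datetime

# The single row from time.txt in the example above. Note that COPY accepts
# the fractional seconds without them appearing in TIMEFORMAT, whereas
# strptime needs '.%f' spelled out.
raw = "2009-01-12 14:15:57.119568"
ts = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f")
print(ts.microsecond)
```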
81f9059e0c49-0
The following example uses a variation of the VENUE table in the TICKIT database\. Consider a VENUE\_NEW table defined with the following statement:

```
create table venue_new(
venueid smallint not null,
venuename varchar(100) not null,
venuecity varchar(30),
venuestate char(2),
venueseats integer not null default '1000');
```

Consider a venue\_noseats\.txt data file that contains no values for the VENUESEATS column, as shown in the following example:

```
1|Toyota Park|Bridgeview|IL|
2|Columbus Crew Stadium|Columbus|OH|
3|RFK Stadium|Washington|DC|
4|CommunityAmerica Ballpark|Kansas City|KS|
5|Gillette Stadium|Foxborough|MA|
6|New York Giants Stadium|East Rutherford|NJ|
7|BMO Field|Toronto|ON|
8|The Home Depot Center|Carson|CA|
9|Dick's Sporting Goods Park|Commerce City|CO|
10|Pizza Hut Park|Frisco|TX|
```

The following COPY statement will successfully load the table from the file and apply the DEFAULT value \('1000'\) to the omitted column:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
81f9059e0c49-1
```
copy venue_new(venueid, venuename, venuecity, venuestate)
from 's3://mybucket/data/venue_noseats.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';
```

Now view the loaded table:

```
select * from venue_new order by venueid;

 venueid |         venuename          |    venuecity    | venuestate | venueseats
---------+----------------------------+-----------------+------------+------------
       1 | Toyota Park                | Bridgeview      | IL         |       1000
       2 | Columbus Crew Stadium      | Columbus        | OH         |       1000
       3 | RFK Stadium                | Washington      | DC         |       1000
       4 | CommunityAmerica Ballpark  | Kansas City     | KS         |       1000
       5 | Gillette Stadium           | Foxborough      | MA         |       1000
       6 | New York Giants Stadium    | East Rutherford | NJ         |       1000
       7 | BMO Field                  | Toronto         | ON         |       1000
       8 | The Home Depot Center      | Carson          | CA         |       1000
       9 | Dick's Sporting Goods Park | Commerce City   | CO         |       1000
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
81f9059e0c49-2
      10 | Pizza Hut Park             | Frisco          | TX         |       1000
(10 rows)
```

For the following example, in addition to assuming that no VENUESEATS data is included in the file, also assume that no VENUENAME data is included:

```
1||Bridgeview|IL|
2||Columbus|OH|
3||Washington|DC|
4||Kansas City|KS|
5||Foxborough|MA|
6||East Rutherford|NJ|
7||Toronto|ON|
8||Carson|CA|
9||Commerce City|CO|
10||Frisco|TX|
```

Using the same table definition, the following COPY statement fails because no DEFAULT value was specified for VENUENAME, and VENUENAME is a NOT NULL column:

```
copy venue(venueid, venuecity, venuestate)
from 's3://mybucket/data/venue_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';
```

Now consider a variation of the VENUE table that uses an IDENTITY column:

```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
81f9059e0c49-3
create table venue_identity(
venueid int identity(1,1),
venuename varchar(100) not null,
venuecity varchar(30),
venuestate char(2),
venueseats integer not null default '1000');
```

As with the previous example, assume that the VENUESEATS column has no corresponding values in the source file\. The following COPY statement successfully loads the table, including the predefined IDENTITY data values instead of autogenerating those values:

```
copy venue(venueid, venuename, venuecity, venuestate)
from 's3://mybucket/data/venue_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|' explicit_ids;
```

This statement fails because it doesn't include the IDENTITY column \(VENUEID is missing from the column list\) yet includes an EXPLICIT\_IDS parameter:

```
copy venue(venuename, venuecity, venuestate)
from 's3://mybucket/data/venue_pipe.txt'
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
81f9059e0c49-4
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|' explicit_ids;
```

This statement fails because it doesn't include an EXPLICIT\_IDS parameter:

```
copy venue(venueid, venuename, venuecity, venuestate)
from 's3://mybucket/data/venue_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
359737baa08f-0
The following example shows how to load characters that match the delimiter character \(in this case, the pipe character\)\. In the input file, make sure that all of the pipe characters \(\|\) that you want to load are escaped with the backslash character \(\\\)\. Then load the file with the ESCAPE parameter\.

```
$ more redshiftinfo.txt
1|public\|event\|dwuser
2|public\|sales\|dwuser

create table redshiftinfo(infoid int,tableinfo varchar(50));

copy redshiftinfo from 's3://mybucket/data/redshiftinfo.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|' escape;

select * from redshiftinfo order by 1;
infoid |      tableinfo
-------+--------------------
     1 | public|event|dwuser
     2 | public|sales|dwuser
(2 rows)
```

Without the ESCAPE parameter, this COPY command fails with an `Extra column(s) found` error\.

**Important**
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
359737baa08f-1
If you load your data using a COPY with the ESCAPE parameter, you must also specify the ESCAPE parameter with your UNLOAD command to generate the reciprocal output file\. Similarly, if you UNLOAD using the ESCAPE parameter, you need to use ESCAPE when you COPY the same data\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
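If you generate the input file yourself, the backslash\-escaping can be applied programmatically\. A minimal sketch follows; the `escape_delimiters` helper is hypothetical, not part of any AWS tooling\.

```python
def escape_delimiters(field, delimiter="|", escape="\\"):
    # Backslash-escape the escape character first, then the delimiter,
    # matching the form that COPY ... ESCAPE expects in the input file.
    return field.replace(escape, escape + escape).replace(delimiter, escape + delimiter)

# Reproduce the first data row from redshiftinfo.txt above.
line = "1" + "|" + escape_delimiters("public|event|dwuser")
print(line)
```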
8560b2eaca50-0
In the following examples, you load the CATEGORY table with the following data\.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html)

**Topics**
+ [Load from JSON data using the 'auto' option](#copy-from-json-examples-using-auto)
+ [Load from JSON data using a JSONPaths file](#copy-from-json-examples-using-jsonpaths)
+ [Load from JSON arrays using a JSONPaths file](#copy-from-json-examples-using-jsonpaths-arrays)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
f9b43e885a22-0
To load from JSON data using the `'auto'` argument, the JSON data must consist of a set of objects\. The key names must match the column names, but in this case, order doesn't matter\. The following shows the contents of a file named `category_object_auto.json`\.

```
{
    "catdesc": "Major League Baseball",
    "catid": 1,
    "catgroup": "Sports",
    "catname": "MLB"
}
{
    "catgroup": "Sports",
    "catid": 2,
    "catname": "NHL",
    "catdesc": "National Hockey League"
}
{
    "catid": 3,
    "catname": "NFL",
    "catgroup": "Sports",
    "catdesc": "National Football League"
}
{
    "bogus": "Bogus Sports LLC",
    "catid": 4,
    "catgroup": "Sports",
    "catname": "NBA",
    "catdesc": "National Basketball Association"
}
{
    "catid": 5,
    "catgroup": "Shows",
    "catname": "Musicals",
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
f9b43e885a22-1
    "catdesc": "All symphony, concerto, and choir concerts"
}
```

To load from the JSON data file in the previous example, execute the following COPY command\.

```
copy category
from 's3://mybucket/category_object_auto.json'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
json 'auto';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
d36257de2b72-0
If the JSON data objects don't correspond directly to column names, you can use a JSONPaths file to map the JSON elements to columns\. Again, the order doesn't matter in the JSON source data, but the order of the JSONPaths file expressions must match the column order\. Suppose that you have the following data file, named `category_object_paths.json`\.

```
{
    "one": 1,
    "two": "Sports",
    "three": "MLB",
    "four": "Major League Baseball"
}
{
    "three": "NHL",
    "four": "National Hockey League",
    "one": 2,
    "two": "Sports"
}
{
    "two": "Sports",
    "three": "NFL",
    "one": 3,
    "four": "National Football League"
}
{
    "one": 4,
    "two": "Sports",
    "three": "NBA",
    "four": "National Basketball Association"
}
{
    "one": 6,
    "two": "Shows",
    "three": "Musicals",
    "four": "All symphony, concerto, and choir concerts"
}
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_command_examples.md
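A JSONPaths file for this data would map expressions such as `$['one']` through `$['four']` to the four CATEGORY columns in order\. The following sketch mimics that mapping for simple bracket paths only; Redshift's actual JSONPaths support is richer, and `extract_row` is an illustrative helper, not a Redshift API\.

```python
import json
import re

# Assumed JSONPaths expressions, one per target column, in column order.
jsonpaths = ["$['one']", "$['two']", "$['three']", "$['four']"]

def extract_row(obj, paths):
    # Resolve each $['key'] path against one JSON object; the output
    # list is ordered like the table's columns, not like the JSON keys.
    cols = []
    for p in paths:
        key = re.fullmatch(r"\$\['(.+)'\]", p).group(1)
        cols.append(obj.get(key))
    return cols

record = json.loads('{"three": "NHL", "four": "National Hockey League", "one": 2, "two": "Sports"}')
print(extract_row(record, jsonpaths))
```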