id | text | source
|---|---|---|
4d87f8b379b0-0
|
```
COPY table-name
[ column-list ]
FROM data_source
authorization
[ [ FORMAT ] [ AS ] data_format ]
[ parameter [ argument ] [, ... ] ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
cb60e80b17de-0
|
You can perform a COPY operation with as few as three parameters: a table name, a data source, and authorization to access the data\.
Amazon Redshift extends the functionality of the COPY command to enable you to load data in several data formats from multiple data sources, control access to load data, manage data transformations, and manage the load operation\.
This section presents the required COPY command parameters and groups the optional parameters by function\. Subsequent topics describe each parameter and explain how various options work together\. You can also go directly to a parameter description by using the alphabetical parameter list\.
**Topics**
+ [Required parameters](#r_COPY-syntax-required-parameters)
+ [Optional parameters](#r_COPY-syntax-overview-optional-parameters)
+ [Using the COPY command](#r_COPY-using-the-copy-command)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
efd6eaf8c674-0
|
The COPY command requires three elements:
+ [Table Name](#r_COPY-syntax-overview-table-name)
+ [Data Source](#r_COPY-syntax-overview-data-source)
+ [Authorization](#r_COPY-syntax-overview-credentials)
The simplest COPY command uses the following format\.
```
COPY table-name
FROM data-source
authorization;
```
The following example creates a table named CATDEMO, and then loads the table with sample data from a data file in Amazon S3 named `category_pipe.txt`\.
```
create table catdemo(catid smallint, catgroup varchar(10), catname varchar(10), catdesc varchar(50));
```
In the following example, the data source for the COPY command is a data file named `category_pipe.txt` in the `tickit` folder of an Amazon S3 bucket named `awssampledbuswest2`\. The COPY command is authorized to access the Amazon S3 bucket through an AWS Identity and Access Management \(IAM\) role\. If your cluster has an existing IAM role with permission to access Amazon S3 attached, you can substitute your role's Amazon Resource Name \(ARN\) in the following COPY command and execute it\.
```
copy catdemo
from 's3://awssampledbuswest2/tickit/category_pipe.txt'
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
efd6eaf8c674-1
|
copy catdemo
from 's3://awssampledbuswest2/tickit/category_pipe.txt'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
region 'us-west-2';
```
For steps to create an IAM role, see [Step 2: Create an IAM Role](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-create-an-iam-role.html) in the *Amazon Redshift Getting Started*\. For complete instructions on how to use COPY commands to load sample data, including instructions for loading data from other AWS regions, see [Step 6: Load Sample Data from Amazon S3](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-create-sample-db.html) in the *Amazon Redshift Getting Started*\.
*table\-name* <a name="r_COPY-syntax-overview-table-name"></a>
The name of the target table for the COPY command\. The table must already exist in the database\. The table can be temporary or persistent\. The COPY command appends the new input data to any existing rows in the table\.
FROM *data\-source* <a name="r_COPY-syntax-overview-data-source"></a>
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
efd6eaf8c674-2
|
FROM *data\-source* <a name="r_COPY-syntax-overview-data-source"></a>
The location of the source data to be loaded into the target table\. A manifest file can be specified with some data sources\.
The most commonly used data repository is an Amazon S3 bucket\. You can also load from data files located in an Amazon EMR cluster, an Amazon EC2 instance, or a remote host that your cluster can access using an SSH connection, or you can load directly from a DynamoDB table\.
+ [COPY from Amazon S3](copy-parameters-data-source-s3.md)
+ [COPY from Amazon EMR](copy-parameters-data-source-emr.md)
+ [COPY from remote host \(SSH\)](copy-parameters-data-source-ssh.md)
+ [COPY from Amazon DynamoDB](copy-parameters-data-source-dynamodb.md)
Authorization <a name="r_COPY-syntax-overview-credentials"></a>
A clause that indicates the method that your cluster uses for authentication and authorization to access other AWS resources\. The COPY command needs authorization to access data in another AWS resource, including in Amazon S3, Amazon EMR, Amazon DynamoDB, and Amazon EC2\. You can provide that authorization by referencing an IAM role that is attached to your cluster or by providing the access key ID and secret access key for an IAM user\.
+ [Authorization parameters](copy-parameters-authorization.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
efd6eaf8c674-3
|
+ [Authorization parameters](copy-parameters-authorization.md)
+ [Role\-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-role-based)
+ [Key\-based access control](copy-usage_notes-access-permissions.md#copy-usage_notes-access-key-based)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
9d04f9b3562c-0
|
You can optionally specify how COPY maps field data to columns in the target table, define source data attributes to enable the COPY command to correctly read and parse the source data, and manage which operations the COPY command performs during the load process\.
+ [Column mapping options](copy-parameters-column-mapping.md)
+ [Data format parameters](#r_COPY-syntax-overview-data-format)
+ [Data conversion parameters](#r_COPY-syntax-overview-data-conversion)
+ [Data load operations](#r_COPY-syntax-overview-data-load)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
a1e76837ab76-0
|
By default, COPY inserts field values into the target table's columns in the same order as the fields occur in the data files\. If the default column order will not work, you can specify a column list or use JSONPath expressions to map source data fields to the target columns \(see the example following this list\)\.
+ [Column List](copy-parameters-column-mapping.md#copy-column-list)
+ [JSONPaths File](copy-parameters-column-mapping.md#copy-column-mapping-jsonpaths)
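For illustration, here is a minimal sketch of a COPY command with a column list\. The table is the CATDEMO example from earlier in this topic; the file name, bucket path, and role ARN are placeholder assumptions\. Columns omitted from the list are loaded with their DEFAULT expression or NULL\.
```
copy catdemo (catid, catname)
from 's3://mybucket/data/category_subset.csv'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
csv;
```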
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
f8cc849092bb-0
|
You can load data from text files in fixed\-width, character\-delimited, comma\-separated values \(CSV\), or JSON format, or from Avro files\.
By default, the COPY command expects the source data to be in character\-delimited UTF\-8 text files\. The default delimiter is a pipe character \( \| \)\. If the source data is in another format, use the following parameters to specify the data format\.
+ [FORMAT](copy-parameters-data-format.md#copy-format)
+ [CSV](copy-parameters-data-format.md#copy-csv)
+ [DELIMITER](copy-parameters-data-format.md#copy-delimiter)
+ [FIXEDWIDTH](copy-parameters-data-format.md#copy-fixedwidth)
+ [AVRO](copy-parameters-data-format.md#copy-avro)
+ [JSON](copy-parameters-data-format.md#copy-json)
+ [ENCRYPTED](copy-parameters-data-source-s3.md#copy-encrypted)
+ [BZIP2](copy-parameters-file-compression.md#copy-bzip2)
+ [GZIP](copy-parameters-file-compression.md#copy-gzip)
+ [LZOP](copy-parameters-file-compression.md#copy-lzop)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
f8cc849092bb-1
|
+ [LZOP](copy-parameters-file-compression.md#copy-lzop)
+ [PARQUET](copy-parameters-data-format.md#copy-parquet)
+ [ORC](copy-parameters-data-format.md#copy-orc)
+ [ZSTD](copy-parameters-file-compression.md#copy-zstd)
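As a hedged example, a COPY from columnar data might look like the following sketch; the table name, bucket path, and role ARN are placeholder assumptions\.
```
copy listing
from 's3://mybucket/data/listings/parquet/'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
format as parquet;
```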
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
d052c5d7b148-0
|
As it loads the table, COPY attempts to implicitly convert the strings in the source data to the data type of the target column\. If you need to specify a conversion that is different from the default behavior, or if the default conversion results in errors, you can manage data conversions by specifying the following parameters\.
+ [ACCEPTANYDATE](copy-parameters-data-conversion.md#copy-acceptanydate)
+ [ACCEPTINVCHARS](copy-parameters-data-conversion.md#copy-acceptinvchars)
+ [BLANKSASNULL](copy-parameters-data-conversion.md#copy-blanksasnull)
+ [DATEFORMAT](copy-parameters-data-conversion.md#copy-dateformat)
+ [EMPTYASNULL](copy-parameters-data-conversion.md#copy-emptyasnull)
+ [ENCODING](copy-parameters-data-conversion.md#copy-encoding)
+ [ESCAPE](copy-parameters-data-conversion.md#copy-escape)
+ [EXPLICIT_IDS](copy-parameters-data-conversion.md#copy-explicit-ids)
+ [FILLRECORD](copy-parameters-data-conversion.md#copy-fillrecord)
+ [IGNOREBLANKLINES](copy-parameters-data-conversion.md#copy-ignoreblanklines)
+ [IGNOREHEADER](copy-parameters-data-conversion.md#copy-ignoreheader)
+ [NULL AS](copy-parameters-data-conversion.md#copy-null-as)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
d052c5d7b148-1
|
+ [NULL AS](copy-parameters-data-conversion.md#copy-null-as)
+ [REMOVEQUOTES](copy-parameters-data-conversion.md#copy-removequotes)
+ [ROUNDEC](copy-parameters-data-conversion.md#copy-roundec)
+ [TIMEFORMAT](copy-parameters-data-conversion.md#copy-timeformat)
+ [TRIMBLANKS](copy-parameters-data-conversion.md#copy-trimblanks)
+ [TRUNCATECOLUMNS](copy-parameters-data-conversion.md#copy-truncatecolumns)
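The following hedged sketch combines several of these conversion parameters in one load; the table, file name, bucket path, and role ARN are placeholder assumptions\.
```
copy catdemo
from 's3://mybucket/data/category_with_header.csv'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
csv
ignoreheader 1
blanksasnull
emptyasnull
truncatecolumns;
```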
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
4101c6c64ff5-0
|
Manage the default behavior of the load operation for troubleshooting or to reduce load times by specifying the following parameters \(see the example following this list\)\.
+ [COMPROWS](copy-parameters-data-load.md#copy-comprows)
+ [COMPUPDATE](copy-parameters-data-load.md#copy-compupdate)
+ [MAXERROR](copy-parameters-data-load.md#copy-maxerror)
+ [NOLOAD](copy-parameters-data-load.md#copy-noload)
+ [STATUPDATE](copy-parameters-data-load.md#copy-statupdate)
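For example, the following hedged sketch validates data files without loading them: NOLOAD checks the files, and MAXERROR lets the check continue past up to 10 bad rows\. The table, bucket path, and role ARN are placeholder assumptions\.
```
copy catdemo
from 's3://mybucket/data/category_pipe.txt'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
noload
maxerror 10;
```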
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
680fdc0dd8cf-0
|
For more information about how to use the COPY command, see the following topics:
+ [COPY examples](r_COPY_command_examples.md)
+ [Usage notes](r_COPY_usage_notes.md)
+ [Tutorial: Loading data from Amazon S3](tutorial-loading-data.md)
+ [Amazon Redshift best practices for loading data](c_loading-data-best-practices.md)
+ [Using a COPY command to load data](t_Loading_tables_with_the_COPY_command.md)
+ [Loading data from Amazon S3](t_Loading-data-from-S3.md)
+ [Loading data from Amazon EMR](loading-data-from-emr.md)
+ [Loading data from remote hosts](loading-data-from-remote-hosts.md)
+ [Loading data from an Amazon DynamoDB table](t_Loading-data-from-dynamodb.md)
+ [Troubleshooting data loads](t_Troubleshooting_load_errors.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY.md
|
eaecf2d05004-0
|
ST\_XMin returns the minimum first coordinate of an input geometry\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_XMin-function.md
|
02b0dff9dda2-0
|
```
ST_XMin(geom)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_XMin-function.md
|
52fc2c478c78-0
|
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_XMin-function.md
|
0fb8e8e3cee4-0
|
`DOUBLE PRECISION` value of the minimum first coordinate\.
If *geom* is empty, then null is returned\.
If *geom* is null, then null is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_XMin-function.md
|
d84f2f903ff5-0
|
The following SQL returns the smallest first coordinate of a linestring\.
```
SELECT ST_XMin(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'));
```
```
st_xmin
-----------
77.27
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_XMin-function.md
|
4ffc697ddbc5-0
|
Retrieves rows using a cursor\. For information about declaring a cursor, see [DECLARE](declare.md)\.
FETCH retrieves rows based on the current position within the cursor\. When a cursor is created, it is positioned before the first row\. After a FETCH, the cursor is positioned on the last row retrieved\. If FETCH runs off the end of the available rows, such as following a FETCH ALL, the cursor is left positioned after the last row\.
FORWARD 0 fetches the current row without moving the cursor; that is, it fetches the most recently fetched row\. If the cursor is positioned before the first row or after the last row, no row is returned\.
When the first row of a cursor is fetched, the entire result set is materialized on the leader node, in memory or on disk, if needed\. Because of the potential negative performance impact of using cursors with large result sets, we recommend using alternative approaches whenever possible\. For more information, see [Performance considerations when using cursors](declare.md#declare-performance)\.
For more information, see [DECLARE](declare.md), [CLOSE](close.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/fetch.md
|
5f17a8b9aca4-0
|
```
FETCH [ NEXT | ALL | {FORWARD [ count | ALL ] } ] FROM cursor
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/fetch.md
|
0f8cac1e2f8c-0
|
NEXT
Fetches the next row\. This is the default\.
ALL
Fetches all remaining rows\. \(Same as FORWARD ALL\.\) ALL isn't supported for single\-node clusters\.
FORWARD \[ *count* \| ALL \]
Fetches the next *count* rows, or all remaining rows\. `FORWARD 0` fetches the current row\. For single\-node clusters, the maximum value for count is `1000`\. FORWARD ALL isn't supported for single\-node clusters\.
*cursor*
Name of the cursor\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/fetch.md
|
c8c4f723fa25-0
|
The following example declares a cursor named LOLLAPALOOZA to select sales information for the Lollapalooza event, and then fetches rows from the result set using the cursor:
```
-- Begin a transaction
begin;
-- Declare a cursor
declare lollapalooza cursor for
select eventname, starttime, pricepaid/qtysold as costperticket, qtysold
from sales, event
where sales.eventid = event.eventid
and eventname='Lollapalooza';
-- Fetch the first 5 rows in the cursor lollapalooza:
fetch forward 5 from lollapalooza;
eventname | starttime | costperticket | qtysold
--------------+---------------------+---------------+---------
Lollapalooza | 2008-05-01 19:00:00 | 92.00000000 | 3
Lollapalooza | 2008-11-15 15:00:00 | 222.00000000 | 2
Lollapalooza | 2008-04-17 15:00:00 | 239.00000000 | 3
Lollapalooza | 2008-04-17 15:00:00 | 239.00000000 | 4
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/fetch.md
|
c8c4f723fa25-1
|
Lollapalooza | 2008-04-17 15:00:00 | 239.00000000 | 4
Lollapalooza | 2008-04-17 15:00:00 | 239.00000000 | 1
(5 rows)
-- Fetch the next row:
fetch next from lollapalooza;
eventname | starttime | costperticket | qtysold
--------------+---------------------+---------------+---------
Lollapalooza | 2008-10-06 14:00:00 | 114.00000000 | 2
-- Close the cursor and end the transaction:
close lollapalooza;
commit;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/fetch.md
|
3c72b1b57796-0
|
STV tables are virtual system tables that contain snapshots of the current system data\.
**Topics**
+ [STV\_ACTIVE\_CURSORS](r_STV_ACTIVE_CURSORS.md)
+ [STV\_BLOCKLIST](r_STV_BLOCKLIST.md)
+ [STV\_CURSOR\_CONFIGURATION](r_STV_CURSOR_CONFIGURATION.md)
+ [STV\_EXEC\_STATE](r_STV_EXEC_STATE.md)
+ [STV\_INFLIGHT](r_STV_INFLIGHT.md)
+ [STV\_LOAD\_STATE](r_STV_LOAD_STATE.md)
+ [STV\_LOCKS](r_STV_LOCKS.md)
+ [STV\_MV\_INFO](r_STV_MV_INFO.md)
+ [STV\_PARTITIONS](r_STV_PARTITIONS.md)
+ [STV\_QUERY\_METRICS](r_STV_QUERY_METRICS.md)
+ [STV\_RECENTS](r_STV_RECENTS.md)
+ [STV\_SESSIONS](r_STV_SESSIONS.md)
+ [STV\_SLICES](r_STV_SLICES.md)
+ [STV\_STARTUP\_RECOVERY\_STATE](r_STV_STARTUP_RECOVERY_STATE.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_STV_tables.md
|
3c72b1b57796-1
|
+ [STV\_STARTUP\_RECOVERY\_STATE](r_STV_STARTUP_RECOVERY_STATE.md)
+ [STV\_TBL\_PERM](r_STV_TBL_PERM.md)
+ [STV\_TBL\_TRANS](r_STV_TBL_TRANS.md)
+ [STV\_WLM\_QMR\_CONFIG](r_STV_WLM_QMR_CONFIG.md)
+ [STV\_WLM\_CLASSIFICATION\_CONFIG](r_STV_WLM_CLASSIFICATION_CONFIG.md)
+ [STV\_WLM\_QUERY\_QUEUE\_STATE](r_STV_WLM_QUERY_QUEUE_STATE.md)
+ [STV\_WLM\_QUERY\_STATE](r_STV_WLM_QUERY_STATE.md)
+ [STV\_WLM\_QUERY\_TASK\_STATE](r_STV_WLM_QUERY_TASK_STATE.md)
+ [STV\_WLM\_SERVICE\_CLASS\_CONFIG](r_STV_WLM_SERVICE_CLASS_CONFIG.md)
+ [STV\_WLM\_SERVICE\_CLASS\_STATE](r_STV_WLM_SERVICE_CLASS_STATE.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_STV_tables.md
|
e01c266d804a-0
|
The STL\_MV\_STATE view contains a row for every state transition of a materialized view\.
For more information about materialized views, see [Creating materialized views in Amazon Redshift](materialized-view-overview.md)\.
STL\_MV\_STATE is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MV_STATE.md
|
dbaa10b4b613-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_MV_STATE.html)
The following table shows example combinations of `event_desc` and `state`\.
```
event_desc | state
-------------------------+---------------
TRUNCATE                | Recompute
Small table conversion | Recompute
Vacuum | Recompute
Column was renamed | Unrefreshable
Column was dropped | Unrefreshable
Table was renamed | Unrefreshable
Column type was changed | Unrefreshable
Schema name was changed | Unrefreshable
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MV_STATE.md
|
905d62d8a18d-0
|
To view the log of state transitions of materialized views, run the following query\.
```
select * from stl_mv_state;
```
This query returns the following sample output:
```
userid | starttime | xid | event_desc | db_name | base_table_schema | base_table_name | mv_schema | mv_name | state
--------+----------------------------+------+-----------------------------+---------+----------------------+----------------------+----------------------+---------------+---------------
138 | 2020-02-14 02:21:25.578885 | 5180 | TRUNCATE | dev | public | mv_base_table | public | mv_test | Recompute
138 | 2020-02-14 02:21:56.846774 | 5275 | Column was dropped | dev | | mv_base_table | public | mv_test | Unrefreshable
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MV_STATE.md
|
905d62d8a18d-1
|
100 | 2020-02-13 22:09:53.041228 | 1794 | Column was renamed | dev | | mv_base_table | public | mv_test | Unrefreshable
1 | 2020-02-13 22:10:23.630914 | 1893 | ALTER TABLE ALTER SORTKEY | dev | public | mv_base_table_sorted | public | mv_test | Recompute
1 | 2020-02-17 22:57:22.497989 | 8455 | ALTER TABLE ALTER DISTSTYLE | dev | public | mv_base_table | public | mv_test | Recompute
173 | 2020-02-17 22:57:23.591434 | 8504 | Table was renamed | dev | | mv_base_table | public | mv_test | Unrefreshable
173 | 2020-02-17 22:57:27.229423 | 8592 | Column type was changed | dev | | mv_base_table | public | mv_test | Unrefreshable
197 | 2020-02-17 22:59:06.212569 | 9668 | TRUNCATE | dev | schemaf796e415850f4f | mv_base_table | schemaf796e415850f4f | mv_test | Recompute
138 | 2020-02-14 02:21:55.705655 | 5226 | Column was renamed | dev | | mv_base_table | public | mv_test | Unrefreshable
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MV_STATE.md
|
905d62d8a18d-2
|
1 | 2020-02-14 02:22:26.292434 | 5325 | ALTER TABLE ALTER SORTKEY | dev | public | mv_base_table_sorted | public | mv_test | Recompute
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MV_STATE.md
|
7aad684d59e4-0
|
Your cluster continues to accrue charges as long as it is running\. When you have completed this tutorial, you should return your environment to the previous state by following the steps in [Step 5: Revoke access and delete your sample cluster](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-clean-up-tasks.html) in the *Amazon Redshift Getting Started*\.
If you want to keep the cluster, but recover the storage used by the SSB tables, execute the following commands\.
```
drop table part;
drop table supplier;
drop table customer;
drop table dwdate;
drop table lineorder;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-clean-up.md
|
dd5a4b3f0249-0
|
[Summary](tutorial-loading-data-summary.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-clean-up.md
|
48e90f00493a-0
|
The TRIM function trims a string by removing leading and trailing blanks or by removing characters that match an optional specified string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRIM.md
|
524ebb3cc1aa-0
|
```
TRIM( [ BOTH ] ['characters' FROM ] string )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRIM.md
|
e92abf57a41e-0
|
*characters*
\(Optional\) The characters to be trimmed from the string\. If this parameter is omitted, blanks are trimmed\.
*string*
The string to be trimmed\.
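A minimal sketch of the default behavior, with *characters* omitted so that leading and trailing blanks are trimmed:
```
select trim('   dog   ');
btrim
-------
dog
(1 row)
```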
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRIM.md
|
4d2143f7e3d2-0
|
The TRIM function returns a VARCHAR or CHAR string\. If you use the TRIM function with a SQL command, Amazon Redshift implicitly converts the results to VARCHAR\. If you use the TRIM function in the SELECT list for a SQL function, Amazon Redshift does not implicitly convert the results, and you might need to perform an explicit conversion to avoid a data type mismatch error\. For information about explicit conversions, see the [CAST and CONVERT functions](r_CAST_function.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRIM.md
|
4d2143f7e3d2-1
|
For information about explicit conversions, see the [CAST and CONVERT functions](r_CAST_function.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRIM.md
|
3e83c356194d-0
|
The following example removes the double quotes that surround the string `"dog"`:
```
select trim('"' FROM '"dog"');
btrim
-------
dog
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRIM.md
|
8d3f27bcea6b-0
|
Searches a string for a regular expression pattern and replaces every occurrence of the pattern with the specified string\. REGEXP\_REPLACE is similar to the [REPLACE function](r_REPLACE.md), but lets you search a string for a regular expression pattern\. For more information about regular expressions, see [POSIX operators](pattern-matching-conditions-posix.md)\.
REGEXP\_REPLACE is similar to the [TRANSLATE function](r_TRANSLATE.md) and the [REPLACE function](r_REPLACE.md), except that TRANSLATE makes multiple single\-character substitutions and REPLACE substitutes one entire string with another string, while REGEXP\_REPLACE lets you search a string for a regular expression pattern\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_REPLACE.md
|
60d17c4f65b6-0
|
```
REGEXP_REPLACE ( source_string, pattern [, replace_string [ , position ] ] )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_REPLACE.md
|
f9b3f0cade73-0
|
*source\_string*
A string expression, such as a column name, to be searched\.
*pattern*
A string literal that represents a SQL standard regular expression pattern\.
*replace\_string*
A string expression, such as a column name, that will replace each occurrence of pattern\. The default is an empty string \( "" \)\.
*position*
A positive integer that indicates the position within *source\_string* to begin searching\. The position is based on the number of characters, not bytes, so that multibyte characters are counted as single characters\. The default is 1\. If *position* is less than 1, the search begins at the first character of *source\_string*\. If *position* is greater than the number of characters in *source\_string*, the result is *source\_string*\.
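A hedged sketch of the *position* argument: starting the search at character 6 skips the `b` at position 4, so only the later occurrence is replaced\.
```
select regexp_replace('maybe tube', 'b', 'X', 6);
regexp_replace
----------------
maybe tuXe
```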
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_REPLACE.md
|
89009bcc8bc9-0
|
VARCHAR
If either *pattern* or *replace\_string* is NULL, the return is NULL\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_REPLACE.md
|
088bcd6e1b61-0
|
The following example deletes the `@` and domain name from email addresses\.
```
select email, regexp_replace( email, '@.*\\.(org|gov|com)$')
from users limit 5;
email | regexp_replace
-----------------------------------+----------------
DonecFri@semperpretiumneque.com | DonecFri
mk1wait@UniOfTech.org | mk1wait
sed@redshiftemails.com | sed
bunyung@integermath.gov | bunyung
tomsupporter@galaticmess.org | tomsupporter
```
The following example selects URLs from the fictional WEBSITES table and replaces the domain names with this value: `internal.company.com/`
```
select url, regexp_replace(url, '^.*\\.[[:alpha:]]{3}/',
'internal.company.com/') from websites limit 4;
url
-----------------------------------------------------
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_REPLACE.md
|
088bcd6e1b61-1
|
| regexp_replace
+-----------------------------------------------------
example.com/cuisine/locations/home.html
| internal.company.com/cuisine/locations/home.html
anycompany.employersthere.com/employed/A/index.html
| internal.company.com/employed/A/index.html
example.gov/credentials/keys/public
| internal.company.com/credentials/keys/public
yourcompany.com/2014/Q1/summary.pdf
| internal.company.com/2014/Q1/summary.pdf
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_REPLACE.md
|
d42b98f0b1f1-0
|
**'$user', public,** *schema\_names*
A comma\-separated list of existing schema names\. If **'$user'** is present, then the schema having the same name as `SESSION_USER` is substituted, otherwise it is ignored\. If **public** is present and no schema with the name `public` exists, it is ignored\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_search_path.md
|
7d32449b013d-0
|
This parameter specifies the order in which schemas are searched when an object \(such as a table or a function\) is referenced by a simple name with no schema component\.
+ Search paths are not supported with external schemas and external tables\. External tables must be explicitly qualified by an external schema\.
+ When objects are created without a specific target schema, they are placed in the first schema listed in the search path\. If the search path is empty, the system returns an error\.
+ When objects with identical names exist in different schemas, the one found first in the search path is used\.
+ An object that is not in any of the schemas in the search path can only be referenced by specifying its containing schema with a qualified \(dotted\) name\.
+ The system catalog schema, pg\_catalog, is always searched\. If it is mentioned in the path, it is searched in the specified order\. If not, it is searched before any of the path items\.
+ The current session's temporary\-table schema, pg\_temp\_nnn, is always searched if it exists\. It can be explicitly listed in the path by using the alias pg\_temp\. If it is not listed in the path, it is searched first \(even before pg\_catalog\)\. However, the temporary schema is only searched for relation names \(tables, views\)\. It is not searched for function names\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_search_path.md
|
c4a81c574f86-0
|
The following example creates the schema ENTERPRISE and sets the search\_path to the new schema\.
```
create schema enterprise;
set search_path to enterprise;
show search_path;
search_path
-------------
enterprise
(1 row)
```
The following example adds the schema ENTERPRISE to the default search\_path\.
```
set search_path to '$user', public, enterprise;
show search_path;
search_path
-----------------------------
"$user", public, enterprise
(1 row)
```
The following example adds the table FRONTIER to the schema ENTERPRISE:
```
create table enterprise.frontier (c1 int);
```
When the table PUBLIC\.FRONTIER is created in the same database, and the user does not specify the schema name in a query, PUBLIC\.FRONTIER takes precedence over ENTERPRISE\.FRONTIER\.
```
create table public.frontier(c1 int);
insert into enterprise.frontier values(1);
select * from frontier;
c1
----
(0 rows)
select * from enterprise.frontier;
c1
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_search_path.md
|
c4a81c574f86-1
|
c1
----
(0 rows)
select * from enterprise.frontier;
c1
----
1
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_search_path.md
|
c9700760b04f-0
|
The VERSION function returns details about the currently installed release, with specific Amazon Redshift version information at the end\.
**Note**
This is a leader\-node function\. This function returns an error if it references a user\-created table, an STL or STV system table, or an SVV or SVL system view\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VERSION.md
|
76f9ed5df842-0
|
```
VERSION()
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VERSION.md
|
5b0af2cf7a69-0
|
Returns a CHAR or VARCHAR string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VERSION.md
|
1723dd1255f1-0
|
The following example shows the cluster version information of the current cluster:
```
select version();
```
```
version
------------------------------------------------------------------------------------------------------------------------
PostgreSQL 8.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3), Redshift 1.0.12103
```
Where `1.0.12103` is the cluster version number\.
**Note**
To force your cluster to update to the latest cluster version, adjust your [maintenance window](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-maintenance-windows)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VERSION.md
|
756a37db6101-0
|
**Topics**
+ [Compression encodings](c_Compression_encodings.md)
+ [Testing compression encodings](t_Verifying_data_compression.md)
+ [Example: Choosing compression encodings for the CUSTOMER table](Examples__compression_encodings_in_CREATE_TABLE_statements.md)
Compression is a column\-level operation that reduces the size of data when it is stored\. Compression conserves storage space and reduces the size of data that is read from storage, which reduces the amount of disk I/O and therefore improves query performance\.
You can apply a compression type, or *encoding*, to the columns in a table manually when you create the table, or you can use the COPY command to analyze and apply compression automatically\. For details about applying automatic compression, see [Loading tables with automatic compression](c_Loading_tables_auto_compress.md)\.
**Note**
We strongly recommend using the COPY command to apply automatic compression\.
You might choose to apply compression encodings manually if the new table shares the same data characteristics as another table, or if in testing you discover that the compression encodings that are applied during automatic compression are not the best fit for your data\. If you choose to apply compression encodings manually, you can run the [ANALYZE COMPRESSION](r_ANALYZE_COMPRESSION.md) command against an already populated table and use the results to choose compression encodings\.
To apply compression manually, you specify compression encodings for individual columns as part of the CREATE TABLE statement\. The syntax is as follows:
```
CREATE TABLE table_name (column_name
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Compressing_data_on_disk.md
|
756a37db6101-1
|
```
CREATE TABLE table_name (column_name
data_type ENCODE encoding-type)[, ...]
```
Where *encoding\-type* is taken from the keyword table in the following section\.
For example, the following statement creates a two\-column table, PRODUCT\. When data is loaded into the table, the PRODUCT\_ID column is not compressed, but the PRODUCT\_NAME column is compressed, using the byte dictionary encoding \(BYTEDICT\)\.
```
create table product(
product_id int encode raw,
product_name char(20) encode bytedict);
```
You cannot change the compression encoding for a column after the table is created\. You can specify the encoding for a column when it is added to a table using the ALTER TABLE command\.
```
ALTER TABLE table-name ADD [ COLUMN ] column_name column_type ENCODE encoding-type
```
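For example, the following hedged sketch adds a compressed column to the PRODUCT table created earlier; the column name and the LZO encoding are illustrative assumptions\.
```
alter table product add column product_desc varchar(50) encode lzo;
```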
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Compressing_data_on_disk.md
|
2f0b5b96a9ef-0
|
Returns `true` if the user has the specified privilege for the specified database\. For more information about privileges, see [GRANT](r_GRANT.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_DATABASE_PRIVILEGE.md
|
c3ac7e2cbf16-0
|
**Note**
This is a leader\-node function\. This function returns an error if it references a user\-created table, an STL or STV system table, or an SVV or SVL system view\.
```
has_database_privilege( [ user, ] database, privilege)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_DATABASE_PRIVILEGE.md
|
cec9a0db2a82-0
|
*user*
Name of the user to check for database privileges\. Default is to check the current user\.
*database*
Database associated with the privilege\.
*privilege*
Privilege to check\. Valid values are:
+ CREATE
+ TEMPORARY
+ TEMP
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_DATABASE_PRIVILEGE.md
|
f78d7ae83d4b-0
|
Returns a CHAR or VARCHAR string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_DATABASE_PRIVILEGE.md
|
302cd3b67282-0
|
The following query confirms that the GUEST user has the TEMP privilege on the TICKIT database:
```
select has_database_privilege('guest', 'tickit', 'temp');
has_database_privilege
------------------------
true
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAS_DATABASE_PRIVILEGE.md
|
ec0bd5a65f0d-0
|
By default, the COPY command expects the source data to be character\-delimited UTF\-8 text\. The default delimiter is a pipe character \( \| \)\. If the source data is in another format, use the following parameters to specify the data format:
+ [FORMAT](#copy-format)
+ [CSV](#copy-csv)
+ [DELIMITER](#copy-delimiter)
+ [FIXEDWIDTH](#copy-fixedwidth)
+ [AVRO](#copy-avro)
+ [JSON](#copy-json)
+ [PARQUET](#copy-parquet)
+ [ORC](#copy-orc)
In addition to the standard data formats, COPY supports the following columnar data formats for COPY from Amazon S3:
+ [ORC](#copy-orc)
+ [PARQUET](#copy-parquet)
COPY from columnar format is supported with certain restrictions\. For more information, see [COPY from columnar data formats](copy-usage_notes-copy-from-columnar.md)\.
<a name="copy-data-format-parameters"></a>**Data format parameters**
FORMAT \[AS\] <a name="copy-format"></a>
\(Optional\) Identifies data format keywords\. The FORMAT arguments are described following\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
ec0bd5a65f0d-1
|
\(Optional\) Identifies data format keywords\. The FORMAT arguments are described following\.
CSV \[ QUOTE \[AS\] *'quote\_character'* \] <a name="copy-csv"></a>
Enables use of CSV format in the input data\. To automatically escape delimiters, newline characters, and carriage returns, enclose the field in the character specified by the QUOTE parameter\. The default quote character is a double quotation mark \( " \)\. When the quote character is used within a field, escape the character with an additional quote character\. For example, if the quote character is a double quotation mark, to insert the string `A "quoted" word` the input file should include the string `"A ""quoted"" word"`\. When the CSV parameter is used, the default delimiter is a comma \( , \)\. You can specify a different delimiter by using the DELIMITER parameter\.
When a field is enclosed in quotes, white space between the delimiters and the quote characters is ignored\. If the delimiter is a white space character, such as a tab, the delimiter isn't treated as white space\.
CSV cannot be used with FIXEDWIDTH, REMOVEQUOTES, or ESCAPE\.
QUOTE \[AS\] *'quote\_character'* <a name="copy-csv-quote"></a>
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
ec0bd5a65f0d-2
|
QUOTE \[AS\] *'quote\_character'* <a name="copy-csv-quote"></a>
Optional\. Specifies the character to be used as the quote character when using the CSV parameter\. The default is a double quotation mark \( " \)\. If you use the QUOTE parameter to define a quote character other than double quotation mark, you don't need to escape double quotation marks within the field\. The QUOTE parameter can be used only with the CSV parameter\. The AS keyword is optional\.
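A hedged sketch of CSV quoting with the default quote character; the input line shown in the comment, the table, the bucket path, and the role ARN are hypothetical\.
```
-- Hypothetical input line; the doubled "" escapes a quote inside the field:
--   9,Concerts,"A ""quoted"" word",Sample description
copy catdemo
from 's3://mybucket/data/category.csv'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
csv;
```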
DELIMITER \[AS\] \['*delimiter\_char*'\] <a name="copy-delimiter"></a>
Specifies the single ASCII character that is used to separate fields in the input file, such as a pipe character \( \| \), a comma \( , \), or a tab \( \\t \)\. Non\-printing ASCII characters are supported\. ASCII characters can also be represented in octal, using the format '\\ddd', where 'd' is an octal digit \(0–7\)\. The default delimiter is a pipe character \( \| \), unless the CSV parameter is used, in which case the default delimiter is a comma \( , \)\. The AS keyword is optional\. DELIMITER cannot be used with FIXEDWIDTH\.
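As a hedged sketch of the octal form, `'\011'` is octal for the tab character; the table, file name, bucket path, and role ARN are placeholder assumptions\.
```
copy catdemo
from 's3://mybucket/data/category_tab.txt'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
delimiter '\011';
```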
FIXEDWIDTH '*fixedwidth\_spec*' <a name="copy-fixedwidth"></a>
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
ec0bd5a65f0d-3
|
FIXEDWIDTH '*fixedwidth\_spec*' <a name="copy-fixedwidth"></a>
Loads the data from a file where each column width is a fixed length, rather than columns being separated by a delimiter\. The *fixedwidth\_spec* is a string that specifies a user\-defined column label and column width\. The column label can be either a text string or an integer, depending on what the user chooses\. The column label has no relation to the column name\. The order of the label/width pairs must match the order of the table columns exactly\. FIXEDWIDTH cannot be used with CSV or DELIMITER\. In Amazon Redshift, the length of CHAR and VARCHAR columns is expressed in bytes, so be sure that the column width that you specify accommodates the binary length of multibyte characters when preparing the file to be loaded\. For more information, see [Character types](r_Character_types.md)\.
The format for *fixedwidth\_spec* is shown following:
```
'colLabel1:colWidth1,colLabel2:colWidth2, ...'
```
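For illustration, a hedged fixed\-width sketch; the labels and widths are assumptions and must match the layout of the hypothetical input file exactly\.
```
copy venue
from 's3://mybucket/data/venue_fw.txt'
iam_role 'arn:aws:iam::<aws-account-id>:role/<role-name>'
fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6';
```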
AVRO \[AS\] '*avro\_option*' <a name="copy-avro"></a>
Specifies that the source data is in Avro format\.
Avro format is supported for COPY from these services and protocols:
+ Amazon S3
+ Amazon EMR
+ Remote hosts \(SSH\)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
ec0bd5a65f0d-4
|
+ Amazon S3
+ Amazon EMR
+ Remote hosts \(SSH\)
Avro isn't supported for COPY from DynamoDB\.
Avro is a data serialization protocol\. An Avro source file includes a schema that defines the structure of the data\. The Avro schema type must be `record`\. COPY accepts Avro files created using the default uncompressed codec as well as the `deflate` and `snappy` compression codecs\. For more information about Avro, go to [Apache Avro](https://avro.apache.org/)\.
Valid values for *avro\_option* are as follows:
+ 'auto'
+ 's3://*jsonpaths\_file*'
The default is `'auto'`\.
'auto' <a name="copy-avro-auto"></a>
COPY automatically maps the data elements in the Avro source data to the columns in the target table by matching field names in the Avro schema to column names in the target table\. The matching is case\-sensitive\. Column names in Amazon Redshift tables are always lowercase, so when you use the ‘auto’ option, matching field names must also be lowercase\. If the field names aren't all lowercase, you can use a [JSONPaths file](#copy-json-jsonpaths) to explicitly map column names to Avro field names\. With the default `'auto'` argument, COPY recognizes only the first level of fields, or *outer fields*, in the structure\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
ec0bd5a65f0d-5
|
By default, COPY attempts to match all columns in the target table to Avro field names\. To load a subset of the columns, you can optionally specify a column list\.
If a column in the target table is omitted from the column list, then COPY loads the target column's [DEFAULT](r_CREATE_TABLE_NEW.md#create-table-default) expression\. If the target column doesn't have a default, then COPY attempts to load NULL\.
If a column is included in the column list and COPY doesn't find a matching field in the Avro data, then COPY attempts to load NULL to the column\.
If COPY attempts to assign NULL to a column that is defined as NOT NULL, the COPY command fails\.
's3://*jsonpaths\_file*' <a name="copy-avro-pathfile"></a>
To explicitly map Avro data elements to columns, you can use a *JSONPaths* file\. For more information about using a JSONPaths file to map Avro data, see [JSONPaths file](#copy-json-jsonpaths)\.
<a name="copy-avro-schema"></a>**Avro Schema**
An Avro source data file includes a schema that defines the structure of the data\. COPY reads the schema that is part of the Avro source data file to map data elements to target table columns\. The following example shows an Avro schema\.
```
{
"name": "person",
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
ec0bd5a65f0d-6
|
```
{
"name": "person",
"type": "record",
"fields": [
{"name": "id", "type": "int"},
{"name": "guid", "type": "string"},
{"name": "name", "type": "string"},
{"name": "address", "type": "string"}]
}
```
The Avro schema is defined using JSON format\. The top\-level JSON object contains three name/value pairs with the names, or *keys*, `"name"`, `"type"`, and `"fields"`\.
The `"fields"` key pairs with an array of objects that define the name and data type of each field in the data structure\. By default, COPY automatically matches the field names to column names\. Column names are always lowercase, so matching field names must also be lowercase\. Any field names that don't match a column name are ignored\. Order doesn't matter\. In the previous example, COPY maps to the column names `id`, `guid`, `name`, and `address`\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
ec0bd5a65f0d-7
|
With the default `'auto'` argument, COPY matches only the first\-level objects to columns\. To map to deeper levels in the schema, or if field names and column names don't match, use a JSONPaths file to define the mapping\. For more information, see [JSONPaths file](#copy-json-jsonpaths)\.
If the value associated with a key is a complex Avro data type such as byte, array, record, map, or link, COPY loads the value as a string, where the string is the JSON representation of the data\. COPY loads Avro enum data types as strings, where the content is the name of the type\. For an example, see [COPY from JSON format](copy-usage_notes-copy-from-json.md)\.
The maximum size of the Avro file header, which includes the schema and file metadata, is 1 MB\.
The maximum size of a single Avro data block is 4 MB\. This is distinct from the maximum row size\. If the maximum size of a single Avro data block is exceeded, even if the resulting row size is less than the 4 MB row\-size limit, the COPY command fails\.
In calculating row size, Amazon Redshift internally counts pipe characters \( \| \) twice\. If your input data contains a very large number of pipe characters, it is possible for row size to exceed 4 MB even if the data block is less than 4 MB\.
JSON \[AS\] '*json\_option*' <a name="copy-json"></a>
The source data is in JSON format\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
ec0bd5a65f0d-8
|
The source data is in JSON format\.
JSON format is supported for COPY from these services and protocols:
+ Amazon S3
+ Amazon EMR
+ Remote hosts \(SSH\)
JSON isn't supported for COPY from DynamoDB\.
Valid values for *json\_option* are as follows:
+ 'auto'
+ 's3://*jsonpaths\_file*'
The default is `'auto'`\.
'auto' <a name="copy-json-auto"></a>
COPY maps the data elements in the JSON source data to the columns in the target table by matching *object keys*, or names, in the source name/value pairs to the names of columns in the target table\. The matching is case\-sensitive\. Column names in Amazon Redshift tables are always lowercase, so when you use the ‘auto’ option, matching JSON field names must also be lowercase\. If the JSON field name keys aren't all lowercase, you can use a [JSONPaths file](#copy-json-jsonpaths) to explicitly map column names to JSON field name keys\.
By default, COPY attempts to match all columns in the target table to JSON field name keys\. To load a subset of the columns, you can optionally specify a column list\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
ec0bd5a65f0d-9
|
If a column in the target table is omitted from the column list, then COPY loads the target column's [DEFAULT](r_CREATE_TABLE_NEW.md#create-table-default) expression\. If the target column doesn't have a default, then COPY attempts to load NULL\.
If a column is included in the column list and COPY doesn't find a matching field in the JSON data, then COPY attempts to load NULL to the column\.
If COPY attempts to assign NULL to a column that is defined as NOT NULL, the COPY command fails\.
's3://*jsonpaths\_file*' <a name="copy-json-pathfile"></a>
COPY uses the named JSONPaths file to map the data elements in the JSON source data to the columns in the target table\. The *`s3://jsonpaths_file`* argument must be an Amazon S3 object key that explicitly references a single file, such as `'s3://mybucket/jsonpaths.txt`'; it can't be a key prefix\. For more information about using a JSONPaths file, see [JSONPaths file](#copy-json-jsonpaths)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
ec0bd5a65f0d-10
|
If the file specified by *jsonpaths\_file* has the same prefix as the path specified by *copy\_from\_s3\_objectpath* for the data files, COPY reads the JSONPaths file as a data file and returns errors\. For example, if your data files use the object path `s3://mybucket/my_data.json` and your JSONPaths file is `s3://mybucket/my_data.jsonpaths`, COPY attempts to load `my_data.jsonpaths` as a data file\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
cc5aee64eced-0
|
The JSON data file contains a set of either objects or arrays\. COPY loads each JSON object or array into one row in the target table\. Each object or array corresponding to a row must be a stand\-alone, root\-level structure; that is, it must not be a member of another JSON structure\.
A JSON *object* begins and ends with braces \( \{ \} \) and contains an unordered collection of name/value pairs\. Each paired name and value are separated by a colon, and the pairs are separated by commas\. By default, the *object key*, or name, in the name/value pairs must match the name of the corresponding column in the table\. Column names in Amazon Redshift tables are always lowercase, so matching JSON field name keys must also be lowercase\. If your column names and JSON keys don't match, use a [JSONPaths file](#copy-json-jsonpaths) to explicitly map columns to keys\.
Order in a JSON object doesn't matter\. Any names that don't match a column name are ignored\. The following shows the structure of a simple JSON object\.
```
{
"column1": "value1",
"column2": value2,
"notacolumn" : "ignore this value"
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
cc5aee64eced-1
|
"column2": value2,
"notacolumn" : "ignore this value"
}
```
A JSON *array* begins and ends with brackets \( \[ \] \), and contains an ordered collection of values separated by commas\. If your data files use arrays, you must specify a JSONPaths file to match the values to columns\. The following shows the structure of a simple JSON array\.
```
["value1", value2]
```
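A minimal sketch of a JSONPaths file that maps the two array elements above to the first two columns of the target table, using bracket notation with zero\-based indexes:
```
{
  "jsonpaths": [
    "$[0]",
    "$[1]"
  ]
}
```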
The JSON must be well\-formed\. For example, the objects or arrays cannot be separated by commas or any other characters except white space\. Strings must be enclosed in double quote characters\. Quote characters must be simple quotation marks \(0x22\), not slanted or "smart" quotation marks\.
The maximum size of a single JSON object or array, including braces or brackets, is 4 MB\. This is distinct from the maximum row size\. If the maximum size of a single JSON object or array is exceeded, even if the resulting row size is less than the 4 MB row\-size limit, the COPY command fails\.
In calculating row size, Amazon Redshift internally counts pipe characters \( \| \) twice\. If your input data contains a very large number of pipe characters, it is possible for row size to exceed 4 MB even if the object size is less than 4 MB\.
COPY loads `\n` as a newline character and loads `\t` as a tab character\. To load a backslash, escape it with a backslash \( `\\` \)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
cc5aee64eced-2
|
COPY searches the specified JSON source for a well\-formed, valid JSON object or array\. If COPY encounters any non–white space characters before locating a usable JSON structure, or between valid JSON objects or arrays, COPY returns an error for each instance\. These errors count toward the MAXERROR error count\. When the error count equals or exceeds MAXERROR, COPY fails\.
For each error, Amazon Redshift records a row in the STL\_LOAD\_ERRORS system table\. The LINE\_NUMBER column records the last line of the JSON object that caused the error\.
If IGNOREHEADER is specified, COPY ignores the specified number of lines in the JSON data\. Newline characters in the JSON data are always counted for IGNOREHEADER calculations\.
COPY loads empty strings as empty fields by default\. If EMPTYASNULL is specified, COPY loads empty strings for CHAR and VARCHAR fields as NULL\. Empty strings for other data types, such as INT, are always loaded with NULL\.
The following options aren't supported with JSON:
+ CSV
+ DELIMITER
+ ESCAPE
+ FILLRECORD
+ FIXEDWIDTH
+ IGNOREBLANKLINES
+ NULL AS
+ READRATIO
+ REMOVEQUOTES
For more information, see [COPY from JSON format](copy-usage_notes-copy-from-json.md)\. For more information about JSON data structures, go to [www\.json\.org](https://www.json.org/)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
e296df5dd738-0
|
If you are loading from JSON\-formatted or Avro source data, by default COPY maps the first\-level data elements in the source data to the columns in the target table by matching each name, or object key, in a name/value pair to the name of a column in the target table\.
If your column names and object keys don't match, or to map to deeper levels in the data hierarchy, you can use a JSONPaths file to explicitly map JSON or Avro data elements to columns\. The JSONPaths file maps JSON data elements to columns by matching the column order in the target table or column list\.
The JSONPaths file must contain only a single JSON object \(not an array\)\. The JSON object is a name/value pair\. The *object key*, which is the name in the name/value pair, must be `"jsonpaths"`\. The *value* in the name/value pair is an array of *JSONPath expressions*\. Each JSONPath expression references a single element in the JSON data hierarchy or Avro schema, similarly to how an XPath expression refers to elements in an XML document\. For more information, see [JSONPath expressions](#copy-json-jsonpath-expressions)\.
To use a JSONPaths file, add the JSON or AVRO keyword to the COPY command and specify the S3 bucket name and object path of the JSONPaths file, using the following format\.
```
COPY tablename
FROM 'data_source'
CREDENTIALS 'credentials-args'
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-format.md
|
e296df5dd738-1
|
```
COPY tablename
FROM 'data_source'
CREDENTIALS 'credentials-args'
FORMAT AS { AVRO | JSON } 's3://jsonpaths_file';
```
The `s3://jsonpaths_file` argument must be an Amazon S3 object key that explicitly references a single file, such as `'s3://mybucket/jsonpaths.txt'`; it cannot be a key prefix\.
**Note**
If you are loading from Amazon S3 and the file specified by *jsonpaths\_file* has the same prefix as the path specified by *copy\_from\_s3\_objectpath* for the data files, COPY reads the JSONPaths file as a data file and returns errors\. For example, if your data files use the object path `s3://mybucket/my_data.json` and your JSONPaths file is `s3://mybucket/my_data.jsonpaths`, COPY attempts to load `my_data.jsonpaths` as a data file\.
**Note**
If the key name is any string other than `"jsonpaths"`, the COPY command doesn't return an error, but it ignores *jsonpaths\_file* and uses the `'auto'` argument instead\.
If any of the following occurs, the COPY command fails:
+ The JSON is malformed\.
+ There is more than one JSON object\.
+ Any characters except white space exist outside the object\.
+ An array element is an empty string or isn't a string\.
MAXERROR doesn't apply to the JSONPaths file\.
The JSONPaths file must not be encrypted, even if the [ENCRYPTED](copy-parameters-data-source-s3.md#copy-encrypted) option is specified\.
For more information, see [COPY from JSON format](copy-usage_notes-copy-from-json.md)\.
The JSONPaths file uses JSONPath expressions to map data fields to target columns\. Each JSONPath expression corresponds to one column in the Amazon Redshift target table\. The order of the JSONPath array elements must match the order of the columns in the target table or the column list, if a column list is used\.
The double quote characters are required as shown, both for the field names and the values\. The quote characters must be simple quotation marks \(0x22\), not slanted or "smart" quotation marks\.
If an object element referenced by a JSONPath expression isn't found in the JSON data, COPY attempts to load a NULL value\. If the referenced object is malformed, COPY returns a load error\.
If an array element referenced by a JSONPath expression isn't found in the JSON or Avro data, COPY fails with the following error: `Invalid JSONPath format: Not an array or index out of range.` Remove any array elements from the JSONPaths that don't exist in the source data and verify that the arrays in the source data are well formed\.
The JSONPath expressions can use either bracket notation or dot notation, but you cannot mix notations\. The following example shows JSONPath expressions using bracket notation\.
```
{
"jsonpaths": [
"$['venuename']",
"$['venuecity']",
"$['venuestate']",
"$['venueseats']"
]
}
```
The following example shows JSONPath expressions using dot notation\.
```
{
"jsonpaths": [
"$.venuename",
"$.venuecity",
"$.venuestate",
"$.venueseats"
]
}
```
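Either file maps four JSONPath expressions to four columns, in order\. For illustration only, a compatible target table might be defined as follows; the table name and column sizes are assumptions, not part of the JSONPaths specification\.
```
create table venue_staging(
venuename varchar(100),
venuecity varchar(30),
venuestate char(2),
venueseats integer);
```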
In the context of Amazon Redshift COPY syntax, a JSONPath expression must specify the explicit path to a single name element in a JSON or Avro hierarchical data structure\. Amazon Redshift doesn't support any JSONPath elements, such as wildcard characters or filter expressions, that might resolve to an ambiguous path or multiple name elements\.
For more information, see [COPY from JSON format](copy-usage_notes-copy-from-json.md)\.
Using JSONPaths with Avro Data
The following example shows an Avro schema with multiple levels\.
```
{
    "name": "person",
    "type": "record",
    "fields": [
        {"name": "id", "type": "int"},
        {"name": "guid", "type": "string"},
        {"name": "isActive", "type": "boolean"},
        {"name": "age", "type": "int"},
        {"name": "name", "type": "string"},
        {"name": "address", "type": "string"},
        {"name": "latitude", "type": "double"},
        {"name": "longitude", "type": "double"},
        {
            "name": "tags",
            "type": {
                "type" : "array",
                "name" : "inner_tags",
                "items" : "string"
            }
        },
        {
            "name": "friends",
            "type": {
                "type" : "array",
                "name" : "inner_friends",
                "items" : {
                    "name" : "friends_record",
                    "type" : "record",
                    "fields" : [
                        {"name" : "id", "type" : "int"},
                        {"name" : "name", "type" : "string"}
                    ]
                }
            }
        },
        {"name": "randomArrayItem", "type": "string"}
    ]
}
```
The following example shows a JSONPaths file that uses AvroPath expressions to reference the previous schema\.
```
{
"jsonpaths": [
"$.id",
"$.guid",
"$.address",
"$.friends[0].id"
]
}
```
The JSONPaths example includes the following elements:
jsonpaths
The name of the JSON object that contains the AvroPath expressions\.
\[ … \]
Brackets enclose the JSON array that contains the path elements\.
$
The dollar sign refers to the root element in the Avro schema, which is the `"fields"` array\.
"$\.id",
The target of the AvroPath expression\. In this instance, the target is the element in the `"fields"` array with the name `"id"`\. The expressions are separated by commas\.
"$\.friends\[0\]\.id"
Brackets indicate an array index\. JSONPath expressions use zero\-based indexing, so this expression references the first element in the `"friends"` array with the name `"id"`\.
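Putting this together, a COPY command that loads Avro data using the preceding JSONPaths file might look like the following sketch; the table, bucket, file, and role names are hypothetical\.
```
copy person
from 's3://mybucket/data/person.avro'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
format as avro 's3://mybucket/jsonpaths/person_path.avropath';
```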
The Avro schema syntax requires using *inner fields* to define the structure of record and array data types\. The inner fields are ignored by the AvroPath expressions\. For example, the field `"friends"` defines an array named `"inner_friends"`, which in turn defines a record named `"friends_record"`\. The AvroPath expression to reference the field `"id"` can ignore the extra fields to reference the target field directly\. The following AvroPath expressions reference the two fields that belong to the `"friends"` array\.
```
"$.friends[0].id"
"$.friends[0].name"
```
In addition to the standard data formats, COPY supports the following columnar data formats for COPY from Amazon S3\. COPY from columnar format is supported with certain restrictions\. For more information, see [COPY from columnar data formats](copy-usage_notes-copy-from-columnar.md)\.
ORC <a name="copy-orc"></a>
Loads the data from a file that uses Optimized Row Columnar \(ORC\) file format\.
PARQUET <a name="copy-parquet"></a>
Loads the data from a file that uses Parquet file format\.
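For example, the following sketch loads Parquet\-formatted data from an Amazon S3 folder; the table, path, and role names are hypothetical\.
```
copy listing
from 's3://mybucket/data/listing/parquet/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
format as parquet;
```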
**Topics**
+ [S3ServiceException errors](s3serviceexception-error.md)
+ [System tables for troubleshooting data loads](system-tables-for-troubleshooting-data-loads.md)
+ [Multibyte character load errors](multi-byte-character-load-errors.md)
+ [Load error reference](r_Load_Error_Reference.md)
This section provides information about identifying and resolving data loading errors\.
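As a starting point, you can usually find the failing file, line, and reason for the most recent COPY in the current session with a query like the following sketch \(`pg_last_copy_id` returns the query ID of the last COPY command run in the session\)\.
```
select query, filename, line_number, colname, err_reason
from stl_load_errors
where query = pg_last_copy_id();
```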
Lists the relationship between streams and concurrent segments\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_STREAM_SEGS.html)
To view the relationship between streams and concurrent segments for the most recent query, type the following query:
```
select *
from stl_stream_segs
where query = pg_last_query_id();
query | stream | segment
-------+--------+---------
10 | 1 | 2
10 | 0 | 0
10 | 2 | 4
10 | 1 | 3
10 | 0 | 1
(5 rows)
```
**Topics**
+ [SQL functions supported on the leader node](c_sql-functions-leader-node.md)
+ [Amazon Redshift and PostgreSQL](c_redshift-and-postgres-sql.md)
Amazon Redshift is built around industry\-standard SQL, with added functionality to manage very large datasets and support high\-performance analysis and reporting of that data\.
**Note**
The maximum size for a single Amazon Redshift SQL statement is 16 MB\.
Compares the values of two strings and returns an integer\. If the strings are identical, the function returns 0\. If the first string is "greater" alphabetically, it returns 1\. If the second string is "greater", it returns \-1\.
For multibyte characters, the comparison is based on the byte encoding\.
Synonym of [BTTEXT\_PATTERN\_CMP function](r_BTTEXT_PATTERN_CMP.md)\.
```
BPCHARCMP(string1, string2)
```
*string1*
The first input parameter is a CHAR or VARCHAR string\.
*string2*
The second input parameter is a CHAR or VARCHAR string\.
The BPCHARCMP function returns an integer\.
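For a quick check, you can call the function with string literals\. Based on the behavior described earlier, the following comparisons return \-1, 0, and 1, respectively\.
```
select bpcharcmp('apple', 'banana');  -- first string sorts earlier: -1
select bpcharcmp('apple', 'apple');   -- identical strings: 0
select bpcharcmp('banana', 'apple');  -- first string sorts later: 1
```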
The following example determines whether a user's first name is alphabetically greater than the user's last name for the first ten entries in USERS:
```
select userid, firstname, lastname,
bpcharcmp(firstname, lastname)
from users
order by 1, 2, 3, 4
limit 10;
```
This example returns the following sample output:
```
userid | firstname | lastname | bpcharcmp
--------+-----------+-----------+-----------
1 | Rafael | Taylor | -1
2 | Vladimir | Humphrey | 1
3 | Lars | Ratliff | -1
4 | Barry | Roy | -1
5 | Reagan | Hodge | 1
6 | Victor | Hernandez | 1
7 | Tamekah | Juarez | 1
8 | Colton | Roy | -1
9 | Mufutau | Watkins | -1
10 | Naida | Calderon | 1
(10 rows)
```
You can see that where FIRSTNAME is alphabetically later than LASTNAME, BPCHARCMP returns 1, and where LASTNAME is alphabetically later than FIRSTNAME, BPCHARCMP returns \-1\.
This example returns all entries in the USERS table whose FIRSTNAME is identical to their LASTNAME:
```
select userid, firstname, lastname,
bpcharcmp(firstname, lastname)
from users where bpcharcmp(firstname, lastname)=0
order by 1, 2, 3, 4;
userid | firstname | lastname | bpcharcmp
-------+-----------+----------+-----------
62 | Chase | Chase | 0
4008 | Whitney | Whitney | 0
12516 | Graham | Graham | 0
13570 | Harper | Harper | 0
16712 | Cooper | Cooper | 0
18359 | Chase | Chase | 0
27530 | Bradley | Bradley | 0
31204 | Harding | Harding | 0
(8 rows)
```
Removes a materialized view\.
For more information about materialized views, see [Creating materialized views in Amazon Redshift](materialized-view-overview.md)\.
```
DROP MATERIALIZED VIEW [ IF EXISTS ] mv_name
```
IF EXISTS
A clause that checks whether the named materialized view exists\. If the materialized view doesn't exist, the `DROP MATERIALIZED VIEW` command returns a message stating that the view doesn't exist, rather than terminating with an error\. This clause is useful when scripting, to keep the script from failing if you drop a nonexistent materialized view\.
*mv\_name*
The name of the materialized view to be dropped\.
Only the owner of a materialized view can use `DROP MATERIALIZED VIEW` on that view\.
The following example drops the `tickets_mv` materialized view\.
```
DROP MATERIALIZED VIEW tickets_mv;
```
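If a script can't guarantee that the view exists, the IF EXISTS clause keeps the statement from failing\.
```
DROP MATERIALIZED VIEW IF EXISTS tickets_mv;
```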
**Topics**
+ [Managing data consistency](managing-data-consistency.md)
+ [Uploading encrypted data to Amazon S3](t_uploading-encrypted-data.md)
+ [Verifying that the correct files are present in your bucket](verifying-that-correct-files-are-present.md)
After splitting your files, you can upload them to your bucket\. You can optionally compress or encrypt the files before you load them\.
Create an Amazon S3 bucket to hold your data files, and then upload the data files to the bucket\. For information about creating buckets and uploading files, see [Working with Amazon S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html) in the *Amazon Simple Storage Service Developer Guide\.*
Amazon S3 provides eventual consistency for some operations, so it is possible that new data will not be available immediately after the upload\. For more information, see [Managing data consistency](managing-data-consistency.md)\.
**Important**
The Amazon S3 bucket that holds the data files must be created in the same AWS Region as your cluster unless you use the [REGION](copy-parameters-data-source-s3.md#copy-region) option to specify the Region in which the Amazon S3 bucket is located\.
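For example, if your cluster is in one Region but the bucket was created in `us-east-1`, a COPY command can name the bucket's Region explicitly; the table, bucket, file, and role names in this sketch are hypothetical\.
```
copy venue
from 's3://mybucket-east/data/venue_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
region 'us-east-1';
```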