| id | text | source |
|---|---|---|
a9beb270e205-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_PLAN_INFO.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PLAN_INFO.md
|
8ef67534cc2d-0
|
The following examples compare the query plans for a simple SELECT query returned by using the EXPLAIN command and by querying the STL\_PLAN\_INFO view\.
```
explain select * from category;
QUERY PLAN
-------------------------------------------------------------
XN Seq Scan on category (cost=0.00..0.11 rows=11 width=49)
(1 row)
select * from category;
catid | catgroup | catname | catdesc
-------+----------+-----------+--------------------------------------------
1 | Sports | MLB | Major League Baseball
3 | Sports | NFL | National Football League
5 | Sports | MLS | Major League Soccer
...
select * from stl_plan_info where query=256;
query | nodeid | segment | step | locus | plannode | startupcost | totalcost
| rows | bytes
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PLAN_INFO.md
|
8ef67534cc2d-1
|
query | nodeid | segment | step | locus | plannode | startupcost | totalcost
| rows | bytes
-------+--------+---------+------+-------+----------+-------------+-----------+------+-------
256 | 1 | 0 | 1 | 0 | 104 | 0 | 0.11 | 11 | 539
256 | 1 | 0 | 0 | 0 | 104 | 0 | 0.11 | 11 | 539
(2 rows)
```
In this example, PLANNODE 104 refers to the sequential scan of the CATEGORY table\.
```
select distinct eventname from event order by 1;
eventname
------------------------------------------------------------------------
.38 Special
3 Doors Down
70s Soul Jam
A Bronx Tale
...
explain select distinct eventname from event order by 1;
QUERY PLAN
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PLAN_INFO.md
|
8ef67534cc2d-2
|
70s Soul Jam
A Bronx Tale
...
explain select distinct eventname from event order by 1;
QUERY PLAN
-------------------------------------------------------------------------------------
XN Merge (cost=1000000000136.38..1000000000137.82 rows=576 width=17)
Merge Key: eventname
-> XN Network (cost=1000000000136.38..1000000000137.82 rows=576
width=17)
Send to leader
-> XN Sort (cost=1000000000136.38..1000000000137.82 rows=576
width=17)
Sort Key: eventname
-> XN Unique (cost=0.00..109.98 rows=576 width=17)
-> XN Seq Scan on event (cost=0.00..87.98 rows=8798
width=17)
(8 rows)
select * from stl_plan_info where query=240 order by nodeid desc;
query | nodeid | segment | step | locus | plannode | startupcost |
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PLAN_INFO.md
|
8ef67534cc2d-3
|
query | nodeid | segment | step | locus | plannode | startupcost |
totalcost | rows | bytes
-------+--------+---------+------+-------+----------+------------------+------------------+------+--------
240 | 5 | 0 | 0 | 0 | 104 | 0 | 87.98 | 8798 | 149566
240 | 5 | 0 | 1 | 0 | 104 | 0 | 87.98 | 8798 | 149566
240 | 4 | 0 | 2 | 0 | 117 | 0 | 109.975 | 576 | 9792
240 | 4 | 0 | 3 | 0 | 117 | 0 | 109.975 | 576 | 9792
240 | 4 | 1 | 0 | 0 | 117 | 0 | 109.975 | 576 | 9792
240 | 4 | 1 | 1 | 0 | 117 | 0 | 109.975 | 576 | 9792
240 | 3 | 1 | 2 | 0 | 114 | 1000000000136.38 | 1000000000137.82 | 576 | 9792
240 | 3 | 2 | 0 | 0 | 114 | 1000000000136.38 | 1000000000137.82 | 576 | 9792
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PLAN_INFO.md
|
8ef67534cc2d-4
|
240 | 3 | 2 | 0 | 0 | 114 | 1000000000136.38 | 1000000000137.82 | 576 | 9792
240 | 2 | 2 | 1 | 0 | 123 | 1000000000136.38 | 1000000000137.82 | 576 | 9792
240 | 1 | 3 | 0 | 0 | 122 | 1000000000136.38 | 1000000000137.82 | 576 | 9792
(10 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PLAN_INFO.md
|
da66d40b373c-0
|
CURRENT\_SETTING returns the current value of the specified configuration parameter\.
This function is equivalent to the [SHOW](r_SHOW.md) command\.
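As a minimal illustration of that equivalence \(using `query_group` only as an example parameter\), either of the following statements returns the same value\.
```
show query_group;

select current_setting('query_group');
```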
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SETTING.md
|
83fe0ce436c2-0
|
```
current_setting('parameter')
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SETTING.md
|
14ad4ec9173f-0
|
*parameter*
The parameter whose value to display\. For a list of configuration parameters, see [Configuration reference](cm_chap_ConfigurationRef.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SETTING.md
|
30f4775fa44f-0
|
Returns a CHAR or VARCHAR string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SETTING.md
|
e1aa5b9d29d0-0
|
The following query returns the current setting for the `query_group` parameter:
```
select current_setting('query_group');
current_setting
-----------------
unset
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SETTING.md
|
693e2371a732-0
|
Contains details for *save* steps in queries\. A save step saves the input stream to a transient table\. A transient table is a temporary table that stores intermediate results during query execution\.
A query consists of multiple segments, and each segment consists of one or more steps\. For more information, see [Query processing](c-query-processing.md)\.
STL\_SAVE is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SAVE.md
|
8429c4d41daf-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_SAVE.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SAVE.md
|
d48c5de701f4-0
|
The following query shows which save steps in the most recent query were executed on each slice\.
```
select query, slice, segment, step, tasknum, rows, tbl
from stl_save where query = pg_last_query_id();
query | slice | segment | step | tasknum | rows | tbl
-------+-------+---------+------+---------+------+-----
52236 | 3 | 0 | 2 | 21 | 0 | 239
52236 | 2 | 0 | 2 | 20 | 0 | 239
52236 | 2 | 2 | 2 | 20 | 0 | 239
52236 | 3 | 2 | 2 | 21 | 0 | 239
52236 | 1 | 0 | 2 | 21 | 0 | 239
52236 | 0 | 0 | 2 | 20 | 0 | 239
52236 | 0 | 2 | 2 | 20 | 0 | 239
52236 | 1 | 2 | 2 | 21 | 0 | 239
(8 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SAVE.md
|
1f0e2677f298-0
|
Upload the manifest file to an Amazon S3 bucket\. If the Amazon S3 bucket does not reside in the same AWS Region as your Amazon Redshift cluster, you must use the [REGION](copy-parameters-data-source-s3.md#copy-region) option to specify the AWS Region in which the manifest is located\. For information about creating an Amazon S3 bucket and uploading a file, see [Amazon Simple Storage Service Getting Started Guide](https://docs.aws.amazon.com/AmazonS3/latest/gsg/)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-host-steps-upload-manifest.md
|
f3594bd8b48d-0
|
If you are working with an Amazon EC2 instance or an Amazon EMR cluster, add Inbound rules to the host's security group to allow traffic from each Amazon Redshift cluster node\. For **Type**, select SSH with TCP protocol on Port 22\. For **Source**, enter the Amazon Redshift cluster node IP addresses you retrieved in [Step 1: Retrieve the cluster public key and cluster node IP addresses](load-from-host-steps-retrieve-key-and-ips.md)\. For information about adding rules to an Amazon EC2 security group, see [Authorizing Inbound Traffic for Your Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html) in the *Amazon EC2 User Guide*\.
Use the Private IP addresses when:
+ You have an Amazon Redshift cluster that is not in a Virtual Private Cloud \(VPC\), and an Amazon EC2\-Classic instance, both of which are in the same AWS Region\.
+ You have an Amazon Redshift cluster that is in a VPC, and an Amazon EC2\-VPC instance, both of which are in the same AWS Region and in the same VPC\.
Otherwise, use the Public IP addresses\.
For more information about using Amazon Redshift in a VPC, see [Managing Clusters in Virtual Private Cloud \(VPC\)](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-vpc.html) in the *Amazon Redshift Cluster Management Guide*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-host-steps-configure-security-groups.md
|
9df6ed689184-0
|
When you create a table, you can define one or more of its columns as *sort keys*\. When data is initially loaded into the empty table, the rows are stored on disk in sorted order\. Information about sort key columns is passed to the query planner, and the planner uses this information to construct plans that exploit the way that the data is sorted\.
Sorting enables efficient handling of range\-restricted predicates\. Amazon Redshift stores columnar data in 1 MB disk blocks\. The min and max values for each block are stored as part of the metadata\. If a query uses a range\-restricted predicate, the query processor can use the min and max values to rapidly skip over large numbers of blocks during table scans\. For example, if a table stores five years of data sorted by date and a query specifies a date range of one month, up to 98 percent of the disk blocks can be eliminated from the scan\. If the data is not sorted, more of the disk blocks \(possibly all of them\) have to be scanned\.
You can specify either a compound or interleaved sort key\. A compound sort key is more efficient when query predicates use a *prefix*, which is a subset of the sort key columns in order\. An interleaved sort key gives equal weight to each column in the sort key, so query predicates can use any subset of the columns that make up the sort key, in any order\. For examples of using compound sort keys and interleaved sort keys, see [Comparing sort styles](t_Sorting_data-compare-sort-styles.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Sorting_data.md
|
9df6ed689184-1
|
To understand the impact of the chosen sort key on query performance, use the [EXPLAIN](r_EXPLAIN.md) command\. For more information, see [Query planning and execution workflow](c-query-planning.md)\.
To define a sort type, use either the INTERLEAVED or COMPOUND keyword with your CREATE TABLE or CREATE TABLE AS statement\. The default, COMPOUND, is recommended unless your tables aren't updated regularly with INSERT, UPDATE, or DELETE\. An INTERLEAVED sort key can use a maximum of eight columns\. Depending on your data and cluster size, VACUUM REINDEX takes significantly longer than VACUUM FULL because it makes an additional pass to analyze the interleaved sort keys\. The sort and merge operation can take longer for interleaved tables because the interleaved sort might need to rearrange more rows than a compound sort\.
To view the sort keys for a table, query the [SVV\_TABLE\_INFO](r_SVV_TABLE_INFO.md) system view\.
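As a brief sketch \(the table and column names here are hypothetical and not part of this guide's examples\), the sort type is declared in the CREATE TABLE statement, and SVV\_TABLE\_INFO reports the resulting sort key\.
```
create table sales_compound (
  saledate date,
  listid integer,
  sellerid integer)
compound sortkey (saledate, listid);

create table sales_interleaved (
  saledate date,
  listid integer,
  sellerid integer)
interleaved sortkey (listid, sellerid);

-- sortkey1 is the first sort key column; sortkey_num is the number of key columns.
-- Note that SVV_TABLE_INFO lists a table only after it contains data.
select "table", sortkey1, sortkey_num
from svv_table_info
where "table" in ('sales_compound', 'sales_interleaved');
```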
**Topics**
+ [Compound sort key](#t_Sorting_data-compound)
+ [Interleaved sort key](#t_Sorting_data-interleaved)
+ [Comparing sort styles](t_Sorting_data-compare-sort-styles.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Sorting_data.md
|
a08cdde4b8eb-0
|
A compound key is made up of all of the columns listed in the sort key definition, in the order they are listed\. A compound sort key is most useful when a query's conditions, such as filters and joins, use a prefix of the sort key columns\. The performance benefits of compound sorting decrease when queries depend only on secondary sort columns, without referencing the primary columns\. COMPOUND is the default sort type\.
Compound sort keys might speed up joins, GROUP BY and ORDER BY operations, and window functions that use PARTITION BY and ORDER BY\. For example, a merge join, which is often faster than a hash join, is feasible when the data is distributed and presorted on the joining columns\. Compound sort keys also help improve compression\.
As you add rows to a sorted table that already contains data, the unsorted region grows, which has a significant effect on performance\. The effect is greater when the table uses interleaved sorting, especially when the sort columns include data that increases monotonically, such as date or timestamp columns\. You should run a VACUUM operation regularly, especially after large data loads, to re\-sort and re\-analyze the data\. For more information, see [Managing the size of the unsorted region](r_vacuum_diskspacereqs.md)\. After vacuuming to resort the data, it's a good practice to run an ANALYZE command to update the statistical metadata for the query planner\. For more information, see [Analyzing tables](t_Analyzing_tables.md)\.
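For example, a routine like the following \(the table name is hypothetical\) re\-sorts the rows and refreshes planner statistics after a large load\.
```
-- re-sort the table and reclaim space, then refresh statistics for the planner
vacuum full sales;
analyze sales;
```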
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Sorting_data.md
|
0ddf4483c322-0
|
An interleaved sort gives equal weight to each column, or subset of columns, in the sort key\. If multiple queries use different columns for filters, then you can often improve performance for those queries by using an interleaved sort style\. When a query uses restrictive predicates on secondary sort columns, interleaved sorting significantly improves query performance as compared to compound sorting\.
**Important**
Don't use an interleaved sort key on columns with monotonically increasing attributes, such as identity columns, dates, or timestamps\.
The performance improvements you gain by implementing an interleaved sort key should be weighed against increased load and vacuum times\.
Interleaved sorts are most effective with highly selective queries that filter on one or more of the sort key columns in the WHERE clause, for example `select c_name from customer where c_region = 'ASIA'`\. The benefits of interleaved sorting increase with the number of sorted columns that are restricted\.
An interleaved sort is more effective with large tables\. Sorting is applied on each slice, so an interleaved sort is most effective when a table is large enough to require multiple 1 MB blocks per slice and the query processor is able to skip a significant proportion of the blocks using restrictive predicates\. To view the number of blocks a table uses, query the [STV\_BLOCKLIST](r_STV_BLOCKLIST.md) system view\.
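A sketch of one way to check the block count for a table, assuming the usual join of STV\_BLOCKLIST to STV\_TBL\_PERM on table ID and slice \(the table name is hypothetical\)\.
```
-- count 1 MB blocks used by a table across all slices
select t.name, count(*) as blocks
from stv_blocklist b
join stv_tbl_perm t
  on b.tbl = t.id
 and b.slice = t.slice
where t.name = 'customer'
group by t.name;
```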
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Sorting_data.md
|
0ddf4483c322-1
|
When sorting on a single column, an interleaved sort might give better performance than a compound sort if the column values have a long common prefix\. For example, URLs commonly begin with "http://www"\. Compound sort keys use a limited number of characters from the prefix, which results in a lot of duplication of keys\. Interleaved sorts use an internal compression scheme for zone map values that enables them to better discriminate among column values that have a long common prefix\.
<a name="t_Sorting_data-interleaved-reindex"></a>
**VACUUM REINDEX**
As you add rows to a sorted table that already contains data, performance might deteriorate over time\. This deterioration occurs for both compound and interleaved sorts, but it has a greater effect on interleaved tables\. A VACUUM restores the sort order, but the operation can take longer for interleaved tables because merging new interleaved data might involve modifying every data block\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Sorting_data.md
|
0ddf4483c322-2
|
When tables are initially loaded, Amazon Redshift analyzes the distribution of the values in the sort key columns and uses that information for optimal interleaving of the sort key columns\. As a table grows, the distribution of the values in the sort key columns can change, or skew, especially with date or timestamp columns\. If the skew becomes too large, performance might be affected\. To re\-analyze the sort keys and restore performance, run the VACUUM command with the REINDEX key word\. Because it needs to take an extra analysis pass over the data, VACUUM REINDEX can take longer than a standard VACUUM for interleaved tables\. To view information about key distribution skew and last reindex time, query the [SVV\_INTERLEAVED\_COLUMNS](r_SVV_INTERLEAVED_COLUMNS.md) system view\.
For more information about how to determine how often to run VACUUM and when to run a VACUUM REINDEX, see [Deciding whether to reindex](r_vacuum-decide-whether-to-reindex.md)\.
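A minimal sketch: inspect skew, then reindex a table \(the table name `customer` is hypothetical\)\.
```
-- interleaved_skew close to 1.0 indicates little skew;
-- a large value or an old last_reindex suggests running VACUUM REINDEX
select tbl, col, interleaved_skew, last_reindex
from svv_interleaved_columns
order by interleaved_skew desc;

vacuum reindex customer;
```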
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Sorting_data.md
|
f61c3e01c4f3-0
|
**false**, true
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_json_serialization_parse_nested_strings.md
|
6fb5c9159d8d-0
|
A session configuration that modifies the JSON serialization behavior of ORC, JSON, Ion, and Parquet formatted data\. When both `json_serialization_parse_nested_strings` and `json_serialization_enable` are true, string values that are stored in complex types \(such as maps, structs, or arrays\) are parsed and written inline directly into the result if they are valid JSON\. If `json_serialization_parse_nested_strings` is false, strings within nested complex types are serialized as escaped JSON strings\.
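A short sketch of setting both parameters for the current session, assuming the standard SET syntax for session configuration parameters\.
```
-- serialize complex types as JSON, and parse nested strings that contain valid JSON inline
SET json_serialization_enable TO true;
SET json_serialization_parse_nested_strings TO true;
```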
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_json_serialization_parse_nested_strings.md
|
6fb5c9159d8d-1
|
types are serialized as escaped JSON strings\. For more information, see [Serializing complex types containing JSON strings](serializing-complex-JSON.md#serializing-complex-JSON-strings)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_json_serialization_parse_nested_strings.md
|
fd896cead623-0
|
TAN is a trigonometric function that returns the tangent of a number\. The input parameter is a number \(in radians\)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TAN.md
|
89421c195a0d-0
|
```
TAN(number)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TAN.md
|
37af54afd631-0
|
*number*
The input parameter is a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TAN.md
|
6f9281ad2700-0
|
The TAN function returns a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TAN.md
|
78dcd8377f4a-0
|
The following example returns the tangent of 0:
```
select tan(0);
tan
-----
0
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TAN.md
|
916c58971867-0
|
GETDATE returns the current date and time in the current session time zone \(UTC by default\)\. It returns the start date or time of the current statement, even when it is within a transaction block\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GETDATE.md
|
12d86bdd02a3-0
|
```
GETDATE()
```
The parentheses are required\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GETDATE.md
|
af25587264a1-0
|
TIMESTAMP
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GETDATE.md
|
bd4e6af9a98b-0
|
The following example uses the GETDATE\(\) function to return the full time stamp for the current date:
```
select getdate();
timestamp
---------------------
2008-12-04 16:10:43
(1 row)
```
The following example uses the GETDATE\(\) function inside the TRUNC function to return the current date without the time:
```
select trunc(getdate());
trunc
------------
2008-12-04
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GETDATE.md
|
5e049b4a5370-0
|
Use SVV\_EXTERNAL\_DATABASES to view details for external databases\.
SVV\_EXTERNAL\_DATABASES is visible to all users\. Superusers can see all rows; regular users can see only metadata to which they have access\. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\.
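For example, a quick look at the registered external databases \(a minimal sketch; see the linked AWS documentation page for the full column list\)\.
```
select * from svv_external_databases;
```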
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_EXTERNAL_DATABASES.md
|
fa71dc2c90e5-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_EXTERNAL_DATABASES.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_EXTERNAL_DATABASES.md
|
ffe8dec28411-0
|
You run COPY commands to load each of the tables in the SSB schema\. The COPY command examples demonstrate loading from different file formats, using several COPY command options, and troubleshooting load errors\.
**Topics**
+ [COPY command syntax](#tutorial-loading-data-copy-syntax)
+ [Loading the SSB tables](#tutorial-loading-run-copy-load-tables)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
4dc156ea488f-0
|
The basic [COPY](r_COPY.md) command syntax is as follows\.
```
COPY table_name [ column_list ] FROM data_source CREDENTIALS access_credentials [options]
```
To execute a COPY command, you provide the following values\.
<a name="tutorial-loading-syntax-table-name"></a>
**Table name**
The target table for the COPY command\. The table must already exist in the database\. The table can be temporary or persistent\. The COPY command appends the new input data to any existing rows in the table\.
<a name="tutorial-loading-syntax-column-list"></a>
**Column list**
By default, COPY loads fields from the source data to the table columns in order\. You can optionally specify a *column list*, that is, a comma\-separated list of column names, to map data fields to specific columns\. You don't use column lists in this tutorial\. For more information, see [Column List](copy-parameters-column-mapping.md#copy-column-list) in the COPY command reference\.
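As an aside, a column list looks like the following sketch \(the table, columns, and file here are hypothetical and are not used in this tutorial\)\.
```
copy venue (venueid, venuename, venuecity)
from 's3://<your-bucket-name>/load/venue_pipe.txt'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
delimiter '|';
```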
<a name="tutorial-loading-syntax-data-source.title"></a>Data source
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
4dc156ea488f-1
|
<a name="tutorial-loading-syntax-data-source.title"></a>Data source
You can use the COPY command to load data from an Amazon S3 bucket, an Amazon EMR cluster, a remote host using an SSH connection, or an Amazon DynamoDB table\. For this tutorial, you load from data files in an Amazon S3 bucket\. When loading from Amazon S3, you must provide the name of the bucket and the location of the data files\. To do this, provide either an object path for the data files or the location of a manifest file that explicitly lists each data file and its location\.
+ Key prefix
An object stored in Amazon S3 is uniquely identified by an object key, which includes the bucket name, folder names, if any, and the object name\. A *key prefix* refers to a set of objects with the same prefix\. The object path is a key prefix that the COPY command uses to load all objects that share the key prefix\. For example, the key prefix `custdata.txt` can refer to a single file or to a set of files, including `custdata.txt.001`, `custdata.txt.002`, and so on\.
+ Manifest file
In some cases, you might need to load files with different prefixes, for example from multiple buckets or folders\. In others, you might need to exclude files that share a prefix\. In these cases, you can use a manifest file\. A *manifest file* explicitly lists each load file and its unique object key\. You use a manifest file to load the PART table later in this tutorial\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
4dc156ea488f-2
|
<a name="tutorial-loading-syntax-credentials"></a>
**Credentials**
To access the AWS resources that contain the data to load, you must provide AWS access credentials for an AWS user or an IAM user with sufficient privileges\. These credentials are an access key ID and a secret access key\. To load data from Amazon S3, the credentials must include ListBucket and GetObject permissions\. Additional credentials are required if your data is encrypted or if you are using temporary access credentials\. For more information, see [Authorization parameters](copy-parameters-authorization.md) in the COPY command reference\. For more information about managing access, go to [Managing access permissions to your Amazon S3 resources](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html)\. If you do not have an access key ID and secret access key, you need to get them\. For more information, go to [Administering access keys for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html)\.
<a name="tutorial-loading-syntax-options.title"></a>Options
You can specify a number of parameters with the COPY command to specify file formats, manage data formats, manage errors, and control other features\. In this tutorial, you use the following COPY command options and features:
+ Key prefix
For information on how to load from multiple files by specifying a key prefix, see [Load the PART table using NULL AS](#tutorial-loading-load-part)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
4dc156ea488f-3
|
+ CSV format
For information on how to load data that is in CSV format, see [Load the PART table using NULL AS](#tutorial-loading-load-part)\.
+ NULL AS
For information on how to load PART using the NULL AS option, see [Load the PART table using NULL AS](#tutorial-loading-load-part)\.
+ Character\-delimited format
For information on how to use the DELIMITER option, see [Load the SUPPLIER table using REGION](#tutorial-loading-load-supplier)\.
+ REGION
For information on how to use the REGION option, see [Load the SUPPLIER table using REGION](#tutorial-loading-load-supplier)\.
+ Fixed\-format width
For information on how to load the CUSTOMER table from fixed\-width data, see [Load the CUSTOMER table using MANIFEST](#tutorial-loading-load-customer)\.
+ MAXERROR
For information on how to use the MAXERROR option, see [Load the CUSTOMER table using MANIFEST](#tutorial-loading-load-customer)\.
+ ACCEPTINVCHARS
For information on how to use the ACCEPTINVCHARS option, see [Load the CUSTOMER table using MANIFEST](#tutorial-loading-load-customer)\.
+ MANIFEST
For information on how to use the MANIFEST option, see [Load the CUSTOMER table using MANIFEST](#tutorial-loading-load-customer)\.
+ DATEFORMAT
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
4dc156ea488f-4
|
+ DATEFORMAT
For information on how to use the DATEFORMAT option, see [Load the DWDATE table using DATEFORMAT](#tutorial-loading-load-dwdate)\.
+ GZIP, LZOP and BZIP2
For information on how to compress your files, see [Load the LINEORDER table using multiple files](#tutorial-loading-load-lineorder)\.
+ COMPUPDATE
For information on how to use the COMPUPDATE option, see [Load the LINEORDER table using multiple files](#tutorial-loading-load-lineorder)\.
+ Multiple files
For information on how to load multiple files, see [Load the LINEORDER table using multiple files](#tutorial-loading-load-lineorder)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
5674be001429-0
|
You use the following COPY commands to load each of the tables in the SSB schema\. The command for each table demonstrates different COPY options and troubleshooting techniques\.
To load the SSB tables, follow these steps:
1. [Replace the bucket name and AWS credentials](#tutorial-loading-run-copy-replaceables)
1. [Load the PART table using NULL AS](#tutorial-loading-load-part)
1. [Load the SUPPLIER table using REGION](#tutorial-loading-load-supplier)
1. [Load the CUSTOMER table using MANIFEST](#tutorial-loading-load-customer)
1. [Load the DWDATE table using DATEFORMAT](#tutorial-loading-load-dwdate)
1. [Load the LINEORDER table using multiple files](#tutorial-loading-load-lineorder)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
e4168e91c9ac-0
|
The COPY commands in this tutorial are presented in the following format\.
```
copy table from 's3://<your-bucket-name>/load/key_prefix'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
options;
```
For each COPY command, do the following:
1. Replace *<your\-bucket\-name>* with the name of a bucket in the same region as your cluster\.
This step assumes the bucket and the cluster are in the same region\. Alternatively, you can specify the region using the [REGION](copy-parameters-data-source-s3.md#copy-region) option with the COPY command\.
1. Replace *<Your\-Access\-Key\-ID>* and *<Your\-Secret\-Access\-Key>* with your own AWS IAM account credentials\. The segment of the credentials string that is enclosed in single quotation marks must not contain any spaces or line breaks\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
ee8fa2abaf50-0
|
In this step, you use the CSV and NULL AS options to load the PART table\.
The COPY command can load data from multiple files in parallel, which is much faster than loading from a single file\. To demonstrate this principle, the data for each table in this tutorial is split into eight files, even though the files are very small\. In a later step, you compare the time difference between loading from a single file and loading from multiple files\. For more information, see [Split your load data into multiple files](c_best-practices-use-multiple-files.md)\.
<a name="tutorial-loading-key-prefix"></a>
**Key prefix**
You can load from multiple files by specifying a key prefix for the file set, or by explicitly listing the files in a manifest file\. In this step, you use a key prefix\. In a later step, you use a manifest file\. The key prefix `'s3://mybucket/load/part-csv.tbl'` loads the following set of files in the `load` folder\.
```
part-csv.tbl-000
part-csv.tbl-001
part-csv.tbl-002
part-csv.tbl-003
part-csv.tbl-004
part-csv.tbl-005
part-csv.tbl-006
part-csv.tbl-007
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
ee8fa2abaf50-1
|
part-csv.tbl-006
part-csv.tbl-007
```
<a name="tutorial-loading-csv-format"></a>
**CSV format**
CSV, which stands for comma\-separated values, is a common format used for importing and exporting spreadsheet data\. CSV is more flexible than comma\-delimited format because it enables you to include quoted strings within fields\. The default quote character for COPY from CSV format is a double quotation mark \( " \), but you can specify another quote character by using the QUOTE AS option\. When you use the quote character within a field, escape the character with an additional quote character\.
The following excerpt from a CSV\-formatted data file for the PART table shows strings enclosed in double quotation marks \(`"LARGE ANODIZED BRASS"`\)\. It also shows a string enclosed in two double quotation marks within a quoted string \(`"MEDIUM ""BURNISHED"" TIN"`\)\.
```
15,dark sky,MFGR#3,MFGR#47,MFGR#3438,indigo,"LARGE ANODIZED BRASS",45,LG CASE
22,floral beige,MFGR#4,MFGR#44,MFGR#4421,medium,"PROMO, POLISHED BRASS",19,LG DRUM
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
ee8fa2abaf50-2
|
22,floral beige,MFGR#4,MFGR#44,MFGR#4421,medium,"PROMO, POLISHED BRASS",19,LG DRUM
23,bisque slate,MFGR#4,MFGR#41,MFGR#4137,firebrick,"MEDIUM ""BURNISHED"" TIN",42,JUMBO JAR
```
The data for the PART table contains characters that cause COPY to fail\. In this exercise, you troubleshoot the errors and correct them\.
To load data that is in CSV format, add `csv` to your COPY command\. Execute the following command to load the PART table\.
```
copy part from 's3://<your-bucket-name>/load/part-csv.tbl'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
csv;
```
You should get an error message similar to the following\.
```
An error occurred when executing the SQL command:
copy part from 's3://mybucket/load/part-csv.tbl'
credentials' ...
ERROR: Load into table 'part' failed. Check 'stl_load_errors' system table for details. [SQL State=XX000]
Execution time: 1.46s
1 statement(s) failed.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
ee8fa2abaf50-3
|
Execution time: 1.46s
1 statement(s) failed.
1 statement(s) failed.
```
To get more information about the error, query the STL\_LOAD\_ERRORS table\. The following query uses the SUBSTRING function to shorten columns for readability and uses LIMIT 10 to reduce the number of rows returned\. You can adjust the values in `substring(filename,22,25)` to allow for the length of your bucket name\.
```
select query, substring(filename,22,25) as filename,line_number as line,
substring(colname,0,12) as column, type, position as pos, substring(raw_line,0,30) as line_text,
substring(raw_field_value,0,15) as field_text,
substring(err_reason,0,45) as reason
from stl_load_errors
order by query desc
limit 10;
```
```
query | filename | line | column | type | pos |
--------+-------------------------+-----------+------------+------------+-----+----
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
ee8fa2abaf50-4
|
333765 | part-csv.tbl-000 | 1 | | | 0 |
line_text | field_text | reason
------------------+------------+----------------------------------------------
15,NUL next, | | Missing newline: Unexpected character 0x2c f
```
<a name="tutorial-loading-null-as"></a>
**NULL AS**
The `part-csv.tbl` data files use the NUL terminator character \(`\x000` or `\x0`\) to indicate NULL values\.
**Note**
Despite very similar spelling, NUL and NULL are not the same\. NUL is a UTF\-8 character with codepoint `x000` that is often used to indicate end of record \(EOR\)\. NULL is a SQL value that represents an absence of data\.
By default, COPY treats a NUL terminator character as an EOR character and terminates the record, which often leads to unexpected results or an error\. There is no single standard method of indicating NULL in text data\. Thus, the NULL AS COPY command option enables you to specify which character to substitute with NULL when loading the table\. In this example, you want COPY to treat the NUL terminator character as a NULL value\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
ee8fa2abaf50-5
|
**Note**
The table column that receives the NULL value must be configured as *nullable\.* That is, it must not include the NOT NULL constraint in the CREATE TABLE specification\.
To load PART using the NULL AS option, execute the following COPY command\.
```
copy part from 's3://<your-bucket-name>/load/part-csv.tbl'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
csv
null as '\000';
```
To verify that COPY loaded NULL values, execute the following command to select only the rows that contain NULL\.
```
select p_partkey, p_name, p_mfgr, p_category from part where p_mfgr is null;
```
```
p_partkey | p_name | p_mfgr | p_category
-----------+----------+--------+------------
15 | NUL next | | MFGR#47
81 | NUL next | | MFGR#23
133 | NUL next | | MFGR#44
(3 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
92bb9771c1dc-0
|
In this step, you use the DELIMITER and REGION options to load the SUPPLIER table\.
**Note**
The files for loading the SUPPLIER table are provided in an AWS sample bucket\. You don't need to upload files for this step\.
<a name="tutorial-loading-character-delimited-format"></a>
**Character\-Delimited Format**
The fields in a character\-delimited file are separated by a specific character, such as a pipe character \( \| \), a comma \( , \) or a tab \( \\t \)\. Character\-delimited files can use any single ASCII character, including one of the nonprinting ASCII characters, as the delimiter\. You specify the delimiter character by using the DELIMITER option\. The default delimiter is a pipe character \( \| \)\.
The following excerpt from the data for the SUPPLIER table uses pipe\-delimited format\.
```
1|1|257368|465569|41365|19950218|2-HIGH|0|17|2608718|9783671|4|2504369|92072|2|19950331|TRUCK
1|2|257368|201928|8146|19950218|2-HIGH|0|36|6587676|9783671|9|5994785|109794|6|19950416|MAIL
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
92bb9771c1dc-1
|
```
<a name="tutorial-loading-region"></a>
**REGION**
Whenever possible, you should locate your load data in the same AWS region as your Amazon Redshift cluster\. If your data and your cluster are in the same region, you reduce latency, minimize eventual consistency issues, and avoid cross\-region data transfer costs\. For more information, see [Amazon Redshift best practices for loading data](c_loading-data-best-practices.md)
If you must load data from a different AWS region, use the REGION option to specify the AWS region in which the load data is located\. If you specify a region, all of the load data, including manifest files, must be in the named region\. For more information, see [REGION](copy-parameters-data-source-s3.md#copy-region)\.
If your cluster is in the US East \(N\. Virginia\) region, execute the following command to load the SUPPLIER table from pipe\-delimited data in an Amazon S3 bucket located in the US West \(Oregon\) region\. For this example, do not change the bucket name\.
```
copy supplier from 's3://awssampledbuswest2/ssbgz/supplier.tbl'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
delimiter '|'
gzip
region 'us-west-2';
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
92bb9771c1dc-2
|
delimiter '|'
gzip
region 'us-west-2';
```
If your cluster is *not* in the US East \(N\. Virginia\) region, execute the following command to load the SUPPLIER table from pipe\-delimited data in an Amazon S3 bucket located in the US East \(N\. Virginia\) region\. For this example, do not change the bucket name\.
```
copy supplier from 's3://awssampledb/ssbgz/supplier.tbl'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
delimiter '|'
gzip
region 'us-east-1';
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-0
|
In this step, you use the FIXEDWIDTH, MAXERROR, ACCEPTINVCHARS, and MANIFEST options to load the CUSTOMER table\.
The sample data for this exercise contains characters that cause errors when COPY attempts to load them\. You use the MAXERROR option and the STL\_LOAD\_ERRORS system table to troubleshoot the load errors and then use the ACCEPTINVCHARS and MANIFEST options to eliminate the errors\.
<a name="tutorial-loading-fixed-width"></a>
**Fixed\-Width Format**
Fixed\-width format defines each field as a fixed number of characters, rather than separating fields with a delimiter\. The following excerpt from the data for the CUSTOMER table uses fixed\-width format\.
```
1 Customer#000000001 IVhzIApeRb MOROCCO 0MOROCCO AFRICA 25-705
2 Customer#000000002 XSTf4,NCwDVaWNe6tE JORDAN 6JORDAN MIDDLE EAST 23-453
3 Customer#000000003 MG9kdTD ARGENTINA5ARGENTINAAMERICA 11-783
```
The order of the label/width pairs must match the order of the table columns exactly\. For more information, see [FIXEDWIDTH](copy-parameters-data-format.md#copy-fixedwidth)\.
The fixed\-width specification string for the CUSTOMER table data is as follows\.
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-1
|
The fixed\-width specification string for the CUSTOMER table data is as follows\.
```
fixedwidth 'c_custkey:10, c_name:25, c_address:25, c_city:10, c_nation:15,
c_region :12, c_phone:15,c_mktsegment:10'
```
To load the CUSTOMER table from fixed\-width data, execute the following command\.
```
copy customer
from 's3://<your-bucket-name>/load/customer-fw.tbl'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
fixedwidth 'c_custkey:10, c_name:25, c_address:25, c_city:10, c_nation:15, c_region :12, c_phone:15,c_mktsegment:10';
```
You should get an error message similar to the following\.
```
An error occurred when executing the SQL command:
copy customer
from 's3://mybucket/load/customer-fw.tbl'
credentials'aws_access_key_id=...
ERROR: Load into table 'customer' failed. Check 'stl_load_errors' system table for details. [SQL State=XX000]
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-2
|
ERROR: Load into table 'customer' failed. Check 'stl_load_errors' system table for details. [SQL State=XX000]
Execution time: 2.95s
1 statement(s) failed.
```
<a name="tutorial-loading-maxerror"></a>
**MAXERROR**
By default, the first time COPY encounters an error, the command fails and returns an error message\. To save time during testing, you can use the MAXERROR option to instruct COPY to skip a specified number of errors before it fails\. Because we expect errors the first time we test loading the CUSTOMER table data, add `maxerror 10` to the COPY command\.
To test using the FIXEDWIDTH and MAXERROR options, execute the following command\.
```
copy customer
from 's3://<your-bucket-name>/load/customer-fw.tbl'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
fixedwidth 'c_custkey:10, c_name:25, c_address:25, c_city:10, c_nation:15, c_region :12, c_phone:15,c_mktsegment:10'
maxerror 10;
```
This time, instead of an error message, you get a warning message similar to the following\.
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-3
|
```
This time, instead of an error message, you get a warning message similar to the following\.
```
Warnings:
Load into table 'customer' completed, 112497 record(s) loaded successfully.
Load into table 'customer' completed, 7 record(s) could not be loaded. Check 'stl_load_errors' system table for details.
```
The warning indicates that COPY encountered seven errors\. To check the errors, query the STL\_LOAD\_ERRORS table, as shown in the following example\.
```
select query, substring(filename,22,25) as filename,line_number as line,
substring(colname,0,12) as column, type, position as pos, substring(raw_line,0,30) as line_text,
substring(raw_field_value,0,15) as field_text,
substring(err_reason,0,45) as error_reason
from stl_load_errors
order by query desc, filename
limit 7;
```
The results of the STL\_LOAD\_ERRORS query should look similar to the following\.
```
query | filename | line | column | type | pos | line_text | field_text | error_reason
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-4
|
```
query | filename | line | column | type | pos | line_text | field_text | error_reason
--------+---------------------------+------+-----------+------------+-----+-------------------------------+------------+----------------------------------------------
334489 | customer-fw.tbl.log | 2 | c_custkey | int4 | -1 | customer-fw.tbl | customer-f | Invalid digit, Value 'c', Pos 0, Type: Integ
334489 | customer-fw.tbl.log | 6 | c_custkey | int4 | -1 | Complete | Complete | Invalid digit, Value 'C', Pos 0, Type: Integ
334489 | customer-fw.tbl.log | 3 | c_custkey | int4 | -1 | #Total rows | #Total row | Invalid digit, Value '#', Pos 0, Type: Integ
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-5
|
334489 | customer-fw.tbl.log | 5 | c_custkey | int4 | -1 | #Status | #Status | Invalid digit, Value '#', Pos 0, Type: Integ
334489 | customer-fw.tbl.log | 1 | c_custkey | int4 | -1 | #Load file | #Load file | Invalid digit, Value '#', Pos 0, Type: Integ
334489 | customer-fw.tbl000 | 1 | c_address | varchar | 34 | 1 Customer#000000001 | .Mayag.ezR | String contains invalid or unsupported UTF8
334489 | customer-fw.tbl000 | 1 | c_address | varchar | 34 | 1 Customer#000000001 | .Mayag.ezR | String contains invalid or unsupported UTF8
(7 rows)
```
By examining the results, you can see that there are two messages in the `error_reason` column:
+
```
Invalid digit, Value '#', Pos 0, Type: Integ
```
These errors are caused by the `customer-fw.tbl.log` file\. The problem is that it is a log file, not a data file, and should not be loaded\. You can use a manifest file to avoid loading the wrong file\.
+
```
String contains invalid or unsupported UTF8
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-6
|
+
```
String contains invalid or unsupported UTF8
```
The VARCHAR data type supports multibyte UTF\-8 characters up to three bytes\. If the load data contains unsupported or invalid characters, you can use the ACCEPTINVCHARS option to replace each invalid character with a specified alternative character\.
Another problem with the load is more difficult to detect—the load produced unexpected results\. To investigate this problem, execute the following command to query the CUSTOMER table\.
```
select c_custkey, c_name, c_address
from customer
order by c_custkey
limit 10;
```
```
c_custkey | c_name | c_address
-----------+---------------------------+---------------------------
2 | Customer#000000002 | XSTf4,NCwDVaWNe6tE
2 | Customer#000000002 | XSTf4,NCwDVaWNe6tE
3 | Customer#000000003 | MG9kdTD
3 | Customer#000000003 | MG9kdTD
4 | Customer#000000004 | XxVSJsL
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-7
|
3 | Customer#000000003 | MG9kdTD
4 | Customer#000000004 | XxVSJsL
4 | Customer#000000004 | XxVSJsL
5 | Customer#000000005 | KvpyuHCplrB84WgAi
5 | Customer#000000005 | KvpyuHCplrB84WgAi
6 | Customer#000000006 | sKZz0CsnMD7mp4Xd0YrBvx
6 | Customer#000000006 | sKZz0CsnMD7mp4Xd0YrBvx
(10 rows)
```
The rows should be unique, but there are duplicates\.
Another way to check for unexpected results is to verify the number of rows that were loaded\. In our case, 100000 rows should have been loaded, but the load message reported loading 112497 records\. The extra rows were loaded because the COPY loaded an extraneous file, `customer-fw.tbl0000.bak`\.
In this exercise, you use a manifest file to avoid loading the wrong files\.
<a name="tutorial-loading-acceptinvchars"></a>
**ACCEPTINVCHARS**
By default, when COPY encounters a character that is not supported by the column's data type, it skips the row and returns an error\. For information about invalid UTF\-8 characters, see [Multibyte character load errors](multi-byte-character-load-errors.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-8
|
You could use the MAXERROR option to ignore errors and continue loading, then query STL\_LOAD\_ERRORS to locate the invalid characters, and then fix the data files\. However, MAXERROR is best used for troubleshooting load problems and should generally not be used in a production environment\.
The ACCEPTINVCHARS option is usually a better choice for managing invalid characters\. ACCEPTINVCHARS instructs COPY to replace each invalid character with a specified valid character and continue with the load operation\. You can specify any valid ASCII character, except NUL, as the replacement character\. The default replacement character is a question mark \( ? \)\. COPY replaces multibyte characters with a replacement string of equal length\. For example, a 4\-byte character would be replaced with `'????'`\.
COPY returns the number of rows that contained invalid UTF\-8 characters\. It also adds an entry to the STL\_REPLACEMENTS system table for each affected row, up to a maximum of 100 rows per node slice\. Additional invalid UTF\-8 characters are also replaced, but those replacement events are not recorded\.
ACCEPTINVCHARS is valid only for VARCHAR columns\.
For this step, you add the ACCEPTINVCHARS option with the replacement character `'^'`\.
<a name="tutorial-loading-manifest"></a>
**MANIFEST**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-9
|
<a name="tutorial-loading-manifest"></a>
**MANIFEST**
When you COPY from Amazon S3 using a key prefix, there is a risk that you might load unwanted files\. For example, the `s3://mybucket/load/` folder contains eight data files that share the key prefix `customer-fw.tbl`: `customer-fw.tbl0000`, `customer-fw.tbl0001`, and so on\. However, the same folder also contains the extraneous files `customer-fw.tbl.log` and `customer-fw.tbl-0001.bak`\.
To ensure that you load all of the correct files, and only the correct files, use a manifest file\. The manifest is a text file in JSON format that explicitly lists the unique object key for each source file to be loaded\. The file objects can be in different folders or different buckets, but they must be in the same region\. For more information, see [MANIFEST](copy-parameters-data-source-s3.md#copy-manifest)\.
The following shows the `customer-fw-manifest` text\.
```
{
"entries": [
{"url":"s3://<your-bucket-name>/load/customer-fw.tbl-000"},
{"url":"s3://<your-bucket-name>/load/customer-fw.tbl-001"},
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-10
|
{"url":"s3://<your-bucket-name>/load/customer-fw.tbl-001"},
{"url":"s3://<your-bucket-name>/load/customer-fw.tbl-002"},
{"url":"s3://<your-bucket-name>/load/customer-fw.tbl-003"},
{"url":"s3://<your-bucket-name>/load/customer-fw.tbl-004"},
{"url":"s3://<your-bucket-name>/load/customer-fw.tbl-005"},
{"url":"s3://<your-bucket-name>/load/customer-fw.tbl-006"},
{"url":"s3://<your-bucket-name>/load/customer-fw.tbl-007"}
]
}
```
**To load the data for the CUSTOMER table using the manifest file**
1. Open the file `customer-fw-manifest` in a text editor\.
1. Replace *<your\-bucket\-name>* with the name of your bucket\.
1. Save the file\.
1. Upload the file to the load folder on your bucket\.
1. Execute the following COPY command\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
854d8da99da8-11
|
1. Upload the file to the load folder on your bucket\.
1. Execute the following COPY command\.
```
copy customer from 's3://<your-bucket-name>/load/customer-fw-manifest'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
fixedwidth 'c_custkey:10, c_name:25, c_address:25, c_city:10, c_nation:15, c_region :12, c_phone:15,c_mktsegment:10'
maxerror 10
acceptinvchars as '^'
manifest;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
1513386302a3-0
|
In this step, you use the DELIMITER and DATEFORMAT options to load the DWDATE table\.
When loading DATE and TIMESTAMP columns, COPY expects the default format, which is YYYY\-MM\-DD for dates and YYYY\-MM\-DD HH:MI:SS for time stamps\. If the load data does not use a default format, you can use DATEFORMAT and TIMEFORMAT to specify the format\.
The following excerpt shows date formats in the DWDATE table\. Notice that the date formats in column two are inconsistent\.
```
19920104 1992-01-04 Sunday January 1992 199201 Jan1992 1 4 4 1...
19920112 January 12, 1992 Monday January 1992 199201 Jan1992 2 12 12 1...
19920120 January 20, 1992 Tuesday January 1992 199201 Jan1992 3 20 20 1...
```
<a name="tutorial-loading-dateformat"></a>
**DATEFORMAT**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
1513386302a3-1
|
```
<a name="tutorial-loading-dateformat"></a>
**DATEFORMAT**
You can specify only one date format\. If the load data contains inconsistent formats, possibly in different columns, or if the format is not known at load time, you use DATEFORMAT with the `'auto'` argument\. When `'auto'` is specified, COPY recognizes any valid date or time format and converts it to the default format\. The `'auto'` option recognizes several formats that are not supported when using a DATEFORMAT and TIMEFORMAT string\. For more information, see [Using automatic recognition with DATEFORMAT and TIMEFORMAT](automatic-recognition.md)\.
To load the DWDATE table, execute the following COPY command\.
```
copy dwdate from 's3://<your-bucket-name>/load/dwdate-tab.tbl'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
delimiter '\t'
dateformat 'auto';
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
340ce7b23a46-0
|
This step uses the GZIP and COMPUPDATE options to load the LINEORDER table\.
In this exercise, you load the LINEORDER table from a single data file and then load it again from multiple files\. Doing this enables you to compare the load times for the two methods\.
**Note**
The files for loading the LINEORDER table are provided in an AWS sample bucket\. You don't need to upload files for this step\.
<a name="tutorial-loading-gzip-lzop"></a>
**GZIP, LZOP and BZIP2**
You can compress your files using gzip, lzop, or bzip2 compression\. When loading from compressed files, COPY uncompresses the files during the load process\. Compressing your files saves storage space and shortens upload times\.
<a name="tutorial-loading-compupdate"></a>
**COMPUPDATE**
When COPY loads an empty table with no compression encodings, it analyzes the load data to determine the optimal encodings\. It then alters the table to use those encodings before beginning the load\. This analysis process takes time, but it occurs, at most, once per table\. To save time, you can skip this step by turning COMPUPDATE off\. To enable an accurate evaluation of COPY times, you turn COMPUPDATE off for this step\.
<a name="tutorial-loading-multiple-files"></a>
**Multiple Files**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
340ce7b23a46-1
|
<a name="tutorial-loading-multiple-files"></a>
**Multiple Files**
The COPY command can load data very efficiently when it loads from multiple files in parallel instead of from a single file\. You can split your data into files so that the number of files is a multiple of the number of slices in your cluster\. If you do, Amazon Redshift divides the workload and distributes the data evenly among the slices\. The number of slices per node depends on the node size of the cluster\. For more information about the number of slices that each node size has, go to [About clusters and nodes](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-about-clusters-and-nodes) in the *Amazon Redshift Cluster Management Guide*\.
For example, the dc2\.large compute nodes used in this tutorial have two slices each, so the four\-node cluster has eight slices\. In previous steps, the load data was contained in eight files, even though the files are very small\. In this step, you compare the time difference between loading from a single large file and loading from multiple files\.
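You can confirm how many slices your own cluster has by querying the STV\_SLICES system view, as in the following sketch\.
```
-- Each row in STV_SLICES represents one slice; a four-node dc2.large cluster returns 8.
select count(*) as slices from stv_slices;
```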
The files you use for this tutorial contain about 15 million records and occupy about 1\.2 GB\. These files are very small by Amazon Redshift standards, but sufficient to demonstrate the performance advantage of loading from multiple files\. The files are large enough that the time required to download them and then upload them to Amazon S3 is excessive for this tutorial\. Thus, you load the files directly from an AWS sample bucket\.
The following screenshot shows the data files for LINEORDER\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
340ce7b23a46-2
|
The following screenshot shows the data files for LINEORDER\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/tutorial-load-lineorder-files.png)
**To evaluate the performance of COPY with multiple files**
1. Execute the following command to COPY from a single file\. Do not change the bucket name\.
```
copy lineorder from 's3://awssampledb/load/lo/lineorder-single.tbl'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip
compupdate off
region 'us-east-1';
```
1. Your results should be similar to the following\. Note the execution time\.
```
Warnings:
Load into table 'lineorder' completed, 14996734 record(s) loaded successfully.
0 row(s) affected.
copy executed successfully
Execution time: 51.56s
```
1. Execute the following command to COPY from multiple files\. Do not change the bucket name\.
```
copy lineorder from 's3://awssampledb/load/lo/lineorder-multi.tbl'
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
340ce7b23a46-3
|
```
copy lineorder from 's3://awssampledb/load/lo/lineorder-multi.tbl'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip
compupdate off
region 'us-east-1';
```
1. Your results should be similar to the following\. Note the execution time\.
```
Warnings:
Load into table 'lineorder' completed, 14996734 record(s) loaded successfully.
0 row(s) affected.
copy executed successfully
Execution time: 17.7s
```
1. Compare execution times\.
In our example, the time to load 15 million records decreased from 51\.56 seconds to 17\.7 seconds, a reduction of 65\.7 percent\.
These results are based on using a four\-node cluster\. If your cluster has more nodes, the time savings are multiplied\. For typical Amazon Redshift clusters, with tens to hundreds of nodes, the difference is even more dramatic\. If you have a single\-node cluster, there is little difference between the execution times\.
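If you want to retrieve the two load times from the system tables rather than from the client output, a query against STL\_QUERY similar to the following sketch returns the elapsed time for each COPY that loaded LINEORDER\. The filter on the query text is an assumption based on the commands used in this tutorial\.
```
-- Elapsed time, in seconds, for each COPY statement that loaded LINEORDER.
select query, trim(querytxt) as copy_statement,
       datediff(seconds, starttime, endtime) as elapsed_seconds
from stl_query
where querytxt ilike 'copy lineorder%'
order by starttime;
```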
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
24dc1c228be8-0
|
[Step 6: Vacuum and analyze the database](tutorial-loading-data-vacuum.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-run-copy.md
|
c7737554f45e-0
|
The STDDEV\_SAMP and STDDEV\_POP functions return the sample and population standard deviation of a set of numeric values \(integer, decimal, or floating\-point\)\. The result of the STDDEV\_SAMP function is equivalent to the square root of the sample variance of the same set of values\.
STDDEV\_SAMP and STDDEV are synonyms for the same function\.
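Because the two names resolve to the same function, a query such as the following sketch \(using the COMMISSION column of the SALES table from the sample database\) returns identical values in both columns\.
```
select cast(stddev(commission) as dec(18,10)) as stddev_result,
       cast(stddev_samp(commission) as dec(18,10)) as stddev_samp_result
from sales;
```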
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STDDEV_functions.md
|
fcf9bdea06d7-0
|
```
STDDEV_SAMP | STDDEV ( [ DISTINCT | ALL ] expression)
STDDEV_POP ( [ DISTINCT | ALL ] expression)
```
The expression must have an integer, decimal, or floating point data type\. Regardless of the data type of the expression, the return type of this function is a double precision number\.
**Note**
Standard deviation is calculated using floating point arithmetic, which might result in slight imprecision\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STDDEV_functions.md
|
5b3440548535-0
|
When the sample standard deviation \(STDDEV or STDDEV\_SAMP\) is calculated for an expression that consists of a single value, the result of the function is NULL, not 0\.
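For example, the following sketch computes the sample standard deviation over a single row\. It assumes that VENUEID 1 identifies exactly one row in the VENUE sample table; the result is NULL rather than 0\.
```
select stddev_samp(venueseats) from venue where venueid = 1;
```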
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STDDEV_functions.md
|
a66bfad4d2c7-0
|
The following query returns the average of the values in the VENUESEATS column of the VENUE table, followed by the sample standard deviation and population standard deviation of the same set of values\. VENUESEATS is an INTEGER column\. The scale of the result is reduced to 2 digits\.
```
select avg(venueseats),
cast(stddev_samp(venueseats) as dec(14,2)) stddevsamp,
cast(stddev_pop(venueseats) as dec(14,2)) stddevpop
from venue;
avg | stddevsamp | stddevpop
-------+------------+-----------
17503 | 27847.76 | 27773.20
(1 row)
```
The following query returns the sample standard deviation for the COMMISSION column in the SALES table\. COMMISSION is a DECIMAL column\. The scale of the result is reduced to 10 digits\.
```
select cast(stddev(commission) as dec(18,10))
from sales;
stddev
----------------
130.3912659086
(1 row)
```
The following query casts the sample standard deviation for the COMMISSION column as an integer\.
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STDDEV_functions.md
|
a66bfad4d2c7-1
|
The following query casts the sample standard deviation for the COMMISSION column as an integer\.
```
select cast(stddev(commission) as integer)
from sales;
stddev
--------
130
(1 row)
```
The following query returns both the sample standard deviation and the square root of the sample variance for the COMMISSION column\. The results of these calculations are the same\.
```
select
cast(stddev_samp(commission) as dec(18,10)) stddevsamp,
cast(sqrt(var_samp(commission)) as dec(18,10)) sqrtvarsamp
from sales;
stddevsamp | sqrtvarsamp
----------------+----------------
130.3912659086 | 130.3912659086
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STDDEV_functions.md
|
37264359c058-0
|
Returns the base 10 logarithm of a number\.
Synonym of [DLOG10 function](r_DLOG10.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LOG.md
|
baee6417e382-0
|
```
LOG(number)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LOG.md
|
bcb445b0c33c-0
|
*number*
The input parameter is a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LOG.md
|
4549fbe8d540-0
|
The LOG function returns a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LOG.md
|
f666ba7d2f8b-0
|
The following example returns the base 10 logarithm of the number 100:
```
select log(100);
dlog10
--------
2
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LOG.md
|
6c42d74ef5a0-0
|
The default automatic commit behavior causes each SQL command that runs separately to commit individually\. A call to a stored procedure is treated as a single SQL command\. The SQL statements inside a procedure behave as if they are in a transaction block that implicitly begins when the call starts and ends when the call finishes\. A nested call to another procedure is treated like any other SQL statement and operates within the context of the same transaction as the caller\. For more information about automatic commit behavior, see [Serializable isolation](c_serial_isolation.md)\.
However, when you call a stored procedure from within a user\-specified transaction block \(defined by BEGIN\.\.\.COMMIT\), all statements in the stored procedure run in the context of the user\-specified transaction\. The procedure doesn't commit implicitly on exit\. The caller controls the procedure commit or rollback\.
If any error is encountered while running a stored procedure, all changes made in the current transaction are rolled back\.
You can use the following transaction control statements in a stored procedure:
+ COMMIT – commits all work done in the current transaction and implicitly begins a new transaction\. For more information, see [COMMIT](r_COMMIT.md)\.
+ ROLLBACK – rolls back the work done in the current transaction and implicitly begins a new transaction\. For more information, see [ROLLBACK](r_ROLLBACK.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-1
|
TRUNCATE is another statement that you can issue from within a stored procedure and influences transaction management\. In Amazon Redshift, TRUNCATE issues a commit implicitly\. This behavior stays the same in the context of stored procedures\. When a TRUNCATE statement is issued from within a stored procedure, it commits the current transaction and begins a new one\. For more information, see [TRUNCATE](r_TRUNCATE.md)\.
All statements that follow a COMMIT, ROLLBACK, or TRUNCATE statement run in the context of a new transaction\. They do so until a COMMIT, ROLLBACK, or TRUNCATE statement is encountered or the stored procedure exits\.
When you use a COMMIT, ROLLBACK, or TRUNCATE statement from within a stored procedure, the following constraints apply:
+ If the stored procedure is called from within a transaction block, it can't issue a COMMIT, ROLLBACK, or TRUNCATE statement\. This restriction applies within the stored procedure's own body and within any nested procedure call\.
+ If the stored procedure is created with `SET config` options, it can't issue a COMMIT, ROLLBACK, or TRUNCATE statement\. This restriction applies within the stored procedure's own body and within any nested procedure call\.
+ Any cursor that is open \(explicitly or implicitly\) is closed automatically when a COMMIT, ROLLBACK, or TRUNCATE statement is processed\. For constraints on explicit and implicit cursors, see [Limits and differences for stored procedure support](stored-procedure-constraints.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-2
|
Additionally, you can't run COMMIT or ROLLBACK using dynamic SQL\. However, you can run TRUNCATE using dynamic SQL\. For more information, see [Dynamic SQL](c_PLpgSQL-statements.md#r_PLpgSQL-dynamic-sql)\.
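As a sketch of that distinction, the following hypothetical procedure runs TRUNCATE through dynamic SQL\. Attempting the same with COMMIT or ROLLBACK fails; the procedure and table names here are illustrative only\.
```
CREATE OR REPLACE PROCEDURE sp_truncate_dynamic(tbl varchar) LANGUAGE plpgsql
AS $$
BEGIN
    -- TRUNCATE is allowed through dynamic SQL; COMMIT and ROLLBACK are not.
    EXECUTE 'TRUNCATE ' || tbl;
END;
$$;

CALL sp_truncate_dynamic('test_table_b');
```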
When working with stored procedures, consider that the BEGIN and END statements in PL/pgSQL are only for grouping\. They don't start or end a transaction\. For more information, see [Block](c_PLpgSQL-structure.md#r_PLpgSQL-block)\.
The following example demonstrates transaction behavior when calling a stored procedure from within an explicit transaction block\. The two insert statements issued from outside the stored procedure and the one from within it are all part of the same transaction \(3382\)\. The transaction is committed when the user issues the explicit commit\.
```
CREATE OR REPLACE PROCEDURE sp_insert_table_a(a int) LANGUAGE plpgsql
AS $$
BEGIN
INSERT INTO test_table_a values (a);
END;
$$;
Begin;
insert into test_table_a values (1);
Call sp_insert_table_a(2);
insert into test_table_a values (3);
Commit;
select userid, xid, pid, type, trim(text) as stmt_text
from svl_statementtext where pid = pg_backend_pid() order by xid , starttime , sequence;
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-3
|
from svl_statementtext where pid = pg_backend_pid() order by xid , starttime , sequence;
userid | xid | pid | type | stmt_text
--------+------+-----+---------+----------------------------------------
103 | 3382 | 599 | UTILITY | Begin;
103 | 3382 | 599 | QUERY | insert into test_table_a values (1);
103 | 3382 | 599 | UTILITY | Call sp_insert_table_a(2);
103 | 3382 | 599 | QUERY | INSERT INTO test_table_a values ( $1 )
103 | 3382 | 599 | QUERY | insert into test_table_a values (3);
103 | 3382 | 599 | UTILITY | COMMIT
```
In contrast, consider an example in which the same statements are issued from outside an explicit transaction block and the session has autocommit set to ON\. In this case, each statement runs in its own transaction\.
```
insert into test_table_a values (1);
Call sp_insert_table_a(2);
insert into test_table_a values (3);
select userid, xid, pid, type, trim(text) as stmt_text
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-4
|
insert into test_table_a values (3);
select userid, xid, pid, type, trim(text) as stmt_text
from svl_statementtext where pid = pg_backend_pid() order by xid , starttime , sequence;
userid | xid | pid | type | stmt_text
--------+------+-----+---------+-------------------------------------------------------------------------------------------------------------------------------------------------
103 | 3388 | 599 | QUERY | insert into test_table_a values (1);
103 | 3388 | 599 | UTILITY | COMMIT
103 | 3389 | 599 | UTILITY | Call sp_insert_table_a(2);
103 | 3389 | 599 | QUERY | INSERT INTO test_table_a values ( $1 )
103 | 3389 | 599 | UTILITY | COMMIT
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-5
|
103 | 3389 | 599 | QUERY | INSERT INTO test_table_a values ( $1 )
103 | 3389 | 599 | UTILITY | COMMIT
103 | 3390 | 599 | QUERY | insert into test_table_a values (3);
103 | 3390 | 599 | UTILITY | COMMIT
```
The following example issues a TRUNCATE statement after inserting into `test_table_a`\. The TRUNCATE statement issues an implicit commit that commits the current transaction \(3335\) and starts a new one \(3336\)\. The new transaction is committed when the procedure exits\.
```
CREATE OR REPLACE PROCEDURE sp_truncate_proc(a int, b int) LANGUAGE plpgsql
AS $$
BEGIN
INSERT INTO test_table_a values (a);
TRUNCATE test_table_b;
INSERT INTO test_table_b values (b);
END;
$$;
Call sp_truncate_proc(1,2);
select userid, xid, pid, type, trim(text) as stmt_text
from svl_statementtext where pid = pg_backend_pid() order by xid , starttime , sequence;
userid | xid | pid | type | stmt_text
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-6
|
userid | xid | pid | type | stmt_text
--------+------+-------+---------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
103 | 3335 | 23636 | UTILITY | Call sp_truncate_proc(1,2);
103 | 3335 | 23636 | QUERY | INSERT INTO test_table_a values ( $1 )
103 | 3335 | 23636 | UTILITY | TRUNCATE test_table_b
103 | 3335 | 23636 | UTILITY | COMMIT
103 | 3336 | 23636 | QUERY | INSERT INTO test_table_b values ( $1 )
103 | 3336 | 23636 | UTILITY | COMMIT
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-7
|
103 | 3336 | 23636 | UTILITY | COMMIT
```
The following example issues a TRUNCATE from a nested call\. The TRUNCATE commits all work done so far in the outer and inner procedures in a transaction \(3344\)\. It starts a new transaction \(3345\)\. The new transaction is committed when the outer procedure exits\.
```
CREATE OR REPLACE PROCEDURE sp_inner(c int, d int) LANGUAGE plpgsql
AS $$
BEGIN
INSERT INTO inner_table values (c);
TRUNCATE outer_table;
INSERT INTO inner_table values (d);
END;
$$;
CREATE OR REPLACE PROCEDURE sp_outer(a int, b int, c int, d int) LANGUAGE plpgsql
AS $$
BEGIN
INSERT INTO outer_table values (a);
Call sp_inner(c, d);
INSERT INTO outer_table values (b);
END;
$$;
Call sp_outer(1, 2, 3, 4);
select userid, xid, pid, type, trim(text) as stmt_text
from svl_statementtext where pid = pg_backend_pid() order by xid , starttime , sequence;
userid | xid | pid | type | stmt_text
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-8
|
userid | xid | pid | type | stmt_text
--------+------+-------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
103 | 3344 | 23636 | UTILITY | Call sp_outer(1, 2, 3, 4);
103 | 3344 | 23636 | QUERY | INSERT INTO outer_table values ( $1 )
103 | 3344 | 23636 | UTILITY | CALL sp_inner( $1 , $2 )
103 | 3344 | 23636 | QUERY | INSERT INTO inner_table values ( $1 )
103 | 3344 | 23636 | UTILITY | TRUNCATE outer_table
103 | 3344 | 23636 | UTILITY | COMMIT
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-9
|
103 | 3344 | 23636 | UTILITY | TRUNCATE outer_table
103 | 3344 | 23636 | UTILITY | COMMIT
103 | 3345 | 23636 | QUERY | INSERT INTO inner_table values ( $1 )
103 | 3345 | 23636 | QUERY | INSERT INTO outer_table values ( $1 )
103 | 3345 | 23636 | UTILITY | COMMIT
```
The following example shows that cursor `cur1` was closed when the TRUNCATE statement committed\.
```
CREATE OR REPLACE PROCEDURE sp_open_cursor_truncate()
LANGUAGE plpgsql
AS $$
DECLARE
rec RECORD;
cur1 cursor for select * from test_table_a order by 1;
BEGIN
open cur1;
TRUNCATE table test_table_b;
Loop
fetch cur1 into rec;
raise info '%', rec.c1;
exit when not found;
End Loop;
END
$$;
call sp_open_cursor_truncate();
ERROR: cursor "cur1" does not exist
CONTEXT: PL/pgSQL function "sp_open_cursor_truncate" line 8 at fetch
```
The following example shows that a procedure that issues a TRUNCATE statement can't be called from within an explicit transaction block\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-10
|
The following example shows that a procedure that issues a TRUNCATE statement can't be called from within an explicit transaction block\.
```
CREATE OR REPLACE PROCEDURE sp_truncate_atomic() LANGUAGE plpgsql
AS $$
BEGIN
TRUNCATE test_table_b;
END;
$$;
Begin;
Call sp_truncate_atomic();
ERROR: TRUNCATE cannot be invoked from a procedure that is executing in an atomic context.
HINT: Try calling the procedure as a top-level call i.e. not from within an explicit transaction block.
Or, if this procedure (or one of its ancestors in the call chain) was created with SET config options, recreate the procedure without them.
CONTEXT: SQL statement "TRUNCATE test_table_b"
PL/pgSQL function "sp_truncate_atomic" line 2 at SQL statement
```
The following example shows that a user who is not a superuser or the owner of a table can issue a TRUNCATE statement on the table using a `Security Definer` stored procedure\. The example shows the following actions:
+ The user `user1` creates table `test_tbl`\.
+ The user `user1` creates stored procedure `sp_truncate_test_tbl`\.
+ The user `user1` grants `EXECUTE` privilege on the stored procedure to `user2`\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-11
|
+ The user `user1` grants `EXECUTE` privilege on the stored procedure to `user2`\.
+ The user `user2` runs the stored procedure to truncate table `test_tbl`\. The example shows the row count before and after the `TRUNCATE` command\.
```
set session_authorization to user1;
create table test_tbl(id int, name varchar(20));
insert into test_tbl values (1,'john'), (2, 'mary');
CREATE OR REPLACE PROCEDURE sp_truncate_test_tbl() LANGUAGE plpgsql
AS $$
DECLARE
tbl_rows int;
BEGIN
select count(*) into tbl_rows from test_tbl;
RAISE INFO 'RowCount before Truncate: %', tbl_rows;
TRUNCATE test_tbl;
select count(*) into tbl_rows from test_tbl;
RAISE INFO 'RowCount after Truncate: %', tbl_rows;
END;
$$ SECURITY DEFINER;
grant execute on procedure sp_truncate_test_tbl() to user2;
reset session_authorization;
set session_authorization to user2;
call sp_truncate_test_tbl();
INFO: RowCount before Truncate: 2
INFO: RowCount after Truncate: 0
CALL
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-12
|
INFO: RowCount before Truncate: 2
INFO: RowCount after Truncate: 0
CALL
reset session_authorization;
```
The following example issues COMMIT twice\. The first COMMIT commits all work done in transaction 10363 and implicitly starts transaction 10364\. Transaction 10364 is committed by the second COMMIT statement\.
```
CREATE OR REPLACE PROCEDURE sp_commit(a int, b int) LANGUAGE plpgsql
AS $$
BEGIN
INSERT INTO test_table values (a);
COMMIT;
INSERT INTO test_table values (b);
COMMIT;
END;
$$;
call sp_commit(1,2);
select userid, xid, pid, type, trim(text) as stmt_text
from svl_statementtext where pid = pg_backend_pid() order by xid , starttime , sequence;
userid | xid | pid | type | stmt_text
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-13
|
userid | xid | pid | type | stmt_text
--------+-------+------+---------+-----------------------------------------------------------------------------------------------------------------
100 | 10363 | 3089 | UTILITY | call sp_commit(1,2);
100 | 10363 | 3089 | QUERY | INSERT INTO test_table values ( $1 )
100 | 10363 | 3089 | UTILITY | COMMIT
100 | 10364 | 3089 | QUERY | INSERT INTO test_table values ( $1 )
100 | 10364 | 3089 | UTILITY | COMMIT
```
The following example issues a ROLLBACK statement if `sum_vals` is greater than 2\. The ROLLBACK statement rolls back all the work done in transaction 10377 and starts a new transaction 10378\. Transaction 10378 is committed when the procedure exits\.
```
CREATE OR REPLACE PROCEDURE sp_rollback(a int, b int) LANGUAGE plpgsql
AS $$
DECLARE
sum_vals int;
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-14
|
AS $$
DECLARE
sum_vals int;
BEGIN
INSERT INTO test_table values (a);
SELECT sum(c1) into sum_vals from test_table;
IF sum_vals > 2 THEN
ROLLBACK;
END IF;
INSERT INTO test_table values (b);
END;
$$;
call sp_rollback(1, 2);
select userid, xid, pid, type, trim(text) as stmt_text
from svl_statementtext where pid = pg_backend_pid() order by xid , starttime , sequence;
userid | xid | pid | type | stmt_text
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
6c42d74ef5a0-15
|
userid | xid | pid | type | stmt_text
--------+-------+------+---------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
100 | 10377 | 3089 | UTILITY | call sp_rollback(1, 2);
100 | 10377 | 3089 | QUERY | INSERT INTO test_table values ( $1 )
100 | 10377 | 3089 | QUERY | SELECT sum(c1) from test_table
100 | 10377 | 3089 | QUERY | Undoing 1 transactions on table 133646 with current xid 10377 : 10377
100 | 10378 | 3089 | QUERY | INSERT INTO test_table values ( $1 )
100 | 10378 | 3089 | UTILITY | COMMIT
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-transaction-management.md
|
588dc139de3d-0
|
Changes the definition of an existing schema\. Use this command to rename a schema or change the owner of a schema\.
For example, rename an existing schema to preserve a backup copy of that schema when you plan to create a new version of that schema\. For more information about schemas, see [CREATE SCHEMA](r_CREATE_SCHEMA.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_SCHEMA.md
|
5fdb6b29ae2e-0
|
```
ALTER SCHEMA schema_name
{
RENAME TO new_name |
OWNER TO new_owner |
QUOTA { quota [MB | GB | TB] | UNLIMITED }
}
```
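For example, the following statements \(with illustrative schema and user names\) rename a schema and then change its owner\.
```
alter schema us_sales rename to us_sales_archive;
alter schema us_sales_archive owner to dwuser;
```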
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_SCHEMA.md
|