| id | text | source |
|---|---|---|
0cd1fcfdcfdc-1
|
```
String contains invalid or unsupported UTF-8 codepoints.
Bad UTF-8 hex sequence: a4 (error 3)
```
The following table lists the descriptions and suggested workarounds for VARCHAR load errors\. If one of these errors occurs, replace the character with a valid UTF\-8 code sequence or remove the character\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/multi-byte-character-load-errors.html)
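If correcting the source data isn't practical, the COPY command's ACCEPTINVCHARS parameter can substitute a replacement character for invalid UTF\-8 bytes during the load\. A minimal sketch \(the bucket path and role ARN are placeholders\):
```
copy category
from 's3://mybucket/data/category.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
acceptinvchars as '^';
```
Each invalid character is replaced with the specified character \(`^` here\), and the affected rows are logged in the STL\_REPLACEMENTS system table\.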
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/multi-byte-character-load-errors.md
|
b0b91aa75324-0
|
The following examples demonstrate basic usage of the ALTER TABLE command\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_examples_basic.md
|
12723b9c0246-0
|
The following command renames the USERS table to USERS\_BKUP:
```
alter table users
rename to users_bkup;
```
You can also use this type of command to rename a view\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_examples_basic.md
|
a2434a238140-0
|
The following command changes the VENUE table owner to the user DWUSER:
```
alter table venue
owner to dwuser;
```
The following commands create a view, then change its owner:
```
create view vdate as select * from date;
alter table vdate owner to vuser;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_examples_basic.md
|
d334106f4aeb-0
|
The following command renames the VENUESEATS column in the VENUE table to VENUESIZE:
```
alter table venue
rename column venueseats to venuesize;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_examples_basic.md
|
37dac398bb5e-0
|
To drop a table constraint, such as a primary key, foreign key, or unique constraint, first find the internal name of the constraint\. Then specify the constraint name in the ALTER TABLE command\. The following example finds the constraints for the CATEGORY table, then drops the primary key with the name `category_pkey`\.
```
select constraint_name, constraint_type
from information_schema.table_constraints
where constraint_schema = 'public'
and table_name = 'category';
constraint_name | constraint_type
----------------+----------------
category_pkey | PRIMARY KEY
alter table category
drop constraint category_pkey;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_examples_basic.md
|
bf20f5462fe6-0
|
To conserve storage, you can define a table initially with VARCHAR columns with the minimum size needed for your current data requirements\. Later, to accommodate longer strings, you can alter the table to increase the size of the column\.
The following example increases the size of the EVENTNAME column to VARCHAR\(300\)\.
```
alter table event alter column eventname type varchar(300);
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_examples_basic.md
|
8a33749d04d0-0
|
The following examples show how to change the DISTSTYLE and DISTKEY of a table\.
Create a table with an EVEN distribution style\. The SVV\_TABLE\_INFO view shows that the DISTSTYLE is EVEN\.
```
create table inventory(
inv_date_sk int4 not null ,
inv_item_sk int4 not null ,
inv_warehouse_sk int4 not null ,
inv_quantity_on_hand int4
) diststyle even;
insert into inventory values(1,1,1,1);
select "table", "diststyle" from svv_table_info;
table | diststyle
-----------+----------------
inventory | EVEN
```
Alter the table DISTKEY to `inv_warehouse_sk`\. The SVV\_TABLE\_INFO view shows the `inv_warehouse_sk` column as the resulting distribution key\.
```
alter table inventory alter diststyle key distkey inv_warehouse_sk;
select "table", "diststyle" from svv_table_info;
table | diststyle
-----------+-----------------------
inventory | KEY(inv_warehouse_sk)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_examples_basic.md
|
8a33749d04d0-1
|
```
inventory | KEY(inv_warehouse_sk)
```
Alter the table DISTKEY to `inv_item_sk`\. The SVV\_TABLE\_INFO view shows the `inv_item_sk` column as the resulting distribution key\.
```
alter table inventory alter distkey inv_item_sk;
select "table", "diststyle" from svv_table_info;
table | diststyle
-----------+-----------------------
inventory | KEY(inv_item_sk)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_examples_basic.md
|
096124e807ac-0
|
The following examples show how to change a table to DISTSTYLE ALL\.
Create a table with an EVEN distribution style\. The SVV\_TABLE\_INFO view shows that the DISTSTYLE is EVEN\.
```
create table inventory(
inv_date_sk int4 not null ,
inv_item_sk int4 not null ,
inv_warehouse_sk int4 not null ,
inv_quantity_on_hand int4
) diststyle even;
insert into inventory values(1,1,1,1);
select "table", "diststyle" from svv_table_info;
table | diststyle
-----------+----------------
inventory | EVEN
```
Alter the table DISTSTYLE to ALL\. The SVV\_TABLE\_INFO view shows the changed DISTSTYLE\.
```
alter table inventory alter diststyle all;
select "table", "diststyle" from svv_table_info;
table | diststyle
-----------+----------------
inventory | ALL
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_examples_basic.md
|
270dd4386000-0
|
To load data from an existing DynamoDB table, use the FROM clause to specify the DynamoDB table name\.
**Topics**
+ [Syntax](#copy-parameters-data-source-dynamodb-syntax)
+ [Examples](#copy-parameters-data-source-dynamodb-examples)
+ [Optional parameters](#copy-parameters-data-source-dynamodb-optional-parms)
+ [Unsupported parameters](#copy-parameters-data-source-dynamodb-unsupported-parms)
**Important**
If the DynamoDB table doesn't reside in the same region as your Amazon Redshift cluster, you must use the REGION parameter to specify the region in which the data is located\.
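For example, a sketch of a cross\-Region load \(the role ARN and Region value are placeholders\):
```
copy favoritemovies from 'dynamodb://ProductCatalog'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
readratio 50
region 'us-west-2';
```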
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-dynamodb.md
|
0715d858c3d6-0
|
```
FROM 'dynamodb://table-name'
authorization
READRATIO ratio
| REGION [AS] 'aws_region'
| optional-parameters
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-dynamodb.md
|
ce837563ffed-0
|
The following example loads data from a DynamoDB table\.
```
copy favoritemovies from 'dynamodb://ProductCatalog'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
readratio 50;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-dynamodb.md
|
f71f5534d2b7-0
|
FROM
The source of the data to be loaded\.
'dynamodb://*table\-name*' <a name="copy-dynamodb"></a>
The name of the DynamoDB table that contains the data, for example `'dynamodb://ProductCatalog'`\. For details about how DynamoDB attributes are mapped to Amazon Redshift columns, see [Loading data from an Amazon DynamoDB table](t_Loading-data-from-dynamodb.md)\.
A DynamoDB table name is unique to an AWS account, which is identified by the AWS access credentials\.
*authorization*
The COPY command needs authorization to access data in another AWS resource, including in Amazon S3, Amazon EMR, Amazon DynamoDB, and Amazon EC2\. You can provide that authorization by referencing an AWS Identity and Access Management \(IAM\) role that is attached to your cluster \(role\-based access control\) or by providing the access credentials for an IAM user \(key\-based access control\)\. For increased security and flexibility, we recommend using IAM role\-based access control\. For more information, see [Authorization parameters](copy-parameters-authorization.md)\.
READRATIO \[AS\] *ratio* <a name="copy-readratio"></a>
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-dynamodb.md
|
f71f5534d2b7-1
|
READRATIO \[AS\] *ratio* <a name="copy-readratio"></a>
The percentage of the DynamoDB table's provisioned throughput to use for the data load\. READRATIO is required for COPY from DynamoDB\. It cannot be used with COPY from Amazon S3\. We highly recommend setting the ratio to a value less than the average unused provisioned throughput\. Valid values are integers 1–200\.
Setting READRATIO to 100 or higher enables Amazon Redshift to consume the entirety of the DynamoDB table's provisioned throughput, which seriously degrades the performance of concurrent read operations against the same table during the COPY session\. Write traffic is unaffected\. Values higher than 100 are allowed to troubleshoot rare scenarios when Amazon Redshift fails to fulfill the provisioned throughput of the table\. If you load data from DynamoDB to Amazon Redshift on an ongoing basis, consider organizing your DynamoDB tables as a time series to separate live traffic from the COPY operation\.
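As a rough sizing sketch: if a table is provisioned for 1,000 read capacity units and concurrent readers typically consume about 600 of them, roughly 40 percent of the throughput is unused, so a conservative setting stays below that \(the role ARN is a placeholder\):
```
copy favoritemovies from 'dynamodb://ProductCatalog'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
readratio 25;
```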
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-dynamodb.md
|
2cc7d880dfb5-0
|
You can optionally specify the following parameters with COPY from Amazon DynamoDB:
+ [Column mapping options](copy-parameters-column-mapping.md)
+ The following data conversion parameters are supported:
+ [ACCEPTANYDATE](copy-parameters-data-conversion.md#copy-acceptanydate)
+ [BLANKSASNULL](copy-parameters-data-conversion.md#copy-blanksasnull)
+ [DATEFORMAT](copy-parameters-data-conversion.md#copy-dateformat)
+ [EMPTYASNULL](copy-parameters-data-conversion.md#copy-emptyasnull)
+ [ROUNDEC](copy-parameters-data-conversion.md#copy-roundec)
+ [TIMEFORMAT](copy-parameters-data-conversion.md#copy-timeformat)
+ [TRIMBLANKS](copy-parameters-data-conversion.md#copy-trimblanks)
+ [TRUNCATECOLUMNS](copy-parameters-data-conversion.md#copy-truncatecolumns)
+ [ Data load operations](copy-parameters-data-load.md)
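A sketch combining several of the supported conversion parameters in one load \(the role ARN is a placeholder\):
```
copy favoritemovies from 'dynamodb://ProductCatalog'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
readratio 50
blanksasnull
trimblanks
truncatecolumns;
```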
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-dynamodb.md
|
3a4f3da02b42-0
|
You cannot use the following parameters with COPY from DynamoDB:
+ All data format parameters
+ ESCAPE
+ FILLRECORD
+ IGNOREBLANKLINES
+ IGNOREHEADER
+ NULL
+ REMOVEQUOTES
+ ACCEPTINVCHARS
+ MANIFEST
+ ENCRYPTED
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-dynamodb.md
|
cf797794268f-0
|
The following query is an outer join\. Left and right outer joins retain values from one of the joined tables when no match is found in the other table\. The left and right tables are the first and second tables listed in the syntax\. NULL values are used to fill the "gaps" in the result set\.
This query matches LISTID column values in LISTING \(the left table\) and SALES \(the right table\)\. The results show that listings 2, 3, and 5 did not result in any sales\.
```
select listing.listid, sum(pricepaid) as price, sum(commission) as comm
from listing left outer join sales on sales.listid = listing.listid
where listing.listid between 1 and 5
group by 1
order by 1;
listid | price | comm
--------+--------+--------
1 | 728.00 | 109.20
2 | |
3 | |
4 | 76.00 | 11.40
5 | 525.00 | 78.75
(5 rows)
```
The following query is an inner join of two subqueries in the FROM clause\. The query finds the number of sold and unsold tickets for different categories of events \(concerts and shows\):
```
select catgroup1, sold, unsold
from
(select catgroup, sum(qtysold) as sold
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Join_examples.md
|
cf797794268f-1
|
select catgroup1, sold, unsold
from
(select catgroup, sum(qtysold) as sold
from category c, event e, sales s
where c.catid = e.catid and e.eventid = s.eventid
group by catgroup) as a(catgroup1, sold)
join
(select catgroup, sum(numtickets)-sum(qtysold) as unsold
from category c, event e, sales s, listing l
where c.catid = e.catid and e.eventid = s.eventid
and s.listid = l.listid
group by catgroup) as b(catgroup2, unsold)
on a.catgroup1 = b.catgroup2
order by 1;
catgroup1 | sold | unsold
-----------+--------+--------
Concerts | 195444 | 1067199
Shows | 149905 | 817736
(2 rows)
```
These FROM clause subqueries are *table* subqueries; they can return multiple columns and rows\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Join_examples.md
|
4c3cb8f3a6f3-0
|
**Topics**
+ [Leader node–only functions](c_SQL_functions_leader_node_only.md)
+ [Compute node–only functions](c_SQL_functions_compute_node_only.md)
+ [Aggregate functions](c_Aggregate_Functions.md)
+ [Bit\-wise aggregate functions](c_bitwise_aggregate_functions.md)
+ [Window functions](c_Window_functions.md)
+ [Conditional expressions](c_conditional_expressions.md)
+ [Date and time functions](Date_functions_header.md)
+ [Spatial functions](geospatial-functions.md)
+ [Math functions](Math_functions.md)
+ [String functions](String_functions_header.md)
+ [Hash functions](hash-functions.md)
+ [JSON functions](json-functions.md)
+ [Data type formatting functions](r_Data_type_formatting.md)
+ [System administration functions](r_System_administration_functions.md)
+ [System information functions](r_System_information_functions.md)
Amazon Redshift supports a number of functions that are extensions to the SQL standard, as well as standard aggregate functions, scalar functions, and window functions\.
**Note**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_SQL_functions.md
|
4c3cb8f3a6f3-1
|
**Note**
Amazon Redshift is based on PostgreSQL 8\.0\.2\. Amazon Redshift and PostgreSQL have a number of very important differences that you must be aware of as you design and develop your data warehouse applications\. For more information about how Amazon Redshift SQL differs from PostgreSQL, see [Amazon Redshift and PostgreSQL](c_redshift-and-postgres-sql.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_SQL_functions.md
|
7f3cc72ec2cc-0
|
There are two classes of visibility for data in system tables and views: visible to users and visible to superusers\.
Only users with superuser privileges can see the data in those tables that are in the superuser\-visible category\. Regular users can see data in the user\-visible tables\. To give a regular user access to superuser\-visible tables, [GRANT](r_GRANT.md) SELECT privilege on that table to the regular user\.
By default, in most user\-visible tables, rows generated by another user are invisible to a regular user\. If a regular user is given unrestricted [SYSLOG ACCESS](r_ALTER_USER.md#alter-user-syslog-access), that user can see all rows in user\-visible tables, including rows generated by another user\. For more information, see [ALTER USER](r_ALTER_USER.md) or [CREATE USER](r_CREATE_USER.md)\. All rows in STV\_RECENTS and SVV\_TRANSACTIONS are visible to all users\.
**Note**
Giving a user unrestricted access to system tables gives the user visibility to data generated by other users\. For example, STL\_QUERY and STL\_QUERY\_TEXT contain the full text of INSERT, UPDATE, and DELETE statements, which might contain sensitive user\-generated data\.
A superuser can see all rows in all tables\.
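A minimal sketch of both approaches \(the user name is a placeholder; STL\_CONNECTION\_LOG is one example of a superuser\-visible table\):
```
-- let a regular user query a superuser-visible table
grant select on stl_connection_log to report_user;

-- let a regular user see rows generated by other users
alter user report_user syslog access unrestricted;
```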
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_visibility-of-data.md
|
948a0c38707a-0
|
The query\-related system tables and views, such as SVL\_QUERY\_SUMMARY, SVL\_QLOG, and others, usually contain a large number of automatically generated statements that Amazon Redshift uses to monitor the status of the database\. These system\-generated queries are visible to a superuser, but are seldom useful\. To filter them out when selecting from a system table or system view that uses the `userid` column, add the condition `userid > 1` to the WHERE clause\. For example:
```
select * from svl_query_summary where userid > 1;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_visibility-of-data.md
|
a531b23c0e1a-0
|
This section provides a quick reference for identifying and addressing some of the most common and most serious issues you are likely to encounter with Amazon Redshift queries\.
**Topics**
+ [Connection fails](#queries-troubleshooting-connection-fails)
+ [Query hangs](#queries-troubleshooting-query-hangs)
+ [Query takes too long](#queries-troubleshooting-query-takes-too-long)
+ [Load fails](#queries-troubleshooting-load-fails)
+ [Load takes too long](#queries-troubleshooting-load-takes-too-long)
+ [Load data is incorrect](#queries-troubleshooting-load-data-incorrect)
+ [Setting the JDBC fetch size parameter](#set-the-JDBC-fetch-size-parameter)
These suggestions give you a starting point for troubleshooting\. You can also refer to the following resources for more detailed information\.
+ [Accessing Amazon Redshift clusters and databases](https://docs.aws.amazon.com/redshift/latest/mgmt/using-rs-tools.html)
+ [Designing tables](t_Creating_tables.md)
+ [Loading data](t_Loading_data.md)
+ [Tutorial: Tuning table design](tutorial-tuning-tables.md)
+ [Tutorial: Loading data from Amazon S3](tutorial-loading-data.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
d4b1594589ec-0
|
Your query connection can fail for the following reasons; we suggest the following troubleshooting approaches\.
**Client cannot connect to server**
If you are using SSL or server certificates, first remove this complexity while you troubleshoot the connection issue\. Then add SSL or server certificates back when you have found a solution\. For more information, go to [Configure Security Options for Connections](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html) in the *Amazon Redshift Cluster Management Guide\.*
**Connection is refused**
Generally, when you receive an error message indicating that there is a failure to establish a connection, it means that there is an issue with the permission to access the cluster\. For more information, go to [The connection is refused or fails](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-refusal-failure-issues.html) in the *Amazon Redshift Cluster Management Guide\.*
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
7b5338ad8b4b-0
|
Your query can hang, or stop responding, for the following reasons; we suggest the following troubleshooting approaches\.
**Connection to the database is dropped**
Reduce the size of maximum transmission unit \(MTU\)\. The MTU size determines the maximum size, in bytes, of a packet that can be transferred in one Ethernet frame over your network connection\. For more information, go to [The connection to the database is dropped](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-drop-issues.html) in the *Amazon Redshift Cluster Management Guide\.*
**Connection to the database times out**
Your client connection to the database appears to hang or time out when running long queries, such as a COPY command\. In this case, you might observe that the Amazon Redshift console displays that the query has completed, but the client tool itself still appears to be running the query\. The results of the query might be missing or incomplete depending on when the connection stopped\. This effect happens when idle connections are terminated by an intermediate network component\. For more information, go to [Firewall Timeout Issue](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html) in the *Amazon Redshift Cluster Management Guide\.*
**Client\-side out\-of\-memory error occurs with ODBC**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
7b5338ad8b4b-1
|
**Client\-side out\-of\-memory error occurs with ODBC**
If your client application uses an ODBC connection and your query creates a result set that is too large to fit in memory, you can stream the result set to your client application by using a cursor\. For more information, see [DECLARE](declare.md) and [Performance considerations when using cursors](declare.md#declare-performance)\.
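A sketch of cursor\-based retrieval \(the cursor name, table, and fetch size are placeholders\):
```
begin;

-- declare a cursor over the large result set
declare big_result cursor for
select * from sales;

-- retrieve rows in batches instead of all at once
fetch forward 1000 from big_result;

close big_result;
commit;
```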
**Client\-side out\-of\-memory error occurs with JDBC**
When you attempt to retrieve large result sets over a JDBC connection, you might encounter client\-side out\-of\-memory errors\. For more information, see [Setting the JDBC fetch size parameter](#set-the-JDBC-fetch-size-parameter)\.
**There is a potential deadlock**
If there is a potential deadlock, try the following:
+ View the [STV\_LOCKS](r_STV_LOCKS.md) and [STL\_TR\_CONFLICT](r_STL_TR_CONFLICT.md) system tables to find conflicts involving updates to more than one table\.
+ Use the [PG\_CANCEL\_BACKEND](PG_CANCEL_BACKEND.md) function to cancel one or more conflicting queries\.
+ Use the [PG\_TERMINATE\_BACKEND](PG_TERMINATE_BACKEND.md) function to terminate a session, which forces any currently running transactions in the terminated session to release all locks and roll back the transaction\.
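The steps above might be sketched as follows \(the process ID is a placeholder\):
```
-- look for lock conflicts
select * from stv_locks;
select * from stl_tr_conflict order by xact_start_ts desc limit 10;

-- cancel a conflicting query, or terminate its session entirely
select pg_cancel_backend(8585);
select pg_terminate_backend(8585);
```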
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
7b5338ad8b4b-2
|
+ Schedule concurrent write operations carefully\. For more information, see [Managing concurrent write operations](c_Concurrent_writes.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
a9eb39385de7-0
|
Your query can take too long for the following reasons; we suggest the following troubleshooting approaches\.
**Tables are not optimized**
Set the sort key, distribution style, and compression encoding of the tables to take full advantage of parallel processing\. For more information, see [Designing tables](t_Creating_tables.md) and [Tutorial: Tuning table design](tutorial-tuning-tables.md)\.
**Query is writing to disk**
Your queries might be writing to disk for at least part of the query execution\. For more information, see [Improving query performance](query-performance-improvement-opportunities.md)\.
**Query must wait for other queries to finish**
You might be able to improve overall system performance by creating query queues and assigning different types of queries to the appropriate queues\. For more information, see [Implementing workload management](cm-c-implementing-workload-management.md)\.
**Queries are not optimized**
Analyze the explain plan to find opportunities for rewriting queries or optimizing the database\. For more information, see [Query plan](c-the-query-plan.md)\.
**Query needs more memory to run**
If a specific query needs more memory, you can increase the available memory by increasing the [wlm\_query\_slot\_count](r_wlm_query_slot_count.md)\.
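For example, to temporarily claim three slots' worth of memory for one session \(a sketch; reset afterward so other queries regain the slots\):
```
set wlm_query_slot_count to 3;

-- run the memory-intensive query here ...

reset wlm_query_slot_count;
```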
**Database requires a VACUUM command to be run**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
a9eb39385de7-1
|
**Database requires a VACUUM command to be run**
Run the VACUUM command whenever you add, delete, or modify a large number of rows, unless you load your data in sort key order\. The VACUUM command reorganizes your data to maintain the sort order and restore performance\. For more information, see [Vacuuming tables](t_Reclaiming_storage_space202.md)\.
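A minimal sketch \(the table name is a placeholder; running ANALYZE afterward refreshes the planner statistics\):
```
vacuum full sales;
analyze sales;
```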
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
575fce231e0e-0
|
Your data load can fail for the following reasons; we suggest the following troubleshooting approaches\.
**Data Source is in a different AWS Region**
By default, the Amazon S3 bucket or Amazon DynamoDB table specified in the COPY command must be in the same AWS Region as the cluster\. If your data and your cluster are in different Regions, you receive an error similar to the following:
```
The bucket you are attempting to access must be addressed using the specified endpoint.
```
If at all possible, make sure that your cluster and your data source are in the same AWS Region\. You can specify a different Region by using the [REGION](copy-parameters-data-source-s3.md#copy-region) option with the COPY command\.
**Note**
If your cluster and your data source are in different AWS Regions, you incur data transfer costs\. You also have higher latency and more issues with eventual consistency\.
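A sketch of a cross\-Region COPY from Amazon S3 \(the bucket path, role ARN, and Region value are placeholders\):
```
copy venue
from 's3://mybucket/data/venue_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
region 'us-east-2';
```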
**COPY Command Fails**
Query STL\_LOAD\_ERRORS to discover the errors that occurred during specific loads\. For more information, see [STL\_LOAD\_ERRORS](r_STL_LOAD_ERRORS.md)\.
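For example, to inspect the most recent load errors:
```
select starttime, filename, line_number, colname, err_reason
from stl_load_errors
order by starttime desc
limit 10;
```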
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
9ed2c8133dac-0
|
Your load operation can take too long for the following reasons; we suggest the following troubleshooting approaches\.
**COPY loads data from a single file**
Split your load data into multiple files\. When you load all the data from a single large file, Amazon Redshift is forced to perform a serialized load, which is much slower\. The number of files should be a multiple of the number of slices in your cluster, and the files should be about equal size, between 1 MB and 1 GB after compression\. For more information, see [Amazon Redshift best practices for designing queries](c_designing-queries-best-practices.md)\.
**Load operation uses multiple COPY commands**
If you use multiple concurrent COPY commands to load one table from multiple files, Amazon Redshift is forced to perform a serialized load, which is much slower\. In this case, use a single COPY command\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
387c69ce3724-0
|
Your COPY operation can load incorrect data in the following ways; we suggest the following troubleshooting approaches\.
**Not all files are loaded**
Eventual consistency can cause a discrepancy in some cases between the files listed using an Amazon S3 ListBuckets action and the files available to the COPY command\. For more information, see [Verifying that the data loaded correctly](verifying-that-data-loaded-correctly.md)\.
**Wrong files are loaded**
Using an object prefix to specify data files can cause unwanted files to be read\. Instead, use a manifest file to specify exactly which files to load\. For more information, see the [copy_from_s3_manifest_file](copy-parameters-data-source-s3.md#copy-manifest-file) option for the COPY command and [Example: COPY from Amazon S3 using a manifest](r_COPY_command_examples.md#copy-command-examples-manifest) in the COPY examples\.
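A sketch of a manifest\-driven load \(the bucket, manifest key, and role ARN are placeholders; the manifest itself is a JSON file that lists the exact Amazon S3 objects to load\):
```
copy custdata
from 's3://mybucket/cust.manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest;
```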
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
27887b6e6370-0
|
By default, the JDBC driver collects all the results for a query at one time\. As a result, when you attempt to retrieve a large result set over a JDBC connection, you might encounter a client\-side out\-of\-memory error\. To enable your client to retrieve result sets in batches instead of in a single all\-or\-nothing fetch, set the JDBC fetch size parameter in your client application\.
**Note**
Fetch size is not supported for ODBC\.
For the best performance, set the fetch size to the highest value that does not lead to out of memory errors\. A lower fetch size value results in more server trips, which prolong execution times\. The server reserves resources, including the WLM query slot and associated memory, until the client retrieves the entire result set or the query is canceled\. When you tune the fetch size appropriately, those resources are released more quickly, making them available to other queries\.
**Note**
If you need to extract large datasets, we recommend using an [UNLOAD](r_UNLOAD.md) statement to transfer the data to Amazon S3\. When you use UNLOAD, the compute nodes work in parallel to speed up the transfer of data\.
For more information about setting the JDBC fetch size parameter, go to [Getting results based on a cursor](https://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor) in the PostgreSQL documentation\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/queries-troubleshooting.md
|
8a2fa8997a13-0
|
Returns information about user session history\.
STL\_SESSIONS differs from STV\_SESSIONS in that STL\_SESSIONS contains session history, whereas STV\_SESSIONS contains the current active sessions\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SESSIONS.md
|
cf8bfaec686e-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_SESSIONS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SESSIONS.md
|
7f9b6e7df8b9-0
|
To view session history for the TICKIT database, type the following query:
```
select starttime, process, user_name
from stl_sessions
where db_name='tickit' order by starttime;
```
This query returns the following sample output:
```
starttime | process | user_name
---------------------------+---------+-------------
2008-09-15 09:54:06.746705 | 32358 | dwuser
2008-09-15 09:56:34.30275 | 32744 | dwuser
2008-09-15 11:20:34.694837 | 14906 | dwuser
2008-09-15 11:22:16.749818 | 15148 | dwuser
2008-09-15 14:32:44.66112 | 14031 | dwuser
2008-09-15 14:56:30.22161 | 18380 | dwuser
2008-09-15 15:28:32.509354 | 24344 | dwuser
2008-09-15 16:01:00.557326 | 30153 | dwuser
2008-09-15 17:28:21.419858 | 12805 | dwuser
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SESSIONS.md
|
7f9b6e7df8b9-1
|
2008-09-15 17:28:21.419858 | 12805 | dwuser
2008-09-15 20:58:37.601937 | 14951 | dwuser
2008-09-16 11:12:30.960564 | 27437 | dwuser
2008-09-16 14:11:37.639092 | 23790 | dwuser
2008-09-16 15:13:46.02195 | 1355 | dwuser
2008-09-16 15:22:36.515106 | 2878 | dwuser
2008-09-16 15:44:39.194579 | 6470 | dwuser
2008-09-16 16:50:27.02138 | 17254 | dwuser
2008-09-17 12:05:02.157208 | 8439 | dwuser
(17 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SESSIONS.md
|
abc01cd9d947-0
|
PG\_DATABASE\_INFO is an Amazon Redshift system view that extends the PostgreSQL catalog table PG\_DATABASE\.
PG\_DATABASE\_INFO is visible to all users\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_DATABASE_INFO.md
|
19ad49fadb2e-0
|
PG\_DATABASE\_INFO contains the following columns in addition to columns in PG\_DATABASE\. For more information, see the [PostgreSQL 8\.0 documentation](https://www.postgresql.org/docs/8.0/catalog-pg-database.html)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_PG_DATABASE_INFO.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_DATABASE_INFO.md
|
5aa607ce68ef-0
|
**Topics**
+ [Syntax](#r_ALTER_TABLE-synopsis)
+ [Parameters](#r_ALTER_TABLE-parameters)
+ [ALTER TABLE examples](r_ALTER_TABLE_examples_basic.md)
+ [ALTER EXTERNAL TABLE examples](r_ALTER_TABLE_external-table.md)
+ [ALTER TABLE ADD and DROP COLUMN examples](r_ALTER_TABLE_COL_ex-add-drop.md)
Changes the definition of a database table or Amazon Redshift Spectrum external table\. This command updates the values and properties set by CREATE TABLE or CREATE EXTERNAL TABLE\.
You can't run ALTER TABLE on an external table within a transaction block \(BEGIN \.\.\. END\)\. For more information about transactions, see [Serializable isolation](c_serial_isolation.md)\.
**Note**
ALTER TABLE locks the table for read and write operations until the transaction enclosing the ALTER TABLE operation completes\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
f0a228ec48c0-0
|
```
ALTER TABLE table_name
{
ADD table_constraint
| DROP CONSTRAINT constraint_name [ RESTRICT | CASCADE ]
| OWNER TO new_owner
| RENAME TO new_name
| RENAME COLUMN column_name TO new_name
| ALTER COLUMN column_name TYPE new_data_type
| ALTER DISTKEY column_name
| ALTER DISTSTYLE ALL
| ALTER DISTSTYLE EVEN
| ALTER DISTSTYLE KEY DISTKEY column_name
| ALTER [COMPOUND] SORTKEY ( column_name [,...] )
| ADD [ COLUMN ] column_name column_type
[ DEFAULT default_expr ]
[ ENCODE encoding ]
    [ NOT NULL | NULL ]
| DROP [ COLUMN ] column_name [ RESTRICT | CASCADE ] }
where table_constraint is:
[ CONSTRAINT constraint_name ]
{ UNIQUE ( column_name [, ... ] )
| PRIMARY KEY ( column_name [, ... ] )
| FOREIGN KEY (column_name [, ... ] )
REFERENCES reftable [ ( refcolumn ) ]}
The following options apply only to external tables:
SET LOCATION { 's3://bucket/folder/' | 's3://bucket/manifest_file' }
| SET FILE FORMAT format
| SET TABLE PROPERTIES ('property_name'='property_value')
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
f0a228ec48c0-1
|
| PARTITION ( partition_column=partition_value [, ...] )
SET LOCATION { 's3://bucket/folder' |'s3://bucket/manifest_file' }
| ADD [IF NOT EXISTS]
PARTITION ( partition_column=partition_value [, ...] ) LOCATION { 's3://bucket/folder' |'s3://bucket/manifest_file' }
[, ... ]
| DROP PARTITION ( partition_column=partition_value [, ...] )
```
To reduce the time to run the ALTER TABLE command, you can combine some of its clauses\.
Amazon Redshift supports the following combinations of the ALTER TABLE clauses:
```
ALTER TABLE tablename ALTER SORTKEY (column_list), ALTER DISTKEY column_Id;
ALTER TABLE tablename ALTER DISTKEY column_Id, ALTER SORTKEY (column_list);
ALTER TABLE tablename ALTER SORTKEY (column_list), ALTER DISTSTYLE ALL;
ALTER TABLE tablename ALTER DISTSTYLE ALL, ALTER SORTKEY (column_list);
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-0
|
*table\_name*
The name of the table to alter\. Either specify just the name of the table, or use the format *schema\_name\.table\_name* to use a specific schema\. External tables must be qualified by an external schema name\. You can also specify a view name if you are using the ALTER TABLE statement to rename a view or change its owner\. The maximum length for the table name is 127 bytes; longer names are truncated to 127 bytes\. You can use UTF\-8 multibyte characters up to a maximum of four bytes\. For more information about valid names, see [Names and identifiers](r_names.md)\.
ADD *table\_constraint*
A clause that adds the specified constraint to the table\. For descriptions of valid *table\_constraint* values, see [CREATE TABLE](r_CREATE_TABLE_NEW.md)\.
You can't add a primary\-key constraint to a nullable column\. If the column was originally created with the NOT NULL constraint, you can add the primary\-key constraint\.
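For example, assuming a hypothetical table `category_stage` whose `catid` column was created with NOT NULL, a primary\-key constraint could be added as in the following sketch\.
```
alter table category_stage
add constraint category_stage_pkey primary key (catid);
```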
DROP CONSTRAINT *constraint\_name*
A clause that drops the named constraint from the table\. To drop a constraint, specify the constraint name, not the constraint type\. To view table constraint names, run the following query\.
```
select constraint_name, constraint_type
from information_schema.table_constraints;
```
RESTRICT
A clause that removes only the specified constraint\. RESTRICT is an option for DROP CONSTRAINT\. RESTRICT can't be used with CASCADE\.
CASCADE
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-1
|
A clause that removes the specified constraint and anything dependent on that constraint\. CASCADE is an option for DROP CONSTRAINT\. CASCADE can't be used with RESTRICT\.
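As a sketch, the following statement drops a hypothetical constraint named `category_stage_pkey` along with anything that depends on it\.
```
alter table category_stage
drop constraint category_stage_pkey cascade;
```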
OWNER TO *new\_owner*
A clause that changes the owner of the table \(or view\) to the *new\_owner* value\.
RENAME TO *new\_name*
A clause that renames a table \(or view\) to the value specified in *new\_name*\. The maximum table name length is 127 bytes; longer names are truncated to 127 bytes\.
You can't rename a permanent table to a name that begins with '\#'\. A table name beginning with '\#' indicates a temporary table\.
You can't rename an external table\.
ALTER COLUMN *column\_name* TYPE *new\_data\_type*
A clause that changes the size of a column defined as a VARCHAR data type\. Consider the following limitations:
+ You can't alter a column with compression encodings BYTEDICT, RUNLENGTH, TEXT255, or TEXT32K\.
+ You can't decrease the size to less than the maximum size of existing data\.
+ You can't alter columns with default values\.
+ You can't alter columns with UNIQUE, PRIMARY KEY, or FOREIGN KEY constraints\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-2
|
+ You can't alter columns within a transaction block \(BEGIN \.\.\. END\)\. For more information about transactions, see [Serializable isolation](c_serial_isolation.md)\.
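Subject to these limitations, resizing a VARCHAR column looks like the following sketch \(the table and column names are hypothetical\)\.
```
alter table event_stage
alter column eventname type varchar(600);
```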
ALTER DISTSTYLE ALL
A clause that changes the existing distribution style of a table to `ALL`\. Consider the following:
+ ALTER DISTSTYLE, ALTER SORTKEY, and VACUUM can't run concurrently on the same table\.
+ If VACUUM is currently running, then running ALTER DISTSTYLE ALL returns an error\.
+ If ALTER DISTSTYLE ALL is running, then a background vacuum doesn't start on a table\.
+ The ALTER DISTSTYLE ALL command is not supported for tables with interleaved sort keys or for temporary tables\.
For more information about DISTSTYLE ALL, see [CREATE TABLE](r_CREATE_TABLE_NEW.md)\.
ALTER DISTSTYLE EVEN
A clause that changes the existing distribution style of a table to `EVEN`\. Consider the following:
+ ALTER DISTSTYLE, ALTER SORTKEY, and VACUUM can't run concurrently on the same table\.
+ If VACUUM is currently running, then running ALTER DISTSTYLE EVEN returns an error\.
+ If ALTER DISTSTYLE EVEN is running, then a background vacuum doesn't start on a table\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-3
|
+ The ALTER DISTSTYLE EVEN command is not supported for tables with interleaved sort keys or for temporary tables\.
For more information about DISTSTYLE EVEN, see [CREATE TABLE](r_CREATE_TABLE_NEW.md)\.
ALTER DISTKEY *column\_name* or ALTER DISTSTYLE KEY DISTKEY *column\_name*
A clause that changes the column used as the distribution key of a table\. Consider the following:
+ VACUUM and ALTER DISTKEY can't run concurrently on the same table\.
+ If VACUUM is already running, then ALTER DISTKEY returns an error\.
+ If ALTER DISTKEY is running, then a background vacuum doesn't start on a table\.
+ If ALTER DISTKEY is running, then a foreground vacuum returns an error\.
+ You can only run one ALTER DISTKEY command on a table at a time\.
+ The ALTER DISTKEY command is not supported for tables with interleaved sort keys\.
When specifying DISTSTYLE KEY, the data is distributed by the values in the DISTKEY column\. For more information about DISTSTYLE, see [CREATE TABLE](r_CREATE_TABLE_NEW.md)\.
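The following sketches illustrate each distribution change on a hypothetical table named `inventory` with a hypothetical column `inv_warehouse_sk`\.
```
alter table inventory alter diststyle all;
alter table inventory alter diststyle even;
alter table inventory alter distkey inv_warehouse_sk;
alter table inventory alter diststyle key distkey inv_warehouse_sk;
```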
ALTER \[COMPOUND\] SORTKEY \( *column\_name* \[,\.\.\.\] \)
A clause that changes or adds the sort key used for a table\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-4
|
When you alter a sort key, the compression encoding of columns in the new or original sort key can change\. If no encoding is explicitly defined for the table, then Amazon Redshift automatically assigns compression encodings as follows:
+ Columns that are defined as sort keys are assigned RAW compression\.
+ Columns that are defined as BOOLEAN, REAL, or DOUBLE PRECISION data types are assigned RAW compression\.
+ Columns that are defined as SMALLINT, INTEGER, BIGINT, DECIMAL, DATE, TIMESTAMP, or TIMESTAMPTZ are assigned AZ64 compression\.
+ Columns that are defined as CHAR or VARCHAR are assigned LZO compression\.
Consider the following:
+ You can define a maximum of 400 columns for a sort key per table\.
+ You can only alter a compound sort key\. You can't alter an interleaved sort key\.
When data is loaded into a table, the data is loaded in the order of the sort key\. When you alter the sort key, Amazon Redshift reorders the data\. For more information about SORTKEY, see [CREATE TABLE](r_CREATE_TABLE_NEW.md)\.
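For example, a compound sort key could be changed on a hypothetical table as follows\.
```
alter table inventory
alter compound sortkey(inv_item_sk, inv_warehouse_sk);
```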
RENAME COLUMN *column\_name* TO *new\_name*
A clause that renames a column to the value specified in *new\_name*\. The maximum column name length is 127 bytes; longer names are truncated to 127 bytes\. For more information about valid names, see [Names and identifiers](r_names.md)\.
ADD \[ COLUMN \] *column\_name*
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-5
|
A clause that adds a column with the specified name to the table\. You can add only one column in each ALTER TABLE statement\.
You can't add a column that is the distribution key \(DISTKEY\) or a sort key \(SORTKEY\) of the table\.
You can't use an ALTER TABLE ADD COLUMN command to modify the following table and column attributes:
+ UNIQUE
+ PRIMARY KEY
+ REFERENCES \(foreign key\)
+ IDENTITY or GENERATED BY DEFAULT AS IDENTITY
The maximum column name length is 127 bytes; longer names are truncated to 127 bytes\. The maximum number of columns you can define in a single table is 1,600\.
The following restrictions apply when adding a column to an external table:
+ You can't add a column to an external table with the column constraints DEFAULT, ENCODE, NOT NULL, or NULL\.
+ You can't add columns to an external table that's defined using the AVRO file format\.
+ If pseudocolumns are enabled, the maximum number of columns that you can define in a single external table is 1,598\. If pseudocolumns aren't enabled, the maximum number of columns that you can define in a single table is 1,600\.
For more information, see [CREATE EXTERNAL TABLE](r_CREATE_EXTERNAL_TABLE.md)\.
*column\_type*
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-6
|
The data type of the column being added\. For CHAR and VARCHAR columns, you can use the MAX keyword instead of declaring a maximum length\. MAX sets the maximum length to 4,096 bytes for CHAR or 65,535 bytes for VARCHAR\. The maximum size of a GEOMETRY object is 1,048,447 bytes\.
Amazon Redshift supports the following [Data types](c_Supported_data_types.md):
+ SMALLINT \(INT2\)
+ INTEGER \(INT, INT4\)
+ BIGINT \(INT8\)
+ DECIMAL \(NUMERIC\)
+ REAL \(FLOAT4\)
+ DOUBLE PRECISION \(FLOAT8\)
+ BOOLEAN \(BOOL\)
+ CHAR \(CHARACTER\)
+ VARCHAR \(CHARACTER VARYING\)
+ DATE
+ TIMESTAMP
+ GEOMETRY
DEFAULT *default\_expr* <a name="alter-table-default"></a>
A clause that assigns a default data value for the column\. The data type of *default\_expr* must match the data type of the column\. The DEFAULT value must be a variable\-free expression\. Subqueries, cross\-references to other columns in the current table, and user\-defined functions aren't allowed\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-7
|
The *default\_expr* is used in any INSERT operation that doesn't specify a value for the column\. If no default value is specified, the default value for the column is null\.
If a COPY operation encounters a null field on a column that has a DEFAULT value and a NOT NULL constraint, the COPY command inserts the value of the *default\_expr*\.
DEFAULT isn't supported for external tables\.
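Putting ADD COLUMN and DEFAULT together, a sketch with hypothetical column names might look like the following\.
```
alter table users_bkup
add column signup_source varchar(32) default 'unknown';
```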
ENCODE *encoding*
The compression encoding for a column\. If no compression is selected, Amazon Redshift automatically assigns compression encoding as follows:
+ All columns in temporary tables are assigned RAW compression by default\.
+ Columns that are defined as sort keys are assigned RAW compression\.
+ Columns that are defined as BOOLEAN, REAL, DOUBLE PRECISION, or GEOMETRY data types are assigned RAW compression\.
+ Columns that are defined as SMALLINT, INTEGER, BIGINT, DECIMAL, DATE, TIMESTAMP, or TIMESTAMPTZ are assigned AZ64 compression\.
+ Columns that are defined as CHAR or VARCHAR are assigned LZO compression\.
If you don't want a column to be compressed, explicitly specify RAW encoding\.
The following [compression encodings](c_Compression_encodings.md#compression-encoding-list) are supported:
+ AZ64
+ BYTEDICT
+ DELTA
+ DELTA32K
+ LZO
+ MOSTLY8
+ MOSTLY16
+ MOSTLY32
+ RAW \(no compression\)
+ RUNLENGTH
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-8
|
+ TEXT255
+ TEXT32K
+ ZSTD
ENCODE isn't supported for external tables\.
NOT NULL \| NULL
NOT NULL specifies that the column isn't allowed to contain null values\. NULL, the default, specifies that the column accepts null values\.
NOT NULL and NULL aren't supported for external tables\.
DROP \[ COLUMN \] *column\_name*
The name of the column to delete from the table\.
You can't drop the last column in a table\. A table must have at least one column\.
You can't drop a column that is the distribution key \(DISTKEY\) or a sort key \(SORTKEY\) of the table\. The default behavior for DROP COLUMN is RESTRICT if the column has any dependent objects, such as a view, primary key, foreign key, or UNIQUE restriction\.
The following restrictions apply when dropping a column from an external table:
+ You can't drop a column from an external table if the column is used as a partition\.
+ You can't drop a column from an external table that is defined using the AVRO file format\.
+ RESTRICT and CASCADE are ignored for external tables\.
For more information, see [CREATE EXTERNAL TABLE](r_CREATE_EXTERNAL_TABLE.md)\.
RESTRICT
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-9
|
When used with DROP COLUMN, RESTRICT means that the column isn't dropped if any of the following conditions exist:
+ A defined view references the column that is being dropped
+ A foreign key references the column
+ The column takes part in a multipart key
RESTRICT can't be used with CASCADE\.
RESTRICT and CASCADE are ignored for external tables\.
CASCADE
When used with DROP COLUMN, removes the specified column and anything dependent on that column\. CASCADE can't be used with RESTRICT\.
RESTRICT and CASCADE are ignored for external tables\.
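For example, the following sketch drops a hypothetical column and any objects that depend on it\.
```
alter table users_bkup
drop column signup_source cascade;
```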
The following options apply only to external tables\.
SET LOCATION \{ 's3://*bucket/folder*/' \| 's3://*bucket/manifest\_file*' \}
The path to the Amazon S3 folder that contains the data files or a manifest file that contains a list of Amazon S3 object paths\. The buckets must be in the same AWS Region as the Amazon Redshift cluster\. For a list of supported AWS Regions, see [Amazon Redshift Spectrum considerations](c-using-spectrum.md#c-spectrum-considerations)\. For more information about using a manifest file, see LOCATION in the CREATE EXTERNAL TABLE [Parameters](r_CREATE_EXTERNAL_TABLE.md#r_CREATE_EXTERNAL_TABLE-parameters) reference\.
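For example, a hypothetical external table could be pointed at the sample data folder used elsewhere in this guide\.
```
alter table spectrum.sales
set location 's3://awssampledbuswest2/tickit/spectrum/sales/';
```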
SET FILE FORMAT *format*
The file format for external data files\.
Valid formats are as follows:
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-10
|
+ AVRO
+ PARQUET
+ RCFILE
+ SEQUENCEFILE
+ TEXTFILE
SET TABLE PROPERTIES \( '*property\_name*'='*property\_value*'\)
A clause that sets the table definition for table properties for an external table\.
Table properties are case\-sensitive\.
'numRows'='*row\_count*'
A property that sets the numRows value for the table definition\. To explicitly update an external table's statistics, set the numRows property to indicate the size of the table\. Amazon Redshift doesn't analyze external tables to generate the table statistics that the query optimizer uses to generate a query plan\. If table statistics aren't set for an external table, Amazon Redshift generates a query execution plan\. This plan is based on an assumption that external tables are the larger tables and local tables are the smaller tables\.
'skip\.header\.line\.count'='*line\_count*'
A property that sets the number of rows to skip at the beginning of each source file\.
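For example, the following sketch sets the `numRows` statistic for a hypothetical external table\.
```
alter table spectrum.sales
set table properties ('numRows'='172000');
```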
PARTITION \( *partition\_column*=*partition\_value* \[, \.\.\.\] \) SET LOCATION \{ 's3://*bucket*/*folder*' \| 's3://*bucket*/*manifest\_file*' \}
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
650b23254bc1-11
|
A clause that sets a new location for one or more partition columns\.
ADD \[ IF NOT EXISTS \] PARTITION \( *partition\_column*=*partition\_value* \[, \.\.\.\] \) LOCATION \{ 's3://*bucket*/*folder*' \| 's3://*bucket*/*manifest\_file*' \} \[, \.\.\. \]
A clause that adds one or more partitions\. You can specify multiple PARTITION clauses using a single ALTER TABLE … ADD statement\.
If you use the AWS Glue catalog, you can add up to 100 partitions using a single ALTER TABLE statement\.
The IF NOT EXISTS clause indicates that if the specified partition already exists, the command should make no changes\. It also indicates that the command should return a message that the partition exists, rather than terminating with an error\. This clause is useful when scripting, so the script doesn’t fail if ALTER TABLE tries to add a partition that already exists\.
DROP PARTITION \(*partition\_column*=*partition\_value* \[, \.\.\.\] \)
A clause that drops the specified partition\. Dropping a partition alters only the external table metadata\. The data on Amazon S3 isn't affected\.
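For example, the following sketch drops one partition from a hypothetical partitioned external table\.
```
alter table spectrum.sales_part
drop partition(saledate='2008-01-01');
```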
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE.md
|
687f0a65b002-0
|
The QUOTE\_IDENT function returns the specified string as a double\-quoted string so that it can be used as an identifier in a SQL statement\. The function appropriately doubles any embedded double quotation marks\.
QUOTE\_IDENT adds double quotes only where necessary to create a valid identifier, when the string contains non\-identifier characters or would otherwise be folded to lowercase\. To always return a single\-quoted string, use [QUOTE\_LITERAL](r_QUOTE_LITERAL.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_QUOTE_IDENT.md
|
e524baefcf0c-0
|
```
QUOTE_IDENT(string)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_QUOTE_IDENT.md
|
b676d5bd6be0-0
|
*string*
The input parameter can be a CHAR or VARCHAR string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_QUOTE_IDENT.md
|
8bfe1b2a5e66-0
|
The QUOTE\_IDENT function returns the same type string as the input parameter\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_QUOTE_IDENT.md
|
a44e8c121729-0
|
The following example returns the CATNAME column surrounded by quotes:
```
select catid, quote_ident(catname)
from category
order by 1,2;
catid | quote_ident
-------+-------------
1 | "MLB"
2 | "NHL"
3 | "NFL"
4 | "NBA"
5 | "MLS"
6 | "Musicals"
7 | "Plays"
8 | "Opera"
9 | "Pop"
10 | "Jazz"
11 | "Classical"
(11 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_QUOTE_IDENT.md
|
5b92389750b4-0
|
The DATE\_PART\_YEAR function extracts the year from a date\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_YEAR.md
|
01254303b0e5-0
|
```
DATE_PART_YEAR(date)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_YEAR.md
|
9667b548b80a-0
|
*date*
A date column or an expression that implicitly converts to a date\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_YEAR.md
|
17baee69e5fa-0
|
INTEGER
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_YEAR.md
|
c34faa71cf7e-0
|
The following example extracts the year from the CALDATE column:
```
select caldate, date_part_year(caldate)
from date
order by
dateid limit 10;
caldate | date_part_year
-----------+----------------
2008-01-01 | 2008
2008-01-02 | 2008
2008-01-03 | 2008
2008-01-04 | 2008
2008-01-05 | 2008
2008-01-06 | 2008
2008-01-07 | 2008
2008-01-08 | 2008
2008-01-09 | 2008
2008-01-10 | 2008
(10 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_YEAR.md
|
b1cbda3c9277-0
|
ST\_IsEmpty returns true if the input geometry is empty\. A geometry is empty if it contains no points\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsEmpty-function.md
|
a241daca1fcf-0
|
```
ST_IsEmpty(geom)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsEmpty-function.md
|
db0bf6586734-0
|
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsEmpty-function.md
|
01324e41fb62-0
|
`BOOLEAN`
If *geom* is null, then null is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsEmpty-function.md
|
8670c396ef34-0
|
The following SQL checks if the specified polygon is empty\.
```
SELECT ST_IsEmpty(ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))'));
```
```
st_isempty
-----------
false
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsEmpty-function.md
|
df47a8daa23a-0
|
You create an external table in an external schema\. To create external tables, you must be the owner of the external schema or a superuser\. To transfer ownership of an external schema, use [ALTER SCHEMA](r_ALTER_SCHEMA.md) to change the owner\. The following example changes the owner of the `spectrum_schema` schema to `newowner`\.
```
alter schema spectrum_schema owner to newowner;
```
To run a Redshift Spectrum query, you need the following permissions:
+ Usage permission on the schema
+ Permission to create temporary tables in the current database
The following example grants usage permission on the schema `spectrum_schema` to the `spectrumusers` user group\.
```
grant usage on schema spectrum_schema to group spectrumusers;
```
The following example grants temporary permission on the database `spectrumdb` to the `spectrumusers` user group\.
```
grant temp on database spectrumdb to group spectrumusers;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
df47a8daa23a-1
|
You can create an external table in Amazon Redshift, AWS Glue, Amazon Athena, or an Apache Hive metastore\. For more information, see [Getting Started Using AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/getting-started.html) in the *AWS Glue Developer Guide*, [Getting Started](https://docs.aws.amazon.com/athena/latest/ug/getting-started.html) in the *Amazon Athena User Guide*, or [Apache Hive](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hive.html) in the *Amazon EMR Developer Guide*\.
If your external table is defined in AWS Glue, Athena, or a Hive metastore, you first create an external schema that references the external database\. Then you can reference the external table in your SELECT statement by prefixing the table name with the schema name, without needing to create the table in Amazon Redshift\. For more information, see [Creating external schemas for Amazon Redshift Spectrum](c-spectrum-external-schemas.md)\.
To allow Amazon Redshift to view tables in the AWS Glue Data Catalog, add `glue:GetTable` to the Amazon Redshift IAM role\. Otherwise you might get an error similar to the following\.
```
RedshiftIamRoleSession is not authorized to perform: glue:GetTable on resource: *;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
df47a8daa23a-2
|
For example, suppose that you have an external table named `lineitem_athena` defined in an Athena external catalog\. In this case, you can define an external schema named `athena_schema`, then query the table using the following SELECT statement\.
```
select count(*) from athena_schema.lineitem_athena;
```
To define an external table in Amazon Redshift, use the [CREATE EXTERNAL TABLE](r_CREATE_EXTERNAL_TABLE.md) command\. The external table statement defines the table columns, the format of your data files, and the location of your data in Amazon S3\. Redshift Spectrum scans the files in the specified folder and any subfolders\. Redshift Spectrum ignores hidden files and files that begin with a period, underscore, or hash mark \( \. , \_, or \#\) or end with a tilde \(\~\)\.
The following example creates a table named SALES in the Amazon Redshift external schema named `spectrum`\. The data is in tab\-delimited text files\.
```
create external table spectrum.sales(
salesid integer,
listid integer,
sellerid integer,
buyerid integer,
eventid integer,
dateid smallint,
qtysold smallint,
pricepaid decimal(8,2),
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
df47a8daa23a-3
|
commission decimal(8,2),
saletime timestamp)
row format delimited
fields terminated by '\t'
stored as textfile
location 's3://awssampledbuswest2/tickit/spectrum/sales/'
table properties ('numRows'='172000');
```
To view external tables, query the [SVV\_EXTERNAL\_TABLES](r_SVV_EXTERNAL_TABLES.md) system view\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
53a2da72b9ba-0
|
By default, Amazon Redshift creates external tables with the pseudocolumns `$path` and `$size`\. Select these columns to view the path to the data files on Amazon S3 and the size of the data files for each row returned by a query\. The `$path` and `$size` column names must be delimited with double quotation marks\. A `SELECT *` clause doesn't return the pseudocolumns\. You must explicitly include the `$path` and `$size` column names in your query, as the following example shows\.
```
select "$path", "$size"
from spectrum.sales_part
where saledate = '2008-12-01';
```
You can disable creation of pseudocolumns for a session by setting the `spectrum_enable_pseudo_columns` configuration parameter to false\.
**Important**
Selecting `$size` or `$path` incurs charges because Redshift Spectrum scans the data files on Amazon S3 to determine the size of the result set\. For more information, see [Amazon Redshift Pricing](https://aws.amazon.com/redshift/pricing/)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
4115519b43d9-0
|
The following example returns the total size of related data files for an external table\.
```
select distinct "$path", "$size"
from spectrum.sales_part;
$path | $size
---------------------------------------+-------
s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01/ | 1616
s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02/ | 1444
s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-03/ | 1644
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
a9714dcafa20-0
|
When you partition your data, you can restrict the amount of data that Redshift Spectrum scans by filtering on the partition key\. You can partition your data by any key\.
A common practice is to partition the data based on time\. For example, you might choose to partition by year, month, date, and hour\. If you have data coming from multiple sources, you might partition by a data source identifier and date\.
The following procedure describes how to partition your data\.
**To partition your data**
1. Store your data in folders in Amazon S3 according to your partition key\.
Create one folder for each partition value and name the folder with the partition key and value\. For example, if you partition by date, you might have folders named `saledate=2017-04-01`, `saledate=2017-04-02`, and so on\. Redshift Spectrum scans the files in the partition folder and any subfolders\. Redshift Spectrum ignores hidden files and files that begin with a period, underscore, or hash mark \( \. , \_, or \#\) or end with a tilde \(\~\)\.
1. Create an external table and specify the partition key in the PARTITIONED BY clause\.
The partition key can't be the name of a table column\. The data type can be SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, DOUBLE PRECISION, BOOLEAN, CHAR, VARCHAR, DATE, or TIMESTAMP data type\.
1. Add the partitions\.
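The hidden-file rules in step 1 can be sketched as a simple filter. This is an illustrative Python model of the listing behavior described above, not Redshift Spectrum's actual implementation; the file names are made up.

```python
# Sketch of the scan rules from step 1: Redshift Spectrum scans files in
# the partition folder and its subfolders, but ignores hidden files --
# names that begin with '.', '_', or '#' or that end with '~'.
def is_scanned(filename: str) -> bool:
    name = filename.rsplit("/", 1)[-1]  # look only at the file name itself
    if name.startswith((".", "_", "#")) or name.endswith("~"):
        return False
    return True

files = [
    "saledate=2017-04-01/part-0000.txt",
    "saledate=2017-04-01/.hidden",
    "saledate=2017-04-01/_SUCCESS",
    "saledate=2017-04-01/#temp",
    "saledate=2017-04-01/backup~",
]
print([f for f in files if is_scanned(f)])
# ['saledate=2017-04-01/part-0000.txt']
```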
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
a9714dcafa20-1
|
1. Add the partitions\.
Using [ALTER TABLE](r_ALTER_TABLE.md) … ADD PARTITION, add each partition, specifying the partition column and key value, and the location of the partition folder in Amazon S3\. You can add multiple partitions in a single ALTER TABLE … ADD statement\. The following example adds partitions for `'2008-01'` and `'2008-02'`\.
```
alter table spectrum.sales_part add
partition(saledate='2008-01-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01/'
partition(saledate='2008-02-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02/';
```
**Note**
If you use the AWS Glue catalog, you can add up to 100 partitions using a single ALTER TABLE statement\.
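When many partitions must be added, the 100-partition limit means batching the ALTER TABLE statements. The following Python sketch generates batched statements for the table and S3 prefix used in the example above; the helper function and the generated partition keys are assumptions for illustration.

```python
# Sketch: batch partition additions so that each ALTER TABLE ... ADD
# statement stays within the 100-partition limit noted for the AWS Glue
# catalog. The table name and S3 prefix come from the example above.
def alter_statements(partitions, batch_size=100):
    stmts = []
    for i in range(0, len(partitions), batch_size):
        batch = partitions[i:i + batch_size]
        clauses = "\n".join(
            f"partition(saledate='{p}')\n"
            f"location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate={p}/'"
            for p in batch
        )
        stmts.append(f"alter table spectrum.sales_part add\n{clauses};")
    return stmts

keys = [f"k{i}" for i in range(252)]   # 252 made-up keys for illustration
print(len(alter_statements(keys)))     # 3 statements: 100 + 100 + 52 partitions
```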
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
18e002d547d3-0
|
In this example, you create an external table that is partitioned by a single partition key and an external table that is partitioned by two partition keys\.
The sample data for this example is located in an Amazon S3 bucket that gives read access to all authenticated AWS users\. Your cluster and your external data files must be in the same AWS Region\. The sample data bucket is in the US West \(Oregon\) Region \(us\-west\-2\)\. To access the data using Redshift Spectrum, your cluster must also be in us\-west\-2\. To list the folders in Amazon S3, run the following command\.
```
aws s3 ls s3://awssampledbuswest2/tickit/spectrum/sales_partition/
```
```
PRE saledate=2008-01/
PRE saledate=2008-02/
PRE saledate=2008-03/
```
If you don't already have an external schema, run the following command\. Substitute the Amazon Resource Name \(ARN\) for your AWS Identity and Access Management \(IAM\) role\.
```
create external schema spectrum
from data catalog
database 'spectrumdb'
iam_role 'arn:aws:iam::123456789012:role/myspectrumrole'
create external database if not exists;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
920e02a5b2a3-0
|
In the following example, you create an external table that is partitioned by month\.
To create an external table partitioned by month, run the following command\.
```
create external table spectrum.sales_part(
salesid integer,
listid integer,
sellerid integer,
buyerid integer,
eventid integer,
dateid smallint,
qtysold smallint,
pricepaid decimal(8,2),
commission decimal(8,2),
saletime timestamp)
partitioned by (saledate char(10))
row format delimited
fields terminated by '|'
stored as textfile
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/'
table properties ('numRows'='172000');
```
To add the partitions, run the following ALTER TABLE command\.
```
alter table spectrum.sales_part add
partition(saledate='2008-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01/'
partition(saledate='2008-02')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02/'
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
920e02a5b2a3-1
|
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02/'
partition(saledate='2008-03')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-03/';
```
To select data from the partitioned table, run the following query\.
```
select top 5 spectrum.sales_part.eventid, sum(spectrum.sales_part.pricepaid)
from spectrum.sales_part, event
where spectrum.sales_part.eventid = event.eventid
and spectrum.sales_part.pricepaid > 30
and saledate = '2008-01'
group by spectrum.sales_part.eventid
order by 2 desc;
```
```
eventid | sum
--------+---------
4124 | 21179.00
1924 | 20569.00
2294 | 18830.00
2260 | 17669.00
6032 | 17265.00
```
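The `saledate = '2008-01'` filter above is what enables partition pruning: only the matching partition's files are read. This Python sketch models that pruning step; the partition map mirrors the example locations and the `prune` helper is an assumption for illustration.

```python
# Sketch of partition pruning: with the filter saledate = '2008-01',
# Redshift Spectrum only needs to read files under the matching
# partition's S3 location.
partitions = {
    "2008-01": "s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01/",
    "2008-02": "s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02/",
    "2008-03": "s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-03/",
}

def prune(partitions, predicate):
    """Keep only the partitions whose key satisfies the filter predicate."""
    return {k: v for k, v in partitions.items() if predicate(k)}

scanned = prune(partitions, lambda saledate: saledate == "2008-01")
print(list(scanned))  # ['2008-01']
```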
To view external table partitions, query the [SVV\_EXTERNAL\_PARTITIONS](r_SVV_EXTERNAL_PARTITIONS.md) system view\.
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
920e02a5b2a3-2
|
```
select schemaname, tablename, values, location from svv_external_partitions
where tablename = 'sales_part';
```
```
schemaname | tablename | values | location
-----------+------------+-------------+-------------------------------------------------------------------------
spectrum | sales_part | ["2008-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01
spectrum | sales_part | ["2008-02"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02
spectrum | sales_part | ["2008-03"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-03
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
a5698e2016eb-0
|
To create an external table partitioned by `salesmonth` and `event`, run the following command\.
```
create external table spectrum.sales_event(
salesid integer,
listid integer,
sellerid integer,
buyerid integer,
eventid integer,
dateid smallint,
qtysold smallint,
pricepaid decimal(8,2),
commission decimal(8,2),
saletime timestamp)
partitioned by (salesmonth char(10), event integer)
row format delimited
fields terminated by '|'
stored as textfile
location 's3://awssampledbuswest2/tickit/spectrum/salesevent/'
table properties ('numRows'='172000');
```
To add the partitions, run the following ALTER TABLE command\.
```
alter table spectrum.sales_event add
partition(salesmonth='2008-01', event='101')
location 's3://awssampledbuswest2/tickit/spectrum/salesevent/salesmonth=2008-01/event=101/'
partition(salesmonth='2008-01', event='102')
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
a5698e2016eb-1
|
partition(salesmonth='2008-01', event='102')
location 's3://awssampledbuswest2/tickit/spectrum/salesevent/salesmonth=2008-01/event=102/'
partition(salesmonth='2008-01', event='103')
location 's3://awssampledbuswest2/tickit/spectrum/salesevent/salesmonth=2008-01/event=103/'
partition(salesmonth='2008-02', event='101')
location 's3://awssampledbuswest2/tickit/spectrum/salesevent/salesmonth=2008-02/event=101/'
partition(salesmonth='2008-02', event='102')
location 's3://awssampledbuswest2/tickit/spectrum/salesevent/salesmonth=2008-02/event=102/'
partition(salesmonth='2008-02', event='103')
location 's3://awssampledbuswest2/tickit/spectrum/salesevent/salesmonth=2008-02/event=103/'
partition(salesmonth='2008-03', event='101')
location 's3://awssampledbuswest2/tickit/spectrum/salesevent/salesmonth=2008-03/event=101/'
partition(salesmonth='2008-03', event='102')
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
a5698e2016eb-2
|
partition(salesmonth='2008-03', event='102')
location 's3://awssampledbuswest2/tickit/spectrum/salesevent/salesmonth=2008-03/event=102/'
partition(salesmonth='2008-03', event='103')
location 's3://awssampledbuswest2/tickit/spectrum/salesevent/salesmonth=2008-03/event=103/';
```
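With two partition keys, each partition's S3 location nests one `key=value` folder per key, in partition-key order. The following Python sketch builds such a location; the helper name is an assumption for illustration.

```python
# Sketch: the S3 location for a multi-key partition follows the key=value
# folder convention used above, one nested folder per partition key.
def partition_location(base, **keys):
    return base + "".join(f"{k}={v}/" for k, v in keys.items())

base = "s3://awssampledbuswest2/tickit/spectrum/salesevent/"
print(partition_location(base, salesmonth="2008-01", event=101))
# s3://awssampledbuswest2/tickit/spectrum/salesevent/salesmonth=2008-01/event=101/
```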
Run the following query to select data from the partitioned table\.
```
select spectrum.sales_event.salesmonth, event.eventname, sum(spectrum.sales_event.pricepaid)
from spectrum.sales_event, event
where spectrum.sales_event.eventid = event.eventid
and salesmonth = '2008-02'
and (event = '101'
or event = '102'
or event = '103')
group by event.eventname, spectrum.sales_event.salesmonth
order by 3 desc;
```
```
salesmonth | eventname | sum
-----------+-----------------+--------
2008-02 | The Magic Flute | 5062.00
2008-02 | La Sonnambula | 3498.00
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
a5698e2016eb-3
|
2008-02 | The Magic Flute | 5062.00
2008-02 | La Sonnambula | 3498.00
2008-02 | Die Walkure | 534.00
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
f0b0387f85bd-0
|
You use Amazon Redshift Spectrum external tables to query data from files in ORC format\. Optimized row columnar \(ORC\) format is a columnar storage file format that supports nested data structures\. For more information about querying nested data, see [Querying Nested Data with Amazon Redshift Spectrum](tutorial-query-nested-data.md#tutorial-nested-data-overview)\.
When you create an external table that references data in an ORC file, you map each column in the external table to a column in the ORC data\. To do so, you use one of the following methods:
+ [Mapping by position](#orc-mapping-by-position)
+ [Mapping by column name](#orc-mapping-by-name)
Mapping by column name is the default\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
6ae0b82f49ff-0
|
With position mapping, the first column defined in the external table maps to the first column in the ORC data file, the second to the second, and so on\. Mapping by position requires that the order of columns in the external table and in the ORC file match\. If the order of the columns doesn't match, then you can map the columns by name\.
**Important**
In earlier releases, Redshift Spectrum used position mapping by default\. If you need to continue using position mapping for existing tables, set the table property `orc.schema.resolution` to `position`, as the following example shows\.
```
alter table spectrum.orc_example
set table properties('orc.schema.resolution'='position');
```
For example, the table `SPECTRUM.ORC_EXAMPLE` is defined as follows\.
```
create external table spectrum.orc_example(
int_col int,
float_col float,
nested_col struct<
"int_col" : int,
"map_col" : map<int, array<float >>
>
) stored as orc
location 's3://example/orc/files/';
```
The table structure can be abstracted as follows\.
```
• 'int_col' : int
• 'float_col' : float
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
6ae0b82f49ff-1
|
```
• 'int_col' : int
• 'float_col' : float
• 'nested_col' : struct
o 'int_col' : int
o 'map_col' : map
- key : int
- value : array
- value : float
```
The underlying ORC file has the following file structure\.
```
• ORC file root(id = 0)
o 'int_col' : int (id = 1)
o 'float_col' : float (id = 2)
o 'nested_col' : struct (id = 3)
- 'int_col' : int (id = 4)
- 'map_col' : map (id = 5)
- key : int (id = 6)
- value : array (id = 7)
- value : float (id = 8)
```
In this example, you can map each column in the external table to a column in the ORC file strictly by position\. The following shows the mapping\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-external-tables.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
1382d592ec3e-0
|
Using name mapping, you map columns in an external table to named columns in ORC files on the same level, with the same name\.
For example, suppose that you want to map the table from the previous example, `SPECTRUM.ORC_EXAMPLE`, with an ORC file that uses the following file structure\.
```
• ORC file root(id = 0)
o 'nested_col' : struct (id = 1)
- 'map_col' : map (id = 2)
- key : int (id = 3)
- value : array (id = 4)
- value : float (id = 5)
- 'int_col' : int (id = 6)
o 'int_col' : int (id = 7)
o 'float_col' : float (id = 8)
```
Using position mapping, Redshift Spectrum attempts the following mapping\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-external-tables.html)
When you query a table with the preceding position mapping, the SELECT command fails on type validation because the structures are different\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
1382d592ec3e-1
|
When you query a table with the preceding position mapping, the SELECT command fails on type validation because the structures are different\.
You can map the same external table to both file structures shown in the previous examples by using column name mapping\. The table columns `int_col`, `float_col`, and `nested_col` map by column name to columns with the same names in the ORC file\. The column named `nested_col` in the external table is a `struct` column with subcolumns named `map_col` and `int_col`\. The subcolumns also map correctly to the corresponding columns in the ORC file by column name\.
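The contrast between the two strategies can be sketched in a few lines. This Python model simplifies both schemas to their top-level columns only; it illustrates the mapping logic described above, not Redshift Spectrum's actual resolver.

```python
# Top-level columns of the external table and of the reordered ORC file
# shown above, simplified to (name, type) pairs.
table_cols = [("int_col", "int"), ("float_col", "float"), ("nested_col", "struct")]
orc_cols   = [("nested_col", "struct"), ("int_col", "int"), ("float_col", "float")]

# Mapping by position pairs columns in file order, so the types no longer
# line up and a query fails type validation.
by_position = list(zip(table_cols, orc_cols))
position_ok = all(t_type == o_type for (_, t_type), (_, o_type) in by_position)
print(position_ok)  # False

# Mapping by name (the default) pairs columns with the same name, so every
# table column finds a column of the matching type.
orc_by_name = dict(orc_cols)
name_ok = all(orc_by_name.get(name) == typ for name, typ in table_cols)
print(name_ok)  # True
```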
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-external-tables.md
|
15e7545f3f38-0
|
In each queue, WLM creates a number of query slots equal to the queue's concurrency level\. The amount of memory allocated to a query slot equals the percentage of memory allocated to the queue divided by the slot count\. If you change the memory allocation or concurrency, Amazon Redshift dynamically manages the transition to the new WLM configuration\. Thus, active queries can run to completion using the currently allocated amount of memory\. At the same time, Amazon Redshift ensures that total memory usage never exceeds 100 percent of available memory\.
The workload manager uses the following process to manage the transition:
1. WLM recalculates the memory allocation for each new query slot\.
1. If a query slot is not actively being used by a running query, WLM removes the slot, which makes that memory available for new slots\.
1. If a query slot is actively in use, WLM waits for the query to finish\.
1. As active queries complete, the empty slots are removed and the associated memory is freed\.
1. As enough memory becomes available to add one or more slots, new slots are added\.
1. When all queries that were running at the time of the change finish, the slot count equals the new concurrency level, and the transition to the new WLM configuration is complete\.
In effect, queries that are running when the change takes place continue to use the original memory allocation\. Queries that are queued when the change takes place are routed to new slots as they become available\.
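The slot-memory arithmetic above reduces to a single division per queue. The following Python sketch applies it to two made-up queues, with total cluster memory normalized to 100 percent; the queue settings are examples, not defaults.

```python
# Per-slot memory = queue's memory percentage / queue's slot count
# (concurrency level). Queue settings here are illustrative only.
queues = [
    {"name": "queue1", "memory_percent": 40, "concurrency": 5},
    {"name": "queue2", "memory_percent": 60, "concurrency": 10},
]

for q in queues:
    q["memory_per_slot"] = q["memory_percent"] / q["concurrency"]

print([q["memory_per_slot"] for q in queues])  # [8.0, 6.0]
```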
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-dynamic-memory-allocation.md
|
15e7545f3f38-1
|
If the WLM dynamic properties are changed during the transition process, WLM immediately begins to transition to the new configuration, starting from the current state\. To view the status of the transition, query the [STV\_WLM\_SERVICE\_CLASS\_CONFIG](r_STV_WLM_SERVICE_CLASS_CONFIG.md) system table\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-dynamic-memory-allocation.md
|
c67380ec045b-0
|
Displays the records of all Amazon Redshift load errors\.
STL\_LOAD\_ERRORS contains a history of all Amazon Redshift load errors\. See [Load error reference](r_Load_Error_Reference.md) for a comprehensive list of possible load errors and explanations\.
Query [STL\_LOADERROR\_DETAIL](r_STL_LOADERROR_DETAIL.md) for additional details, such as the exact data row and column where a parse error occurred, after you query STL\_LOAD\_ERRORS to find out general information about the error\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_ERRORS.md
|
7d986accd8b9-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_LOAD_ERRORS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_ERRORS.md
|
aaf917d5db51-0
|
The following query joins STL\_LOAD\_ERRORS to STL\_LOADERROR\_DETAIL to view the details of errors that occurred during the most recent load\.
```
select d.query, substring(d.filename,14,20),
d.line_number as line,
substring(d.value,1,16) as value,
substring(le.err_reason,1,48) as err_reason
from stl_loaderror_detail d, stl_load_errors le
where d.query = le.query
and d.query = pg_last_copy_id();
query | substring | line | value | err_reason
-------+-------------------+------+----------+----------------------------
558| allusers_pipe.txt | 251 | 251 | String contains invalid or
unsupported UTF8 code
558| allusers_pipe.txt | 251 | ZRU29FGR | String contains invalid or
unsupported UTF8 code
558| allusers_pipe.txt | 251 | Kaitlin | String contains invalid or
unsupported UTF8 code
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_ERRORS.md
|
aaf917d5db51-1
|
558| allusers_pipe.txt | 251 | Kaitlin | String contains invalid or
unsupported UTF8 code
558| allusers_pipe.txt | 251 | Walter | String contains invalid or
unsupported UTF8 code
```
The following example uses STL\_LOAD\_ERRORS with STV\_TBL\_PERM to create a new view, and then uses that view to determine what errors occurred while loading data into the EVENT table:
```
create view loadview as
(select distinct tbl, trim(name) as table_name, query, starttime,
trim(filename) as input, line_number, colname, err_code,
trim(err_reason) as reason
from stl_load_errors sl, stv_tbl_perm sp
where sl.tbl = sp.id);
```
Next, the following query returns the last error that occurred while loading the EVENT table:
```
select table_name, query, line_number, colname, starttime,
trim(reason) as error
from loadview
where table_name ='event'
order by line_number limit 1;
```
The query returns the last load error that occurred for the EVENT table\. If no load errors occurred, the query returns zero rows\. In this example, the query returns a single error:
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_ERRORS.md
|
aaf917d5db51-2
|
```
table_name | query | line_number | colname | starttime           | error
-----------+-------+-------------+---------+---------------------+--------------------------------------------------------
event      | 309   | 0           | 5       | 2014-04-22 15:12:44 | Error in Timestamp value or format [%Y-%m-%d %H:%M:%S]
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOAD_ERRORS.md
|
9027c166214f-0
|
Now that you understand how queues work by default, you can learn how to configure query queues using manual WLM\. In this section, you create and configure a new parameter group for your cluster\. You create two additional user queues and configure them to accept queries based on the queries' user group or query group labels\. Any queries that don't get routed to one of these two queues are routed to the default queue at runtime\.
**Note**
A new console is available for Amazon Redshift\. Choose either the **New console** or the **Original console** instructions based on the console that you are using\. The **New console** instructions are open by default\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-modifying-wlm-configuration.md
|
ec6f39cabd05-0
|
**To create a manual WLM configuration in a parameter group**
1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.
1. On the navigation menu, choose **CONFIG**, then choose **Workload management** to display the **Workload management** page\.
1. Choose **Create** to display the **Create parameter group** window\.
1. Enter **WLMTutorial** for both **Parameter group name** and **Description**, and then choose **Create** to create the parameter group\.
**Note**
The **Parameter group name** is converted to all lowercase when created\.
1. On the **Workload management** page, choose the parameter group **wlmtutorial** to display the details page with tabs for **Parameters** and **Workload management**\.
1. Confirm that you're on the **Workload management** tab, then choose **Switch WLM mode** to display the **Concurrency settings** window\.
1. Choose **Manual WLM**, then choose **Save** to switch to manual WLM\.
1. Choose **Edit workload queues**\.
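The console steps above can also be approximated with the AWS CLI. This is a hedged sketch: the group name and description match the tutorial, but the minimal `wlm_json_configuration` value shown defines only a default queue and you would extend it with your own user and query group queues.

```shell
# Create the parameter group (the name is lowercased, as noted above).
aws redshift create-cluster-parameter-group \
    --parameter-group-name wlmtutorial \
    --parameter-group-family redshift-1.0 \
    --description "WLMTutorial"

# Manual WLM is configured through the wlm_json_configuration parameter;
# this minimal example value defines a single queue with 5 slots.
aws redshift modify-cluster-parameter-group \
    --parameter-group-name wlmtutorial \
    --parameters ParameterName=wlm_json_configuration,ParameterValue='[{"query_concurrency":5}]'
```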
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-modifying-wlm-configuration.md
|