To view the details for open connections, execute the following query\.
```
select recordtime, username, dbname, remotehost, remoteport
from stl_connection_log
where event = 'initiating session'
and pid not in
(select pid from stl_connection_log
where event = 'disconnecting session')
order by 1 desc;
recordtime | username | dbname | remotehost | remoteport
--------------------+-------------+------------+---------------+---------------------------------
2014-11-06 20:30:06 | rdsdb | dev | [local] |
2014-11-06 20:29:37 | test001 | test | 10.49.42.138 | 11111
2014-11-05 20:30:29 | rdsdb | dev | 10.49.42.138 | 33333
2014-11-05 20:28:35 | rdsdb | dev | [local] |
(4 rows)
```
The following example reflects a failed authentication attempt and a successful connection and disconnection\.
```
select event, recordtime, remotehost, username
from stl_connection_log order by recordtime;
event | recordtime | remotehost | username
-----------------------+---------------------------+--------------+---------
authentication failure | 2012-10-25 14:41:56.96391 | 10.49.42.138 | john
authenticated | 2012-10-25 14:42:10.87613 | 10.49.42.138 | john
initiating session | 2012-10-25 14:42:10.87638 | 10.49.42.138 | john
disconnecting session | 2012-10-25 14:42:19.95992 | 10.49.42.138 | john
(4 rows)
```
The following example shows the version of the ODBC driver, the operating system on the client machine, and the plugin used to connect to the Amazon Redshift cluster\. In this example, the plugin used is for standard ODBC driver authentication using a login name and password\.
```
select driver_version, os_version, plugin_name from stl_connection_log;
driver_version | os_version | plugin_name
----------------------------------------+-----------------------------------+--------------------
Amazon Redshift ODBC Driver 1.4.15.0001 | Darwin 18.7.0 x86_64 | none
Amazon Redshift ODBC Driver 1.4.15.0001 | Linux 4.15.0-101-generic x86_64 | none
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_CONNECTION_LOG.md
Use the STV\_EXEC\_STATE table to find out information about queries and query steps that are actively running on compute nodes\.
This information is usually used only to troubleshoot engineering issues\. The views SVV\_QUERY\_STATE and SVL\_QUERY\_SUMMARY extract their information from STV\_EXEC\_STATE\.
STV\_EXEC\_STATE is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_EXEC_STATE.html)
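As an illustration, a quick look at the actively running steps might be sketched as follows. This is a sketch only; the column names `query`, `seg`, `step`, `rows`, and `bytes` are assumed from the STV\_EXEC\_STATE column reference, so verify them against your cluster before relying on this query.
```
select query, seg, step, rows, bytes
from stv_exec_state
order by query, seg, step;
```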
Rather than querying STV\_EXEC\_STATE directly, Amazon Redshift recommends querying SVL\_QUERY\_SUMMARY or SVV\_QUERY\_STATE to obtain the information in STV\_EXEC\_STATE in a more user\-friendly format\. See the [SVL\_QUERY\_SUMMARY](r_SVL_QUERY_SUMMARY.md) or [SVV\_QUERY\_STATE](r_SVV_QUERY_STATE.md) table documentation for more details\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_EXEC_STATE.md
Use the SVL\_S3RETRIES view to get information about why an Amazon Redshift Spectrum query based on Amazon S3 has failed\.
SVL\_S3RETRIES is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_S3RETRIES.html)
The following example retrieves data about failed S3 queries\.
```
SELECT svl_s3retries.query, svl_s3retries.segment, svl_s3retries.node, svl_s3retries.slice, svl_s3retries.eventtime, svl_s3retries.retries,
svl_s3retries.successful_fetches, svl_s3retries.file_size, btrim((svl_s3retries."location")::text) AS "location", btrim((svl_s3retries.message)::text)
AS message FROM svl_s3retries;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3RETRIES.md
In this tutorial, you will learn how to optimize the design of your tables\. You will start by creating tables based on the Star Schema Benchmark \(SSB\) schema without sort keys, distribution styles, and compression encodings\. You will load the tables with test data and test system performance\. Next, you will apply best practices to recreate the tables using sort keys and distribution styles\. You will load the tables with test data using automatic compression and then you will test performance again so that you can compare the performance benefits of well\-designed tables\.
**Estimated time:** 60 minutes
**Estimated cost:** $1\.00 per hour for the cluster
You will need your AWS credentials \(access key ID and secret access key\) to load test data from Amazon S3\. If you need to create new access keys, go to [Administering Access Keys for IAM Users](https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html)\.
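Those credentials are supplied to the COPY command when you load each table. The following is a sketch of that form; the table name, bucket path, and key values are placeholders, not the tutorial's actual locations.
```
copy lineorder from 's3://<bucket>/load/lo_'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'
region '<aws-region>';
```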
+ [Step 1: Create a test data set](tutorial-tuning-tables-create-test-data.md)
+ [Step 2: Test system performance to establish a baseline](tutorial-tuning-tables-test-performance.md)
+ [Step 3: Select sort keys](tutorial-tuning-tables-sort-keys.md)
+ [Step 4: Select distribution styles](tutorial-tuning-tables-distribution.md)
+ [Step 5: Review compression encodings](tutorial-tuning-tables-compression.md)
+ [Step 6: Recreate the test data set](tutorial-tuning-tables-recreate-test-data.md)
+ [Step 7: Retest system performance after tuning](tutorial-tuning-tables-retest.md)
+ [Step 8: Evaluate the results](tutorial-tuning-tables-evaluate.md)
+ [Step 9: Clean up your resources](tutorial-tuning-tables-clean-up.md)
+ [Summary](tutorial-tuning-tables-summary.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables.md
**Topics**
+ [Mathematical operator symbols](r_OPERATOR_SYMBOLS.md)
+ [ABS function](r_ABS.md)
+ [ACOS function](r_ACOS.md)
+ [ASIN function](r_ASIN.md)
+ [ATAN function](r_ATAN.md)
+ [ATAN2 function](r_ATAN2.md)
+ [CBRT function](r_CBRT.md)
+ [CEILING \(or CEIL\) function](r_CEILING_FLOOR.md)
+ [COS function](r_COS.md)
+ [COT function](r_COT.md)
+ [DEGREES function](r_DEGREES.md)
+ [DEXP function](r_DEXP.md)
+ [DLOG1 function](r_DLOG1.md)
+ [DLOG10 function](r_DLOG10.md)
+ [EXP function](r_EXP.md)
+ [FLOOR function](r_FLOOR.md)
+ [LN function](r_LN.md)
+ [LOG function](r_LOG.md)
+ [MOD function](r_MOD.md)
+ [PI function](r_PI.md)
+ [POWER function](r_POWER.md)
+ [RADIANS function](r_RADIANS.md)
+ [RANDOM function](r_RANDOM.md)
+ [ROUND function](r_ROUND.md)
+ [SIN function](r_SIN.md)
+ [SIGN function](r_SIGN.md)
+ [SQRT function](r_SQRT.md)
+ [TAN function](r_TAN.md)
+ [TO\_HEX function](r_TO_HEX.md)
+ [TRUNC function](r_TRUNC.md)
This section describes the mathematical operators and functions supported in Amazon Redshift\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/Math_functions.md
The SUM window function returns the sum of the input column or expression values\. The SUM function works with numeric values and ignores NULL values\.
```
SUM ( [ ALL ] expression ) OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list
frame_clause ]
)
```
*expression*
The target column or expression that the function operates on\.
ALL
With the argument ALL, the function retains all duplicate values from the expression\. ALL is the default\. DISTINCT is not supported\.
OVER
Specifies the window clauses for the aggregation functions\. The OVER clause distinguishes window aggregation functions from normal set aggregation functions\.
PARTITION BY *expr\_list*
Defines the window for the SUM function in terms of one or more expressions\.
ORDER BY *order\_list*
Sorts the rows within each partition\. If no PARTITION BY is specified, ORDER BY uses the entire table\.
*frame\_clause*
If an ORDER BY clause is used for an aggregate function, an explicit frame clause is required\. The frame clause refines the set of rows in a function's window, including or excluding sets of rows within the ordered result\. The frame clause consists of the ROWS keyword and associated specifiers\. See [Window function syntax summary](r_Window_function_synopsis.md)\.
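For example, the following sketch (using the WINSALES table described in this guide's window function examples) combines ORDER BY with an explicit ROWS frame to compute a moving sum over the current row and the two rows before it:
```
select salesid, dateid, qty,
       sum(qty) over (order by dateid, salesid
                      rows between 2 preceding and current row) as moving_sum
from winsales
order by 2,1;
```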
The argument types supported by the SUM function are SMALLINT, INTEGER, BIGINT, NUMERIC, DECIMAL, REAL, and DOUBLE PRECISION\.
The return types supported by the SUM function are:
+ BIGINT for SMALLINT or INTEGER arguments
+ NUMERIC for BIGINT arguments
+ DOUBLE PRECISION for floating\-point arguments
The following example creates a cumulative \(rolling\) sum of sales quantities ordered by date and sales ID:
```
select salesid, dateid, sellerid, qty,
sum(qty) over (order by dateid, salesid rows unbounded preceding) as sum
from winsales
order by 2,1;
salesid | dateid | sellerid | qty | sum
---------+------------+----------+-----+-----
30001 | 2003-08-02 | 3 | 10 | 10
10001 | 2003-12-24 | 1 | 10 | 20
10005 | 2003-12-24 | 1 | 30 | 50
40001 | 2004-01-09 | 4 | 40 | 90
10006 | 2004-01-18 | 1 | 10 | 100
20001 | 2004-02-12 | 2 | 20 | 120
40005 | 2004-02-12 | 4 | 10 | 130
20002 | 2004-02-16 | 2 | 20 | 150
30003 | 2004-04-18 | 3 | 15 | 165
30004 | 2004-04-18 | 3 | 20 | 185
30007 | 2004-09-07 | 3 | 30 | 215
(11 rows)
```
For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
The following example creates a cumulative \(rolling\) sum of sales quantities by date, partitions the results by seller ID, and orders the results by date and sales ID within the partition:
```
select salesid, dateid, sellerid, qty,
sum(qty) over (partition by sellerid
order by dateid, salesid rows unbounded preceding) as sum
from winsales
order by 2,1;
salesid | dateid | sellerid | qty | sum
---------+------------+----------+-----+-----
30001 | 2003-08-02 | 3 | 10 | 10
10001 | 2003-12-24 | 1 | 10 | 10
10005 | 2003-12-24 | 1 | 30 | 40
40001 | 2004-01-09 | 4 | 40 | 40
10006 | 2004-01-18 | 1 | 10 | 50
20001 | 2004-02-12 | 2 | 20 | 20
40005 | 2004-02-12 | 4 | 10 | 50
20002 | 2004-02-16 | 2 | 20 | 40
30003 | 2004-04-18 | 3 | 15 | 25
30004 | 2004-04-18 | 3 | 20 | 45
30007 | 2004-09-07 | 3 | 30 | 75
(11 rows)
```
The following example numbers all of the rows sequentially in the result set, ordered by the SELLERID and SALESID columns:
```
select salesid, sellerid, qty,
sum(1) over (order by sellerid, salesid rows unbounded preceding) as rownum
from winsales
order by 2,1;
salesid | sellerid | qty | rownum
--------+----------+------+--------
10001 | 1 | 10 | 1
10005 | 1 | 30 | 2
10006 | 1 | 10 | 3
20001 | 2 | 20 | 4
20002 | 2 | 20 | 5
30001 | 3 | 10 | 6
30003 | 3 | 15 | 7
30004 | 3 | 20 | 8
30007 | 3 | 30 | 9
40001 | 4 | 40 | 10
40005 | 4 | 10 | 11
(11 rows)
```
For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
The following example numbers all rows sequentially in the result set, partitions the results by SELLERID, and orders the results by SELLERID and SALESID within the partition:
```
select salesid, sellerid, qty,
sum(1) over (partition by sellerid
order by sellerid, salesid rows unbounded preceding) as rownum
from winsales
order by 2,1;
salesid | sellerid | qty | rownum
---------+----------+-----+--------
10001 | 1 | 10 | 1
10005 | 1 | 30 | 2
10006 | 1 | 10 | 3
20001 | 2 | 20 | 1
20002 | 2 | 20 | 2
30001 | 3 | 10 | 1
30003 | 3 | 15 | 2
30004 | 3 | 20 | 3
30007 | 3 | 30 | 4
40001 | 4 | 40 | 1
40005 | 4 | 10 | 2
(11 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_SUM.md
Comparison conditions state logical relationships between two values\. All comparison conditions are binary operators with a Boolean return type\. Amazon Redshift supports the comparison operators described in the following table:
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_comparison_condition.html)
= ANY \| SOME
The ANY and SOME keywords are synonymous with the *IN* condition, and return true if the comparison is true for at least one value returned by a subquery that returns one or more values\. Amazon Redshift supports only the = \(equals\) condition for ANY and SOME\. Inequality conditions are not supported\.
The ALL predicate is not supported\.
<> ALL
The ALL keyword is synonymous with NOT IN \(see [IN condition](r_in_condition.md) condition\) and returns true if the expression is not included in the results of the subquery\. Amazon Redshift supports only the <> or \!= \(not equals\) condition for ALL\. Other comparison conditions are not supported\.
IS TRUE/FALSE/UNKNOWN
Non\-zero values equate to TRUE, 0 equates to FALSE, and null equates to UNKNOWN\. See the [Boolean type](r_Boolean_type.md) data type\.
Here are some simple examples of comparison conditions:
```
a = 5
a < b
min(x) >= 5
qtysold = any (select qtysold from sales where dateid = 1882)
```
The following query returns venues with more than 10000 seats from the VENUE table:
```
select venueid, venuename, venueseats from venue
where venueseats > 10000
order by venueseats desc;
venueid | venuename | venueseats
---------+--------------------------------+------------
83 | FedExField | 91704
6 | New York Giants Stadium | 80242
79 | Arrowhead Stadium | 79451
78 | INVESCO Field | 76125
69 | Dolphin Stadium | 74916
67 | Ralph Wilson Stadium | 73967
76 | Jacksonville Municipal Stadium | 73800
89 | Bank of America Stadium | 73298
72 | Cleveland Browns Stadium | 73200
86 | Lambeau Field | 72922
...
(57 rows)
```
This example selects the users \(USERID\) from the USERS table who like rock music:
```
select userid from users where likerock = 't' order by 1 limit 5;
userid
--------
3
5
6
13
16
(5 rows)
```
This example selects the users \(USERID\) from the USERS table where it is unknown whether they like rock music:
```
select firstname, lastname, likerock
from users
where likerock is unknown
order by userid limit 10;
firstname | lastname | likerock
----------+----------+----------
Rafael | Taylor |
Vladimir | Humphrey |
Barry | Roy |
Tamekah | Juarez |
Mufutau | Watkins |
Naida | Calderon |
Anika | Huff |
Bruce | Beck |
Mallory | Farrell |
Scarlett | Mayer |
(10 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_comparison_condition.md
Returns the process ID \(PID\) of the server process handling the current session\.
**Note**
The PID is not globally unique\. It can be reused over time\.
```
pg_backend_pid()
```
Returns an integer\.
You can correlate PG\_BACKEND\_PID\(\) with log tables to retrieve information for the current session\. For example, the following query returns the query ID and a portion of the query text for queries executed in the current session\.
```
select query, substring(text,1,40)
from stl_querytext
where pid = PG_BACKEND_PID()
order by query desc;
query | substring
-------+------------------------------------------
14831 | select query, substring(text,1,40) from
14827 | select query, substring(path,0,80) as pa
14826 | copy category from 's3://dw-tickit/manif
14825 | Count rows in target table
14824 | unload ('select * from category') to 's3
(5 rows)
```
You can correlate PG\_BACKEND\_PID\(\) with the pid column in the following log tables \(exceptions are noted in parentheses\):
+ [STL\_CONNECTION\_LOG](r_STL_CONNECTION_LOG.md)
+ [STL\_DDLTEXT](r_STL_DDLTEXT.md)
+ [STL\_ERROR](r_STL_ERROR.md)
+ [STL\_QUERY](r_STL_QUERY.md)
+ [STL\_QUERYTEXT](r_STL_QUERYTEXT.md)
+ [STL\_SESSIONS](r_STL_SESSIONS.md) \(process\)
+ [STL\_TR\_CONFLICT](r_STL_TR_CONFLICT.md)
+ [STL\_UTILITYTEXT](r_STL_UTILITYTEXT.md)
+ [STV\_ACTIVE\_CURSORS](r_STV_ACTIVE_CURSORS.md)
+ [STV\_INFLIGHT](r_STV_INFLIGHT.md)
+ [STV\_LOCKS](r_STV_LOCKS.md) \(lock\_owner\_pid\)
+ [STV\_RECENTS](r_STV_RECENTS.md) \(process\_id\)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_BACKEND_PID.md
AT TIME ZONE specifies which time zone to use with a TIMESTAMP or TIMESTAMPTZ expression\.
```
AT TIME ZONE 'timezone'
```
*timezone*
The time zone for the return value\. The time zone can be specified as a time zone name \(such as **'Africa/Kampala'** or **'Singapore'**\) or as a time zone abbreviation \(such as **'UTC'** or **'PDT'**\)\.
To view a list of supported time zone names, execute the following command\.
```
select pg_timezone_names();
```
To view a list of supported time zone abbreviations, execute the following command\.
```
select pg_timezone_abbrevs();
```
For more information and examples, see [Time zone usage notes](CONVERT_TIMEZONE.md#CONVERT_TIMEZONE-usage-notes)\.
TIMESTAMPTZ when used with a TIMESTAMP expression\. TIMESTAMP when used with a TIMESTAMPTZ expression\.
The following example converts a time stamp value without time zone and interprets it as MST time \(UTC–7\), which is then converted to PST \(UTC–8\) for display\.
```
SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST';
timestamptz
------------------------
'2001-02-16 19:38:40-08'
```
The following example takes an input time stamp with a time zone value where the specified time zone is UTC\-5 \(EST\) and converts it to MST \(UTC\-7\)\.
```
SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'MST';
timestamp
------------------------
'2001-02-16 18:38:40'
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_AT_TIME_ZONE.md
Deletes a user group\. This command isn't reversible, and it doesn't delete the individual users in the group\.
See DROP USER to delete an individual user\.
```
DROP GROUP name
```
*name*
Name of the user group to delete\.
The following example deletes the GUESTS user group:
```
drop group guests;
```
You can't drop a group if the group has any privileges on an object\. If you attempt to drop such a group, you receive the following error\.
```
ERROR: group "guests" can't be dropped because the group has a privilege on some object
```
If the group has privileges for an object, first revoke the privileges before dropping the group\. The following example revokes all privileges on all tables in the `public` schema from the `GUESTS` user group, and then drops the group\.
```
revoke all on all tables in schema public from group guests;
drop group guests;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_GROUP.md
MONTHS\_BETWEEN determines the number of months between two dates\.
If the first date is later than the second date, the result is positive; otherwise, the result is negative\.
If either argument is null, the result is NULL\.
```
MONTHS_BETWEEN ( date1, date2 )
```
*date1*
An expression, such as a column name, that evaluates to a valid date or time stamp value\.
*date2*
An expression, such as a column name, that evaluates to a valid date or time stamp value\.
FLOAT8
The whole number portion of the result is based on the difference between the year and month values of the dates\. The fractional portion of the result is calculated from the day and timestamp values of the dates and assumes a 31\-day month\.
If *date1* and *date2* both contain the same date within a month \(for example, 1/15/14 and 2/15/14\) or the last day of the month \(for example, 8/31/14 and 9/30/14\), then the result is a whole number based on the year and month values of the dates, regardless of whether the timestamp portion matches, if present\.
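For example, because the following two dates fall on the same day of the month, the result is the whole number 1\. This is a sketch consistent with the rule above, not one of the original documentation examples\.
```
select months_between('2014-02-15', '2014-01-15') as months;

months
--------
1
```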
The following example returns the months between 1/18/1969 and 3/18/1969:
```
select months_between('1969-01-18', '1969-03-18')
as months;
months
----------
-2
```
The following example returns the months between the first and last showings of an event:
```
select eventname,
min(starttime) as first_show,
max(starttime) as last_show,
months_between(max(starttime),min(starttime)) as month_diff
from event
group by eventname
order by eventname
limit 5;
eventname first_show last_show month_diff
---------------------------------------------------------------------------
.38 Special 2008-01-21 19:30:00.0 2008-12-25 15:00:00.0 11.12
3 Doors Down 2008-01-03 15:00:00.0 2008-12-01 19:30:00.0 10.94
70s Soul Jam 2008-01-16 19:30:00.0 2008-12-07 14:00:00.0 10.7
A Bronx Tale 2008-01-21 19:00:00.0 2008-12-15 15:00:00.0 10.8
A Catered Affair 2008-01-08 19:30:00.0 2008-12-19 19:00:00.0 11.35
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MONTHS_BETWEEN_function.md
**Topics**
+ [Syntax](#r_INSERT_30-synopsis)
+ [Parameters](#r_INSERT_30-parameters)
+ [Usage notes](#r_INSERT_30_usage_notes)
+ [INSERT examples](c_Examples_of_INSERT_30.md)
Inserts new rows into a table\. You can insert a single row with the VALUES syntax, multiple rows with the VALUES syntax, or one or more rows defined by the results of a query \(INSERT INTO\.\.\.SELECT\)\.
**Note**
We strongly encourage you to use the [COPY](r_COPY.md) command to load large amounts of data\. Using individual INSERT statements to populate a table might be prohibitively slow\. Alternatively, if your data already exists in other Amazon Redshift database tables, use INSERT INTO SELECT or [CREATE TABLE AS](r_CREATE_TABLE_AS.md) to improve performance\. For more information about using the COPY command to load tables, see [Loading data](t_Loading_data.md)\.
**Note**
The maximum size for a single SQL statement is 16 MB\.
```
INSERT INTO table_name [ ( column [, ...] ) ]
{DEFAULT VALUES |
VALUES ( { expression | DEFAULT } [, ...] )
[, ( { expression | DEFAULT } [, ...] )
[, ...] ] |
query }
```
*table\_name*
A temporary or persistent table\. Only the owner of the table or a user with INSERT privilege on the table can insert rows\. If you use the *query* clause to insert rows, you must have SELECT privilege on the tables named in the query\.
Use INSERT \(external table\) to insert results of a SELECT query into existing tables on external catalog\. For more information, see [INSERT \(external table\)](r_INSERT_external_table.md)\.
*column*
You can insert values into one or more columns of the table\. You can list the target column names in any order\. If you don't specify a column list, the values to be inserted must correspond to the table columns in the order in which they were declared in the CREATE TABLE statement\. If the number of values to be inserted is less than the number of columns in the table, the first *n* columns are loaded\.
Either the declared default value or a null value is loaded into any column that isn't listed \(implicitly or explicitly\) in the INSERT statement\.
DEFAULT VALUES
If the columns in the table were assigned default values when the table was created, use these keywords to insert a row that consists entirely of default values\. If any of the columns don't have default values, nulls are inserted into those columns\. If any of the columns are declared NOT NULL, the INSERT statement returns an error\.
VALUES
Use this keyword to insert one or more rows, each row consisting of one or more values\. The VALUES list for each row must align with the column list\. To insert multiple rows, use a comma delimiter between each list of expressions\. Do not repeat the VALUES keyword\. All VALUES lists for a multiple\-row INSERT statement must contain the same number of values\.
*expression*
A single value or an expression that evaluates to a single value\. Each value must be compatible with the data type of the column where it is being inserted\. If possible, a value whose data type doesn't match the column's declared data type is automatically converted to a compatible data type\. For example:
+ A decimal value `1.1` is inserted into an INT column as `1`\.
+ A decimal value `100.8976` is inserted into a DEC\(5,2\) column as `100.90`\.
You can explicitly convert a value to a compatible data type by including type cast syntax in the expression\. For example, if column COL1 in table T1 is a CHAR\(3\) column:
```
insert into t1(col1) values('Incomplete'::char(3));
```
This statement inserts the value `Inc` into the column\.
For a single\-row INSERT VALUES statement, you can use a scalar subquery as an expression\. The result of the subquery is inserted into the appropriate column\.
Subqueries aren't supported as expressions for multiple\-row INSERT VALUES statements\.
DEFAULT
Use this keyword to insert the default value for a column, as defined when the table was created\. If no default value exists for a column, a null is inserted\. You can't insert a default value into a column that has a NOT NULL constraint if that column doesn't have an explicit default value assigned to it in the CREATE TABLE statement\.
*query*
Insert one or more rows into the table by defining any query\. All of the rows that the query produces are inserted into the table\. The query must return a column list that is compatible with the columns in the table, but the column names don't have to match\.
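The following sketch ties these parameters together using a hypothetical staging table; the table and column names are illustrative only\.
```
create table sales_stage (salesid int, qty int default 0, note varchar(20));

-- Multi-row VALUES with a column list; NOTE is unlisted, so it receives null.
insert into sales_stage (salesid, qty)
values (1, 10), (2, default), (3, 30);

-- Insert the result of a query; the column names need not match, only the types.
insert into sales_stage
select salesid, qty + 1, 'copied' from sales_stage;
```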
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INSERT_30.md
|
b8776ef3c51e-0
|
**Note**
We strongly encourage you to use the [COPY](r_COPY.md) command to load large amounts of data\. Using individual INSERT statements to populate a table might be prohibitively slow\. Alternatively, if your data already exists in other Amazon Redshift database tables, use INSERT INTO SELECT or [CREATE TABLE AS](r_CREATE_TABLE_AS.md) to improve performance\. For more information about using the COPY command to load tables, see [Loading data](t_Loading_data.md)\.
The data format for the inserted values must match the data format specified by the CREATE TABLE definition\.
After inserting a large number of new rows into a table:
+ Vacuum the table to reclaim storage space and re\-sort rows\.
+ Analyze the table to update statistics for the query planner\.
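The two maintenance steps above can be run directly; the staging table name is illustrative:
```
vacuum category_stage;
analyze category_stage;
```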
When values are inserted into DECIMAL columns and they exceed the specified scale, the loaded values are rounded up as appropriate\. For example, when a value of `20.259` is inserted into a DECIMAL\(8,2\) column, the value that is stored is `20.26`\.
You can insert into a GENERATED BY DEFAULT AS IDENTITY column\. You can update columns defined as GENERATED BY DEFAULT AS IDENTITY with values that you supply\. For more information, see [GENERATED BY DEFAULT AS IDENTITY](r_CREATE_TABLE_NEW.md#identity-generated-bydefault-clause)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INSERT_30.md
|
1762811ba06f-0
|
Most of the examples in this guide use the TICKIT sample database\. If you want to follow the examples using your SQL query tool, you will need to load the sample data for the TICKIT database\.
The sample data for this tutorial is provided in Amazon S3 buckets that give read access to all authenticated AWS users, so any valid AWS credentials that permit access to Amazon S3 will work\.
To load the sample data for the TICKIT database, you will first create the tables, then use the COPY command to load the tables with sample data that is stored in an Amazon S3 bucket\. For steps to create tables and load sample data, see [Amazon Redshift getting started guide](https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-dev-t-load-sample-data.md
|
6c93c83b1708-0
|
ST\_GeomFromText constructs a geometry object from a well\-known text \(WKT\) representation of an input geometry\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeomFromText-function.md
|
2b8632587212-0
|
```
ST_GeomFromText(wkt_string)
```
```
ST_GeomFromText(wkt_string, srid)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeomFromText-function.md
|
1eb7ec05ba63-0
|
*wkt\_string*
A value of data type `VARCHAR` that is a WKT representation of a geometry\.
*srid*
A value of data type `INTEGER` that is a spatial reference identifier \(SRID\)\. If an SRID value is provided, the returned geometry has this SRID value\. Otherwise, the SRID value of the returned geometry is set to zero \(0\)\.
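You can confirm the default SRID of zero with the ST\_SRID function, which returns a geometry's spatial reference identifier\. Per the parameter description above, the single\-argument form produces a geometry with SRID 0:
```
SELECT ST_SRID(ST_GeomFromText('POINT(1 2)'));
```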
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeomFromText-function.md
|
bf8ab66a9993-0
|
`GEOMETRY`
If *wkt\_string* or *srid* is null, then null is returned\.
If *srid* is negative, then null is returned\.
If *wkt\_string* is not valid, then an error is returned\.
If *srid* is not valid, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeomFromText-function.md
|
7c97aeac5a62-0
|
The following SQL constructs a geometry object from the WKT representation and SRID value\.
```
SELECT ST_GeomFromText('POLYGON((0 0,0 1,1 1,1 0,0 0))',4326);
```
```
st_geomfromtext
--------------------------------
0103000020E61000000100000005000000000000000000000000000000000000000000000000000000000000000000F03F000000000000F03F000000000000F03F000000000000F03F000000000000000000000000000000000000000000000000
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeomFromText-function.md
|
6bbc92b84c38-0
|
**Topics**
+ [Syntax](#r_CREATE_TABLE_AS-synopsis)
+ [Parameters](#r_CREATE_TABLE_AS-parameters)
+ [CTAS usage notes](r_CTAS_usage_notes.md)
+ [CTAS examples](r_CTAS_examples.md)
Creates a new table based on a query\. The owner of this table is the user that issues the command\.
The new table is loaded with data defined by the query in the command\. The table columns have names and data types associated with the output columns of the query\. The CREATE TABLE AS \(CTAS\) command creates a new table and evaluates the query to load the new table\.
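A minimal CTAS sketch follows; the source table is assumed to exist, and the new table inherits column names and data types from the query output:
```
create table event_backup as
select * from event;
```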
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_AS.md
|
73befbe34b94-0
|
```
CREATE [ [LOCAL ] { TEMPORARY | TEMP } ]
TABLE table_name
[ ( column_name [, ... ] ) ]
[ BACKUP { YES | NO } ]
[ table_attributes ]
AS query
where table_attributes are:
[ DISTSTYLE { EVEN | ALL | KEY } ]
[ DISTKEY( distkey_identifier ) ]
[ [ COMPOUND | INTERLEAVED ] SORTKEY( column_name [, ...] ) ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_AS.md
|
fdd312fde72d-0
|
LOCAL
Although this optional keyword is accepted in the statement, it has no effect in Amazon Redshift\.
TEMPORARY \| TEMP
Creates a temporary table\. A temporary table is automatically dropped at the end of the session in which it was created\.
*table\_name*
The name of the table to be created\.
If you specify a table name that begins with '\#', the table is created as a temporary table\. For example:
```
create table #newtable (id) as select * from oldtable;
```
The maximum table name length is 127 bytes; longer names are truncated to 127 bytes\. Amazon Redshift enforces a quota on the number of tables per cluster, which varies by node type\. The table name can be qualified with the database and schema name, as the following example shows\.
```
create table tickit.public.test (c1) as select * from oldtable;
```
In this example, `tickit` is the database name and `public` is the schema name\. If the database or schema doesn't exist, the statement returns an error\.
If a schema name is given, the new table is created in that schema \(assuming the creator has access to the schema\)\. The table name must be a unique name for that schema\. If no schema is specified, the table is created using the current database schema\. If you are creating a temporary table, you can't specify a schema name, since temporary tables exist in a special schema\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_AS.md
|
fdd312fde72d-1
|
Multiple temporary tables with the same name are allowed to exist at the same time in the same database if they are created in separate sessions\. These tables are assigned to different schemas\.
*column\_name*
The name of a column in the new table\. If no column names are provided, the column names are taken from the output column names of the query\. Default column names are used for expressions\.
BACKUP \{ YES \| NO \}
A clause that specifies whether the table should be included in automated and manual cluster snapshots\. For tables, such as staging tables, that won't contain critical data, specify BACKUP NO to save processing time when creating snapshots and restoring from snapshots and to reduce storage space on Amazon Simple Storage Service\. The BACKUP NO setting has no effect on automatic replication of data to other nodes within the cluster, so tables with BACKUP NO specified are restored in the event of a node failure\. The default is BACKUP YES\.
DISTSTYLE \{ EVEN \| KEY \| ALL \}
Defines the data distribution style for the whole table\. Amazon Redshift distributes the rows of a table to the compute nodes according to the distribution style specified for the table\.
The distribution style that you select for tables affects the overall performance of your database\. For more information, see [Choosing a data distribution style](t_Distributing_data.md)\.
+ EVEN: The data in the table is spread evenly across the nodes in a cluster in a round\-robin distribution\. Row IDs are used to determine the distribution, and roughly the same number of rows are distributed to each node\. This is the default distribution method\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_AS.md
|
fdd312fde72d-2
|
+ KEY: The data is distributed by the values in the DISTKEY column\. When you set the joining columns of joining tables as distribution keys, the joining rows from both tables are collocated on the compute nodes\. When data is collocated, the optimizer can perform joins more efficiently\. If you specify DISTSTYLE KEY, you must name a DISTKEY column\.
+ ALL: A copy of the entire table is distributed to every node\. This distribution style ensures that all the rows required for any join are available on every node, but it multiplies storage requirements and increases the load and maintenance times for the table\. ALL distribution can improve execution time when used with certain dimension tables where KEY distribution isn't appropriate, but performance improvements must be weighed against maintenance costs\.
DISTKEY \(*column*\)
Specifies a column name or positional number for the distribution key\. Use the name specified in either the optional column list for the table or the select list of the query\. Alternatively, use a positional number, where the first column selected is 1, the second is 2, and so on\. Only one column in a table can be the distribution key:
+ If you declare a column as the DISTKEY column, DISTSTYLE must be set to KEY or not set at all\.
+ If you don't declare a DISTKEY column, you can set DISTSTYLE to EVEN\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_AS.md
|
fdd312fde72d-3
|
+ If you don't specify DISTKEY or DISTSTYLE, CTAS determines the distribution style for the new table based on the query plan for the SELECT clause\. For more information, see [Inheritance of column and table attributes](r_CTAS_usage_notes.md#r_CTAS_usage_notes-inheritance-of-column-and-table-attributes)\.
You can define the same column as the distribution key and the sort key; this approach tends to accelerate joins when the column in question is a joining column in the query\.
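For example, the following CTAS statement uses the same column as both the distribution key and the sort key\. The SALES table and LISTID joining column are assumptions based on the TICKIT sample schema:
```
create table sales_by_list
distkey (listid)
sortkey (listid)
as
select * from sales;
```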
\[ COMPOUND \| INTERLEAVED \] SORTKEY \( *column\_name* \[, \.\.\. \] \)
Specifies one or more sort keys for the table\. When data is loaded into the table, the data is sorted by the columns that are designated as sort keys\.
You can optionally specify COMPOUND or INTERLEAVED sort style\. The default is COMPOUND\. For more information, see [Choosing sort keys](t_Sorting_data.md)\.
You can define a maximum of 400 COMPOUND SORTKEY columns or 8 INTERLEAVED SORTKEY columns per table\.
If you don't specify SORTKEY, CTAS determines the sort keys for the new table based on the query plan for the SELECT clause\. For more information, see [Inheritance of column and table attributes](r_CTAS_usage_notes.md#r_CTAS_usage_notes-inheritance-of-column-and-table-attributes)\.
COMPOUND
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_AS.md
|
fdd312fde72d-4
|
Specifies that the data is sorted using a compound key made up of all of the listed columns, in the order they are listed\. A compound sort key is most useful when a query scans rows according to the order of the sort columns\. The performance benefits of sorting with a compound key decrease when queries rely on secondary sort columns\. You can define a maximum of 400 COMPOUND SORTKEY columns per table\.
INTERLEAVED
Specifies that the data is sorted using an interleaved sort key\. A maximum of eight columns can be specified for an interleaved sort key\.
An interleaved sort gives equal weight to each column, or subset of columns, in the sort key, so queries don't depend on the order of the columns in the sort key\. When a query uses one or more secondary sort columns, interleaved sorting significantly improves query performance\. Interleaved sorting carries a small overhead cost for data loading and vacuuming operations\.
AS *query*
Any query \(SELECT statement\) that Amazon Redshift supports\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_AS.md
|
c1328ea50530-0
|
The SVL\_QUERY\_METRICS view shows the metrics for completed queries\. This view is derived from the [STL\_QUERY\_METRICS](r_STL_QUERY_METRICS.md) system table\. Use the values in this view as an aid to determine threshold values for defining query monitoring rules\. For more information, see [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md)\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
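For example, to inspect the metrics recorded for a single completed query, filter on its query ID \(the ID here is a placeholder\):
```
select *
from svl_query_metrics
where query = 12345;
```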
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_METRICS.md
|
871150bc1470-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_QUERY_METRICS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_METRICS.md
|
4a2ab845acb8-0
|
The following example sets the numRows table property for the SPECTRUM\.SALES external table to 170,000 rows\.
```
alter table spectrum.sales
set table properties ('numRows'='170000');
```
The following example changes the location for the SPECTRUM\.SALES external table\.
```
alter table spectrum.sales
set location 's3://awssampledbuswest2/tickit/spectrum/sales/';
```
The following example changes the format for the SPECTRUM\.SALES external table to Parquet\.
```
alter table spectrum.sales
set file format parquet;
```
The following example adds one partition for the table SPECTRUM\.SALES\_PART\.
```
alter table spectrum.sales_part
add if not exists partition(saledate='2008-01-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01/';
```
The following example adds three partitions for the table SPECTRUM\.SALES\_PART\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_external-table.md
|
4a2ab845acb8-1
|
```
alter table spectrum.sales_part add if not exists
partition(saledate='2008-01-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01/'
partition(saledate='2008-02-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02/'
partition(saledate='2008-03-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-03/';
```
The following example alters SPECTRUM\.SALES\_PART to drop the partition with `saledate='2008-01-01'`\.
```
alter table spectrum.sales_part
drop partition(saledate='2008-01-01');
```
The following example sets a new Amazon S3 path for the partition with `saledate='2008-01-01'`\.
```
alter table spectrum.sales_part
partition(saledate='2008-01-01')
set location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01-01/';
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_external-table.md
|
4a2ab845acb8-2
|
The following example changes the name of `sales_date` to `transaction_date`\.
```
alter table spectrum.sales rename column sales_date to transaction_date;
```
The following example sets the column mapping to position mapping for an external table that uses optimized row columnar \(ORC\) format\.
```
alter table spectrum.orc_example
set table properties('orc.schema.resolution'='position');
```
The following example sets the column mapping to name mapping for an external table that uses ORC format\.
```
alter table spectrum.orc_example
set table properties('orc.schema.resolution'='name');
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_external-table.md
|
edfe0958ca40-0
|
Following, you can find best practices for planning a proof of concept, designing tables, loading data into tables, and writing queries for Amazon Redshift, and also a discussion of working with Amazon Redshift Advisor\.
Amazon Redshift is not the same as other SQL database systems\. To fully realize the benefits of the Amazon Redshift architecture, you must specifically design, build, and load your tables to use massively parallel processing, columnar data storage, and columnar data compression\. If your data loading and query execution times are longer than you expect, or longer than you want, you might be overlooking key information\.
If you are an experienced SQL database developer, we strongly recommend that you review this topic before you begin developing your Amazon Redshift data warehouse\.
If you are new to developing SQL databases, this topic is not the best place to start\. We recommend that you begin by reading [Getting started using databases](c_intro_to_admin.md) and trying the examples yourself\.
In this topic, you can find an overview of the most important development principles, along with specific tips, examples, and best practices for implementing those principles\. No single practice can apply to every application\. You should evaluate all of your options before finalizing a database design\. For more information, see [Designing tables](t_Creating_tables.md), [Loading data](t_Loading_data.md), [Tuning query performance](c-optimizing-query-performance.md), and the reference chapters\.
**Topics**
+ [Conducting a proof of concept for Amazon Redshift](proof-of-concept-playbook.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/best-practices.md
|
edfe0958ca40-1
|
+ [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md)
+ [Amazon Redshift best practices for loading data](c_loading-data-best-practices.md)
+ [Amazon Redshift best practices for designing queries](c_designing-queries-best-practices.md)
+ [Working with recommendations from Amazon Redshift Advisor](advisor.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/best-practices.md
|
7619352fdddc-0
|
It's useful to know when the last ANALYZE command was run on a table or database\. When an ANALYZE command is run, Amazon Redshift executes multiple queries that look like this:
```
padb_fetch_sample: select * from table_name
```
Query STL\_ANALYZE to view the history of analyze operations\. If Amazon Redshift analyzes a table using automatic analyze, the `is_background` column is set to `t` \(true\)\. Otherwise, it is set to `f` \(false\)\. The following example joins STV\_TBL\_PERM to show the table name and execution details\.
```
select distinct a.xid, trim(t.name) as name, a.status, a.rows, a.modified_rows, a.starttime, a.endtime
from stl_analyze a
join stv_tbl_perm t on t.id=a.table_id
where name = 'users'
order by starttime;
xid | name | status | rows | modified_rows | starttime | endtime
-------+-------+-----------------+-------+---------------+---------------------+--------------------
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_check_last_analyze.md
|
7619352fdddc-1
|
1582 | users | Full | 49990 | 49990 | 2016-09-22 22:02:23 | 2016-09-22 22:02:28
244287 | users | Full | 24992 | 74988 | 2016-10-04 22:50:58 | 2016-10-04 22:51:01
244712 | users | Full | 49984 | 24992 | 2016-10-04 22:56:07 | 2016-10-04 22:56:07
245071 | users | Skipped | 49984 | 0 | 2016-10-04 22:58:17 | 2016-10-04 22:58:17
245439 | users | Skipped | 49984 | 1982 | 2016-10-04 23:00:13 | 2016-10-04 23:00:13
(5 rows)
```
Alternatively, you can run a more complex query that returns all the statements that ran in every completed transaction that included an ANALYZE command:
```
select xid, to_char(starttime, 'HH24:MM:SS.MS') as starttime,
datediff(sec,starttime,endtime ) as secs, substring(text, 1, 40)
from svl_statementtext
where sequence = 0
and xid in (select xid from svl_statementtext s where s.text like 'padb_fetch_sample%' )
order by xid desc, starttime;
xid | starttime | secs | substring
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_check_last_analyze.md
|
7619352fdddc-2
|
-----+--------------+------+------------------------------------------
1338 | 12:04:28.511 | 4 | Analyze date
1338 | 12:04:28.511 | 1 | padb_fetch_sample: select count(*) from
1338 | 12:04:29.443 | 2 | padb_fetch_sample: select * from date
1338 | 12:04:31.456 | 1 | padb_fetch_sample: select * from date
1337 | 12:04:24.388 | 1 | padb_fetch_sample: select count(*) from
1337 | 12:04:24.388 | 4 | Analyze sales
1337 | 12:04:25.322 | 2 | padb_fetch_sample: select * from sales
1337 | 12:04:27.363 | 1 | padb_fetch_sample: select * from sales
...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_check_last_analyze.md
|
71c260f7f222-0
|
Amazon Redshift uses queries based on structured query language \(SQL\) to interact with data and objects in the system\. Data manipulation language \(DML\) is the subset of SQL that you use to view, add, change, and delete data\. Data definition language \(DDL\) is the subset of SQL that you use to add, change, and delete database objects such as tables and views\.
Once your system is set up, you typically work with DML the most, especially the [SELECT](r_SELECT_synopsis.md) command for retrieving and viewing data\. To write effective data retrieval queries in Amazon Redshift, become familiar with SELECT and apply the tips outlined in [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md) to maximize query efficiency\.
To understand how Amazon Redshift processes queries, use the [Query processing](c-query-processing.md) and [Analyzing and improving queries](c-query-tuning.md) sections\. Then you can apply this information in combination with diagnostic tools to identify and eliminate issues in query performance\.
To identify and address some of the most common and most serious issues you are likely to encounter with Amazon Redshift queries, use the [Troubleshooting queries](queries-troubleshooting.md) section\.
**Topics**
+ [Query processing](c-query-processing.md)
+ [Analyzing and improving queries](c-query-tuning.md)
+ [Troubleshooting queries](queries-troubleshooting.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-optimizing-query-performance.md
|
e52f6eddec13-0
|
Records details for the following changes to a database user:
+ Create user
+ Drop user
+ Alter user \(rename\)
+ Alter user \(alter properties\)
This view is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_USERLOG.md
|
dd70fdb55303-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_USERLOG.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_USERLOG.md
|
798c88cd6797-0
|
The following example performs four user actions, then queries the STL\_USERLOG view\.
```
create user userlog1 password 'Userlog1';
alter user userlog1 createdb createuser;
alter user userlog1 rename to userlog2;
drop user userlog2;
select userid, username, oldusername, action, usecreatedb, usesuper from stl_userlog order by recordtime desc;
```
```
userid | username | oldusername | action | usecreatedb | usesuper
--------+-----------+-------------+---------+-------------+----------
108 | userlog2 | | drop | 1 | 1
108 | userlog2 | userlog1 | rename | 1 | 1
108 | userlog1 | | alter | 1 | 1
108 | userlog1 | | create | 0 | 0
(4 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_USERLOG.md
|
21b2815f6f14-0
|
Creates a new external table in the specified schema\. All external tables must be created in an external schema\. Search path isn't supported for external schemas and external tables\. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\.
To create external tables, you must be the owner of the external schema or a superuser\. To transfer ownership of an external schema, use ALTER SCHEMA to change the owner\. Access to external tables is controlled by access to the external schema\. You can't [GRANT](r_GRANT.md) or [REVOKE](r_REVOKE.md) permissions on an external table\. Instead, grant or revoke USAGE on the external schema\.
In addition to external tables created using the CREATE EXTERNAL TABLE command, Amazon Redshift can reference external tables defined in an AWS Glue or AWS Lake Formation catalog or an Apache Hive metastore\. Use the [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md) command to register an external database defined in the external catalog and make the external tables available for use in Amazon Redshift\. If the external table exists in an AWS Glue or AWS Lake Formation catalog or Hive metastore, you don't need to create the table using CREATE EXTERNAL TABLE\. To view external tables, query the [SVV\_EXTERNAL\_TABLES](r_SVV_EXTERNAL_TABLES.md) system view\.
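As a sketch, the following statement registers an external database from an AWS Glue Data Catalog; the database name and IAM role ARN are placeholders you would replace with your own:
```
create external schema spectrum_schema
from data catalog
database 'spectrum_db'
iam_role 'arn:aws:iam::123456789012:role/MySpectrumRole'
create external database if not exists;
```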
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_TABLE.md
|
21b2815f6f14-1
|
By running the CREATE EXTERNAL TABLE AS command, you can create an external table based on the column definition from a query and write the results of that query into Amazon S3\. The results are in Apache Parquet or delimited text format\. If the external table has a partition key or keys, Amazon Redshift partitions new files according to those partition keys and registers new partitions into the external catalog automatically\. For more information about CREATE EXTERNAL TABLE AS, see [Usage notes](#r_CREATE_EXTERNAL_TABLE_usage)\.
You can query an external table using the same SELECT syntax you use with other Amazon Redshift tables\. You can also use the INSERT syntax to write new files into the location of external table on Amazon S3\. For more information, see [INSERT \(external table\)](r_INSERT_external_table.md)\.
To create a view with an external table, include the WITH NO SCHEMA BINDING clause in the [CREATE VIEW](r_CREATE_VIEW.md) statement\.
You can't run CREATE EXTERNAL TABLE inside a transaction \(BEGIN … END\)\. For more information about transactions, see [Serializable isolation](c_serial_isolation.md)\.
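For example, the following late\-binding view references an external table; the table is assumed to exist in an external schema named `spectrum`:
```
create view sales_vw as
select * from spectrum.sales
with no schema binding;
```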
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_TABLE.md
|
1dbb685300f2-0
|
```
CREATE EXTERNAL TABLE
external_schema.table_name
(column_name data_type [, …] )
[ PARTITIONED BY (col_name data_type [, … ] )]
[ { ROW FORMAT DELIMITED row_format |
ROW FORMAT SERDE 'serde_name'
[ WITH SERDEPROPERTIES ( 'property_name' = 'property_value' [, ...] ) ] } ]
STORED AS file_format
LOCATION { 's3://bucket/folder/' | 's3://bucket/manifest_file' }
[ TABLE PROPERTIES ( 'property_name'='property_value' [, ...] ) ]
```
The following is the syntax for CREATE EXTERNAL TABLE AS\.
```
CREATE EXTERNAL TABLE
external_schema.table_name
[ PARTITIONED BY (col_name [, … ] ) ]
[ ROW FORMAT DELIMITED row_format ]
STORED AS file_format
LOCATION { 's3://bucket/folder/' }
[ TABLE PROPERTIES ( 'property_name'='property_value' [, ...] ) ]
AS
{ select_statement }
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_TABLE.md
|
f80835ba4775-0
|
*external\_schema\.table\_name*
The name of the table to be created, qualified by an external schema name\. External tables must be created in an external schema\. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\.
The maximum length for the table name is 127 bytes; longer names are truncated to 127 bytes\. You can use UTF\-8 multibyte characters up to a maximum of four bytes\. Amazon Redshift enforces a limit of 9,900 tables per cluster, including user\-defined temporary tables and temporary tables created by Amazon Redshift during query processing or system maintenance\. Optionally, you can qualify the table name with the database name\. In the following example, the database name is `spectrum_db` , the external schema name is `spectrum_schema`, and the table name is `test`\.
```
create external table spectrum_db.spectrum_schema.test (c1 int)
stored as parquet
location 's3://mybucket/myfolder/';
```
If the database or schema specified doesn't exist, the table isn't created, and the statement returns an error\. You can't create tables or views in the system databases `template0`, `template1`, and `padb_harvest`\.
The table name must be a unique name for the specified schema\.
For more information about valid names, see [Names and identifiers](r_names.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_TABLE.md
|
f80835ba4775-1
|
\( *column\_name* *data\_type* \)
The name and data type of each column being created\.
The maximum length for the column name is 127 bytes; longer names are truncated to 127 bytes\. You can use UTF\-8 multibyte characters up to a maximum of four bytes\. You can't specify column names `"$path"` or `"$size"`\. For more information about valid names, see [Names and identifiers](r_names.md)\.
By default, Amazon Redshift creates external tables with the pseudocolumns `$path` and `$size`\. You can disable creation of pseudocolumns for a session by setting the `spectrum_enable_pseudo_columns` configuration parameter to `false`\. For more information, see [Pseudocolumns ](#r_CREATE_EXTERNAL_TABLE_usage-pseudocolumns)\.
If pseudocolumns are enabled, the maximum number of columns you can define in a single table is 1,598\. If pseudocolumns aren't enabled, the maximum number of columns you can define in a single table is 1,600\.
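When pseudocolumns are enabled, you can query them directly to see the Amazon S3 location and size of the file backing each row\. The table name here is an assumption carried over from the earlier examples:
```
select "$path", "$size"
from spectrum.sales
limit 5;
```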
If you are creating a "wide table," make sure that your list of columns doesn't exceed row\-width boundaries for intermediate results during loads and query processing\. For more information, see [Usage notes](r_CREATE_TABLE_usage.md)\.
For a CREATE EXTERNAL TABLE AS command, a column list is not required, because columns are derived from the query\.
*data\_type*
The following [Data types](c_Supported_data_types.md) are supported:
+ SMALLINT \(INT2\)
+ INTEGER \(INT, INT4\)
+ BIGINT \(INT8\)
+ DECIMAL \(NUMERIC\)
+ REAL \(FLOAT4\)
+ DOUBLE PRECISION \(FLOAT8\)
+ BOOLEAN \(BOOL\)
+ CHAR \(CHARACTER\)
+ VARCHAR \(CHARACTER VARYING\)
+ DATE \(DATE data type can be used only with text, Parquet, or ORC data files, or as a partition column\)
+ TIMESTAMP
Timestamp values in text files must be in the format `yyyy-MM-dd HH:mm:ss.SSSSSS`, as the following timestamp value shows: `2017-05-01 11:30:59.000000`\.
The length of a VARCHAR column is defined in bytes, not characters\. For example, a VARCHAR\(12\) column can contain 12 single\-byte characters or 6 two\-byte characters\. When you query an external table, results are truncated to fit the defined column size without returning an error\. For more information, see [Storage and ranges](r_Character_types.md#r_Character_types-storage-and-ranges)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_TABLE.md
|
f80835ba4775-3
|
For best performance, we recommend specifying the smallest column size that fits your data\. To find the maximum size in bytes for values in a column, use the [OCTET\_LENGTH](r_OCTET_LENGTH.md) function\. The following example returns the maximum size of values in the email column\.
```
select max(octet_length(email)) from users;
max
---
62
```
PARTITIONED BY \(*col\_name* *data\_type* \[, … \] \)
A clause that defines a partitioned table with one or more partition columns\. A separate data directory is used for each specified combination of partition values, which can improve query performance in some circumstances\. Partition columns don't exist within the table data itself\. If you use a value for *col\_name* that is the same as a table column, you get an error\.
After creating a partitioned table, alter the table using an [ALTER TABLE](r_ALTER_TABLE.md) … ADD PARTITION statement to register new partitions to the external catalog\. When you add a partition, you define the location of the subfolder on Amazon S3 that contains the partition data\.
For example, if the table `spectrum.lineitem_part` is defined with `PARTITIONED BY (l_shipdate date)`, run the following ALTER TABLE command to add a partition\.
```
ALTER TABLE spectrum.lineitem_part ADD PARTITION (l_shipdate='1992-01-29')
LOCATION 's3://spectrum-public/lineitem_partition/l_shipdate=1992-01-29';
```
If you are using CREATE EXTERNAL TABLE AS, you don't need to run ALTER TABLE … ADD PARTITION\. Amazon Redshift automatically registers new partitions in the external catalog\. Amazon Redshift also automatically writes corresponding data to partitions in Amazon S3 based on the partition key or keys defined in the table\.
To view partitions, query the [SVV\_EXTERNAL\_PARTITIONS](r_SVV_EXTERNAL_PARTITIONS.md) system view\.
For a CREATE EXTERNAL TABLE AS command, you don't need to specify the data type of the partition column because this column is derived from the query\.
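For example, a query along the following lines lists the registered partitions and their Amazon S3 locations for a table named `lineitem_part`\. \(This is a sketch; the table name is a placeholder\.\)

```
select schemaname, tablename, values, location
from svv_external_partitions
where tablename = 'lineitem_part';
```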
ROW FORMAT DELIMITED *rowformat*
A clause that specifies the format of the underlying data\. Possible values for *rowformat* are as follows:
+ LINES TERMINATED BY '*delimiter*'
+ FIELDS TERMINATED BY '*delimiter*'
Specify a single ASCII character for '*delimiter*'\. You can specify non\-printing ASCII characters using octal, in the format `'\`*`ddd`*`'` where *`d`* is an octal digit \(0–7\), up to `'\177'`\. The following example specifies the BEL \(bell\) character using octal\.
```
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\007'
```
If ROW FORMAT is omitted, the default format is DELIMITED FIELDS TERMINATED BY '\\A' \(start of heading\) and LINES TERMINATED BY '\\n' \(newline\)\.
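Combining these clauses, a minimal pipe\-delimited table definition might look like the following sketch\. \(The column list, bucket, and folder are placeholders, not objects from this reference\.\)

```
create external table spectrum.sales_csv(
salesid integer,
pricepaid decimal(8,2))
row format delimited
fields terminated by '|'
stored as textfile
location 's3://mybucket/sales/';
```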
ROW FORMAT SERDE '*serde\_name*' \[WITH SERDEPROPERTIES \( '*property\_name*' = '*property\_value*' \[, \.\.\.\] \) \]
A clause that specifies the SERDE format for the underlying data\.
'*serde\_name*'
The name of the SerDe\. The following are supported:
+ org\.apache\.hadoop\.hive\.serde2\.RegexSerDe
+ com\.amazonaws\.glue\.serde\.GrokSerDe
+ org\.apache\.hadoop\.hive\.serde2\.OpenCSVSerde
+ org\.openx\.data\.jsonserde\.JsonSerDe
+ The JSON SERDE also supports Ion files\.
+ The JSON must be well\-formed\.
+ Timestamps in Ion and JSON must use ISO8601 format\.
+ The following SerDe property is supported for the JsonSerDe:
```
'strip.outer.array'='true'
```
Processes Ion/JSON files containing one very large array enclosed in outer brackets \( \[ … \] \) as if it contains multiple JSON records within the array\.
WITH SERDEPROPERTIES \( '*property\_name*' = '*property\_value*' \[, \.\.\.\] \) \]
Optionally, specify property names and values, separated by commas\.
If ROW FORMAT is omitted, the default format is DELIMITED FIELDS TERMINATED BY '\\A' \(start of heading\) and LINES TERMINATED BY '\\n' \(newline\)\.
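For example, a table definition for JSON data that uses the JsonSerDe with the `strip.outer.array` property might look like the following sketch\. \(Object names and the Amazon S3 path are placeholders\.\)

```
create external table spectrum.cleanevents(
id bigint,
eventname varchar(200))
row format serde 'org.openx.data.jsonserde.JsonSerDe'
with serdeproperties ('strip.outer.array'='true')
stored as textfile
location 's3://mybucket/json/cleanevents/';
```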
STORED AS *file\_format*
The file format for data files\.
Valid formats are as follows:
+ PARQUET
+ RCFILE \(for data using ColumnarSerDe only, not LazyBinaryColumnarSerDe\)
+ SEQUENCEFILE
+ TEXTFILE
+ ORC
+ AVRO
+ INPUTFORMAT '*input\_format\_classname*' OUTPUTFORMAT '*output\_format\_classname*'
The CREATE EXTERNAL TABLE AS command only supports two file formats, TEXTFILE and PARQUET\.
For INPUTFORMAT and OUTPUTFORMAT, specify a class name, as the following example shows\.
```
'org.apache.hadoop.mapred.TextInputFormat'
```
LOCATION \{ 's3://*bucket/folder*/' \| 's3://*bucket/manifest\_file*'\} <a name="create-external-table-location"></a>
The path to the Amazon S3 bucket or folder that contains the data files or a manifest file that contains a list of Amazon S3 object paths\. The buckets must be in the same AWS Region as the Amazon Redshift cluster\. For a list of supported AWS Regions, see [Amazon Redshift Spectrum considerations](c-using-spectrum.md#c-spectrum-considerations)\.
If the path specifies a bucket or folder, for example `'s3://mybucket/custdata/'`, Redshift Spectrum scans the files in the specified bucket or folder and any subfolders\. Redshift Spectrum ignores hidden files and files that begin with a period or underscore\.
If the path specifies a manifest file, the `'s3://bucket/manifest_file'` argument must explicitly reference a single file—for example, `'s3://mybucket/manifest.txt'`\. It can't reference a key prefix\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_TABLE.md
|
f80835ba4775-8
|
The manifest is a text file in JSON format that lists the URL of each file that is to be loaded from Amazon S3 and the size of the file, in bytes\. The URL includes the bucket name and full object path for the file\. The files that are specified in the manifest can be in different buckets, but all the buckets must be in the same AWS Region as the Amazon Redshift cluster\. If a file is listed twice, the file is loaded twice\. The following example shows the JSON for a manifest that loads three files\.
```
{
"entries": [
{"url":"s3://mybucket-alpha/custdata.1", "meta": { "content_length": 5956875 } },
{"url":"s3://mybucket-alpha/custdata.2", "meta": { "content_length": 5997091 } },
{"url":"s3://mybucket-beta/custdata.1", "meta": { "content_length": 5978675 } }
]
}
```
You can make the inclusion of a particular file mandatory\. To do this, include a `mandatory` option at the file level in the manifest\. When you query an external table with a mandatory file that is missing, the SELECT statement fails\. Ensure that all files included in the definition of the external table are present\. If they aren't all present, an error appears showing the first mandatory file that isn't found\. The following example shows the JSON for a manifest with the `mandatory` option set to `true`\.
```
{
"entries": [
{"url":"s3://mybucket-alpha/custdata.1", "mandatory":true, "meta": { "content_length": 5956875 } },
{"url":"s3://mybucket-alpha/custdata.2", "mandatory":false, "meta": { "content_length": 5997091 } },
{"url":"s3://mybucket-beta/custdata.1", "meta": { "content_length": 5978675 } }
]
}
```
To reference files created using UNLOAD, you can use the manifest created using [UNLOAD](r_UNLOAD.md) with the MANIFEST parameter\. The manifest file is compatible with a manifest file for [COPY from Amazon S3](copy-parameters-data-source-s3.md), but uses different keys\. Keys that aren't used are ignored\.
TABLE PROPERTIES \( '*property\_name*'='*property\_value*' \[, \.\.\.\] \)
A clause that sets the table definition for table properties\.
Table properties are case\-sensitive\.
'compression\_type'='*value*'
A property that sets the type of compression to use if the file name doesn't contain an extension\. If you set this property and there is a file extension, the extension is ignored and the value set by the property is used\. Valid values for compression type are as follows:
+ bzip2
+ gzip
+ none
+ snappy
'numRows'='*row\_count*'
A property that sets the numRows value for the table definition\. To explicitly update an external table's statistics, set the numRows property to indicate the size of the table\. Amazon Redshift doesn't analyze external tables to generate the table statistics that the query optimizer uses to generate a query plan\. If table statistics aren't set for an external table, Amazon Redshift generates a query execution plan based on an assumption that external tables are the larger tables and local tables are the smaller tables\.
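For example, to record that an external table holds roughly 170,000 rows, you might set the property with ALTER TABLE, as in the following sketch\. \(The table name and row count are placeholders\.\)

```
alter table spectrum.sales
set table properties ('numRows'='170000');
```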
'skip\.header\.line\.count'='*line\_count*'
A property that sets the number of rows to skip at the beginning of each source file\.
'serialization\.null\.format'=' '
A property that specifies that Redshift Spectrum should return a `NULL` value when there is an exact match with the text supplied in a field\.
'orc\.schema\.resolution'='mapping\_type'
A property that sets the column mapping type for tables that use ORC data format\. This property is ignored for other data formats\.
Valid values for column mapping type are as follows:
+ name
+ position
If the *orc\.schema\.resolution* property is omitted, columns are mapped by name by default\. If *orc\.schema\.resolution* is set to any value other than *'name'* or *'position'*, columns are mapped by position\. For more information about column mapping, see [Mapping external table columns to ORC columns](c-spectrum-external-tables.md#c-spectrum-column-mapping-orc)\.
The COPY command maps to ORC data files only by position\. The *orc\.schema\.resolution* table property has no effect on COPY command behavior\.
'write\.parallel'='on / off'
A property that sets whether CREATE EXTERNAL TABLE AS should write data in parallel\. By default, CREATE EXTERNAL TABLE AS writes data in parallel to multiple files, according to the number of slices in the cluster\. The default option is on\. When 'write\.parallel' is set to off, CREATE EXTERNAL TABLE AS writes to one or more data files serially onto Amazon S3\. This table property also applies to any subsequent INSERT statement into the same external table\.
'write\.maxfilesize\.mb'='size'
A property that sets the maximum size \(in MB\) of each file written to Amazon S3 by CREATE EXTERNAL TABLE AS\. The size must be a valid integer between 5 and 6200\. The default maximum file size is 6,200 MB\. This table property also applies to any subsequent INSERT statement into the same external table\.
*select\_statement*
A statement that inserts one or more rows into the external table by defining any query\. All rows that the query produces are written to Amazon S3 in either text or Parquet format based on the table definition\.
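For example, a CREATE EXTERNAL TABLE AS statement along these lines writes the query result to Amazon S3 in Parquet format, partitioned by one of the selected columns\. \(All object names and the Amazon S3 path in this sketch are placeholders\.\)

```
create external table spectrum.partitioned_lineitem
partitioned by (l_shipmode)
stored as parquet
location 's3://mybucket/cetas/partitioned_lineitem/'
as select l_orderkey, l_quantity, l_shipmode
from local_lineitem;
```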
You can't view details for Amazon Redshift Spectrum tables using the same resources you use for standard Amazon Redshift tables, such as [PG\_TABLE\_DEF](r_PG_TABLE_DEF.md), [STV\_TBL\_PERM](r_STV_TBL_PERM.md), PG\_CLASS, or information\_schema\. If your business intelligence or analytics tool doesn't recognize Redshift Spectrum external tables, configure your application to query [SVV\_EXTERNAL\_TABLES](r_SVV_EXTERNAL_TABLES.md) and [SVV\_EXTERNAL\_COLUMNS](r_SVV_EXTERNAL_COLUMNS.md)\.
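For example, a query like the following lists the external tables visible to the current session, along with their Amazon S3 locations\.

```
select schemaname, tablename, location
from svv_external_tables;
```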
In some cases, you might run the CREATE EXTERNAL TABLE AS command on an AWS Glue Data Catalog, AWS Lake Formation external catalog, or Apache Hive metastore\. In such cases, you use an AWS Identity and Access Management \(IAM\) role to create the external schema\. This IAM role must have both read and write permissions on Amazon S3\.
If you use a Lake Formation catalog, the IAM role must have the permission to create tables in the catalog\. In this case, it must also have the data lake location permission on the target Amazon S3 path\. This IAM role becomes the owner of the new AWS Lake Formation table\.
To ensure that file names are unique, Amazon Redshift uses the following format for the name of each file uploaded to Amazon S3 by default\.
`<date>_<time>_<microseconds>_<query_id>_<slice-number>_part_<part-number>.<format>`\.
An example is `20200303_004509_810669_1007_0001_part_00.parquet`\.
Consider the following when running the CREATE EXTERNAL TABLE AS command:
+ The Amazon S3 location must be empty\.
+ Amazon Redshift only supports PARQUET and TEXTFILE formats when using the STORED AS clause\.
+ You don't need to define a column definition list\. Column names and column data types of the new external table are derived directly from the SELECT query\.
+ You don't need to define the data type of the partition column in the PARTITIONED BY clause\. If you specify a partition key, the name of this column must exist in the SELECT query result\. When there are multiple partition columns, their order in the SELECT query doesn't matter\. Amazon Redshift uses their order as defined in the PARTITIONED BY clause to create the external table\.
+ Amazon Redshift automatically partitions output files into partition folders based on the partition key values\. By default, Amazon Redshift removes partition columns from the output files\.
+ The LINES TERMINATED BY 'delimiter' clause isn't supported\.
+ The ROW FORMAT SERDE 'serde\_name' clause isn't supported\.
+ The use of manifest files isn't supported\. Thus, you can't define the LOCATION clause to a manifest file on Amazon S3\.
+ Amazon Redshift automatically updates the 'numRows' table property at the end of the command\.
+ The 'compression\_type' table property only accepts 'none' or 'snappy' for the PARQUET file format\.
+ Amazon Redshift doesn't allow the LIMIT clause in the outer SELECT query\. Instead, you can use a nested LIMIT clause\.
+ You can use STL\_UNLOAD\_LOG to track the files that are written to Amazon S3 by each CREATE EXTERNAL TABLE AS operation\.
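To illustrate the last two points, the following sketch places the LIMIT clause in a nested subquery rather than the outer SELECT, and then checks STL\_UNLOAD\_LOG for the files that were written\. \(Object names and the Amazon S3 path are placeholders; PG\_LAST\_QUERY\_ID returns the ID of the most recently completed query in the session\.\)

```
create external table spectrum.sales_sample
stored as parquet
location 's3://mybucket/cetas/sales_sample/'
as select * from (select salesid, pricepaid from sales limit 100);

select query, path, line_count
from stl_unload_log
where query = pg_last_query_id();
```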
To create external tables, make sure that you're the owner of the external schema or a superuser\. To transfer ownership of an external schema, use [ALTER SCHEMA](r_ALTER_SCHEMA.md)\. The following example changes the owner of the `spectrum_schema` schema to `newowner`\.
```
alter schema spectrum_schema owner to newowner;
```
To run a Redshift Spectrum query, you need the following permissions:
+ Usage permission on the schema
+ Permission to create temporary tables in the current database
The following example grants usage permission on the schema `spectrum_schema` to the `spectrumusers` user group\.
```
grant usage on schema spectrum_schema to group spectrumusers;
```
The following example grants temporary permission on the database `spectrumdb` to the `spectrumusers` user group\.
```
grant temp on database spectrumdb to group spectrumusers;
```