| id | text | source |
|---|---|---|
83a47dad7e10-2
|
+ JSON fields are case\-sensitive\.
+ White space between JSON structural elements \(such as `{ }, [ ]`\) is ignored\.
The Amazon Redshift JSON functions and the Amazon Redshift COPY command use the same methods to work with JSON\-formatted data\. For more information about working with JSON, see [COPY from JSON format](copy-usage_notes-copy-from-json.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/json-functions.md
|
e9629efc4ac3-0
|
A WITH clause is an optional clause that precedes the SELECT list in a query\. The WITH clause defines one or more subqueries\. Each subquery defines a temporary table, similar to a view definition\. These temporary tables can be referenced in the FROM clause and are used only during the execution of the query to which they belong\. Each subquery in the WITH clause specifies a table name, an optional list of column names, and a query expression that evaluates to a table \(a SELECT statement\)\.
WITH clause subqueries are an efficient way of defining tables that can be used throughout the execution of a single query\. In all cases, the same results can be achieved by using subqueries in the main body of the SELECT statement, but WITH clause subqueries may be simpler to write and read\. Where possible, WITH clause subqueries that are referenced multiple times are optimized as common subexpressions; that is, it may be possible to evaluate a WITH subquery once and reuse its results\. \(Note that common subexpressions aren't limited to those defined in the WITH clause\.\)
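For example, a minimal sketch \(using the SALES table from the TICKIT sample database that appears in later examples\) defines one temporary table and references it in the FROM clause:
```
with seller_totals as
(select sellerid, sum(pricepaid) as total_sales
from sales
group by sellerid)
select * from seller_totals where total_sales > 10000;
```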
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WITH_clause.md
|
2b24be67c347-0
|
```
[ WITH with_subquery [, ...] ]
```
where *with\_subquery* is:
```
with_subquery_table_name [ ( column_name [, ...] ) ] AS ( query )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WITH_clause.md
|
0990795ae4be-0
|
*with\_subquery\_table\_name*
A unique name for a temporary table that defines the results of a WITH clause subquery\. You can't use duplicate names within a single WITH clause\. Each subquery must be given a table name that can be referenced in the [FROM clause](r_FROM_clause30.md)\.
*column\_name*
An optional list of output column names for the WITH clause subquery, separated by commas\. The number of column names specified must be equal to or less than the number of columns defined by the subquery\.
*query*
Any SELECT query that Amazon Redshift supports\. See [SELECT](r_SELECT_synopsis.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WITH_clause.md
|
03aee482d828-0
|
You can use a WITH clause in the following SQL statements:
+ SELECT \(including subqueries within SELECT statements\)
+ SELECT INTO
+ CREATE TABLE AS
+ CREATE VIEW
+ DECLARE
+ EXPLAIN
+ INSERT INTO\.\.\.SELECT
+ PREPARE
+ UPDATE \(within a WHERE clause subquery\)
If the FROM clause of a query that contains a WITH clause doesn't reference any of the tables defined by the WITH clause, the WITH clause is ignored and the query executes as normal\.
A table defined by a WITH clause subquery can be referenced only in the scope of the SELECT query that the WITH clause begins\. For example, you can reference such a table in the FROM clause of a subquery in the SELECT list, WHERE clause, or HAVING clause\. You can't use a WITH clause in a subquery and reference its table in the FROM clause of the main query or another subquery\. This query pattern results in an error message of the form `relation table_name doesn't exist` for the WITH clause table\.
You can't specify another WITH clause inside a WITH clause subquery\.
You can't make forward references to tables defined by WITH clause subqueries\. For example, the following query returns an error because of the forward reference to table W2 in the definition of table W1:
```
with w1 as (select * from w2), w2 as (select * from w1)
select * from sales;
ERROR: relation "w2" does not exist
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WITH_clause.md
|
03aee482d828-1
|
select * from sales;
ERROR: relation "w2" does not exist
```
A WITH clause subquery may not consist of a SELECT INTO statement; however, you can use a WITH clause in a SELECT INTO statement\.
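For example, the following sketch shows the permitted form, in which the WITH clause belongs to the SELECT INTO statement itself \(NEWSALES is a hypothetical target table\):
```
with big_sales as (select * from sales where pricepaid > 10000)
select * into newsales from big_sales;
```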
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WITH_clause.md
|
d852dfaef564-0
|
The following example shows the simplest possible case of a query that contains a WITH clause\. The WITH query named VENUECOPY selects all of the rows from the VENUE table\. The main query in turn selects all of the rows from VENUECOPY\. The VENUECOPY table exists only for the duration of this query\.
```
with venuecopy as (select * from venue)
select * from venuecopy order by 1 limit 10;
```
```
venueid | venuename | venuecity | venuestate | venueseats
---------+----------------------------+-----------------+------------+------------
1 | Toyota Park | Bridgeview | IL | 0
2 | Columbus Crew Stadium | Columbus | OH | 0
3 | RFK Stadium | Washington | DC | 0
4 | CommunityAmerica Ballpark | Kansas City | KS | 0
5 | Gillette Stadium | Foxborough | MA | 68756
6 | New York Giants Stadium | East Rutherford | NJ | 80242
7 | BMO Field | Toronto | ON | 0
8 | The Home Depot Center | Carson | CA | 0
9 | Dick's Sporting Goods Park | Commerce City | CO | 0
10 | Pizza Hut Park | Frisco | TX | 0
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WITH_clause.md
|
d852dfaef564-1
|
9 | Dick's Sporting Goods Park | Commerce City | CO | 0
10 | Pizza Hut Park | Frisco | TX | 0
(10 rows)
```
The following example shows a WITH clause that produces two tables, named VENUE\_SALES and TOP\_VENUES\. The second WITH query table selects from the first\. In turn, the WHERE clause of the main query block contains a subquery that constrains the TOP\_VENUES table\.
```
with venue_sales as
(select venuename, venuecity, sum(pricepaid) as venuename_sales
from sales, venue, event
where venue.venueid=event.venueid and event.eventid=sales.eventid
group by venuename, venuecity),
top_venues as
(select venuename
from venue_sales
where venuename_sales > 800000)
select venuename, venuecity, venuestate,
sum(qtysold) as venue_qty,
sum(pricepaid) as venue_sales
from sales, venue, event
where venue.venueid=event.venueid and event.eventid=sales.eventid
and venuename in(select venuename from top_venues)
group by venuename, venuecity, venuestate
order by venuename;
```
```
venuename | venuecity | venuestate | venue_qty | venue_sales
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WITH_clause.md
|
d852dfaef564-2
|
```
```
venuename | venuecity | venuestate | venue_qty | venue_sales
------------------------+---------------+------------+-----------+-------------
August Wilson Theatre | New York City | NY | 3187 | 1032156.00
Biltmore Theatre | New York City | NY | 2629 | 828981.00
Charles Playhouse | Boston | MA | 2502 | 857031.00
Ethel Barrymore Theatre | New York City | NY | 2828 | 891172.00
Eugene O'Neill Theatre | New York City | NY | 2488 | 828950.00
Greek Theatre | Los Angeles | CA | 2445 | 838918.00
Helen Hayes Theatre | New York City | NY | 2948 | 978765.00
Hilton Theatre | New York City | NY | 2999 | 885686.00
Imperial Theatre | New York City | NY | 2702 | 877993.00
Lunt-Fontanne Theatre | New York City | NY | 3326 | 1115182.00
Majestic Theatre | New York City | NY | 2549 | 894275.00
Nederlander Theatre | New York City | NY | 2934 | 936312.00
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WITH_clause.md
|
d852dfaef564-3
|
Nederlander Theatre | New York City | NY | 2934 | 936312.00
Pasadena Playhouse | Pasadena | CA | 2739 | 820435.00
Winter Garden Theatre | New York City | NY | 2838 | 939257.00
(14 rows)
```
The following two examples demonstrate the rules for the scope of table references based on WITH clause subqueries\. The first query runs, but the second fails with an expected error\. The first query has a WITH clause subquery inside the SELECT list of the main query\. The table defined by the WITH clause \(HOLIDAYS\) is referenced in the FROM clause of the subquery in the SELECT list:
```
select caldate, sum(pricepaid) as daysales,
(with holidays as (select * from date where holiday ='t')
select sum(pricepaid)
from sales join holidays on sales.dateid=holidays.dateid
where caldate='2008-12-25') as dec25sales
from sales join date on sales.dateid=date.dateid
where caldate in('2008-12-25','2008-12-31')
group by caldate
order by caldate;
caldate | daysales | dec25sales
-----------+----------+------------
2008-12-25 | 70402.00 | 70402.00
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WITH_clause.md
|
d852dfaef564-4
|
2008-12-25 | 70402.00 | 70402.00
2008-12-31 | 12678.00 | 70402.00
(2 rows)
```
The second query fails because it attempts to reference the HOLIDAYS table in the main query as well as in the SELECT list subquery\. The main query references are out of scope\.
```
select caldate, sum(pricepaid) as daysales,
(with holidays as (select * from date where holiday ='t')
select sum(pricepaid)
from sales join holidays on sales.dateid=holidays.dateid
where caldate='2008-12-25') as dec25sales
from sales join holidays on sales.dateid=holidays.dateid
where caldate in('2008-12-25','2008-12-31')
group by caldate
order by caldate;
ERROR: relation "holidays" does not exist
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WITH_clause.md
|
6bc22218c400-0
|
Text255 and text32k encodings are useful for compressing VARCHAR columns in which the same words recur often\. A separate dictionary of unique words is created for each block of column values on disk\. \(An Amazon Redshift disk block occupies 1 MB\.\) The dictionary contains the first 245 unique words in the column\. Those words are replaced on disk by a one\-byte index value representing one of the 245 values, and any words that are not represented in the dictionary are stored uncompressed\. The process repeats for each 1 MB disk block\. If the indexed words occur frequently in the column, the column will yield a high compression ratio\.
For the text32k encoding, the principle is the same, but the dictionary for each block does not capture a specific number of words\. Instead, the dictionary indexes each unique word it finds until the combined entries reach a length of 32K, minus some overhead\. The index values are stored in two bytes\.
For example, consider the VENUENAME column in the VENUE table\. Words such as **Arena**, **Center**, and **Theatre** recur in this column and are likely to be among the first 245 words encountered in each block if text255 compression is applied\. If so, this column will benefit from compression because every time those words appear, they will occupy only 1 byte of storage \(instead of 5, 6, or 7 bytes, respectively\)\.
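A minimal sketch of applying the encoding at table creation time \(the table name and column sizes here are illustrative\):
```
create table venue_copy(
venueid smallint,
venuename varchar(100) encode text255);
```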
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Text255_encoding.md
|
0d2da8452207-0
|
JSON\_EXTRACT\_PATH\_TEXT returns the value for the *key:value* pair referenced by a series of path elements in a JSON string\. The JSON path can be nested up to five levels deep\. Path elements are case\-sensitive\. If a path element does not exist in the JSON string, JSON\_EXTRACT\_PATH\_TEXT returns an empty string\. If the `null_if_invalid` argument is set to `true` and the JSON string is invalid, the function returns NULL instead of returning an error\.
For more information, see [JSON functions](json-functions.md)\.
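As a quick sketch of the empty\-string behavior, the following call references a path element \(`f9`\) that doesn't exist in the JSON string, so it returns an empty string rather than an error:
```
select json_extract_path_text('{"f2":"abc"}', 'f9');
```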
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_EXTRACT_PATH_TEXT.md
|
a9ea4141c574-0
|
```
json_extract_path_text('json_string', 'path_elem' [,'path_elem'[, …] ] [, null_if_invalid ] )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_EXTRACT_PATH_TEXT.md
|
6df949a92f45-0
|
*json\_string*
A properly formatted JSON string\.
*path\_elem*
A path element in a JSON string\. One path element is required\. Additional path elements can be specified, up to five levels deep\.
*null\_if\_invalid*
A Boolean value that specifies whether to return NULL if the input JSON string is invalid instead of returning an error\. To return NULL if the JSON is invalid, specify `true` \(`t`\)\. To return an error if the JSON is invalid, specify `false` \(`f`\)\. The default is `false`\.
In a JSON string, Amazon Redshift recognizes `\n` as a newline character and `\t` as a tab character\. To load a backslash, escape it with a backslash \(`\\`\)\. For more information, see [Escape characters in JSON](copy-usage_notes-copy-from-json.md#copy-usage-json-escape-characters)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_EXTRACT_PATH_TEXT.md
|
11cab86cbf37-0
|
VARCHAR string representing the JSON value referenced by the path elements\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_EXTRACT_PATH_TEXT.md
|
8f82cddc79fd-0
|
The following example returns the value for the path `'f4', 'f6'`:
```
select json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"star"}}','f4', 'f6');
json_extract_path_text
----------------------
star
```
The following example returns an error because the JSON is invalid\.
```
select json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"star"}','f4', 'f6');
An error occurred when executing the SQL command:
select json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"star"}','f4', 'f6')
```
The following example sets *null\_if\_invalid* to *true*, so the statement returns NULL for invalid JSON instead of returning an error\.
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_EXTRACT_PATH_TEXT.md
|
8f82cddc79fd-1
|
```
select json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"star"}','f4', 'f6',true);
json_extract_path_text
-------------------------------
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/JSON_EXTRACT_PATH_TEXT.md
|
5244062bc0fe-0
|
Stores information about the number of rows inserted or deleted since the last ANALYZE\. The PG\_STATISTIC\_INDICATOR table is updated frequently following DML operations, so statistics are approximate\.
PG\_STATISTIC\_INDICATOR is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_STATISTIC_INDICATOR.md
|
27f23f060ed9-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_PG_STATISTIC_INDICATOR.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_STATISTIC_INDICATOR.md
|
058c09662ad8-0
|
The following example returns information for table changes since the last ANALYZE\.
```
select * from pg_statistic_indicator;
stairelid | stairows | staiins | staidels
----------+----------+---------+---------
108271 | 11 | 0 | 0
108275 | 365 | 0 | 0
108278 | 8798 | 0 | 0
108280 | 91865 | 0 | 100632
108267 | 89981 | 49990 | 9999
108269 | 808 | 606 | 374
108282 | 152220 | 76110 | 248566
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_STATISTIC_INDICATOR.md
|
5cd4b69e6878-0
|
Use SVV\_EXTERNAL\_COLUMNS to view details for columns in external tables\.
SVV\_EXTERNAL\_COLUMNS is visible to all users\. Superusers can see all rows; regular users can see only metadata to which they have access\. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\.
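For example, a minimal sketch that lists column metadata for a hypothetical external schema named `spectrum` \(the columns selected here are a subset of the view\):
```
select tablename, columnname, external_type
from svv_external_columns
where schemaname = 'spectrum';
```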
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_EXTERNAL_COLUMNS.md
|
bbdf17793085-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_EXTERNAL_COLUMNS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_EXTERNAL_COLUMNS.md
|
f120684d0ead-0
|
This example creates user groups and user accounts and then grants them various privileges for an Amazon Redshift database that connects to a web application client\. This example assumes three groups of users: regular users of a web application, power users of a web application, and web developers\.
1. Create the groups where the user accounts will be assigned\. The following set of commands creates three different user groups:
```
create group webappusers;
create group webpowerusers;
create group webdevusers;
```
1. Create several database user accounts with different privileges and add them to the groups\.
1. Create two users and add them to the WEBAPPUSERS group:
```
create user webappuser1 password 'webAppuser1pass'
in group webappusers;
create user webappuser2 password 'webAppuser2pass'
in group webappusers;
```
1. Create an account for a web developer and add it to the WEBDEVUSERS group:
```
create user webdevuser1 password 'webDevuser2pass'
in group webdevusers;
```
1. Create a superuser account\. This user will have administrative rights to create other users:
```
create user webappadmin password 'webAppadminpass1'
createuser;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_user_group_examples.md
|
f120684d0ead-1
|
```
create user webappadmin password 'webAppadminpass1'
createuser;
```
1. Create a schema to be associated with the database tables used by the web application, and grant the various user groups access to this schema:
1. Create the WEBAPP schema:
```
create schema webapp;
```
1. Grant USAGE privileges to the WEBAPPUSERS group:
```
grant usage on schema webapp to group webappusers;
```
1. Grant USAGE privileges to the WEBPOWERUSERS group:
```
grant usage on schema webapp to group webpowerusers;
```
1. Grant ALL privileges to the WEBDEVUSERS group:
```
grant all on schema webapp to group webdevusers;
```
The basic users and groups are now set up\. You can make changes to alter these users and groups\.
1. For example, the following command alters the search\_path parameter for WEBAPPUSER1\.
```
alter user webappuser1 set search_path to webapp, public;
```
The SEARCH\_PATH specifies the schema search order for database objects, such as tables and functions, when the object is referenced by a simple name with no schema specified\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_user_group_examples.md
|
f120684d0ead-2
|
The SEARCH\_PATH specifies the schema search order for database objects, such as tables and functions, when the object is referenced by a simple name with no schema specified\.
1. You can also add users to a group after creating the group, such as adding WEBAPPUSER2 to the WEBPOWERUSERS group:
```
alter group webpowerusers add user webappuser2;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_user_group_examples.md
|
c38f70c92c17-0
|
To help you improve the performance and decrease the operating costs for your Amazon Redshift cluster, Amazon Redshift Advisor offers you specific recommendations about changes to make\. Advisor develops its customized recommendations by analyzing performance and usage metrics for your cluster\. These tailored recommendations relate to operations and cluster settings\. To help you prioritize your optimizations, Advisor ranks recommendations by order of impact\.
Advisor bases its recommendations on observations regarding performance statistics or operations data\. Advisor develops observations by running tests on your clusters to determine if a test value is within a specified range\. If the test result is outside of that range, Advisor generates an observation for your cluster\. At the same time, Advisor creates a recommendation about how to bring the observed value back into the best\-practice range\. Advisor only displays recommendations that should have a significant impact on performance and operations\. When Advisor determines that a recommendation has been addressed, it removes it from your recommendation list\.
For example, suppose that your data warehouse contains a large number of uncompressed table columns\. In this case, you can save on cluster storage costs by rebuilding tables using the `ENCODE` parameter to specify column compression\. In another example, suppose that Advisor observes that your cluster contains a significant amount of uncompressed table data\. In this case, it provides you with the SQL code block to find the table columns that are candidates for compression and resources that describe how to compress those columns\.
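As a sketch of one way to evaluate compression candidates yourself, you can run the ANALYZE COMPRESSION command against a table such as VENUE from the sample database \(ANALYZE COMPRESSION is a separate Amazon Redshift command, not part of Advisor\):
```
analyze compression venue;
```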
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor.md
|
4ccdc0f241ff-0
|
The Amazon Redshift Advisor feature is available only in the following AWS Regions:
+ US East \(N\. Virginia\) Region \(us\-east\-1\)
+ US West \(N\. California\) Region \(us\-west\-1\)
+ US West \(Oregon\) Region \(us\-west\-2\)
+ Asia Pacific \(Seoul\) Region \(ap\-northeast\-2\)
+ Asia Pacific \(Singapore\) Region \(ap\-southeast\-1\)
+ Asia Pacific \(Sydney\) Region \(ap\-southeast\-2\)
+ Asia Pacific \(Tokyo\) Region \(ap\-northeast\-1\)
+ Europe \(Frankfurt\) Region \(eu\-central\-1\)
+ Europe \(Ireland\) Region \(eu\-west\-1\)
**Topics**
+ [Amazon Redshift Regions](#advisor-regions)
+ [Viewing Amazon Redshift Advisor recommendations on the console](access-advisor.md)
+ [Amazon Redshift Advisor recommendations](advisor-recommendations.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/advisor.md
|
7379ebbb772b-0
|
Use the SVL\_S3QUERY view to get details about Amazon Redshift Spectrum queries at the segment and node slice level\.
SVL\_S3QUERY is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3QUERY.md
|
a2444259f54e-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_S3QUERY.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3QUERY.md
|
ef4b5ff1b36b-0
|
The following example gets the scan step details for the last query executed\.
```
select query, segment, slice, elapsed, s3_scanned_rows, s3_scanned_bytes, s3query_returned_rows, s3query_returned_bytes, files
from svl_s3query
where query = pg_last_query_id()
order by query,segment,slice;
```
```
query | segment | slice | elapsed | s3_scanned_rows | s3_scanned_bytes | s3query_returned_rows | s3query_returned_bytes | files
------+---------+-------+---------+-----------------+------------------+-----------------------+------------------------+------
4587 | 2 | 0 | 67811 | 0 | 0 | 0 | 0 | 0
4587 | 2 | 1 | 591568 | 172462 | 11260097 | 8513 | 170260 | 1
4587 | 2 | 2 | 216849 | 0 | 0 | 0 | 0 | 0
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3QUERY.md
|
ef4b5ff1b36b-1
|
4587 | 2 | 2 | 216849 | 0 | 0 | 0 | 0 | 0
4587 | 2 | 3 | 216671 | 0 | 0 | 0 | 0 | 0
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3QUERY.md
|
3169c64ea4c5-0
|
Details about stored procedures are logged in the following system tables and views:
+ SVL\_STORED\_PROC\_CALL – details are logged about the stored procedure call's start time and end time, and whether the call is ended before completion\. For more information, see [SVL\_STORED\_PROC\_CALL](r_SVL_STORED_PROC_CALL.md)\.
+ SVL\_STORED\_PROC\_MESSAGES – messages in stored procedures emitted by the RAISE statement are logged with the corresponding logging level\. For more information, see [SVL\_STORED\_PROC\_MESSAGES](r_SVL_STORED_PROC_MESSAGES.md)\.
+ SVL\_QLOG – the query ID of the procedure call is logged for each query called from a stored procedure\. For more information, see [SVL\_QLOG](r_SVL_QLOG.md)\.
+ STL\_UTILITYTEXT – stored procedure calls are logged after they are completed\. For more information, see [STL\_UTILITYTEXT](r_STL_UTILITYTEXT.md)\.
+ PG\_PROC\_INFO – this system catalog view shows information about stored procedures\. For more information, see [PG\_PROC\_INFO](r_PG_PROC_INFO.md)\.
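For example, a minimal sketch that checks recent stored procedure calls \(the columns selected here are a subset; see the SVL\_STORED\_PROC\_CALL reference for the full list\):
```
select query, starttime, endtime, aborted
from svl_stored_proc_call
order by starttime desc
limit 10;
```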
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-logging.md
|
b1f6af7c7f73-0
|
When you create a table, you can designate one of four distribution styles: AUTO, EVEN, KEY, or ALL\.
If you don't specify a distribution style, Amazon Redshift uses AUTO distribution\.
**AUTO distribution**
With AUTO distribution, Amazon Redshift assigns an optimal distribution style based on the size of the table data\. For example, Amazon Redshift initially assigns ALL distribution to a small table, then changes to EVEN distribution when the table grows larger\. When a table is changed from ALL to EVEN distribution, storage utilization might change slightly\. The change in distribution occurs in the background, in a few seconds\. To view the distribution style applied to a table, query the PG\_CLASS\_INFO system catalog view\. For more information, see [Viewing distribution styles](viewing-distribution-styles.md)\. If you don't specify a distribution style with the CREATE TABLE statement, Amazon Redshift applies AUTO distribution\.
**EVEN distribution**
The leader node distributes the rows across the slices in a round\-robin fashion, regardless of the values in any particular column\. EVEN distribution is appropriate when a table does not participate in joins or when there is not a clear choice between KEY distribution and ALL distribution\.
**KEY distribution**
The rows are distributed according to the values in one column\. The leader node places matching values on the same node slice\. If you distribute a pair of tables on the joining keys, the leader node collocates the rows on the slices according to the values in the joining columns so that matching values from the common columns are physically stored together\.
**ALL distribution**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_choosing_dist_sort.md
|
b1f6af7c7f73-1
|
**ALL distribution**
A copy of the entire table is distributed to every node\. Whereas EVEN distribution or KEY distribution places only a portion of a table's rows on each node, ALL distribution ensures that every row is collocated for every join that the table participates in\.
ALL distribution multiplies the storage required by the number of nodes in the cluster, and so it takes much longer to load, update, or insert data into multiple tables\. ALL distribution is appropriate only for relatively slow moving tables; that is, tables that are not updated frequently or extensively\. Because the cost of redistributing small tables during a query is low, there isn't a significant benefit to defining small dimension tables as DISTSTYLE ALL\.
**Note**
After you have specified a distribution style for a table, Amazon Redshift handles data distribution at the cluster level\. Amazon Redshift does not require or support the concept of partitioning data within database objects\. You do not need to create table spaces or define partitioning schemes for tables\.
In certain scenarios, you can change the distribution style of a table after it is created\. For more information, see [ALTER TABLE](r_ALTER_TABLE.md)\. For scenarios when you can't change the distribution style of a table after it's created, you can recreate the table and populate the new table with a deep copy\. For more information, see [Performing a deep copy](performing-a-deep-copy.md)\.
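A minimal sketch of designating each style at table creation time \(table and column names are illustrative\):
```
create table load_staging (col1 int) diststyle even;
create table sales_facts (col1 int) diststyle key distkey(col1);
create table small_dim (col1 int) diststyle all;
```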
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_choosing_dist_sort.md
|
3043643e3283-0
|
Use the STV\_SESSIONS table to view information about the active user sessions for Amazon Redshift\.
To view session history, use the [STL\_SESSIONS](r_STL_SESSIONS.md) table instead of STV\_SESSIONS\.
All rows in STV\_SESSIONS, including rows generated by another user, are visible to all users\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_SESSIONS.md
|
5cc873ade041-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_SESSIONS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_SESSIONS.md
|
fc4aae2a7c17-0
|
To perform a quick check to see if any other users are currently logged into Amazon Redshift, type the following query:
```
select count(*)
from stv_sessions;
```
If the result is greater than one, then at least one other user is currently logged into the database\.
To view all active sessions for Amazon Redshift, type the following query:
```
select *
from stv_sessions;
```
The following result shows four active sessions currently running on Amazon Redshift:
```
starttime | process |user_name | db_name
-------------------------+---------+----------------------------+---------
2018-08-06 08:44:07.50 | 13779 | IAMA:aws_admin:admin_grp | dev
2008-08-06 08:54:20.50 | 19829 | dwuser | dev
2008-08-06 08:56:34.50 | 20279 | dwuser | dev
2008-08-06 08:55:00.50 | 19996 | dwuser | tickit
(4 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_SESSIONS.md
|
fc4aae2a7c17-1
|
2008-08-06 08:55:00.50 | 19996 | dwuser | tickit
(4 rows)
```
The user name prefixed with IAMA indicates that the user signed on using federated single sign\-on \(SSO\)\. For more information, see [Using IAM authentication to generate database user credentials](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_SESSIONS.md
|
0c7f0c66beac-0
|
An NVL expression is identical to a COALESCE expression\. NVL and COALESCE are synonyms\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL_function.md
|
25856f2bd414-0
|
```
NVL | COALESCE ( expression, expression, ... )
```
An NVL or COALESCE expression returns the value of the first expression in the list that is not null\. If all expressions are null, the result is null\. When a non\-null value is found, the remaining expressions in the list are not evaluated\.
This type of expression is useful when you want to return a backup value for something when the preferred value is missing or null\. For example, a query might return one of three phone numbers \(cell, home, or work, in that order\), whichever is found first in the table \(not null\)\.
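A minimal sketch of that phone\-number pattern \(the CUSTOMERS table and its columns are hypothetical\):
```
select nvl(cell_phone, home_phone, work_phone) as best_phone
from customers;
```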
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL_function.md
|
fc2062e468cf-0
|
Create a table with START\_DATE and END\_DATE columns, insert some rows that include null values, then apply an NVL expression to the two columns\.
```
create table datetable (start_date date, end_date date);
```
```
insert into datetable values ('2008-06-01','2008-12-31');
insert into datetable values (null,'2008-12-31');
insert into datetable values ('2008-12-31',null);
```
```
select nvl(start_date, end_date)
from datetable
order by 1;
coalesce
------------
2008-06-01
2008-12-31
2008-12-31
```
The default column name for an NVL expression is COALESCE\. The following query would return the same results:
```
select coalesce(start_date, end_date)
from datetable
order by 1;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL_function.md
|
fc2062e468cf-1
|
select coalesce(start_date, end_date)
from datetable
order by 1;
```
If you expect a query to return null values for certain functions or columns, you can use an NVL expression to replace the nulls with some other value\. For example, aggregate functions, such as SUM, return null values instead of zeroes when they have no rows to evaluate\. You can use an NVL expression to replace these null values with `0.0`:
```
select nvl(sum(sales), 0.0) as sumresult, ...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NVL_function.md
|
2e40e153d66b-0
|
Create external tables in an external schema\. The external schema references a database in the external data catalog and provides the IAM role ARN that authorizes your cluster to access Amazon S3 on your behalf\. You can create an external database in an Amazon Athena Data Catalog, AWS Glue Data Catalog, or an Apache Hive metastore, such as Amazon EMR\. For this example, you create the external database in an Amazon Athena Data Catalog when you create the external schema in Amazon Redshift\. For more information, see [Creating external schemas for Amazon Redshift Spectrum](c-spectrum-external-schemas.md)\. <a name="spectrum-get-started-create-external-table"></a>
**To create an external schema and an external table**
1. To create an external schema, replace the IAM role ARN in the following command with the role ARN you created in [step 1](c-getting-started-using-spectrum-create-role.md)\. Then run the command in your SQL client\.
```
create external schema spectrum
from data catalog
database 'spectrumdb'
iam_role 'arn:aws:iam::123456789012:role/mySpectrumRole'
create external database if not exists;
```
1. To create an external table, run the following CREATE EXTERNAL TABLE command\.
**Note**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-create-external-table.md
|
2e40e153d66b-1
|
```
1. To create an external table, run the following CREATE EXTERNAL TABLE command\.
**Note**
The Amazon S3 bucket with the sample data for this example is located in the us\-west\-2 region\. Your cluster and the Redshift Spectrum files must be in the same AWS Region, so, for this example, your cluster must also be located in us\-west\-2\.
To use this example in a different AWS Region, you can copy the sales data with an Amazon S3 copy command\. Then update the location of the bucket in the example `CREATE EXTERNAL TABLE` command\.
```
aws s3 cp s3://awssampledbuswest2/tickit/spectrum/sales/ s3://bucket-name/tickit/spectrum/sales/ --recursive
```
```
create external table spectrum.sales(
salesid integer,
listid integer,
sellerid integer,
buyerid integer,
eventid integer,
dateid smallint,
qtysold smallint,
pricepaid decimal(8,2),
commission decimal(8,2),
saletime timestamp)
row format delimited
fields terminated by '\t'
stored as textfile
location 's3://awssampledbuswest2/tickit/spectrum/sales/'
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-create-external-table.md
|
2e40e153d66b-2
|
stored as textfile
location 's3://awssampledbuswest2/tickit/spectrum/sales/'
table properties ('numRows'='172000');
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-create-external-table.md
|
dceb3797ec53-0
|
Returns the location of the specified substring within a string\. Synonym of the STRPOS function\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHARINDEX.md
|
31d5fcfd0216-0
|
```
CHARINDEX( substring, string )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHARINDEX.md
|
837cb857270e-0
|
*substring*
The substring to search for within the *string*\.
*string*
The string or column to be searched\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHARINDEX.md
|
96277e455c8d-0
|
The CHARINDEX function returns an integer corresponding to the position of the substring \(one\-based, not zero\-based\)\. The position is based on the number of characters, not bytes, so that multi\-byte characters are counted as single characters\.
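For example, in the following sketch the letter `b` is the second character of the string even though the preceding `ü` occupies two bytes in UTF\-8, so per the character\-based counting described above the function returns 2, not 3:
```
select charindex('b', 'über');
```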
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHARINDEX.md
|
266a59a6641a-0
|
CHARINDEX returns 0 if the substring is not found within the `string`:
```
select charindex('dog', 'fish');
charindex
----------
0
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHARINDEX.md
|
e71250c780a7-0
|
The following example shows the position of the string `fish` within the word `dogfish`:
```
select charindex('fish', 'dogfish');
charindex
----------
4
(1 row)
```
The following example returns the number of sales transactions with a COMMISSION over 999\.00 from the SALES table:
```
select distinct charindex('.', commission), count (charindex('.', commission))
from sales where charindex('.', commission) > 4 group by charindex('.', commission)
order by 1,2;
charindex | count
----------+-------
5 | 629
(1 row)
```
See [STRPOS function](r_STRPOS.md) for details\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHARINDEX.md
|
eb57baec846e-0
|
You can load from compressed data files by specifying the following parameters\.
**File compression parameters**
BZIP2 <a name="copy-bzip2"></a>
A value that specifies that the input file or files are in compressed bzip2 format \(\.bz2 files\)\. The COPY operation reads each compressed file and uncompresses the data as it loads\.
GZIP <a name="copy-gzip"></a>
A value that specifies that the input file or files are in compressed gzip format \(\.gz files\)\. The COPY operation reads each compressed file and uncompresses the data as it loads\.
LZOP <a name="copy-lzop"></a>
A value that specifies that the input file or files are in compressed lzop format \(\.lzo files\)\. The COPY operation reads each compressed file and uncompresses the data as it loads\.
COPY doesn't support files that are compressed using the lzop *\-\-filter* option\.
ZSTD <a name="copy-zstd"></a>
A value that specifies that the input file or files are in compressed Zstandard format \(\.zst files\)\. The COPY operation reads each compressed file and uncompresses the data as it loads\. For more information, see [Zstandard encoding](zstd-encoding.md)\.
ZSTD is supported only with COPY from Amazon S3\.
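For example, a minimal sketch of loading gzip\-compressed, pipe\-delimited files \(the bucket path and IAM role ARN are placeholders\):
```
copy sales from 's3://mybucket/data/sales.txt.gz'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
gzip;
```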
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-file-compression.md
|
ce68e98407a2-0
|
You can use the COPY command to load data files that were uploaded to Amazon S3 using server\-side encryption, client\-side encryption, or both\.
The COPY command supports the following types of Amazon S3 encryption:
+ Server\-side encryption with Amazon S3\-managed keys \(SSE\-S3\)
+ Server\-side encryption with AWS KMS\-managed keys \(SSE\-KMS\)
+ Client\-side encryption using a client\-side symmetric master key
The COPY command doesn't support the following types of Amazon S3 encryption:
+ Server\-side encryption with customer\-provided keys \(SSE\-C\)
+ Client\-side encryption using an AWS KMS\-managed customer master key
+ Client\-side encryption using a customer\-provided asymmetric master key
For more information about Amazon S3 encryption, see [ Protecting Data Using Server\-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html) and [Protecting Data Using Client\-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html) in the Amazon Simple Storage Service Developer Guide\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_loading-encrypted-files.md
|
ce68e98407a2-1
|
The [UNLOAD](r_UNLOAD.md) command automatically encrypts files using SSE\-S3\. You can also unload using SSE\-KMS or client\-side encryption with a customer\-managed symmetric key\. For more information, see [Unloading encrypted data files](t_unloading_encrypted_files.md)\.
The COPY command automatically recognizes and loads files encrypted using SSE\-S3 and SSE\-KMS\. You can load files encrypted using a client\-side symmetric master key by specifying the ENCRYPTED option and providing the key value\. For more information, see [Uploading encrypted data to Amazon S3](t_uploading-encrypted-data.md)\.
To load client\-side encrypted data files, provide the master key value using the MASTER\_SYMMETRIC\_KEY parameter and include the ENCRYPTED option\.
```
copy customer from 's3://mybucket/encrypted/customer'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
master_symmetric_key '<master_key>'
encrypted
delimiter '|';
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_loading-encrypted-files.md
|
ce68e98407a2-2
|
master_symmetric_key '<master_key>'
encrypted
delimiter '|';
```
To load encrypted data files that are gzip, lzop, or bzip2 compressed, include the GZIP, LZOP, or BZIP2 option along with the master key value and the ENCRYPTED option\.
```
copy customer from 's3://mybucket/encrypted/customer'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
master_symmetric_key '<master_key>'
encrypted
delimiter '|'
gzip;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_loading-encrypted-files.md
|
deac330b165b-0
|
ST\_RemovePoint returns a linestring geometry with the point at the specified index position removed from the input linestring\.
The index is zero\-based\. The spatial reference system identifier \(SRID\) of the result is the same as the input geometry\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_RemovePoint-function.md
|
70a21bc103a9-0
|
```
ST_RemovePoint(geom, index)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_RemovePoint-function.md
|
dcc284e92514-0
|
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. The subtype must be `LINESTRING`\.
*index*
A value of data type `INTEGER` that represents the zero\-based index position of the point to remove\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_RemovePoint-function.md
|
a70d4a3215a4-0
|
`GEOMETRY`
If *geom* or *index* is null, then null is returned\.
If *geom* is not subtype `LINESTRING`, then an error is returned\.
If *index* is out of range, then an error is returned\. Valid values for the index position are between 0 and `ST_NumPoints(geom)` minus 1\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_RemovePoint-function.md
|
797b03722331-0
|
The following SQL removes the last point in a linestring\.
```
WITH tmp(g) AS (SELECT ST_GeomFromText('LINESTRING(0 0,10 0,10 10,5 5,0 5)',4326))
SELECT ST_AsEWKT(ST_RemovePoint(g, ST_NumPoints(g) - 1)) FROM tmp;
```
```
st_asewkt
-----------------------------------------
SRID=4326;LINESTRING(0 0,10 0,10 10,5 5)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_RemovePoint-function.md
|
72f5a8b44b75-0
|
Removes a table from a database\. Only the owner of the table, the schema owner, or a superuser can drop a table\.
To empty a table of rows without removing the table, use the DELETE or TRUNCATE command\.
DROP TABLE removes constraints that exist on the target table\. Multiple tables can be removed with a single DROP TABLE command\.
DROP TABLE with an external table can't be run inside a transaction \(BEGIN … END\)\. For more information about transactions, see [Serializable isolation](c_serial_isolation.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_TABLE.md
|
4ac334a977c4-0
|
```
DROP TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_TABLE.md
|
dbd2f1074342-0
|
IF EXISTS
Clause that indicates that if the specified table doesn’t exist, the command should make no changes and return a message that the table doesn't exist, rather than terminating with an error\.
This clause is useful when scripting, so the script doesn’t fail if DROP TABLE runs against a nonexistent table\.
*name*
Name of the table to drop\.
CASCADE
Clause that indicates to automatically drop objects that depend on the table, such as views\.
To create a view that isn't dependent on a table referenced by the view, include the WITH NO SCHEMA BINDING clause in the view definition\. For more information, see [CREATE VIEW](r_CREATE_VIEW.md)\.
RESTRICT
Clause that indicates not to drop the table if any objects depend on it\. This action is the default\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_TABLE.md
|
df11b6d0cd64-0
|
**Dropping a table with no dependencies**
The following example creates and drops a table called FEEDBACK that has no dependencies:
```
create table feedback(a int);
drop table feedback;
```
If a table contains columns that are referenced by views or other tables, Amazon Redshift displays a message such as the following\.
```
Invalid operation: cannot drop table feedback because other objects depend on it
```
**Dropping two tables simultaneously**
The following command set creates a FEEDBACK table and a BUYERS table and then drops both tables with a single command:
```
create table feedback(a int);
create table buyers(a int);
drop table feedback, buyers;
```
**Dropping a table with a dependency**
The following steps show how to drop a table called FEEDBACK using the CASCADE switch\.
First, create a simple table called FEEDBACK using the CREATE TABLE command:
```
create table feedback(a int);
```
Next, use the CREATE VIEW command to create a view called FEEDBACK\_VIEW that relies on the table FEEDBACK:
```
create view feedback_view as select * from feedback;
```
The following example drops the table FEEDBACK and also drops the view FEEDBACK\_VIEW, because FEEDBACK\_VIEW is dependent on the table FEEDBACK:
```
drop table feedback cascade;
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_TABLE.md
|
df11b6d0cd64-1
|
```
drop table feedback cascade;
```
**Viewing the dependencies for a table**
You can create a view that holds the dependency information for all of the tables in a database\. Before dropping a given table, query this view to determine if the table has dependencies\.
Type the following command to create a FIND\_DEPEND view, which joins dependencies with object references:
```
create view find_depend as
select distinct c_p.oid as tbloid,
n_p.nspname as schemaname, c_p.relname as name,
n_c.nspname as refbyschemaname, c_c.relname as refbyname,
c_c.oid as viewoid
from pg_catalog.pg_class c_p
join pg_catalog.pg_depend d_p
on c_p.relfilenode = d_p.refobjid
join pg_catalog.pg_depend d_c
on d_p.objid = d_c.objid
join pg_catalog.pg_class c_c
on d_c.refobjid = c_c.relfilenode
left outer join pg_namespace n_p
on c_p.relnamespace = n_p.oid
left outer join pg_namespace n_c
on c_c.relnamespace = n_c.oid
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_TABLE.md
|
df11b6d0cd64-2
|
left outer join pg_namespace n_c
on c_c.relnamespace = n_c.oid
where d_c.deptype = 'i'::"char"
and c_c.relkind = 'v'::"char";
```
Now create a SALES\_VIEW from the SALES table:
```
create view sales_view as select * from sales;
```
Now query the FIND\_DEPEND view to view dependencies in the database\. Limit the scope of the query to the PUBLIC schema, as shown in the following code:
```
select * from find_depend
where refbyschemaname='public'
order by name;
```
This query returns the following dependencies, showing that the SALES\_VIEW view would also be dropped if the SALES table were dropped with the CASCADE option:
```
tbloid | schemaname | name | viewoid | refbyschemaname | refbyname
--------+------------+-------------+---------+-----------------+-------------
100241 | public | find_depend | 100241 | public | find_depend
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_TABLE.md
|
df11b6d0cd64-3
|
100241 | public | find_depend | 100241 | public | find_depend
100203 | public | sales | 100245 | public | sales_view
100245 | public | sales_view | 100245 | public | sales_view
(3 rows)
```
**Dropping a table using IF EXISTS**
The following example either drops the FEEDBACK table if it exists, or does nothing and returns a message if it doesn't:
```
drop table if exists feedback;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_TABLE.md
|
0531c96d79b6-0
|
**Topics**
+ [Using a COPY command to load data](t_Loading_tables_with_the_COPY_command.md)
+ [Updating tables with DML commands](t_Updating_tables_with_DML_commands.md)
+ [Updating and inserting new data](t_updating-inserting-using-staging-tables-.md)
+ [Performing a deep copy](performing-a-deep-copy.md)
+ [Analyzing tables](t_Analyzing_tables.md)
+ [Vacuuming tables](t_Reclaiming_storage_space202.md)
+ [Managing concurrent write operations](c_Concurrent_writes.md)
+ [Tutorial: Loading data from Amazon S3](tutorial-loading-data.md)
A COPY command is the most efficient way to load a table\. You can also add data to your tables using INSERT commands, though it is much less efficient than using COPY\. The COPY command is able to read from multiple data files or multiple data streams simultaneously\. Amazon Redshift allocates the workload to the cluster nodes and performs the load operations in parallel, including sorting the rows and distributing data across node slices\.
**Note**
Amazon Redshift Spectrum external tables are read\-only\. You can't COPY or INSERT to an external table\.
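A minimal sketch of the COPY form discussed here \(the table name, bucket path, and IAM role ARN are placeholders\):
```
copy mytable from 's3://mybucket/data/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';
```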
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Loading_data.md
|
0531c96d79b6-1
|
**Note**
Amazon Redshift Spectrum external tables are read\-only\. You can't COPY or INSERT to an external table\.
To access data on other AWS resources, your cluster must have permission to access those resources and to perform the necessary actions to access the data\. You can use Identity and Access Management \(IAM\) to limit the access users have to your cluster resources and data\.
After your initial data load, if you add, modify, or delete a significant amount of data, you should follow up by running a VACUUM command to reorganize your data and reclaim space after deletes\. You should also run an ANALYZE command to update table statistics\.
This section explains how to load data and troubleshoot data loads and presents best practices for loading data\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Loading_data.md
|
4a9e07ba8bb8-0
|
ST\_NumInteriorRings returns the number of interior rings in an input polygon geometry\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumInteriorRings-function.md
|
50e7bb29ea21-0
|
```
ST_NumInteriorRings(geom)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumInteriorRings-function.md
|
1bddf2410d7c-0
|
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumInteriorRings-function.md
|
c4e8130d1a2b-0
|
`INTEGER`
If *geom* is null, then null is returned\.
If *geom* is not a polygon, then null is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumInteriorRings-function.md
|
d164ebba0900-0
|
The following SQL returns the number of interior rings in the input polygon\.
```
SELECT ST_NumInteriorRings(ST_GeomFromText('POLYGON((0 0,100 0,100 100,0 100,0 0),(1 1,1 5,5 1,1 1),(7 7,7 8,8 7,7 7))'));
```
```
st_numinteriorrings
-------------
2
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumInteriorRings-function.md
|
6e35662d46f7-0
|
Analyzes nested\-loop join execution steps for queries\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_NESTLOOP.md
|
b4eb3021a1bd-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_NESTLOOP.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_NESTLOOP.md
|
d75f0df97ded-0
|
Because the following query neglects to join the CATEGORY table, it produces a partial Cartesian product, which is not recommended\. It is shown here to illustrate a nested loop\.
```
select count(event.eventname), event.eventname, category.catname, date.caldate
from event, category, date
where event.dateid = date.dateid
group by event.eventname, category.catname, date.caldate;
```
The following query shows the results from the previous query in the STL\_NESTLOOP view\.
```
select query, slice, segment as seg, step,
datediff(msec, starttime, endtime) as duration, tasknum, rows, tbl
from stl_nestloop
where query = pg_last_query_id();
```
```
query | slice | seg | step | duration | tasknum | rows | tbl
-------+-------+-----+------+----------+---------+-------+-----
6028 | 0 | 4 | 5 | 41 | 22 | 24277 | 240
6028 | 1 | 4 | 5 | 26 | 23 | 24189 | 240
6028 | 3 | 4 | 5 | 25 | 23 | 24376 | 240
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_NESTLOOP.md
|
d75f0df97ded-1
|
6028 | 3 | 4 | 5 | 25 | 23 | 24376 | 240
6028 | 2 | 4 | 5 | 54 | 22 | 23936 | 240
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_NESTLOOP.md
|
d376dce4cef7-0
|
You won't need the GUEST user account for this tutorial, so you can delete it\. If you delete a database user account, the user will no longer be able to access any of the cluster databases\.
Issue the following command to drop the GUEST user:
```
drop user guest;
```
The master user you created when you launched your cluster continues to have access to the database\.
**Important**
Amazon Redshift strongly recommends that you do not delete the master user\.
For information about command options, see [DROP USER](r_DROP_USER.md) in the SQL Reference\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_deleting_redshift_user_cmd.md
|
8085dd6da78c-0
|
**To retrieve the cluster public key and cluster node IP addresses for your cluster using the console**
1. Access the Amazon Redshift Management Console\.
1. Click the **Clusters** link in the navigation pane\.
1. Select your cluster from the list\.
1. Locate the **SSH Ingestion Settings** group\.
Note the **Cluster Public Key** and **Node IP addresses**\. You will use them in later steps\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/copy-from-ssh-console-2.png)
You will use the IP addresses in Step 3 to configure the host to accept the connection from Amazon Redshift\. Depending on what type of host you connect to and whether it is in a VPC, you will use either the public IP addresses or the private IP addresses\.
To retrieve the cluster public key and cluster node IP addresses for your cluster using the Amazon Redshift CLI, execute the describe\-clusters command\.
For example:
```
aws redshift describe-clusters --cluster-identifier <cluster-identifier>
```
The response will include the ClusterPublicKey and the list of Private and Public IP addresses, similar to the following:
```
{
"Clusters": [
{
"VpcSecurityGroups": [],
"ClusterStatus": "available",
"ClusterNodes": [
{
"PrivateIPAddress": "10.nnn.nnn.nnn",
"NodeRole": "LEADER",
"PublicIPAddress": "10.nnn.nnn.nnn"
},
{
"PrivateIPAddress": "10.nnn.nnn.nnn",
"NodeRole": "COMPUTE-0",
"PublicIPAddress": "10.nnn.nnn.nnn"
},
{
"PrivateIPAddress": "10.nnn.nnn.nnn",
"NodeRole": "COMPUTE-1",
"PublicIPAddress": "10.nnn.nnn.nnn"
}
],
"AutomatedSnapshotRetentionPeriod": 1,
"PreferredMaintenanceWindow": "wed:05:30-wed:06:00",
"AvailabilityZone": "us-east-1a",
"NodeType": "ds2.xlarge",
"ClusterPublicKey": "ssh-rsa AAAABexamplepublickey...Y3TAl Amazon-Redshift",
...
...
}
```
To retrieve the cluster public key and cluster node IP addresses for your cluster using the Amazon Redshift API, use the DescribeClusters action\. For more information, see [describe\-clusters](https://docs.aws.amazon.com/cli/latest/reference/redshift/describe-clusters.html) in the *Amazon Redshift CLI Guide* or [DescribeClusters](https://docs.aws.amazon.com/redshift/latest/APIReference/API_DescribeClusters.html) in the *Amazon Redshift API Guide*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-host-steps-retrieve-key-and-ips.md
|
b9cc53450d54-0
|
You can query the system view SVL\_STORED\_PROC\_MESSAGES to get information about stored procedure messages\. Raised messages are logged even if the stored procedure call is aborted\. Each stored procedure call receives a query ID\. For more information about how to set the minimum level for logged messages, see stored\_proc\_log\_min\_messages\.
SVL\_STORED\_PROC\_MESSAGES is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see Visibility of data in system tables and views\.
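The examples that follow filter on a hard\-coded query ID\. One way to find the query ID of a recent procedure call is to query the SVL\_STORED\_PROC\_CALL system view; the following is a sketch, and the selected columns are an assumption about that view\.
```
-- Hypothetical lookup: list recent stored procedure calls and their query IDs
SELECT query, starttime, aborted
FROM svl_stored_proc_call
ORDER BY starttime DESC
LIMIT 5;
```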
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STORED_PROC_MESSAGES.md
|
99c303c3924a-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_STORED_PROC_MESSAGES.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STORED_PROC_MESSAGES.md
|
462780bca259-0
|
The following SQL statements show how to use SVL\_STORED\_PROC\_MESSAGES to review raised messages\.
```
-- Create and run a stored procedure
CREATE OR REPLACE PROCEDURE test_proc1(f1 int) AS
$$
BEGIN
RAISE INFO 'Log Level: Input f1 is %',f1;
RAISE NOTICE 'Notice Level: Input f1 is %',f1;
EXECUTE 'select invalid';
RAISE NOTICE 'Should not print this';
EXCEPTION WHEN OTHERS THEN
raise exception 'EXCEPTION level: Exception Handling';
END;
$$ LANGUAGE plpgsql;
-- Call this stored procedure
CALL test_proc1(2);
-- Show raised messages with level higher than INFO
SELECT query, recordtime, loglevel, loglevel_text, trim(message) as message, aborted FROM svl_stored_proc_messages
WHERE loglevel > 30 AND query = 193 ORDER BY recordtime;
query | recordtime | loglevel | loglevel_text | message | aborted
-------+----------------------------+----------+---------------+-------------------------------------+---------
193 | 2020-03-17 23:57:18.277196 | 40 | NOTICE | Notice Level: Input f1 is 2 | 1
193 | 2020-03-17 23:57:18.277987 | 60 | EXCEPTION | EXCEPTION level: Exception Handling | 1
(2 rows)
-- Show raised messages at EXCEPTION level
SELECT query, recordtime, loglevel, loglevel_text, trim(message) as message, aborted FROM svl_stored_proc_messages
WHERE loglevel_text = 'EXCEPTION' AND query = 193 ORDER BY recordtime;
query | recordtime | loglevel | loglevel_text | message | aborted
-------+----------------------------+----------+---------------+-------------------------------------+---------
193 | 2020-03-17 23:57:18.277987 | 60 | EXCEPTION | EXCEPTION level: Exception Handling | 1
```
The following SQL statements show how to use SVL\_STORED\_PROC\_MESSAGES to review raised messages with the SET option when creating a stored procedure\. Because test\_proc\(\) has a minimum log level of NOTICE, only NOTICE, WARNING, and EXCEPTION level messages are logged in SVL\_STORED\_PROC\_MESSAGES\.
```
-- Create a stored procedure with minimum log level of NOTICE
CREATE OR REPLACE PROCEDURE test_proc() AS
$$
BEGIN
RAISE LOG 'Raise LOG messages';
RAISE INFO 'Raise INFO messages';
RAISE NOTICE 'Raise NOTICE messages';
RAISE WARNING 'Raise WARNING messages';
RAISE EXCEPTION 'Raise EXCEPTION messages';
RAISE WARNING 'Raise WARNING messages again'; -- not reachable
END;
$$ LANGUAGE plpgsql SET stored_proc_log_min_messages = NOTICE;
-- Call this stored procedure
CALL test_proc();
-- Show the raised messages
SELECT query, recordtime, loglevel_text, trim(message) as message, aborted FROM svl_stored_proc_messages
WHERE query = 149 ORDER BY recordtime;
query | recordtime | loglevel_text | message | aborted
-------+----------------------------+---------------+--------------------------+---------
149 | 2020-03-16 21:51:54.847627 | NOTICE | Raise NOTICE messages | 1
149 | 2020-03-16 21:51:54.84766 | WARNING | Raise WARNING messages | 1
149 | 2020-03-16 21:51:54.847668 | EXCEPTION | Raise EXCEPTION messages | 1
(3 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STORED_PROC_MESSAGES.md
|
603869a3ae3c-0
|
You can use the COPY command to load data in parallel from one or more remote hosts, such as Amazon Elastic Compute Cloud \(Amazon EC2\) instances or other computers\. COPY connects to the remote hosts using Secure Shell \(SSH\) and executes commands on the remote hosts to generate text output\. The remote host can be an EC2 Linux instance or another Unix or Linux computer configured to accept SSH connections\. Amazon Redshift can connect to multiple hosts, and can open multiple SSH connections to each host\. Amazon Redshift sends a unique command through each connection to generate text output to the host's standard output, which Amazon Redshift then reads as it does a text file\.
Use the FROM clause to specify the Amazon S3 object key for the manifest file that provides the information COPY uses to open SSH connections and execute the remote commands\.
**Topics**
+ [Syntax](#copy-parameters-data-source-ssh-syntax)
+ [Examples](#copy-parameters-data-source-ssh-examples)
+ [Parameters](#copy-parameters-data-source-ssh-parameters)
+ [Optional parameters](#copy-parameters-data-source-ssh-optional-parms)
+ [Unsupported parameters](#copy-parameters-data-source-ssh-unsupported-parms)
**Important**
If the S3 bucket that holds the manifest file doesn't reside in the same AWS Region as your cluster, you must use the REGION parameter to specify the Region in which the bucket is located\.
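For example, the following sketch adds REGION to a COPY from SSH; the bucket Region and other values are assumptions for illustration\.
```
-- Hypothetical: the manifest bucket is in us-east-2, the cluster is not
copy sales
from 's3://mybucket/ssh_manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
region 'us-east-2'
ssh;
```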
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-ssh.md
|
dbb08fd0b7f4-0
|
```
FROM 's3://copy_from_ssh_manifest_file'
authorization
SSH
| optional-parameters
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-ssh.md
|
79f0b5b37ce0-0
|
The following example uses a manifest file to load data from a remote host using SSH\.
```
copy sales
from 's3://mybucket/ssh_manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
ssh;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-ssh.md
|
aa4f6db79f55-0
|
FROM
The source of the data to be loaded\.
's3://*copy\_from\_ssh\_manifest\_file*' <a name="copy-ssh-manifest"></a>
The COPY command can connect to multiple hosts using SSH, and can create multiple SSH connections to each host\. COPY executes a command through each host connection, and then loads the output from the commands in parallel into the table\. The *s3://copy\_from\_ssh\_manifest\_file* argument specifies the Amazon S3 object key for the manifest file that provides the information COPY uses to open SSH connections and execute the remote commands\.
The *s3://copy\_from\_ssh\_manifest\_file* argument must explicitly reference a single file; it cannot be a key prefix\. The following shows an example:
```
's3://mybucket/ssh_manifest.txt'
```
The manifest file is a text file in JSON format that Amazon Redshift uses to connect to the host\. The manifest file specifies the SSH host endpoints and the commands that will be executed on the hosts to return data to Amazon Redshift\. Optionally, you can include the host public key, the login user name, and a mandatory flag for each entry\. The following example shows a manifest file that creates two SSH connections:
```
{
"entries": [
{"endpoint":"<ssh_endpoint_or_IP>",
"command": "<remote_command>",
"mandatory":true,
"publickey": “<public_key>”,
"username": “<host_user_name>”},
{"endpoint":"<ssh_endpoint_or_IP>",
"command": "<remote_command>",
"mandatory":true,
"publickey": “<public_key>”,
"username": “<host_user_name>”}
]
}
```
The manifest file contains one `"entries"` construct for each SSH connection\. You can have multiple connections to a single host or multiple connections to multiple hosts\. The double quote characters are required as shown, both for the field names and the values\. The quote characters must be simple quotation marks \(0x22\), not slanted or "smart" quotation marks\. The only value that doesn't need double quote characters is the Boolean value `true` or `false` for the `"mandatory"` field\.
The following list describes the fields in the manifest file\.
endpoint <a name="copy-ssh-manifest-endpoint"></a>
The URL or IP address of the host, for example `"ec2-111-222-333.compute-1.amazonaws.com"` or `"198.51.100.0"`\.
command <a name="copy-ssh-manifest-command"></a>
The command to be executed by the host to generate text output or binary output in gzip, lzop, bzip2, or zstd format\. The command can be any command that the user *"host\_user\_name"* has permission to run\. The command can be as simple as printing a file, or it can query a database or launch a script\. The output \(text file, or gzip, lzop, bzip2, or zstd binary file\) must be in a form that the Amazon Redshift COPY command can ingest\. For more information, see [Preparing your input data](t_preparing-input-data.md)\.
publickey <a name="copy-ssh-manifest-publickey"></a>
\(Optional\) The public key of the host\. If provided, Amazon Redshift will use the public key to identify the host\. If the public key isn't provided, Amazon Redshift will not attempt host identification\. For example, if the remote host's public key is `ssh-rsa AbcCbaxxx…Example root@amazon.com`, type the following text in the public key field: `"AbcCbaxxx…Example"`
mandatory <a name="copy-ssh-manifest-mandatory"></a>
\(Optional\) A Boolean value that indicates whether the COPY command should fail if the connection attempt to this endpoint fails\. The default is `false`\. Regardless of the `mandatory` setting, if Amazon Redshift doesn't successfully make at least one connection, the COPY command fails\.
username <a name="copy-ssh-manifest-username"></a>
\(Optional\) The user name that will be used to log on to the host system and execute the remote command\. The user login name must be the same as the login that was used to add the Amazon Redshift cluster's public key to the host's authorized keys file\. The default username is `redshift`\.
For more information about creating a manifest file, see [Loading data process](loading-data-from-remote-hosts.md#load-from-host-process)\.
To COPY from a remote host, the SSH parameter must be specified with the COPY command\. If the SSH parameter isn't specified, COPY assumes that the file specified with FROM is a data file and will fail\.
If you use automatic compression, the COPY command performs two data read operations, which means it will execute the remote command twice\. The first read operation is to provide a data sample for compression analysis, then the second read operation actually loads the data\. If executing the remote command twice might cause a problem, you should disable automatic compression\. To disable automatic compression, run the COPY command with the COMPUPDATE parameter set to OFF\. For more information, see [Loading tables with automatic compression](c_Loading_tables_auto_compress.md)\.
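For example, the following sketch disables automatic compression so that the remote command runs only once; the table, bucket, and role are reused from the earlier example for illustration\.
```
-- Sketch: COMPUPDATE OFF avoids the extra sampling read over SSH
copy sales
from 's3://mybucket/ssh_manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
ssh
compupdate off;
```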
For detailed procedures for using COPY from SSH, see [Loading data from remote hosts](loading-data-from-remote-hosts.md)\.
*authorization*
The COPY command needs authorization to access data in another AWS resource, including in Amazon S3, Amazon EMR, Amazon DynamoDB, and Amazon EC2\. You can provide that authorization by referencing an AWS Identity and Access Management \(IAM\) role that is attached to your cluster \(role\-based access control\) or by providing the access credentials for an IAM user \(key\-based access control\)\. For increased security and flexibility, we recommend using IAM role\-based access control\. For more information, see [Authorization parameters](copy-parameters-authorization.md)\.
SSH <a name="copy-ssh"></a>
A clause that specifies that data is to be loaded from a remote host using the SSH protocol\. If you specify SSH, you must also provide a manifest file using the [s3://copy_from_ssh_manifest_file](#copy-ssh-manifest) argument\.
If you are using SSH to copy from a host using a private IP address in a remote VPC, the VPC must have enhanced VPC routing enabled\. For more information about Enhanced VPC routing, see [Amazon Redshift Enhanced VPC Routing](https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-ssh.md
|
bcbe294c6181-0
|
You can optionally specify the following parameters with COPY from SSH:
+ [Column mapping options](copy-parameters-column-mapping.md)
+ [Data format parameters](copy-parameters-data-format.md#copy-data-format-parameters)
+ [Data conversion parameters](copy-parameters-data-conversion.md)
+ [Data load operations](copy-parameters-data-load.md)
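As an illustration, the following sketch combines COPY from SSH with data format parameters; the gzip compression and pipe delimiter are assumptions about what the remote command emits\.
```
-- Hypothetical: the remote command emits gzip-compressed, pipe-delimited rows
copy sales
from 's3://mybucket/ssh_manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
ssh
gzip
delimiter '|';
```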
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-ssh.md
|
9dafb9ce6e1c-0
|
You cannot use the following parameters with COPY from SSH:
+ ENCRYPTED
+ MANIFEST
+ READRATIO
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-ssh.md
|