By default, Amazon Redshift creates external tables with the pseudocolumns *$path* and *$size*\. Select these columns to view the path to the data files on Amazon S3 and the size of the data files for each row returned by a query\. The *$path* and *$size* column names must be delimited with double quotation marks\. A *SELECT \** clause doesn't return the pseudocolumns\. You must explicitly include the *$path* and *$size* column names in your query, as the following example shows\.
```
select "$path", "$size"
from spectrum.sales_part
where saledate = '2008-12-01';
```
You can disable creation of pseudocolumns for a session by setting the *spectrum\_enable\_pseudo\_columns* configuration parameter to *false*\.
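For example, the following statement turns off creation of the pseudocolumns for the current session\. This is a sketch of the SET syntax; verify the parameter name on your cluster\.
```
set spectrum_enable_pseudo_columns to false;
```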
**Important**
Selecting *$size* or *$path* incurs charges because Redshift Spectrum scans the data files on Amazon S3 to determine the size of the result set\. For more information, see [Amazon Redshift Pricing](https://aws.amazon.com/redshift/pricing/)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_TABLE.md
The following example creates a table named SALES in the Amazon Redshift external schema named `spectrum`\. The data is in tab\-delimited text files\. The TABLE PROPERTIES clause sets the numRows property to 170,000 rows\.
```
create external table spectrum.sales(
salesid integer,
listid integer,
sellerid integer,
buyerid integer,
eventid integer,
saledate date,
qtysold smallint,
pricepaid decimal(8,2),
commission decimal(8,2),
saletime timestamp)
row format delimited
fields terminated by '\t'
stored as textfile
location 's3://awssampledbuswest2/tickit/spectrum/sales/'
table properties ('numRows'='170000');
```
The following example creates a table that uses the JsonSerDe to reference data in JSON format\.
```
create external table spectrum.cloudtrail_json (
event_version int,
event_id bigint,
event_time timestamp,
event_type varchar(10),
awsregion varchar(20),
event_name varchar(max),
event_source varchar(max),
requesttime timestamp,
useragent varchar(max),
recipientaccountid bigint)
row format serde 'org.openx.data.jsonserde.JsonSerDe'
with serdeproperties (
'dots.in.keys' = 'true',
'mapping.requesttime' = 'requesttimestamp'
) location 's3://mybucket/json/cloudtrail';
```
The following CREATE EXTERNAL TABLE AS example creates a nonpartitioned external table\. Then it writes the result of the SELECT query as Apache Parquet to the target Amazon S3 location\.
```
CREATE EXTERNAL TABLE spectrum.lineitem
STORED AS parquet
LOCATION 'S3://mybucket/cetas/lineitem/'
AS SELECT * FROM local_lineitem;
```
The following example creates a partitioned external table and includes the partition columns in the SELECT query\.
```
CREATE EXTERNAL TABLE spectrum.partitioned_lineitem
PARTITIONED BY (l_shipdate, l_shipmode)
STORED AS parquet
LOCATION 'S3://mybucket/cetas/partitioned_lineitem/'
AS SELECT l_orderkey, l_shipmode, l_shipdate, l_partkey FROM local_table;
```
For a list of existing databases in the external data catalog, query the [SVV\_EXTERNAL\_DATABASES](r_SVV_EXTERNAL_DATABASES.md) system view\.
```
select eskind,databasename,esoptions from svv_external_databases order by databasename;
```
```
eskind | databasename | esoptions
-------+--------------+----------------------------------------------------------------------------------
1 | default | {"REGION":"us-west-2","IAM_ROLE":"arn:aws:iam::123456789012:role/mySpectrumRole"}
1 | sampledb | {"REGION":"us-west-2","IAM_ROLE":"arn:aws:iam::123456789012:role/mySpectrumRole"}
1 | spectrumdb | {"REGION":"us-west-2","IAM_ROLE":"arn:aws:iam::123456789012:role/mySpectrumRole"}
```
To view details of external tables, query the [SVV\_EXTERNAL\_TABLES](r_SVV_EXTERNAL_TABLES.md) and [SVV\_EXTERNAL\_COLUMNS](r_SVV_EXTERNAL_COLUMNS.md) system views\.
The following example queries the SVV\_EXTERNAL\_TABLES view\.
```
select schemaname, tablename, location from svv_external_tables;
```
```
schemaname | tablename | location
-----------+----------------------+--------------------------------------------------------
spectrum | sales | s3://awssampledbuswest2/tickit/spectrum/sales
spectrum | sales_part | s3://awssampledbuswest2/tickit/spectrum/sales_partition
```
The following example queries the SVV\_EXTERNAL\_COLUMNS view\.
```
select * from svv_external_columns where schemaname like 'spectrum%' and tablename ='sales';
```
```
schemaname | tablename | columnname | external_type | columnnum | part_key
-----------+-----------+------------+---------------+-----------+---------
spectrum | sales | salesid | int | 1 | 0
spectrum | sales | listid | int | 2 | 0
spectrum | sales | sellerid | int | 3 | 0
spectrum | sales | buyerid | int | 4 | 0
spectrum | sales | eventid | int | 5 | 0
spectrum | sales | saledate | date | 6 | 0
spectrum | sales | qtysold | smallint | 7 | 0
spectrum | sales | pricepaid | decimal(8,2) | 8 | 0
spectrum | sales | commission | decimal(8,2) | 9 | 0
spectrum | sales | saletime | timestamp | 10 | 0
```
To view table partitions, use the following query\.
```
select schemaname, tablename, values, location
from svv_external_partitions
where tablename = 'sales_part';
```
```
schemaname | tablename | values | location
-----------+------------+----------------+-------------------------------------------------------------------------
spectrum | sales_part | ["2008-01-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01
spectrum | sales_part | ["2008-02-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02
spectrum | sales_part | ["2008-03-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-03
spectrum | sales_part | ["2008-04-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-04
spectrum | sales_part | ["2008-05-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-05
spectrum | sales_part | ["2008-06-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-06
spectrum | sales_part | ["2008-07-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-07
spectrum | sales_part | ["2008-08-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-08
spectrum | sales_part | ["2008-09-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-09
spectrum | sales_part | ["2008-10-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-10
spectrum | sales_part | ["2008-11-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-11
spectrum | sales_part | ["2008-12-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-12
```
The following example returns the total size of related data files for an external table\.
```
select distinct "$path", "$size"
from spectrum.sales_part;
$path | $size
---------------------------------------+-------
s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01/ | 1616
s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02/ | 1444
```
To create an external table partitioned by date, run the following command\.
```
create external table spectrum.sales_part(
salesid integer,
listid integer,
sellerid integer,
buyerid integer,
eventid integer,
dateid smallint,
qtysold smallint,
pricepaid decimal(8,2),
commission decimal(8,2),
saletime timestamp)
partitioned by (saledate date)
row format delimited
fields terminated by '|'
stored as textfile
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/'
table properties ('numRows'='170000');
```
To add the partitions, run the following ALTER TABLE commands\.
```
alter table spectrum.sales_part
add if not exists partition (saledate='2008-01-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-02-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-03-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-03/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-04-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-04/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-05-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-05/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-06-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-06/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-07-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-07/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-08-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-08/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-09-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-09/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-10-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-10/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-11-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-11/';
alter table spectrum.sales_part
add if not exists partition (saledate='2008-12-01')
location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-12/';
```
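Partitions that are no longer needed can be removed with ALTER TABLE DROP PARTITION\. The following is a sketch; confirm the exact syntax in the ALTER TABLE reference for external tables\.
```
alter table spectrum.sales_part
drop partition (saledate='2008-01-01');
```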
To select data from the partitioned table, run the following query\.
```
select top 10 spectrum.sales_part.eventid, sum(spectrum.sales_part.pricepaid)
from spectrum.sales_part, event
where spectrum.sales_part.eventid = event.eventid
and spectrum.sales_part.pricepaid > 30
and saledate = '2008-12-01'
group by spectrum.sales_part.eventid
order by 2 desc;
```
```
eventid | sum
--------+---------
914 | 36173.00
5478 | 27303.00
5061 | 26383.00
4406 | 26252.00
5324 | 24015.00
1829 | 23911.00
3601 | 23616.00
3665 | 23214.00
6069 | 22869.00
5638 | 22551.00
```
To view external table partitions, query the [SVV\_EXTERNAL\_PARTITIONS](r_SVV_EXTERNAL_PARTITIONS.md) system view\.
```
select schemaname, tablename, values, location from svv_external_partitions
where tablename = 'sales_part';
```
```
schemaname | tablename | values | location
-----------+------------+----------------+--------------------------------------------------
spectrum | sales_part | ["2008-01-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-01
spectrum | sales_part | ["2008-02-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02
spectrum | sales_part | ["2008-03-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-03
spectrum | sales_part | ["2008-04-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-04
spectrum | sales_part | ["2008-05-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-05
spectrum | sales_part | ["2008-06-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-06
spectrum | sales_part | ["2008-07-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-07
spectrum | sales_part | ["2008-08-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-08
spectrum | sales_part | ["2008-09-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-09
spectrum | sales_part | ["2008-10-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-10
spectrum | sales_part | ["2008-11-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-11
spectrum | sales_part | ["2008-12-01"] | s3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-12
```
The following shows an example of specifying the ROW FORMAT SERDE parameters for data files stored in AVRO format\.
```
create external table spectrum.sales(salesid int, listid int, sellerid int, buyerid int, eventid int, dateid int, qtysold int, pricepaid decimal(8,2), comment VARCHAR(255))
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES ('avro.schema.literal'='{\"namespace\": \"dory.sample\",\"name\": \"dory_avro\",\"type\": \"record\", \"fields\": [{\"name\":\"salesid\", \"type\":\"int\"},
{\"name\":\"listid\", \"type\":\"int\"},
{\"name\":\"sellerid\", \"type\":\"int\"},
{\"name\":\"buyerid\", \"type\":\"int\"},
{\"name\":\"eventid\",\"type\":\"int\"},
{\"name\":\"dateid\",\"type\":\"int\"},
{\"name\":\"qtysold\",\"type\":\"int\"},
{\"name\":\"pricepaid\", \"type\": {\"type\": \"bytes\", \"logicalType\": \"decimal\", \"precision\": 8, \"scale\": 2}}, {\"name\":\"comment\",\"type\":\"string\"}]}')
STORED AS AVRO
location 's3://mybucket/avro/sales';
```
The following shows an example of specifying the ROW FORMAT SERDE parameters using RegEx\.
```
create external table spectrum.types(
cbigint bigint,
cbigint_null bigint,
cint int,
cint_null int)
row format serde 'org.apache.hadoop.hive.serde2.RegexSerDe'
with serdeproperties ('input.regex'='([^\\x01]+)\\x01([^\\x01]+)\\x01([^\\x01]+)\\x01([^\\x01]+)')
stored as textfile
location 's3://mybucket/regex/types';
```
The following shows an example of specifying the ROW FORMAT SERDE parameters using Grok\.
```
create external table spectrum.grok_log(
timestamp varchar(255),
pid varchar(255),
loglevel varchar(255),
progname varchar(255),
message varchar(255))
row format serde 'com.amazonaws.glue.serde.GrokSerDe'
with serdeproperties ('input.format'='[DFEWI], \\[%{TIMESTAMP_ISO8601:timestamp} #%{POSINT:pid:int}\\] *(?<loglevel>:DEBUG|FATAL|ERROR|WARN|INFO) -- +%{DATA:progname}: %{GREEDYDATA:message}')
stored as textfile
location 's3://mybucket/grok/logs';
```
The following shows an example of defining an Amazon S3 server access log in an S3 bucket\. You can use Redshift Spectrum to query Amazon S3 access logs\.
```
CREATE EXTERNAL TABLE spectrum.mybucket_s3_logs(
bucketowner varchar(255),
bucket varchar(255),
requestdatetime varchar(2000),
remoteip varchar(255),
requester varchar(255),
requested varchar(255),
operation varchar(255),
key varchar(255),
requesturi_operation varchar(255),
requesturi_key varchar(255),
requesturi_httpprotoversion varchar(255),
httpstatus varchar(255),
errorcode varchar(255),
bytessent bigint,
objectsize bigint,
totaltime varchar(255),
turnaroundtime varchar(255),
referrer varchar(255),
useragent varchar(255),
versionid varchar(255)
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
'input.regex' = '([^ ]*) ([^ ]*) \\[(.*?)\\] ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) \"([^ ]*)\\s*([^ ]*)\\s*([^ ]*)\" (- |[^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\") ([^ ]*).*$')
LOCATION 's3://mybucket/s3logs';
```
DATE\_CMP compares two dates\. The function returns `0` if the dates are identical, `1` if *date1* is greater, and `-1` if *date2* is greater\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP.md
```
DATE_CMP(date1, date2)
```
*date1*
A date column or an expression that implicitly converts to a date\.
*date2*
A date column or an expression that implicitly converts to a date\.
The DATE\_CMP function returns an INTEGER\.
The following query compares the CALDATE column to the date January 4, 2008 and returns whether the value in CALDATE is before \(`-1`\), equal to \(`0`\), or after \(`1`\) January 4, 2008:
```
select caldate, '2008-01-04',
date_cmp(caldate,'2008-01-04')
from date
order by dateid
limit 10;
caldate | ?column? | date_cmp
-----------+------------+----------
2008-01-01 | 2008-01-04 | -1
2008-01-02 | 2008-01-04 | -1
2008-01-03 | 2008-01-04 | -1
2008-01-04 | 2008-01-04 | 0
2008-01-05 | 2008-01-04 | 1
2008-01-06 | 2008-01-04 | 1
2008-01-07 | 2008-01-04 | 1
2008-01-08 | 2008-01-04 | 1
2008-01-09 | 2008-01-04 | 1
2008-01-10 | 2008-01-04 | 1
(10 rows)
```
Converts a string to lowercase\. LOWER supports UTF\-8 multibyte characters, up to a maximum of four bytes per character\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LOWER.md
```
LOWER(string)
```
*string*
The input parameter is a CHAR or VARCHAR string\.
The LOWER function returns a character string that is the same data type as the input string \(CHAR or VARCHAR\)\.
The following example converts the CATNAME field to lowercase:
```
select catname, lower(catname) from category order by 1,2;
catname | lower
----------+-----------
Classical | classical
Jazz | jazz
MLB | mlb
MLS | mls
Musicals | musicals
NBA | nba
NFL | nfl
NHL | nhl
Opera | opera
Plays | plays
Pop | pop
(11 rows)
```
You can see the steps in a query plan by running the EXPLAIN command\. The following example shows an SQL query and explains the output\. Reading the query plan from the bottom up, you can see each of the logical operations used to perform the query\. For more information, see [Query plan](c-the-query-plan.md)\.
```
explain
select eventname, sum(pricepaid) from sales, event
where sales.eventid = event.eventid
group by eventname
order by 2 desc;
```
```
XN Merge (cost=1002815366604.92..1002815366606.36 rows=576 width=27)
Merge Key: sum(sales.pricepaid)
-> XN Network (cost=1002815366604.92..1002815366606.36 rows=576 width=27)
Send to leader
-> XN Sort (cost=1002815366604.92..1002815366606.36 rows=576 width=27)
Sort Key: sum(sales.pricepaid)
-> XN HashAggregate (cost=2815366577.07..2815366578.51 rows=576 width=27)
-> XN Hash Join DS_BCAST_INNER (cost=109.98..2815365714.80 rows=172456 width=27)
Hash Cond: ("outer".eventid = "inner".eventid)
-> XN Seq Scan on sales (cost=0.00..1724.56 rows=172456 width=14)
-> XN Hash (cost=87.98..87.98 rows=8798 width=21)
-> XN Seq Scan on event (cost=0.00..87.98 rows=8798 width=21)
```
As part of generating a query plan, the query optimizer breaks down the plan into streams, segments, and steps\. The query optimizer breaks the plan down to prepare for distributing the data and query workload to the compute nodes\. For more information about streams, segments, and steps, see [Query planning and execution workflow](c-query-planning.md)\.
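One way to see this breakdown for a completed query is to inspect the stream \(stm\), segment \(seg\), and step columns in the SVL\_QUERY\_SUMMARY system view\. The following is a sketch that assumes the query ran in the current session\.
```
select stm, seg, step, label
from svl_query_summary
where query = pg_last_query_id()
order by stm, seg, step;
```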
The following illustration shows the preceding query and associated query plan\. It displays how the query operations involved map to steps that Amazon Redshift uses to generate compiled code for the compute node slices\. Each query plan operation maps to multiple steps within the segments, and sometimes to multiple segments within the streams\.
![Query plan operations mapped to streams, segments, and steps](http://docs.aws.amazon.com/redshift/latest/dg/images/map-plan-to-streams.png)
In this illustration, the query optimizer runs the query plan as follows:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/reviewing-query-plan-steps.md
1. In `Stream 0`, the query runs `Segment 0` with a sequential scan operation to scan the `event` table\. The query continues to `Segment 1` with a hash operation to create the hash table for the inner table in the join\.
1. In `Stream 1`, the query runs `Segment 2` with a sequential scan operation to scan the `sales` table\. It continues with `Segment 2` with a hash join to join tables where the join columns are not both distribution keys and sort keys\. It again continues with `Segment 2` with a hash aggregate to aggregate results\. Then the query runs `Segment 3` with a hash aggregate operation to perform unsorted grouped aggregate functions and a sort operation to evaluate the ORDER BY clause and other sort operations\.
1. In `Stream 2`, the query runs a network operation in `Segment 4` and `Segment 5` to send intermediate results to the leader node for further processing\.
The last segment of a query returns the data\. If the return set is aggregated or sorted, the compute nodes each send their piece of the intermediate result to the leader node\. The leader node then merges the data so the final result can be sent back to the requesting client\.
For more information about EXPLAIN operators, see [EXPLAIN](r_EXPLAIN.md)\.
System views contain a subset of data found in several of the STL views and STV system tables\.
These views provide quicker and easier access to commonly queried data found in those tables\.
System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters\. The views are similar to the views with the prefix SVL except that the SVL views provide information only for queries run on the main cluster\.
**Topics**
+ [STL views for logging](c_intro_STL_tables.md)
+ [SVCS views](svcs_views.md)
+ [SVL views](svl_views.md)
+ [SVV views](svv_views.md)
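For example, to see query summary details that cover both the main cluster and concurrency scaling clusters, you can query the SVCS counterpart of SVL\_QUERY\_SUMMARY\. The following is a sketch; confirm the view and column names in your cluster's catalog\.
```
select query, stm, seg, step, rows
from svcs_query_summary
where query = pg_last_query_id()
order by stm, seg, step;
```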
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_system_views.md
**Topics**
+ [Permissions to access other AWS Resources](copy-usage_notes-access-permissions.md)
+ [Loading multibyte data from Amazon S3](copy-usage_notes-multi-byte.md)
+ [Loading a column of the GEOMETRY data type](copy-usage_notes-spatial-data.md)
+ [Errors when reading multiple files](copy-usage_notes-multiple-files.md)
+ [COPY from JSON format](copy-usage_notes-copy-from-json.md)
+ [COPY from columnar data formats](copy-usage_notes-copy-from-columnar.md)
+ [DATEFORMAT and TIMEFORMAT strings](r_DATEFORMAT_and_TIMEFORMAT_strings.md)
+ [Using automatic recognition with DATEFORMAT and TIMEFORMAT](automatic-recognition.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COPY_usage_notes.md
Records transfer time and other performance metrics\.
Use the STL\_S3CLIENT table to find the time spent transferring data from Amazon S3 as part of a COPY command\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_S3CLIENT.md
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_S3CLIENT.html)
The following query returns the time taken to load files using a COPY command\.
```
select slice, key, transfer_time
from stl_s3client
where query = pg_last_copy_id();
```
Result
```
slice | key | transfer_time
------+-----------------------------+---------------
0 | listing10M0003_part_00 | 16626716
1 | listing10M0001_part_00 | 12894494
2 | listing10M0002_part_00 | 14320978
3 | listing10M0000_part_00 | 11293439
3371 | prefix=listing10M;marker= | 99395
```
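To summarize a load rather than list individual files, you can aggregate over the same table\. The following sketch assumes the *transfer\_size* column \(bytes\) in STL\_S3CLIENT\.
```
select query,
       count(*) as files,
       sum(transfer_size)/1024/1024 as total_mb,
       sum(transfer_time)/1000000.0 as total_seconds
from stl_s3client
where query = pg_last_copy_id()
group by query;
```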
The following query converts the `start_time` and `end_time` to a timestamp\.
```
select userid,query,slice,pid,recordtime,start_time,end_time,
'2000-01-01'::timestamp + (start_time/1000000.0)* interval '1 second' as start_ts,
'2000-01-01'::timestamp + (end_time/1000000.0)* interval '1 second' as end_ts
from stl_s3client where query> -1 limit 5;
```
```
userid | query | slice | pid | recordtime | start_time | end_time | start_ts | end_ts
--------+-------+-------+-------+----------------------------+-----------------+-----------------+----------------------------+----------------------------
0 | 0 | 0 | 23449 | 2019-07-14 16:27:17.207839 | 616436837154256 | 616436837207838 | 2019-07-14 16:27:17.154256 | 2019-07-14 16:27:17.207838
0 | 0 | 0 | 23449 | 2019-07-14 16:27:17.252521 | 616436837208208 | 616436837252520 | 2019-07-14 16:27:17.208208 | 2019-07-14 16:27:17.25252
0 | 0 | 0 | 23449 | 2019-07-14 16:27:17.284376 | 616436837208460 | 616436837284374 | 2019-07-14 16:27:17.20846 | 2019-07-14 16:27:17.284374
0 | 0 | 0 | 23449 | 2019-07-14 16:27:17.285307 | 616436837208980 | 616436837285306 | 2019-07-14 16:27:17.20898 | 2019-07-14 16:27:17.285306
0 | 0 | 0 | 23449 | 2019-07-14 16:27:17.353853 | 616436837302216 | 616436837353851 | 2019-07-14 16:27:17.302216 | 2019-07-14 16:27:17.353851
```
Returns an integer corresponding to the slice number in the cluster where the data for a row is located\. SLICE\_NUM takes no parameters\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SLICE_NUM.md
```
SLICE_NUM()
```
The SLICE\_NUM function returns an integer\.
The following example shows which slices contain data for the first ten EVENT rows in the EVENTS table:
```
select distinct eventid, slice_num() from event order by eventid limit 10;
eventid | slice_num
---------+-----------
1 | 1
2 | 2
3 | 3
4 | 0
5 | 1
6 | 2
7 | 3
8 | 0
9 | 1
10 | 2
(10 rows)
```
The following example returns a code \(10000\) to show that a query without a FROM statement executes on the leader node:
```
select slice_num();
slice_num
-----------
10000
(1 row)
```
ST\_DistanceSphere returns the distance between two point geometries lying on a sphere\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_DistanceSphere-function.md
|
4ac87d3fce8b-0
|
```
ST_DistanceSphere(geom1, geom2)
```
```
ST_DistanceSphere(geom1, geom2, radius)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_DistanceSphere-function.md
|
b301e82c0b7c-0
|
*geom1*
A point value in degrees of data type `GEOMETRY` lying on a sphere\. The first coordinate of the point is the longitude value\. The second coordinate of the point is the latitude value\.
*geom2*
A point value in degrees of data type `GEOMETRY` lying on a sphere\. The first coordinate of the point is the longitude value\. The second coordinate of the point is the latitude value\.
*radius*
The radius of a sphere of data type `DOUBLE PRECISION`\. If no *radius* is provided, the sphere defaults to Earth and the radius is computed from the World Geodetic System \(WGS\) 84 representation of the ellipsoid\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_DistanceSphere-function.md
|
2e477210699f-0
|
`DOUBLE PRECISION` in the same units as the radius\.
If *geom1* or *geom2* is null or empty, then null is returned\.
If no *radius* is provided, then the result is in meters along the Earth's surface\.
If *radius* is a negative number, then an error is returned\.
If *geom1* and *geom2* don't have the same value for the spatial reference system identifier \(SRID\), then an error is returned\.
If *geom1* or *geom2* is not a point, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_DistanceSphere-function.md
|
c2c8f42a16bd-0
|
The following example SQL computes the distances in kilometers between three airport locations in Germany: Berlin Tegel \(TXL\), Munich International \(MUC\), and Frankfurt International \(FRA\)\.
```
WITH airports_raw(code,lon,lat) AS (
(SELECT 'MUC', 11.786111, 48.353889) UNION
(SELECT 'FRA', 8.570556, 50.033333) UNION
(SELECT 'TXL', 13.287778, 52.559722)),
airports1(code,location) AS (SELECT code, ST_Point(lon, lat) FROM airports_raw),
airports2(code,location) AS (SELECT * from airports1)
SELECT (airports1.code || ' <-> ' || airports2.code) AS airports,
round(ST_DistanceSphere(airports1.location, airports2.location) / 1000, 0) AS distance_in_km
FROM airports1, airports2 WHERE airports1.code < airports2.code ORDER BY 1;
```
```
airports | distance_in_km
-------------+----------------
FRA <-> MUC | 299
FRA <-> TXL | 432
MUC <-> TXL | 480
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_DistanceSphere-function.md
|
4145c8beafd8-0
|
Removes a user\-defined function \(UDF\) from the database\. The function's signature, or list of argument data types, must be specified because multiple functions can exist with the same name but different signatures\. You can't drop an Amazon Redshift built\-in function\.
This command isn't reversible\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_FUNCTION.md
|
acc775bf0f37-0
|
```
DROP FUNCTION name
( [arg_name] arg_type [, ...] )
[ CASCADE | RESTRICT ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_FUNCTION.md
|
a8f8362d7289-0
|
*name*
The name of the function to be removed\.
*arg\_name*
The name of an input argument\. DROP FUNCTION ignores argument names, because only the argument data types are needed to determine the function's identity\.
*arg\_type*
The data type of the input argument\. You can supply a comma\-separated list with a maximum of 32 data types\.
CASCADE
Keyword specifying to automatically drop objects that depend on the function, such as views\.
To create a view that isn't dependent on a function, include the WITH NO SCHEMA BINDING clause in the view definition\. For more information, see [CREATE VIEW](r_CREATE_VIEW.md)\.
RESTRICT
Keyword specifying that if any objects depend on the function, do not drop the function and return a message\. This action is the default\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_FUNCTION.md
|
9de75270c667-0
|
The following example drops the function named `f_sqrt`:
```
drop function f_sqrt(int);
```
To remove a function that has dependencies, use the CASCADE option, as shown in the following example:
```
drop function f_sqrt(int) cascade;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_FUNCTION.md
|
d56995e5ac4a-0
|
Amazon Redshift has many system tables and views that contain information about how the system is functioning\. You can query these system tables and views the same way that you would query any other database tables\. This section shows some sample system table queries and explains:
+ How different types of system tables and views are generated
+ What types of information you can obtain from these tables
+ How to join Amazon Redshift system tables to catalog tables
+ How to manage the growth of system table log files
Some system tables can only be used by AWS staff for diagnostic purposes\. The following sections discuss the system tables that can be queried for useful information by system administrators or other database users\.
**Note**
System tables are not included in automated or manual cluster backups \(snapshots\)\. STL log views only retain approximately two to five days of log history, depending on log usage and available disk space\. If you want to retain the log data, you will need to periodically copy it to other tables or unload it to Amazon S3\.
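One way to retain the log data is to periodically copy it into a regular table, sketched below\. The `admin` schema and table name are hypothetical, not part of Amazon Redshift\.
```
-- Hypothetical sketch: snapshot STL_QUERY into a regular table before the
-- log history rotates out.
create table admin.query_history as
select * from stl_query;

-- On subsequent runs, append only rows newer than what was already saved.
insert into admin.query_history
select q.*
from stl_query q
where q.starttime > (select max(starttime) from admin.query_history);
```
Alternatively, unload the selected rows to Amazon S3 for long\-term retention\.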
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_system_tables.md
|
f1c0c9ba1834-0
|
If you maintain data for a rolling time period, use a series of tables, as the following diagram illustrates\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/vacuum-example-unsorted-region-copy-time-series.png)
Create a new table each time you add a set of data, then delete the oldest table in the series\. You gain a double benefit:
+ You avoid the added cost of deleting rows, because a DROP TABLE operation is much more efficient than a mass DELETE\.
+ If the tables are sorted by timestamp, no vacuum is needed\. If each table contains data for one month, a vacuum will at most have to rewrite one month’s worth of data, even if the tables are not sorted by timestamp\.
You can create a UNION ALL view for use by reporting queries that hides the fact that the data is stored in multiple tables\. If a query filters on the sort key, the query planner can efficiently skip all the tables that aren't used\. A UNION ALL can be less efficient for other types of queries, so you should evaluate query performance in the context of all queries that use the tables\.
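A minimal sketch of this pattern, assuming hypothetical monthly tables named `sales_202001` through `sales_202003`:
```
-- Late-binding view over the monthly tables; WITH NO SCHEMA BINDING lets
-- member tables be dropped and recreated without invalidating the view's
-- dependents.
create view sales_recent as
select * from sales_202001
union all
select * from sales_202002
union all
select * from sales_202003
with no schema binding;
```
To roll the window, create the next month's table, recreate the view with the new member list, and then drop the oldest table\.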
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/vacuum-time-series-tables.md
|
de32116a85e4-0
|
Converts a string expression of a number of the specified base to the equivalent integer value\. The converted value must be within the signed 64\-bit range\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STRTOL.md
|
7884c7c3f4ec-0
|
```
STRTOL(num_string, base)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STRTOL.md
|
994d690fdbd4-0
|
*num\_string*
String expression of a number to be converted\. If *num\_string* is empty \( `''` \) or begins with the null character \(`'\0'`\), the converted value is 0\. If *num\_string* is a column containing a NULL value, STRTOL returns NULL\. The string can begin with any amount of white space, optionally followed by a single plus '`+`' or minus '`-`' sign to indicate positive or negative\. The default is '`+`'\. If *base* is `16`, the string can optionally begin with '`0x`'\.
*base*
Integer between 2 and 36\.
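The following hedged example illustrates the white space and sign handling described above\. Binary `-101` is decimal `-5`\.
```
select strtol('  -101', 2);
strtol
--------
-5
```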
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STRTOL.md
|
efeab4ba1f17-0
|
BIGINT\. If *num\_string* is null, returns NULL\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STRTOL.md
|
ab33c0643cc1-0
|
The following examples convert string and base value pairs to integers:
```
select strtol('0xf',16);
strtol
--------
15
(1 row)
select strtol('abcd1234',16);
strtol
------------
2882343476
(1 row)
select strtol('1234567', 10);
strtol
---------
1234567
(1 row)
select strtol('1234567', 8);
strtol
--------
342391
(1 row)
select strtol('110101', 2);
strtol
--------
53
select strtol('\0', 2);
strtol
--------
0
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STRTOL.md
|
fc8d3ea90555-0
|
As you plan your database, certain key table design decisions heavily influence overall query performance\. These design choices also have a significant effect on storage requirements, which in turn affects query performance by reducing the number of I/O operations and minimizing the memory required to process queries\.
This section summarizes the most important design decisions and presents best practices for optimizing query performance\. [Designing tables](t_Creating_tables.md) provides more detailed explanations and examples of table design options\.
**Topics**
+ [Take the tuning table design tutorial](c_best-practices-tutorial-tuning-tables.md)
+ [Choose the best sort key](c_best-practices-sort-key.md)
+ [Choose the best distribution style](c_best-practices-best-dist-key.md)
+ [Let COPY choose compression encodings](c_best-practices-use-auto-compression.md)
+ [Define primary key and foreign key constraints](c_best-practices-defining-constraints.md)
+ [Use the smallest possible column size](c_best-practices-smallest-column-size.md)
+ [Use date/time data types for date columns](c_best-practices-timestamp-date-columns.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_designing-tables-best-practices.md
|
eebc8c5effdc-0
|
The STV\_MV\_INFO table contains a row for every materialized view, whether the data is stale, and state information\.
For more information about materialized views, see [Creating materialized views in Amazon Redshift](materialized-view-overview.md)\.
STV\_MV\_INFO is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_MV_INFO.md
|
913315036a08-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_MV_INFO.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_MV_INFO.md
|
2be7cad4f80b-0
|
To view the state of all materialized views, run the following query\.
```
select * from stv_mv_info;
```
This query returns the following sample output\.
```
 db_name |   schema   |  name  | updated_upto_xid | is_stale | owner_user_name | state
---------+------------+--------+------------------+----------+-----------------+-------
 dev     | test_setup | mv     |             1031 | f        | johndoea        | 1
 dev     | test_setup | old_mv |              988 | t        | paul            | 1
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_MV_INFO.md
|
536f64073cd2-0
|
ST\_PointN returns a point in a linestring as specified by an index value\. Negative index values are counted backward from the end of the linestring, so that \-1 is the last point\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_PointN-function.md
|
726452fc87ab-0
|
```
ST_PointN(geom, index)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_PointN-function.md
|
4301a9bdd77e-0
|
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. The subtype must be `LINESTRING`\.
*index*
A value of data type `INTEGER` that represents the index of a point in a linestring\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_PointN-function.md
|
05d006d143ff-0
|
`GEOMETRY` of subtype `POINT`\.
The spatial reference system identifier \(SRID\) value of the returned geometry is set to 0\.
If *geom* or *index* is null, then null is returned\.
If *index* is out of range, then null is returned\.
If *geom* is empty, then null is returned\.
If *geom* is not a `LINESTRING`, then null is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_PointN-function.md
|
8754b0d0fb22-0
|
The following SQL converts the extended well\-known text \(EWKT\) representation of a six\-point `LINESTRING` to a `GEOMETRY` object and returns the point at index 5 of the linestring\.
```
SELECT ST_AsEWKT(ST_PointN(ST_GeomFromText('LINESTRING(0 0,10 0,10 10,5 5,0 5,0 0)',4326), 5));
```
```
st_asewkt
-------------
SRID=4326;POINT(0 5)
```
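A hedged companion example uses a negative index: `-1` returns the last point of the same linestring, which closes at its start point\.
```
SELECT ST_AsEWKT(ST_PointN(ST_GeomFromText('LINESTRING(0 0,10 0,10 10,5 5,0 5,0 0)',4326), -1));
```
```
st_asewkt
-------------
SRID=4326;POINT(0 0)
```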
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_PointN-function.md
|
19426c453e7d-0
|
You can optionally define a column list in your COPY command\. If a column in the table is omitted from the column list, COPY will load the column with either the value supplied by the DEFAULT option that was specified in the CREATE TABLE command, or with NULL if the DEFAULT option was not specified\.
If COPY attempts to assign NULL to a column that is defined as NOT NULL, the COPY command fails\. For information about assigning the DEFAULT option, see [CREATE TABLE](r_CREATE_TABLE_NEW.md)\.
When loading from data files on Amazon S3, the columns in the column list must be in the same order as the fields in the data file\. If a field in the data file does not have a corresponding column in the column list, the COPY command fails\.
When loading from Amazon DynamoDB table, order does not matter\. Any fields in the Amazon DynamoDB attributes that do not match a column in the Amazon Redshift table are discarded\.
The following restrictions apply when using the COPY command to load DEFAULT values into a table:
+ If an [IDENTITY](r_CREATE_TABLE_NEW.md#identity-clause) column is included in the column list, the EXPLICIT\_IDS option must also be specified in the [COPY](r_COPY.md) command, or the COPY command will fail\. Similarly, if an IDENTITY column is omitted from the column list, and the EXPLICIT\_IDS option is specified, the COPY operation will fail\.
+ Because the evaluated DEFAULT expression for a given column is the same for all loaded rows, a DEFAULT expression that uses a RANDOM\(\) function assigns the same value to all the rows\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_loading_default_values.md
|
19426c453e7d-1
|
+ DEFAULT expressions that contain CURRENT\_DATE or SYSDATE are set to the timestamp of the current transaction\.
For an example, see "Load data from a file with default values" in [COPY examples](r_COPY_command_examples.md)\.
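A minimal sketch of loading DEFAULT values with a column list follows; the table, bucket, and IAM role names are hypothetical\.
```
create table event_backup (
  eventid     integer not null,
  eventname   varchar(200),
  backup_date date default current_date);

-- backup_date is omitted from the column list, so COPY fills it with the
-- evaluated DEFAULT for every loaded row.
copy event_backup (eventid, eventname)
from 's3://mybucket/data/events.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';
```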
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_loading_default_values.md
|
f962ac72d486-0
|
Appends rows to a target table by moving data from an existing source table\. Data in the source table is moved to matching columns in the target table\. Column order doesn't matter\. After data is successfully appended to the target table, the source table is empty\. ALTER TABLE APPEND is usually much faster than a similar [CREATE TABLE AS](r_CREATE_TABLE_AS.md) or [INSERT](r_INSERT_30.md) INTO operation because data is moved, not duplicated\.
**Note**
ALTER TABLE APPEND moves data blocks between the source table and the target table\. To improve performance, ALTER TABLE APPEND doesn't compact storage as part of the append operation\. As a result, storage usage increases temporarily\. To reclaim the space, run a [VACUUM](r_VACUUM_command.md) operation\.
Columns with the same names must also have identical column attributes\. If either the source table or the target table contains columns that don't exist in the other table, use the IGNOREEXTRA or FILLTARGET parameters to specify how extra columns should be managed\.
You can't append an identity column\. If both tables include an identity column, the command fails\. If only one table has an identity column, include the FILLTARGET or IGNOREEXTRA parameter\. For more information, see [ALTER TABLE APPEND usage notes](#r_ALTER_TABLE_APPEND_usage)\.
You can append a GENERATED BY DEFAULT AS IDENTITY column\. You can update columns defined as GENERATED BY DEFAULT AS IDENTITY with values that you supply\. For more information, see [ALTER TABLE APPEND usage notes](#r_ALTER_TABLE_APPEND_usage)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_APPEND.md
|
f962ac72d486-1
|
Both the source table and the target table must be permanent tables\. Both tables must use the same distribution style and distribution key, if one was defined\. If the tables are sorted, both tables must use the same sort style and define the same columns as sort keys\.
An ALTER TABLE APPEND command automatically commits immediately upon completion of the operation\. It can't be rolled back\. You can't run ALTER TABLE APPEND within a transaction block \(BEGIN \.\.\. END\)\. For more information about transactions, see [Serializable isolation](c_serial_isolation.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_APPEND.md
|
c33564e21f69-0
|
```
ALTER TABLE target_table_name APPEND FROM source_table_name
[ IGNOREEXTRA | FILLTARGET ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_APPEND.md
|
1a2365d5224b-0
|
*target\_table\_name*
The name of the table to which rows are appended\. Either specify just the name of the table or use the format *schema\_name\.table\_name* to use a specific schema\. The target table must be an existing permanent table\.
FROM *source\_table\_name*
The name of the table that provides the rows to be appended\. Either specify just the name of the table or use the format *schema\_name\.table\_name* to use a specific schema\. The source table must be an existing permanent table\.
IGNOREEXTRA
A keyword that specifies that if the source table includes columns that are not present in the target table, data in the extra columns should be discarded\. You can't use IGNOREEXTRA with FILLTARGET\.
FILLTARGET
A keyword that specifies that if the target table includes columns that are not present in the source table, the columns should be filled with the [DEFAULT](r_CREATE_TABLE_NEW.md#create-table-default) column value, if one was defined, or NULL\. You can't use IGNOREEXTRA with FILLTARGET\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_APPEND.md
|
5f6fa3cd0dcf-0
|
ALTER TABLE APPEND moves only identical columns from the source table to the target table\. Column order doesn't matter\.
If either the source table or the target table contains extra columns, use either FILLTARGET or IGNOREEXTRA according to the following rules:
+ If the source table contains columns that don't exist in the target table, include IGNOREEXTRA\. The command ignores the extra columns in the source table\.
+ If the target table contains columns that don't exist in the source table, include FILLTARGET\. The command fills the extra columns in the target table with either the default column value or IDENTITY value, if one was defined, or NULL\.
+ If both the source table and the target table contain extra columns, the command fails\. You can't use both FILLTARGET and IGNOREEXTRA\.
If a column with the same name but different attributes exists in both tables, the command fails\. Like\-named columns must have the following attributes in common:
+ Data type
+ Column size
+ Compression encoding
+ Not null
+ Sort style
+ Sort key columns
+ Distribution style
+ Distribution key columns
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_APPEND.md
|
5f6fa3cd0dcf-1
|
You can't append an identity column\. If both the source table and the target table have identity columns, the command fails\. If only the source table has an identity column, include the IGNOREEXTRA parameter so that the identity column is ignored\. If only the target table has an identity column, include the FILLTARGET parameter so that the identity column is populated according to the IDENTITY clause defined for the table\. For more information, see [DEFAULT](r_CREATE_TABLE_NEW.md#create-table-default)\.
You can append a default identity column with the ALTER TABLE APPEND statement\. For more information, see [CREATE TABLE](r_CREATE_TABLE_NEW.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_APPEND.md
|
698cb0156d58-0
|
Suppose your organization maintains a table, SALES\_MONTHLY, to capture current sales transactions\. You want to move data from the transaction table to the SALES table, every month\.
You can use the following INSERT INTO and TRUNCATE commands to accomplish the task\.
```
insert into sales (select * from sales_monthly);
truncate sales_monthly;
```
However, you can perform the same operation much more efficiently by using an ALTER TABLE APPEND command\.
First, query the [PG\_TABLE\_DEF](r_PG_TABLE_DEF.md) system catalog table to verify that both tables have the same columns with identical column attributes\.
```
select trim(tablename) as table, "column", trim(type) as type,
encoding, distkey, sortkey, "notnull"
from pg_table_def where tablename like 'sales%';
table | column | type | encoding | distkey | sortkey | notnull
-----------+------------+-----------------------------+----------+---------+---------+--------
sales | salesid | integer | lzo | false | 0 | true
sales | listid | integer | none | true | 1 | true
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_APPEND.md
|
698cb0156d58-1
|
sales | sellerid | integer | none | false | 2 | true
sales | buyerid | integer | lzo | false | 0 | true
sales | eventid | integer | mostly16 | false | 0 | true
sales | dateid | smallint | lzo | false | 0 | true
sales | qtysold | smallint | mostly8 | false | 0 | true
sales | pricepaid | numeric(8,2) | delta32k | false | 0 | false
sales | commission | numeric(8,2) | delta32k | false | 0 | false
sales | saletime | timestamp without time zone | lzo | false | 0 | false
salesmonth | salesid | integer | lzo | false | 0 | true
salesmonth | listid | integer | none | true | 1 | true
salesmonth | sellerid | integer | none | false | 2 | true
salesmonth | buyerid | integer | lzo | false | 0 | true
salesmonth | eventid | integer | mostly16 | false | 0 | true
salesmonth | dateid | smallint | lzo | false | 0 | true
salesmonth | qtysold | smallint | mostly8 | false | 0 | true
salesmonth | pricepaid | numeric(8,2) | delta32k | false | 0 | false
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_APPEND.md
|
698cb0156d58-2
|
salesmonth | commission | numeric(8,2) | delta32k | false | 0 | false
salesmonth | saletime | timestamp without time zone | lzo | false | 0 | false
```
Next, look at the size of each table\.
```
select count(*) from sales_monthly;
count
-------
2000
(1 row)
select count(*) from sales;
count
-------
412214
(1 row)
```
Now execute the following ALTER TABLE APPEND command\.
```
alter table sales append from sales_monthly;
```
Look at the size of each table again\. The SALES\_MONTHLY table now has 0 rows, and the SALES table has grown by 2000 rows\.
```
select count(*) from sales_monthly;
count
-------
0
(1 row)
select count(*) from sales;
count
-------
414214
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_APPEND.md
|
698cb0156d58-3
|
If the source table has more columns than the target table, specify the IGNOREEXTRA parameter\. The following example uses the IGNOREEXTRA parameter to ignore extra columns in the SALES\_LISTING table when appending to the SALES table\.
```
alter table sales append from sales_listing ignoreextra;
```
If the target table has more columns than the source table, specify the FILLTARGET parameter\. The following example uses the FILLTARGET parameter to populate columns in the SALES\_REPORT table that don't exist in the SALES\_MONTH table\.
```
alter table sales_report append from sales_month filltarget;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_APPEND.md
|
1216b3dd0a35-0
|
The COPY command loads data from files on the Amazon EMR Hadoop Distributed File System \(HDFS\)\. When you create the Amazon EMR cluster, configure the cluster to output data files to the cluster's HDFS\.
**To create an Amazon EMR cluster**
1. Create an Amazon EMR cluster in the same AWS Region as the Amazon Redshift cluster\.
If the Amazon Redshift cluster is in a VPC, the Amazon EMR cluster must be in the same VPC group\. If the Amazon Redshift cluster uses EC2\-Classic mode \(that is, it is not in a VPC\), the Amazon EMR cluster must also use EC2\-Classic mode\. For more information, see [Managing Clusters in Virtual Private Cloud \(VPC\)](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-vpc.html) in the *Amazon Redshift Cluster Management Guide*\.
1. Configure the cluster to output data files to the cluster's HDFS\. The HDFS file names must not include asterisks \(\*\) or question marks \(?\)\.
**Important**
The file names must not include asterisks \( \* \) or question marks \( ? \)\.
1. Specify **No** for the **Auto\-terminate** option in the Amazon EMR cluster configuration so that the cluster remains available while the COPY command executes\.
**Important**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-emr-steps-create-cluster.md
|
1216b3dd0a35-1
|
If any of the data files are changed or deleted before the COPY completes, you might have unexpected results, or the COPY operation might fail\.
1. Note the cluster ID and the master public DNS \(the endpoint for the Amazon EC2 instance that hosts the cluster\)\. You will use that information in later steps\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-emr-steps-create-cluster.md
|
9e2490ab3c03-0
|
Your cluster continues to accrue charges as long as it is running\. When you have completed this tutorial, return your environment to the previous state by following the steps in [Find Additional Resources and Reset Your Environment](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-clean-up-tasks.html) in *Amazon Redshift Getting Started*\.
For more information about WLM, see [Implementing workload management](cm-c-implementing-workload-management.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-wlm-cleaning-up-resources.md
|
8e61ae1490d9-0
|
You can efficiently update and insert new data by loading your data into a staging table first\.
Amazon Redshift doesn't support a single *merge* statement \(update or insert, also known as an *upsert*\) to insert and update data from a single data source\. However, you can effectively perform a merge operation\. To do so, load your data into a staging table and then join the staging table with your target table for an UPDATE statement and an INSERT statement\. For instructions, see [Updating and inserting new data](t_updating-inserting-using-staging-tables-.md)\.
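The pattern described above can be sketched as follows; `target`, `stage`, and their columns are hypothetical names, not from a specific example\.
```
begin transaction;

-- Update rows that already exist in the target.
update target
set qty = stage.qty
from stage
where target.id = stage.id;

-- Insert rows that don't exist in the target yet.
insert into target
select stage.id, stage.qty
from stage
left join target on stage.id = target.id
where target.id is null;

end transaction;

drop table stage;
```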
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-upsert.md
|
85f32b67b385-0
|
NEXT\_DAY returns the date of the first instance of the specified day that is later than the given date\.
If the *day* value is the same day of the week as *given\_date*, the next occurrence of that day is returned\.
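A hedged illustration: 2014\-08\-20 is a Wednesday, so requesting the next Wednesday returns the following week rather than the given date itself\.
```
select next_day('2014-08-20', 'Wednesday');
next_day
-----------
2014-08-27
```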
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NEXT_DAY.md
|
7df5c158d4ef-0
|
```
NEXT_DAY ( { date | timestamp }, day )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NEXT_DAY.md
|
cac182ee8b40-0
|
*date* \| *timestamp*
A date or timestamp column or an expression that implicitly converts to a date or time stamp\.
*day*
A string containing the name of any day\. Capitalization does not matter\.
Valid values are as follows\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_NEXT_DAY.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NEXT_DAY.md
|
8d7a40fae514-0
|
DATE
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NEXT_DAY.md
|
1e08757b0c2f-0
|
The following example returns the date of the first Tuesday after 8/20/2014\.
```
select next_day('2014-08-20','Tuesday');
next_day
-----------
2014-08-26
```
The following example gets target marketing dates for the third quarter:
```
select username, (firstname ||' '|| lastname) as name,
eventname, caldate, next_day (caldate, 'Monday') as marketing_target
from sales, date, users, event
where sales.buyerid = users.userid
and sales.eventid = event.eventid
and event.dateid = date.dateid
and date.qtr = 3
order by marketing_target, eventname, name;
username | name | eventname | caldate | marketing_target
----------+-------------------+--------------------+---------------+------------------
MBO26QSG Callum Atkinson .38 Special 2008-07-06 2008-07-07
WCR50YIU Erasmus Alvarez A Doll's House 2008-07-03 2008-07-07
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_NEXT_DAY.md
|
1e08757b0c2f-1
|
 CKT70OIE | Hadassah Adkins   | Ana Gabriel          | 2008-07-06 | 2008-07-07
 VVG07OUO | Nathan Abbott     | Armando Manzanero    | 2008-07-04 | 2008-07-07
 GEW77SII | Scarlet Avila     | August: Osage County | 2008-07-06 | 2008-07-07
 ECR71CVS | Caryn Adkins      | Ben Folds            | 2008-07-03 | 2008-07-07
 KUW82CYU | Kaden Aguilar     | Bette Midler         | 2008-07-01 | 2008-07-07
 WZE78DJZ | Kay Avila         | Bette Midler         | 2008-07-01 | 2008-07-07
 HXY04NVE | Dante Austin      | Britney Spears       | 2008-07-02 | 2008-07-07
 URY81YWF | Wilma Anthony     | Britney Spears       | 2008-07-02 | 2008-07-07
```
Spatial data describes the position and shape of a geometry in a defined space \(a spatial reference system\)\. Amazon Redshift supports spatial data with the `GEOMETRY` data type, which contains spatial data and optionally its spatial reference system identifier \(SRID\)\.
Spatial data contains geometric data that can be used to represent geographic features\. Examples of this type of data include weather reports, map directions, tweets with geographic positions, store locations, and airline routes\. Spatial data plays an important role in business analytics, reporting, and forecasting\.
You can query spatial data with Amazon Redshift SQL functions\. Spatial data contains geometric values for an object\.
Using spatial data, you can run queries to do the following:
+ Find the distance between two points\.
+ Check whether one area \(polygon\) contains another\.
+ Check whether one linestring intersects another linestring or polygon\.
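As a sketch of these capabilities, the following example applies the Redshift spatial functions `ST_Distance`, `ST_Contains`, and `ST_Intersects` to inline well\-known text \(WKT\) literals\. The geometries here are illustrative, not drawn from any sample dataset\.

```
-- Distance between two points
SELECT ST_Distance(ST_GeomFromText('POINT(0 0)'),
                   ST_GeomFromText('POINT(3 4)'));

-- Does a polygon contain a point?
SELECT ST_Contains(ST_GeomFromText('POLYGON((0 0,10 0,10 10,0 10,0 0))'),
                   ST_GeomFromText('POINT(5 5)'));

-- Do two linestrings intersect?
SELECT ST_Intersects(ST_GeomFromText('LINESTRING(0 0,10 10)'),
                     ST_GeomFromText('LINESTRING(0 10,10 0)'));
```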
You can use the `GEOMETRY` data type to hold the values of spatial data\. A `GEOMETRY` value in Amazon Redshift can define two\-dimensional \(2D\) geometry primitives\. Currently, Amazon Redshift doesn't support 3D or 4D geometry primitives\. For more information about geometry primitives, see [Well\-known text representation of geometry](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry) in Wikipedia\.
The `GEOMETRY` data type has the following subtypes:
+ `POINT`
+ `LINESTRING`
+ `POLYGON`
+ `MULTIPOINT`
+ `MULTILINESTRING`
+ `MULTIPOLYGON`
+ `GEOMETRYCOLLECTION`
There are Amazon Redshift SQL functions that support the following representations of geometric data:
+ GeoJSON
+ Well\-known text \(WKT\)
+ Extended well\-known text \(EWKT\)
+ Well\-known binary \(WKB\)
+ Extended well\-known binary \(EWKB\)
For details about SQL functions to query spatial data, see [Spatial functions](geospatial-functions.md)\.
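For example, functions such as `ST_GeomFromText`, `ST_AsText`, `ST_AsEWKT`, and `ST_AsGeoJSON` convert between a `GEOMETRY` value and these representations\. The following is a minimal sketch using an inline literal\.

```
-- Parse WKT into a GEOMETRY value, then emit it in other representations
SELECT ST_AsText(ST_GeomFromText('POINT(1 2)')),        -- WKT
       ST_AsEWKT(ST_GeomFromText('POINT(1 2)', 4326)),  -- EWKT, with SRID 4326
       ST_AsGeoJSON(ST_GeomFromText('POINT(1 2)'));     -- GeoJSON
```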
**Topics**
+ [Limitations when using spatial data with Amazon Redshift](spatial-limitations.md)
ST\_MemSize returns the amount of memory space \(in bytes\) used by the input geometry\. This size depends on the Amazon Redshift internal representation of the geometry and thus can change if the internal representation changes\. You can use this size as an indication of the relative size of geometry objects in Amazon Redshift\.
**Syntax**
```
ST_MemSize(geom)
```
**Arguments**
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
**Return type**
`INTEGER` representing the memory size of *geom* in bytes\.
If *geom* is null, then null is returned\.
**Examples**
The following SQL returns the memory size of a geometry collection\.
```
SELECT ST_MemSize(ST_GeomFromText('GEOMETRYCOLLECTION(POLYGON((0 0,10 0,0 10,0 0)),LINESTRING(20 10,20 0,10 0))'))::varchar || ' bytes';
```
```
?column?
-----------
172 bytes
```
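Because the result reflects the Amazon Redshift internal representation, a useful pattern is comparing the relative sizes of geometries rather than relying on absolute byte counts\. The following sketch compares a point with a polygon; the actual values may change if the internal representation changes\.

```
SELECT ST_MemSize(ST_GeomFromText('POINT(0 0)')) AS point_bytes,
       ST_MemSize(ST_GeomFromText('POLYGON((0 0,10 0,0 10,0 0))')) AS polygon_bytes;
```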
**Topics**
+ [IS\_VALID\_JSON function](IS_VALID_JSON.md)
+ [IS\_VALID\_JSON\_ARRAY function](IS_VALID_JSON_ARRAY.md)
+ [JSON\_ARRAY\_LENGTH function](JSON_ARRAY_LENGTH.md)
+ [JSON\_EXTRACT\_ARRAY\_ELEMENT\_TEXT function](JSON_EXTRACT_ARRAY_ELEMENT_TEXT.md)
+ [JSON\_EXTRACT\_PATH\_TEXT function](JSON_EXTRACT_PATH_TEXT.md)
When you need to store a relatively small set of key\-value pairs, you might save space by storing the data in JSON format\. Because JSON strings can be stored in a single column, using JSON might be more efficient than storing your data in tabular format\. For example, suppose you have a sparse table, where you need to have many columns to fully represent all possible attributes, but most of the column values are NULL for any given row or any given column\. By using JSON for storage, you might be able to store the data for a row in key:value pairs in a single JSON string and eliminate the sparsely\-populated table columns\.
In addition, you can easily modify JSON strings to store additional key:value pairs without needing to add columns to a table\.
We recommend using JSON sparingly\. JSON is not a good choice for storing larger datasets because, by storing disparate data in a single column, JSON does not leverage Amazon Redshift’s column store architecture\.
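As an illustration of the sparse\-table pattern described above, the following sketch stores attributes as a JSON string in a single VARCHAR column and extracts one key with the JSON\_EXTRACT\_PATH\_TEXT function\. The table and column names are hypothetical\.

```
-- Hypothetical table: one JSON string instead of many mostly-NULL columns
create table product_attrs (
  product_id integer,
  attrs      varchar(max)   -- JSON key:value pairs, e.g. '{"color":"red","weight":2}'
);

-- Extract a single attribute from the JSON string
select product_id,
       json_extract_path_text(attrs, 'color') as color
from product_attrs;
```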
JSON uses UTF\-8 encoded text strings, so JSON strings can be stored as CHAR or VARCHAR data types\. Use VARCHAR if the strings include multi\-byte characters\.
JSON strings must be properly formatted JSON, according to the following rules:
+ The root level JSON can either be a JSON object or a JSON array\. A JSON object is an unordered set of comma\-separated key:value pairs enclosed by curly braces\.
For example, `{"one":1, "two":2} `
+ A JSON array is an ordered set of comma\-separated values enclosed by brackets\.
An example is the following: `["first", {"one":1}, "second", 3, null] `
+ JSON arrays use a zero\-based index; the first element in an array is at position 0\.
+ In a JSON key:value pair, the key is a double\-quoted string\.
+ A JSON value can be any of:
+ JSON object
+ JSON array
+ string \(double quoted\)
+ number \(integer and float\)
+ boolean
+ null
+ Empty objects and empty arrays are valid JSON values\.
+ JSON fields are case sensitive\.
+ White space between JSON structural elements \(such as `{ }, [ ]`\) is ignored\.
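You can check these rules in SQL with the IS\_VALID\_JSON and IS\_VALID\_JSON\_ARRAY functions listed in the topics above\. The literals in the following sketch are illustrative\.

```
SELECT is_valid_json('{"one":1, "two":2}'),   -- true: root-level JSON object
       is_valid_json('{one:1}'),              -- false: key isn't double quoted
       is_valid_json_array('["first", {"one":1}, "second", 3, null]');  -- true
```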