084c1f7a8fa9-0
|
The following SQL returns the geometries in a geometry collection.
```
WITH tmp1(idx) AS (SELECT 1 UNION SELECT 2),
tmp2(g) AS (SELECT ST_GeomFromText('GEOMETRYCOLLECTION(POLYGON((0 0,10 0,0 10,0 0)),LINESTRING(20 10,20 0,10 0))'))
SELECT idx, ST_AsEWKT(ST_GeometryN(g, idx)) FROM tmp1, tmp2 ORDER BY idx;
```
```
idx | st_asewkt
-----+------------------------------
1 | POLYGON((0 0,10 0,0 10,0 0))
2 | LINESTRING(20 10,20 0,10 0)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeometryN-function.md
|
288c18c01d85-0
|
The HAVING clause applies a condition to the intermediate grouped result set that a query returns.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAVING_clause.md
|
fcb4b32b5ca3-0
|
```
[ HAVING condition ]
```
For example, you can restrict the results of a SUM function:
```
having sum(pricepaid) > 10000
```
The HAVING condition is applied after all WHERE clause conditions are applied and GROUP BY operations are completed.
The condition itself takes the same form as any WHERE clause condition.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAVING_clause.md
|
cb6aa19464c7-0
|
+ Any column that is referenced in a HAVING clause condition must be either a grouping column or a column that refers to the result of an aggregate function.
+ In a HAVING clause, you can't specify:
  + An alias that was defined in the select list. You must repeat the original, unaliased expression.
  + An ordinal number that refers to a select list item. Only the GROUP BY and ORDER BY clauses accept ordinal numbers.
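As a sketch of the alias restriction, the following reuses the TICKIT tables from the examples in this topic. The first (commented-out) form is invalid because HAVING references the select-list alias `total`; the second form repeats the unaliased aggregate expression:
```
-- Not allowed: HAVING can't reference the select-list alias "total"
-- select eventname, sum(pricepaid) as total
-- from sales join event on sales.eventid = event.eventid
-- group by eventname
-- having total > 800000;

-- Allowed: repeat the original, unaliased expression
select eventname, sum(pricepaid) as total
from sales join event on sales.eventid = event.eventid
group by eventname
having sum(pricepaid) > 800000;
```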
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAVING_clause.md
|
367a517852ab-0
|
The following query calculates total ticket sales for all events by name, then eliminates events where the total sales were less than $800,000. The HAVING condition is applied to the results of the aggregate function in the select list: `sum(pricepaid)`.
```
select eventname, sum(pricepaid)
from sales join event on sales.eventid = event.eventid
group by 1
having sum(pricepaid) > 800000
order by 2 desc, 1;
eventname | sum
------------------+-----------
Mamma Mia! | 1135454.00
Spring Awakening | 972855.00
The Country Girl | 910563.00
Macbeth | 862580.00
Jersey Boys | 811877.00
Legally Blonde | 804583.00
(6 rows)
```
The following query calculates a similar result set. In this case, however, the HAVING condition is applied to an aggregate that isn't specified in the select list: `sum(qtysold)`. Events that did not sell more than 2,000 tickets are eliminated from the final result.
```
select eventname, sum(pricepaid)
from sales join event on sales.eventid = event.eventid
group by 1
having sum(qtysold) >2000
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAVING_clause.md
|
367a517852ab-1
|
from sales join event on sales.eventid = event.eventid
group by 1
having sum(qtysold) >2000
order by 2 desc, 1;
eventname | sum
------------------+-----------
Mamma Mia! | 1135454.00
Spring Awakening | 972855.00
The Country Girl | 910563.00
Macbeth | 862580.00
Jersey Boys | 811877.00
Legally Blonde | 804583.00
Chicago | 790993.00
Spamalot | 714307.00
(8 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_HAVING_clause.md
|
7d64e03537e4-0
|
Compares the value of two time stamps and returns an integer. If the time stamps are identical, the function returns 0. If the first time stamp is greater (later), the function returns 1. If the second time stamp is greater, the function returns -1.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP.md
|
00223761a0a5-0
|
```
TIMESTAMP_CMP(timestamp1, timestamp2)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP.md
|
35a95ddef3f1-0
|
*timestamp1*
A TIMESTAMP column or an expression that implicitly converts to a time stamp.
*timestamp2*
A TIMESTAMP column or an expression that implicitly converts to a time stamp.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP.md
|
1699d5cb7df4-0
|
INTEGER
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP.md
|
f78cb2ee5794-0
|
The following example compares the LISTTIME and SALETIME for a listing. Note that the value for TIMESTAMP_CMP is -1 for all listings because the time stamp for the sale is after the time stamp for the listing:
```
select listing.listid, listing.listtime,
sales.saletime, timestamp_cmp(listing.listtime, sales.saletime)
from listing, sales
where listing.listid=sales.listid
order by 1, 2, 3, 4
limit 10;
listid | listtime | saletime | timestamp_cmp
--------+---------------------+---------------------+---------------
1 | 2008-01-24 06:43:29 | 2008-02-18 02:36:48 | -1
4 | 2008-05-24 01:18:37 | 2008-06-06 05:00:16 | -1
5 | 2008-05-17 02:29:11 | 2008-06-06 08:26:17 | -1
5 | 2008-05-17 02:29:11 | 2008-06-09 08:38:52 | -1
6 | 2008-08-15 02:08:13 | 2008-08-31 09:17:02 | -1
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP.md
|
f78cb2ee5794-1
|
6 | 2008-08-15 02:08:13 | 2008-08-31 09:17:02 | -1
10 | 2008-06-17 09:44:54 | 2008-06-26 12:56:06 | -1
10 | 2008-06-17 09:44:54 | 2008-07-10 02:12:36 | -1
10 | 2008-06-17 09:44:54 | 2008-07-16 11:59:24 | -1
10 | 2008-06-17 09:44:54 | 2008-07-22 02:23:17 | -1
12 | 2008-07-25 01:45:49 | 2008-08-04 03:06:36 | -1
(10 rows)
```
This example shows that TIMESTAMP_CMP returns a 0 for identical time stamps:
```
select listid, timestamp_cmp(listtime, listtime)
from listing
order by 1 , 2
limit 10;
listid | timestamp_cmp
--------+---------------
1 | 0
2 | 0
3 | 0
4 | 0
5 | 0
6 | 0
7 | 0
8 | 0
9 | 0
10 | 0
(10 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMP_CMP.md
|
8e9d81617910-0
|
Displays the execution plan for a query statement without running the query.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXPLAIN.md
|
233ba5d5d016-0
|
```
EXPLAIN [ VERBOSE ] query
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXPLAIN.md
|
6114af41d9d3-0
|
VERBOSE
Displays the full query plan instead of just a summary.
*query*
Query statement to explain. The query can be a SELECT, INSERT, CREATE TABLE AS, UPDATE, or DELETE statement.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXPLAIN.md
|
096d19db428b-0
|
EXPLAIN performance is sometimes influenced by the time it takes to create temporary tables. For example, a query that uses the common subexpression optimization requires temporary tables to be created and analyzed in order to return the EXPLAIN output. The query plan depends on the schema and statistics of the temporary tables. Therefore, the EXPLAIN command for this type of query might take longer to run than expected.
You can use EXPLAIN only for the following commands:
+ SELECT
+ SELECT INTO
+ CREATE TABLE AS
+ INSERT
+ UPDATE
+ DELETE
The EXPLAIN command will fail if you use it for other SQL commands, such as data definition language (DDL) or database operations.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXPLAIN.md
|
a3bba8eaef7e-0
|
The execution plan for a specific Amazon Redshift query statement breaks down execution and calculation of a query into a discrete sequence of steps and table operations that eventually produce a final result set for the query. The following table provides a summary of steps that Amazon Redshift can use in developing an execution plan for any query a user submits for execution.
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_EXPLAIN.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXPLAIN.md
|
7dcf7e34191d-0
|
**Note**
For these examples, the sample output might vary depending on Amazon Redshift configuration.
The following example returns the query plan for a query that selects the EVENTID, EVENTNAME, VENUEID, and VENUENAME from the EVENT and VENUE tables:
```
explain
select eventid, eventname, event.venueid, venuename
from event, venue
where event.venueid = venue.venueid;
```
```
QUERY PLAN
--------------------------------------------------------------------------
XN Hash Join DS_DIST_OUTER (cost=2.52..58653620.93 rows=8712 width=43)
Hash Cond: ("outer".venueid = "inner".venueid)
-> XN Seq Scan on event (cost=0.00..87.98 rows=8798 width=23)
-> XN Hash (cost=2.02..2.02 rows=202 width=22)
-> XN Seq Scan on venue (cost=0.00..2.02 rows=202 width=22)
(5 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXPLAIN.md
|
7dcf7e34191d-1
|
(5 rows)
```
The following example returns the query plan for the same query with verbose output:
```
explain verbose
select eventid, eventname, event.venueid, venuename
from event, venue
where event.venueid = venue.venueid;
```
```
QUERY PLAN
--------------------------------------------------------------------------
{HASHJOIN
:startup_cost 2.52
:total_cost 58653620.93
:plan_rows 8712
:plan_width 43
:best_pathkeys <>
:dist_info DS_DIST_OUTER
:dist_info.dist_keys (
TARGETENTRY
{
VAR
:varno 2
:varattno 1
...
XN Hash Join DS_DIST_OUTER (cost=2.52..58653620.93 rows=8712 width=43)
Hash Cond: ("outer".venueid = "inner".venueid)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXPLAIN.md
|
7dcf7e34191d-2
|
Hash Cond: ("outer".venueid = "inner".venueid)
-> XN Seq Scan on event (cost=0.00..87.98 rows=8798 width=23)
-> XN Hash (cost=2.02..2.02 rows=202 width=22)
-> XN Seq Scan on venue (cost=0.00..2.02 rows=202 width=22)
(519 rows)
```
The following example returns the query plan for a CREATE TABLE AS (CTAS) statement:
```
explain create table venue_nonulls as
select * from venue
where venueseats is not null;
QUERY PLAN
-----------------------------------------------------------
XN Seq Scan on venue (cost=0.00..2.02 rows=187 width=45)
Filter: (venueseats IS NOT NULL)
(2 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXPLAIN.md
|
c38d88b89c2d-0
|
INTERVAL_CMP compares two intervals and returns `1` if the first interval is greater, `-1` if the second interval is greater, and `0` if the intervals are equal. For more information, see [Interval literals](r_interval_literals.md).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INTERVAL_CMP.md
|
30cab245fd3c-0
|
```
INTERVAL_CMP(interval1, interval2)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INTERVAL_CMP.md
|
c80d6eedf8a3-0
|
*interval1*
An interval literal value.
*interval2*
An interval literal value.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INTERVAL_CMP.md
|
68a5af14f7ef-0
|
INTEGER
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INTERVAL_CMP.md
|
75d3b516d935-0
|
The following example compares the value of "3 days" to "1 year":
```
select interval_cmp('3 days','1 year');
interval_cmp
--------------
-1
```
This example compares the value "7 days" to "1 week":
```
select interval_cmp('7 days','1 week');
interval_cmp
--------------
0
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INTERVAL_CMP.md
|
f520a76cc163-0
|
You can use workload management (WLM) to define multiple query queues and to route queries to the appropriate queues at runtime.
In some cases, you might have multiple sessions or users running queries at the same time. In these cases, some queries might consume cluster resources for long periods of time and affect the performance of other queries. For example, suppose that one group of users submits occasional complex, long-running queries that select and sort rows from several large tables. Another group frequently submits short queries that select only a few rows from one or two tables and run in a few seconds. In this situation, the short-running queries might have to wait in a queue for a long-running query to complete. WLM helps manage this situation.
You can configure Amazon Redshift WLM to run with either automatic WLM or manual WLM.
+ Automatic WLM
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-implementing-workload-management.md
|
f520a76cc163-1
|
You can configure Amazon Redshift WLM to run with either automatic WLM or manual WLM.
+ Automatic WLM
To maximize system throughput and use resources effectively, you can enable Amazon Redshift to manage how resources are divided to run concurrent queries with automatic WLM. *Automatic WLM* manages the resources required to run queries. Amazon Redshift determines how many queries run concurrently and how much memory is allocated to each dispatched query. You can enable automatic WLM using the Amazon Redshift console by choosing **Switch WLM mode** and then choosing **Auto WLM**. With this choice, up to eight queues are used to manage queries, and the **Memory** and **Concurrency on main** fields are both set to **Auto**. You can specify a priority that reflects the business priority of the workload or users that map to each queue. The default priority of queries is set to **Normal**. For information about how to change the priority of queries in a queue, see [Query priority](query-priority.md). For more information, see [Implementing automatic WLM](automatic-wlm.md).
At runtime, you can route queries to these queues according to user groups or query groups. You can also configure a query monitoring rule (QMR) to limit long-running queries.
Working with concurrency scaling and automatic WLM, you can support virtually unlimited concurrent users and concurrent queries, with consistently fast query performance. For more information, see [Working with concurrency scaling](concurrency-scaling.md).
**Note**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-implementing-workload-management.md
|
f520a76cc163-2
|
**Note**
We recommend that you create a parameter group and choose automatic WLM to manage your query resources. For details about how to migrate from manual WLM to automatic WLM, see [Migrating from manual WLM to automatic WLM](cm-c-modifying-wlm-configuration.md#wlm-manual-to-automatic).
+ Manual WLM
Alternatively, you can manage system performance and your users' experience by modifying your WLM configuration to create separate queues for the long-running queries and the short-running queries. At runtime, you can route queries to these queues according to user groups or query groups. You can enable this manual configuration using the Amazon Redshift console by switching to **Manual WLM**. With this choice, you specify the queues used to manage queries, and the **Memory** and **Concurrency on main** field values. With a manual configuration, you can configure up to eight query queues and set the number of queries that can run in each of those queues concurrently. You can set up rules to route queries to particular queues based on the user running the query or labels that you specify. You can also configure the amount of memory allocated to each queue, so that large queries run in queues with more memory than other queues. You can also configure a query monitoring rule (QMR) to limit long-running queries. For more information, see [Implementing manual WLM](cm-c-defining-query-queues.md).
**Note**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-implementing-workload-management.md
|
f520a76cc163-3
|
**Note**
We recommend configuring your manual WLM query queues with a total of 15 or fewer query slots. For more information, see [Concurrency level](cm-c-defining-query-queues.md#cm-c-defining-query-queues-concurrency-level).
**Topics**
+ [Modifying the WLM configuration](cm-c-modifying-wlm-configuration.md)
+ [Implementing automatic WLM](automatic-wlm.md)
+ [Implementing manual WLM](cm-c-defining-query-queues.md)
+ [Working with concurrency scaling](concurrency-scaling.md)
+ [Working with short query acceleration](wlm-short-query-acceleration.md)
+ [WLM queue assignment rules](cm-c-wlm-queue-assignment-rules.md)
+ [Assigning queries to queues](cm-c-executing-queries.md)
+ [WLM dynamic and static configuration properties](cm-c-wlm-dynamic-properties.md)
+ [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md)
+ [WLM system tables and views](cm-c-wlm-system-tables-and-views.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-implementing-workload-management.md
|
9057cb5b7709-0
|
The following are limitations when using spatial data with Amazon Redshift:
+ The maximum size of a `GEOMETRY` object is 1,048,447 bytes.
+ Amazon Redshift Spectrum doesn't natively support spatial data. Therefore, you can't create or alter an external table with a `GEOMETRY` column.
+ Data types for Python user-defined functions (UDFs) don't support the `GEOMETRY` data type.
+ You can't use a `GEOMETRY` column as a sort key or a distribution key of an Amazon Redshift table.
+ You can't use `GEOMETRY` columns in SQL ORDER BY, GROUP BY, or DISTINCT clauses.
+ You can't use `GEOMETRY` columns in many SQL functions.
+ You can't UNLOAD `GEOMETRY` columns in every format. You can UNLOAD `GEOMETRY` columns to text or CSV, which writes `GEOMETRY` data in hexadecimal EWKB format. If the size of the EWKB data is more than 4 MB, a warning occurs because the data can't later be loaded into a table.
+ The only supported compression encoding for `GEOMETRY` data is RAW.
+ When using JDBC or ODBC drivers, use customized type mappings. In this case, the client application must have information on which parameters of a `ResultSet` object are `GEOMETRY` objects. The `ResultSetMetadata` operation returns type `VARCHAR`.
The following nonspatial functions can accept an input of type GEOMETRY or columns of type GEOMETRY:
+ The aggregate function COUNT
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/spatial-limitations.md
|
9057cb5b7709-1
|
The following nonspatial functions can accept an input of type GEOMETRY or columns of type GEOMETRY:
+ The aggregate function COUNT
+ The conditional expressions COALESCE and NVL
+ CASE expressions
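As a minimal sketch of the items above (the table `points` and its `GEOMETRY` column `geom` are hypothetical names used only for illustration):
```
-- COUNT accepts a GEOMETRY column
select count(geom) from points;

-- COALESCE/NVL and CASE can pass GEOMETRY values through
select coalesce(geom, ST_GeomFromText('POINT(0 0)')) from points;
select case when geom is null then ST_GeomFromText('POINT(0 0)') else geom end from points;
```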
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/spatial-limitations.md
|
57a0574491a7-0
|
ADD_MONTHS adds the specified number of months to a date or time stamp value or expression. The [DATEADD](r_DATEADD_function.md) function provides similar functionality.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ADD_MONTHS.md
|
e78c4413d64b-0
|
```
ADD_MONTHS( {date | timestamp}, integer)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ADD_MONTHS.md
|
ee1f49b7c8fe-0
|
*date* | *timestamp*
A date or timestamp column or an expression that implicitly converts to a date or time stamp. If the date is the last day of the month, or if the resulting month is shorter, the function returns the last day of the month in the result. For other dates, the result contains the same day number as the date expression.
*integer*
A positive or negative integer. Use a negative number to subtract months from dates.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ADD_MONTHS.md
|
a7dcd7183da5-0
|
TIMESTAMP
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ADD_MONTHS.md
|
fa9ee1232046-0
|
The following query uses the ADD_MONTHS function inside a TRUNC function. The TRUNC function removes the time of day from the result of ADD_MONTHS. The ADD_MONTHS function adds 12 months to each value from the CALDATE column.
```
select distinct trunc(add_months(caldate, 12)) as calplus12,
trunc(caldate) as cal
from date
order by 1 asc;
calplus12 | cal
------------+------------
2009-01-01 | 2008-01-01
2009-01-02 | 2008-01-02
2009-01-03 | 2008-01-03
...
(365 rows)
```
The following examples demonstrate the behavior when the ADD_MONTHS function operates on dates with months that have different numbers of days.
```
select add_months('2008-03-31',1);
add_months
---------------------
2008-04-30 00:00:00
(1 row)
select add_months('2008-04-30',1);
add_months
---------------------
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ADD_MONTHS.md
|
fa9ee1232046-1
|
add_months
---------------------
2008-05-31 00:00:00
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ADD_MONTHS.md
|
5b3e28fdf397-0
|
**0 (turns off limitation)**, x milliseconds
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_statement_timeout.md
|
ff6cff408088-0
|
Aborts any statement that takes longer than the specified number of milliseconds.
The statement_timeout value is the maximum amount of time a query can run before Amazon Redshift terminates it. This time includes planning, queueing in WLM, and execution time. Compare this time to WLM timeout (max_execution_time) and a QMR (query_execution_time), which include only execution time.
If WLM timeout (max_execution_time) is also specified as part of a WLM configuration, the lower of statement_timeout and max_execution_time is used. For more information, see [WLM timeout](cm-c-defining-query-queues.md#wlm-timeout).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_statement_timeout.md
|
dccc417aba0b-0
|
Because the following query takes longer than 1 millisecond, it times out and is cancelled.
```
set statement_timeout to 1;
select * from listing where listid>5000;
ERROR: Query (150) cancelled on user's request
```
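Because a value of 0 turns off the limitation, you can restore the default behavior afterward:
```
-- 0 turns off the statement_timeout limitation
set statement_timeout to 0;
```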
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_statement_timeout.md
|
84df1ef7b96d-0
|
Records information about transactions that currently hold locks on tables in the database. Use the SVV_TRANSACTIONS view to identify open transactions and lock contention issues. For more information about locks, see [Managing concurrent write operations](c_Concurrent_writes.md) and [LOCK](r_LOCK.md).
All rows in SVV_TRANSACTIONS, including rows generated by another user, are visible to all users.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_TRANSACTIONS.md
|
3a93271a90d7-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_TRANSACTIONS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_TRANSACTIONS.md
|
e9660a1aa70c-0
|
The following command shows all active transactions and the locks requested by each transaction.
```
select * from svv_transactions;
 txn_owner | txn_db |  xid   |  pid  |         txn_start          |      lock_mode      | lockable_object_type | relation | granted
-----------+--------+--------+-------+----------------------------+---------------------+----------------------+----------+---------
root | dev | 438484 | 22223 | 2016-03-02 18:42:18.862254 | AccessShareLock | relation | 100068 | t
root | dev | 438484 | 22223 | 2016-03-02 18:42:18.862254 | ExclusiveLock | transactionid | | t
root | tickit | 438490 | 22277 | 2016-03-02 18:42:48.084037 | AccessShareLock | relation | 50860 | t
root | tickit | 438490 | 22277 | 2016-03-02 18:42:48.084037 | AccessShareLock | relation | 52310 | t
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_TRANSACTIONS.md
|
e9660a1aa70c-1
|
root | tickit | 438490 | 22277 | 2016-03-02 18:42:48.084037 | ExclusiveLock | transactionid | | t
root | dev | 438505 | 22378 | 2016-03-02 18:43:27.611292 | AccessExclusiveLock | relation | 100068 | f
root | dev | 438505 | 22378 | 2016-03-02 18:43:27.611292 | RowExclusiveLock | relation | 16688 | t
root | dev | 438505 | 22378 | 2016-03-02 18:43:27.611292 | AccessShareLock | relation | 100064 | t
root | dev | 438505 | 22378 | 2016-03-02 18:43:27.611292 | AccessExclusiveLock | relation | 100166 | t
root | dev | 438505 | 22378 | 2016-03-02 18:43:27.611292 | AccessExclusiveLock | relation | 100171 | t
root | dev | 438505 | 22378 | 2016-03-02 18:43:27.611292 | AccessExclusiveLock | relation | 100190 | t
root | dev | 438505 | 22378 | 2016-03-02 18:43:27.611292 | ExclusiveLock | transactionid | | t
(12 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_TRANSACTIONS.md
|
e2ea216818bf-0
|
You can use a manifest to ensure that the COPY command loads all of the required files, and only the required files, for a data load. You can use a manifest to load files from different buckets or files that do not share the same prefix. Instead of supplying an object path for the COPY command, you supply the name of a JSON-formatted text file that explicitly lists the files to be loaded. The URL in the manifest must specify the bucket name and full object path for the file, not just a prefix.
For more information about manifest files, see the COPY example [Using a manifest to specify data files](r_COPY_command_examples.md#copy-command-examples-manifest).
The following example shows the JSON to load files from different buckets and with file names that begin with date stamps.
```
{
"entries": [
{"url":"s3://mybucket-alpha/2013-10-04-custdata", "mandatory":true},
{"url":"s3://mybucket-alpha/2013-10-05-custdata", "mandatory":true},
{"url":"s3://mybucket-beta/2013-10-04-custdata", "mandatory":true},
{"url":"s3://mybucket-beta/2013-10-05-custdata", "mandatory":true}
]
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-files-using-manifest.md
|
e2ea216818bf-1
|
]
}
```
The optional `mandatory` flag specifies whether COPY should return an error if the file is not found. The default of `mandatory` is `false`. Regardless of any mandatory settings, COPY will terminate if no files are found.
The following example runs the COPY command with the manifest in the previous example, which is named `cust.manifest`.
```
copy customer
from 's3://mybucket/cust.manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest;
```
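As a sketch of the `mandatory` flag described above, an entry can set it to `false` (the default) so that COPY doesn't fail if that particular file is missing; the bucket and object names here are illustrative only:
```
{
  "entries": [
    {"url":"s3://mybucket-alpha/2013-10-04-custdata", "mandatory":true},
    {"url":"s3://mybucket-alpha/2013-10-06-custdata", "mandatory":false}
  ]
}
```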
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-files-using-manifest.md
|
0e61f8c1722a-0
|
A manifest created by an [UNLOAD](r_UNLOAD.md) operation using the MANIFEST parameter might have keys that are not required for the COPY operation. For example, the following `UNLOAD` manifest includes a `meta` key that is required for an Amazon Redshift Spectrum external table and for loading data files in an `ORC` or `Parquet` file format. The `meta` key contains a `content_length` key with a value that is the actual size of the file in bytes. The COPY operation requires only the `url` key and an optional `mandatory` key.
```
{
"entries": [
{"url":"s3://mybucket/unload/manifest_0000_part_00", "meta": { "content_length": 5956875 }},
{"url":"s3://mybucket/unload/manifest_0001_part_00", "meta": { "content_length": 5997091 }}
]
}
```
For more information about manifest files, see [Using a manifest to specify data files](r_COPY_command_examples.md#copy-command-examples-manifest).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-files-using-manifest.md
|
9dad9e5d3765-0
|
Sets a configuration parameter to a new setting.
This function is equivalent to the SET command in SQL.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET_CONFIG.md
|
539c6b12257e-0
|
```
set_config('parameter', 'new_value' , is_local)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET_CONFIG.md
|
82f262d31234-0
|
*parameter*
Parameter to set.
*new_value*
New value of the parameter.
*is_local*
If true, the parameter value applies only to the current transaction. Valid values are `true` or `1` and `false` or `0`.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET_CONFIG.md
|
adcb22ee8453-0
|
Returns a CHAR or VARCHAR string.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET_CONFIG.md
|
0aa4d98b06b3-0
|
The following query sets the value of the `query_group` parameter to `test` for the current transaction only:
```
select set_config('query_group', 'test', true);
set_config
------------
test
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET_CONFIG.md
|
43e951258e74-0
|
You can avoid potential conflicts and unexpected results by considering your UDF naming conventions before implementation. Because function names can be overloaded, they can collide with existing and future Amazon Redshift function names. This topic discusses overloading and presents a strategy for avoiding such conflicts.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-naming-udfs.md
|
7c236b4f9b1c-0
|
A function is identified by its name and *signature*, which is the number of input arguments and the data types of the arguments. Two functions in the same schema can have the same name if they have different signatures. In other words, the function names can be *overloaded*.
When you execute a query, the query engine determines which function to call based on the number of arguments you provide and the data types of the arguments. You can use overloading to simulate functions with a variable number of arguments, up to the limit allowed by the [CREATE FUNCTION](r_CREATE_FUNCTION.md) command.
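As a sketch of overloading, the two definitions below (the name `f_distance` and its bodies are hypothetical) share a name but differ in signature, so both can coexist in the same schema; the query engine picks one by argument count and types:
```
-- Two-argument version
create function f_distance (x float, y float)
  returns float
stable
as $$
  return (x * x + y * y) ** 0.5
$$ language plpythonu;

-- Three-argument version: same name, different signature
create function f_distance (x float, y float, z float)
  returns float
stable
as $$
  return (x * x + y * y + z * z) ** 0.5
$$ language plpythonu;
```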
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-naming-udfs.md
|
da710174fb7c-0
|
We recommend that you name all UDFs using the prefix `f_`. Amazon Redshift reserves the `f_` prefix exclusively for UDFs, so by prefixing your UDF names with `f_`, you ensure that your UDF name won't conflict with any existing or future Amazon Redshift built-in SQL function names. For example, by naming a new UDF `f_sum`, you avoid conflict with the Amazon Redshift SUM function. Similarly, if you name a new function `f_fibonacci`, you avoid conflict if Amazon Redshift adds a function named FIBONACCI in a future release.
You can create a UDF with the same name and signature as an existing Amazon Redshift built-in SQL function without the function name being overloaded if the UDF and the built-in function exist in different schemas. Because built-in functions exist in the system catalog schema, pg_catalog, you can create a UDF with the same name in another schema, such as public or a user-defined schema. When you call a function that is not explicitly qualified with a schema name, Amazon Redshift searches the pg_catalog schema first by default, so a built-in function will run before a new UDF with the same name.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-naming-udfs.md
|
da710174fb7c-1
|
You can change this behavior by setting the search path to place pg\_catalog at the end so that your UDFs take precedence over built\-in functions, but the practice can cause unexpected results\. Adopting a unique naming strategy, such as using the reserved prefix `f_`, is a more reliable practice\. For more information, see [SET](r_SET.md) and [search\_path](r_search_path.md)\.
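The search\-path behavior described above can be sketched as follows; note that placing `pg_catalog` last is shown only to illustrate the mechanism, not as a recommended practice:

```sql
-- Show the current search path. By default, pg_catalog is searched first,
-- so built-in functions take precedence over same-named UDFs.
SHOW search_path;

-- Placing pg_catalog at the end makes UDFs in earlier schemas take
-- precedence over built-in functions. Prefer the f_ prefix instead.
SET search_path TO '$user', public, pg_catalog;
```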
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-naming-udfs.md
|
ea2bcda85e15-0
|
Generally, if a query attempts to use an unsupported data type, including explicit or implicit casts, it will return an error\. However, some queries that use unsupported data types will run on the leader node but not on the compute nodes\. See [SQL functions supported on the leader node](c_sql-functions-leader-node.md)\.
For a list of the supported data types, see [Data types](c_Supported_data_types.md)\.
These PostgreSQL data types are not supported in Amazon Redshift\.
+ Arrays
+ BIT, BIT VARYING
+ BYTEA
+ Composite Types
+ Date/Time Types
+ INTERVAL
+ TIME
+ Enumerated Types
+ Geometric Types
+ HSTORE
+ JSON
+ Network Address Types
+ Numeric Types
+ SERIAL, BIGSERIAL, SMALLSERIAL
+ MONEY
+ Object Identifier Types
+ Pseudo\-Types
+ Range Types
+ Special Character Types
+ "char" – A single\-byte internal type \(where the data type named char is enclosed in quotation marks\)\.
+ name – An internal type for object names\.
For more information about these types, see [Special Character Types](https://www.postgresql.org/docs/8.0/datatype-character.html) in the PostgreSQL documentation\.
+ Text Search Types
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_unsupported-postgresql-datatypes.md
|
ea2bcda85e15-1
|
+ Text Search Types
+ TXID\_SNAPSHOT
+ UUID
+ XML
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_unsupported-postgresql-datatypes.md
|
2eee2873609e-0
|
Cancels a database query that is currently running\.
The CANCEL command requires the process ID of the running query and displays a confirmation message to verify that the query was cancelled\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CANCEL.md
|
27be3cc3a751-0
|
```
CANCEL process_id [ 'message' ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CANCEL.md
|
91f497966902-0
|
*process\_id*
Process ID corresponding to the query that you want to cancel\.
'*message*'
An optional confirmation message that displays when the query cancellation completes\. If you don't specify a message, Amazon Redshift displays the default message as verification\. You must enclose the message in single quotation marks\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CANCEL.md
|
58a11ddf2f0f-0
|
You can't cancel a query by specifying a *query ID*; you must specify the query's *process ID* \(PID\)\. You can only cancel queries currently being run by your user\. Superusers can cancel all queries\.
If queries in multiple sessions hold locks on the same table, you can use the [PG\_TERMINATE\_BACKEND](PG_TERMINATE_BACKEND.md) function to terminate one of the sessions, which forces any currently running transactions in the terminated session to release all locks and roll back the transaction\. Query the [STV\_LOCKS](r_STV_LOCKS.md) system table to view currently held locks\.
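The lock\-conflict workflow described above can be sketched as follows; the PID value is hypothetical:

```sql
-- Find sessions currently holding table locks.
SELECT table_id, last_update, lock_owner, lock_owner_pid
FROM stv_locks;

-- Terminate the session holding the lock (8585 is an example PID).
-- This rolls back the session's open transactions and releases its locks.
SELECT pg_terminate_backend(8585);
```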
Following certain internal events, Amazon Redshift might restart an active session and assign a new PID\. If the PID has changed, you might receive the following error message:
```
Session <PID> does not exist. The session PID might have changed. Check the stl_restarted_sessions system table for details.
```
To find the new PID, query the [STL\_RESTARTED\_SESSIONS](r_STL_RESTARTED_SESSIONS.md) system table and filter on the `oldpid` column\.
```
select oldpid, newpid from stl_restarted_sessions where oldpid = 1234;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CANCEL.md
|
535894701e74-0
|
To cancel a currently running query, first retrieve the process ID for the query that you want to cancel\. To determine the process IDs for all currently running queries, type the following command:
```
select pid, starttime, duration,
trim(user_name) as user,
trim (query) as querytxt
from stv_recents
where status = 'Running';
pid | starttime | duration | user | querytxt
-----+----------------------------+----------+----------+-----------------
802 | 2008-10-14 09:19:03.550885 | 132 | dwuser | select
venuename from venue where venuestate='FL', where venuecity not in
('Miami' , 'Orlando');
834 | 2008-10-14 08:33:49.473585 | 1250414 | dwuser | select *
from listing;
964 | 2008-10-14 08:30:43.290527 | 326179 | dwuser | select
sellerid from sales where qtysold in (8, 10);
```
Check the query text to determine which process id \(PID\) corresponds to the query that you want to cancel\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CANCEL.md
|
535894701e74-1
|
Type the following command to use PID 802 to cancel that query:
```
cancel 802;
```
The session where the query was running displays the following message:
```
ERROR: Query (168) cancelled on user's request
```
where `168` is the query ID \(not the process ID used to cancel the query\)\.
Alternatively, you can specify a custom confirmation message to display instead of the default message\. To specify a custom message, include your message in single quotation marks at the end of the CANCEL command:
```
cancel 802 'Long-running query';
```
The session where the query was running displays the following message:
```
ERROR: Long-running query
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CANCEL.md
|
6aa106c21b4c-0
|
Synonym for the BPCHARCMP function\.
See [BPCHARCMP function](r_BPCHARCMP.md) for details\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BTTEXT_PATTERN_CMP.md
|
d53fd6a060d4-0
|
To allow inbound traffic to the host instances, edit the security group and add one Inbound rule for each Amazon Redshift cluster node\. For **Type**, select SSH with TCP protocol on Port 22\. For **Source**, enter the Amazon Redshift cluster node Private IP addresses you retrieved in [Step 3: Retrieve the Amazon Redshift cluster public key and cluster node IP addresses](load-from-emr-steps-retrieve-key-and-ips.md)\. For information about adding rules to an Amazon EC2
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-emr-steps-configure-security-groups.md
|
d53fd6a060d4-1
|
information about adding rules to an Amazon EC2 security group, see [Authorizing Inbound Traffic for Your Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html) in the *Amazon EC2 User Guide*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-emr-steps-configure-security-groups.md
|
86461da49bc9-0
|
System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters\. These views are similar to the tables with the prefix STL, except that the STL tables provide information only for queries run on the main cluster\.
**Topics**
+ [SVCS\_ALERT\_EVENT\_LOG](r_SVCS_ALERT_EVENT_LOG.md)
+ [SVCS\_COMPILE](r_SVCS_COMPILE.md)
+ [SVCS\_CONCURRENCY\_SCALING\_USAGE](r_SVCS_CONCURRENCY_SCALING_USAGE.md)
+ [SVCS\_EXPLAIN](r_SVCS_EXPLAIN.md)
+ [SVCS\_PLAN\_INFO](r_SVCS_PLAN_INFO.md)
+ [SVCS\_QUERY\_SUMMARY](r_SVCS_QUERY_SUMMARY.md)
+ [SVCS\_S3LIST](r_SVCS_S3LIST.md)
+ [SVCS\_S3LOG](r_SVCS_S3LOG.md)
+ [SVCS\_S3PARTITION\_SUMMARY](r_SVCS_S3PARTITION_SUMMARY.md)
+ [SVCS\_S3QUERY\_SUMMARY](r_SVCS_S3QUERY_SUMMARY.md)
+ [SVCS\_STREAM\_SEGS](r_SVCS_STREAM_SEGS.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/svcs_views.md
|
86461da49bc9-1
|
+ [SVCS\_STREAM\_SEGS](r_SVCS_STREAM_SEGS.md)
+ [SVCS\_UNLOAD\_LOG](r_SVCS_UNLOAD_LOG.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/svcs_views.md
|
644c7f513a6a-0
|
Creates a materialized view based on one or more Amazon Redshift tables or external tables that you can create using Spectrum or federated query\. For information about Spectrum, see [Querying external data using Amazon Redshift Spectrum](c-using-spectrum.md)\. For information about federated query, see [Querying data with federated queries in Amazon Redshift](federated-overview.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
4030c8a16661-0
|
```
CREATE MATERIALIZED VIEW mv_name
[ BACKUP { YES | NO } ]
[ table_attributes ]
AS query
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
7e495af7b5a5-0
|
BACKUP
A clause that specifies whether the materialized view is included in automated and manual cluster snapshots, which are stored in Amazon S3\.
The default value for `BACKUP` is `YES`\.
You can specify `BACKUP NO` to save processing time when creating snapshots and restoring from snapshots, and to reduce the amount of storage required in Amazon S3\.
The `BACKUP NO` setting has no effect on automatic replication of data to other nodes within the cluster, so tables with `BACKUP NO` specified are restored in the event of a node failure\.
*table\_attributes*
A clause that specifies how the data in the materialized view is distributed, including the following:
+ The distribution style for the materialized view, in the format `DISTSTYLE { EVEN | ALL | KEY }`\. If you omit this clause, the distribution style is `EVEN`\. For more information, see [Distribution styles](c_choosing_dist_sort.md)\.
+ The distribution key for the materialized view, in the format `DISTKEY ( distkey_identifier )`\. For more information, see [Designating distribution styles](t_designating_distribution_styles.md)\.
+ The sort key for the materialized view, in the format `SORTKEY ( column_name [, ...] )`\. For more information, see [Choosing sort keys](t_Sorting_data.md)\.
AS *query*
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
7e495af7b5a5-1
|
AS *query*
A valid `SELECT` statement which defines the materialized view and its content\. The result set from the query defines the columns and rows of the materialized view\. For information about limitations when creating materialized views, see [Limitations](#mv_CREATE_MATERIALIZED_VIEW-limitations)\.
Furthermore, specific SQL language constructs used in the query determine whether the materialized view can be incrementally or fully refreshed\. For information about the refresh method, see [REFRESH MATERIALIZED VIEW](materialized-view-refresh-sql-command.md)\. For information about the limitations for incremental refresh, see [Limitations for incremental refresh](materialized-view-refresh-sql-command.md#mv_REFRESH_MARTERIALIZED_VIEW_limitations)\.
If the query contains an SQL command that doesn't support incremental refresh, Amazon Redshift displays a message indicating that the materialized view will use a full refresh\. The message may or may not be displayed, depending on the SQL client application\. For example, psql displays the message, and a JDBC client may not\. Check the `state` column of the [STV\_MV\_INFO](r_STV_MV_INFO.md) system table to see the refresh type used by a materialized view\.
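The `BACKUP` and *table\_attributes* clauses described above can be combined as in the following sketch; the view name, the `sales` table, and its columns are assumptions for illustration:

```sql
-- Materialized view with explicit backup, distribution, and sort settings.
CREATE MATERIALIZED VIEW mv_sales_by_seller
BACKUP NO
DISTSTYLE KEY
DISTKEY (sellerid)
SORTKEY (sellerid)
AS SELECT sellerid, SUM(pricepaid) AS total_paid
   FROM sales
   GROUP BY sellerid;
```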
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
6729b022cfd4-0
|
To create a materialized view, you must have the following privileges:
+ CREATE privileges for a schema\.
+ Table\-level SELECT privilege on the base tables to create a materialized view\. Even if you have column\-level privileges on specific columns, you can't create a materialized view on only those columns\.
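A sketch of granting these privileges follows; the schema, table, and user names are hypothetical:

```sql
-- Grant the privileges needed to create a materialized view:
-- CREATE on the target schema, and table-level SELECT on each base table.
GRANT CREATE ON SCHEMA reporting TO analyst;
GRANT SELECT ON TABLE public.sales TO analyst;
```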
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
4b0b6d49d6f4-0
|
When using materialized views in Amazon Redshift, follow these usage notes for data definition language \(DDL\) updates to materialized views or base tables\.
+ You can add columns to a base table without affecting any materialized views that reference the base table\.
+ Some operations can leave the materialized view in a state that can't be refreshed at all\. Examples are operations such as renaming or dropping a column, changing the type of a column, and changing the name of a schema\. Such materialized views can be queried but can't be refreshed\. In this case, you must drop and recreate the materialized view\.
+ In general, you can't alter a materialized view's definition \(its SQL statement\)\.
+ You can't rename a materialized view\.
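For example, if a base\-table column that a materialized view depends on is renamed, the view can no longer be refreshed, and the recovery path is to drop and recreate it\. A sketch, reusing the `tickets_mv` definition from the examples in this topic:

```sql
-- After a base-table column rename, REFRESH fails, so drop and recreate.
DROP MATERIALIZED VIEW tickets_mv;

CREATE MATERIALIZED VIEW tickets_mv AS
SELECT catgroup, SUM(qtysold) AS sold
FROM category c, event e, sales s
WHERE c.catid = e.catid
  AND e.eventid = s.eventid
GROUP BY catgroup;
```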
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
afdf7dd8a5cc-0
|
You can't define a materialized view that references or includes any of the following:
+ Any other materialized view, a standard view, or system tables and views\.
+ Temporary tables\.
+ User\-defined functions\.
+ The ORDER BY, LIMIT, or OFFSET clause\.
+ Late binding references to base tables\. In other words, any base tables or related columns referenced in the defining SQL query of the materialized view must exist and must be valid\.
+ System administration functions\. For a list, see [System administration functions](r_System_administration_functions.md)\.
+ System information functions\. For a list, see [System information functions](r_System_information_functions.md)\.
+ Leader node\-only functions: CURRENT\_SCHEMA, CURRENT\_SCHEMAS, HAS\_DATABASE\_PRIVILEGE, HAS\_SCHEMA\_PRIVILEGE, HAS\_TABLE\_PRIVILEGE, AGE, CURRENT\_TIME, CURRENT\_TIMESTAMP, LOCALTIME, NOW\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
afdf7dd8a5cc-1
|
+ Date functions: CURRENT\_DATE, DATE, DATE\_PART, DATE\_TRUNC, DATE\_CMP\_TIMESTAMPTZ, SYSDATE, TIMEOFDAY, TO\_TIMESTAMP\. When defining a materialized view, consider the following functions with specific input argument types: DATE is immutable for timestamp; DATE\_PART is immutable for date, time, interval, and time\-tz; DATE\_TRUNC is immutable for date, timestamp, and interval\. You must use functions that are immutable in order to successfully create materialized views\. Otherwise, Amazon Redshift blocks the creation of materialized views that contain functions that are not immutable\. For more information about functions, see [Function volatility categories](https://www.postgresql.org/docs/8.0/xfunc-volatility.html)\.
+ Math functions: RANDOM\.
+ Date type formatting functions: TO\_CHAR WITH TIMESTAMPTZ\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
63d01f1650b5-0
|
The following example creates a materialized view from three base tables that are joined and aggregated\. Each row represents a category with the number of tickets sold\. When you query the tickets\_mv materialized view, you directly access its precomputed data\.
```
CREATE MATERIALIZED VIEW tickets_mv AS
select catgroup,
sum(qtysold) as sold
from category c, event e, sales s
where c.catid = e.catid
and e.eventid = s.eventid
group by catgroup;
```
The following example creates a materialized view similar to the previous example, but uses the aggregate function MAX\(\), which is currently not supported for incremental refresh\. You can verify this by querying the STV\_MV\_INFO table and seeing that the `state` column is 0\.
```
CREATE MATERIALIZED VIEW tickets_mv_max AS
select catgroup,
max(qtysold) as sold
from category c, event e, sales s
where c.catid = e.catid
and e.eventid = s.eventid
group by catgroup;
SELECT name, state FROM STV_MV_INFO
name | state
-----------------+--------
tickets_mv | 1
tickets_mv_max | 0
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
63d01f1650b5-1
|
tickets_mv | 1
tickets_mv_max | 0
```
The following example uses a UNION ALL clause to join the Amazon Redshift `public_sales` table and the Redshift Spectrum `spectrum.sales` table to create a materialized view `mv_sales_vw`\. For information about the CREATE EXTERNAL TABLE command for Amazon Redshift Spectrum, see [CREATE EXTERNAL TABLE](r_CREATE_EXTERNAL_TABLE.md)\. The Redshift Spectrum external table references the data on Amazon S3\.
```
CREATE MATERIALIZED VIEW mv_sales_vw as
select salesid, qtysold, pricepaid, commission, saletime from public.sales
union all
select salesid, qtysold, pricepaid, commission, saletime from spectrum.sales
```
The following example creates a materialized view `mv_fq` based on a federated query external table\. For information about federated query, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\.
```
CREATE MATERIALIZED VIEW mv_fq as select firstname, lastname from apg.mv_fq_example;
select firstname, lastname from mv_fq;
firstname | lastname
-----------+----------
John | Day
Jane | Doe
(2 rows)
```
The following example shows the definition of a materialized view\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
63d01f1650b5-2
|
Jane | Doe
(2 rows)
```
The following example shows the definition of a materialized view\.
```
SELECT pg_catalog.pg_get_viewdef('mv_sales_vw'::regclass::oid, true);
pg_get_viewdef
---------------------------------------------------
create materialized view mv_sales_vw as select a from t;
```
For an overview of materialized views and details about the SQL commands used to refresh and drop them, see the following topics:
+ [Creating materialized views in Amazon Redshift](materialized-view-overview.md)
+ [REFRESH MATERIALIZED VIEW](materialized-view-refresh-sql-command.md)
+ [DROP MATERIALIZED VIEW](materialized-view-drop-sql-command.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-create-sql-command.md
|
a3b78c3a4823-0
|
In this tutorial, you uploaded data files to Amazon S3 and then used COPY commands to load the data from the files into Amazon Redshift tables\.
You loaded data using the following formats:
+ Character\-delimited
+ CSV
+ Fixed\-width
You used the STL\_LOAD\_ERRORS system table to troubleshoot load errors, and then used the REGION, MANIFEST, MAXERROR, ACCEPTINVCHARS, DATEFORMAT, and NULL AS options to resolve the errors\.
You applied the following best practices for loading data:
+ [Use a COPY command to load data](c_best-practices-use-copy.md)
+ [Split your load data into multiple files](c_best-practices-use-multiple-files.md)
+ [Use a single COPY command to load from multiple files](c_best-practices-single-copy-command.md)
+ [Compress your data files](c_best-practices-compress-data-files.md)
+ [Use a manifest file](best-practices-preventing-load-data-errors.md)
+ [Verify data files before and after a load](c_best-practices-verifying-data-files.md)
For more information about Amazon Redshift best practices, see the following links:
+ [Amazon Redshift best practices for loading data](c_loading-data-best-practices.md)
+ [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-summary.md
|
a3b78c3a4823-1
|
+ [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md)
+ [Amazon Redshift best practices for designing queries](c_designing-queries-best-practices.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-summary.md
|
9012e2c000e4-0
|
For your next step, if you haven't done so already, we recommend taking [Tutorial: Tuning table design](tutorial-tuning-tables.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-summary.md
|
e685f3bd0851-0
|
The SVL\_VACUUM\_PERCENTAGE view reports the percentage of data blocks allocated to a table after performing a vacuum\. This percentage number shows how much disk space was reclaimed\. See the [VACUUM](r_VACUUM_command.md) command for more information about the vacuum utility\.
SVL\_VACUUM\_PERCENTAGE is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_VACUUM_PERCENTAGE.md
|
9451589487cc-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_VACUUM_PERCENTAGE.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_VACUUM_PERCENTAGE.md
|
e5a97dd0bb4c-0
|
The following query displays the percentage for a specific operation on table 100238:
```
select * from svl_vacuum_percentage
where table_id=100238 and xid=2200;
xid | table_id | percentage
-----+----------+------------
1337 | 100238 | 60
(1 row)
```
After this vacuum operation, the table contained 60 percent of the original blocks\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_VACUUM_PERCENTAGE.md
|
60c54de93e23-0
|
Use the SVV\_QUERY\_INFLIGHT view to determine what queries are currently running on the database\. This view joins [STV\_INFLIGHT](r_STV_INFLIGHT.md) to [STL\_QUERYTEXT](r_STL_QUERYTEXT.md)\. SVV\_QUERY\_INFLIGHT does not show leader\-node only queries\. For more information, see [Leader node–only functions](c_SQL_functions_leader_node_only.md)\.
SVV\_QUERY\_INFLIGHT is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_QUERY_INFLIGHT.md
|
dfe4c7add209-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_QUERY_INFLIGHT.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_QUERY_INFLIGHT.md
|
2598cb8525d9-0
|
The sample output below shows two queries currently running, the SVV\_QUERY\_INFLIGHT query itself and query 428, which is split into three rows in the table\. \(The starttime and statement columns are truncated in this sample output\.\)
```
select slice, query, pid, starttime, suspended, trim(text) as statement, sequence
from svv_query_inflight
order by query, sequence;
slice|query| pid | starttime |suspended| statement | sequence
-----+-----+------+----------------------+---------+-----------+---------
1012 | 428 | 1658 | 2012-04-10 13:53:... | 0 | select ...| 0
1012 | 428 | 1658 | 2012-04-10 13:53:... | 0 | enueid ...| 1
1012 | 428 | 1658 | 2012-04-10 13:53:... | 0 | atname,...| 2
1012 | 429 | 1608 | 2012-04-10 13:53:... | 0 | select ...| 0
(4 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_QUERY_INFLIGHT.md
|
63fc15571cce-0
|
As it loads the table, COPY attempts to implicitly convert the strings in the source data to the data type of the target column\. If you need to specify a conversion that is different from the default behavior, or if the default conversion results in errors, you can manage data conversions by specifying the following parameters\.
+ [ACCEPTANYDATE](#copy-acceptanydate)
+ [ACCEPTINVCHARS](#copy-acceptinvchars)
+ [BLANKSASNULL](#copy-blanksasnull)
+ [DATEFORMAT](#copy-dateformat)
+ [EMPTYASNULL](#copy-emptyasnull)
+ [ENCODING](#copy-encoding)
+ [ESCAPE](#copy-escape)
+ [EXPLICIT_IDS](#copy-explicit-ids)
+ [FILLRECORD](#copy-fillrecord)
+ [IGNOREBLANKLINES](#copy-ignoreblanklines)
+ [IGNOREHEADER](#copy-ignoreheader)
+ [NULL AS](#copy-null-as)
+ [REMOVEQUOTES](#copy-removequotes)
+ [ROUNDEC](#copy-roundec)
+ [TIMEFORMAT](#copy-timeformat)
+ [TRIMBLANKS](#copy-trimblanks)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-conversion.md
|
63fc15571cce-1
|
+ [TIMEFORMAT](#copy-timeformat)
+ [TRIMBLANKS](#copy-trimblanks)
+ [TRUNCATECOLUMNS](#copy-truncatecolumns)
<a name="copy-data-conversion-parameters"></a>Data conversion parameters
ACCEPTANYDATE <a name="copy-acceptanydate"></a>
Allows any date format, including invalid formats such as `00/00/00 00:00:00`, to be loaded without generating an error\. This parameter applies only to TIMESTAMP and DATE columns\. Always use ACCEPTANYDATE with the DATEFORMAT parameter\. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field\.
ACCEPTINVCHARS \[AS\] \['*replacement\_char*'\] <a name="copy-acceptinvchars"></a>
Enables loading of data into VARCHAR columns even if the data contains invalid UTF\-8 characters\. When ACCEPTINVCHARS is specified, COPY replaces each invalid UTF\-8 character with a string of equal length consisting of the character specified by *replacement\_char*\. For example, if the replacement character is '`^`', an invalid three\-byte character will be replaced with '`^^^`'\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-conversion.md
|
63fc15571cce-2
|
The replacement character can be any ASCII character except NULL\. The default is a question mark \( ? \)\. For information about invalid UTF\-8 characters, see [Multibyte character load errors](multi-byte-character-load-errors.md)\.
COPY returns the number of rows that contained invalid UTF\-8 characters, and it adds an entry to the [STL\_REPLACEMENTS](r_STL_REPLACEMENTS.md) system table for each affected row, up to a maximum of 100 rows for each node slice\. Additional invalid UTF\-8 characters are also replaced, but those replacement events aren't recorded\.
If ACCEPTINVCHARS isn't specified, COPY returns an error whenever it encounters an invalid UTF\-8 character\.
ACCEPTINVCHARS is valid only for VARCHAR columns\.
BLANKSASNULL <a name="copy-blanksasnull"></a>
Loads blank fields, which consist of only white space characters, as NULL\. This option applies only to CHAR and VARCHAR columns\. Blank fields for other data types, such as INT, are always loaded with NULL\. For example, a string that contains three space characters in succession \(and no other characters\) is loaded as a NULL\. The default behavior, without this option, is to load the space characters as is\.
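The BLANKSASNULL and ACCEPTINVCHARS parameters described above can be combined as in the following sketch; the table name, S3 path, and IAM role are hypothetical:

```sql
-- Blank CHAR/VARCHAR fields load as NULL, and each invalid UTF-8
-- character is replaced with '^'.
COPY mytable
FROM 's3://mybucket/data/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
BLANKSASNULL
ACCEPTINVCHARS AS '^';
```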
DATEFORMAT \[AS\] \{'*dateformat\_string*' \| 'auto' \} <a name="copy-dateformat"></a>
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-conversion.md
|
63fc15571cce-3
|
If no DATEFORMAT is specified, the default format is `'YYYY-MM-DD'`\. For example, an alternative valid format is `'MM-DD-YYYY'`\.
If the COPY command doesn't recognize the format of your date or time values, or if your date or time values use different formats, use the `'auto'` argument with the DATEFORMAT or TIMEFORMAT parameter\. The `'auto'` argument recognizes several formats that aren't supported when using a DATEFORMAT and TIMEFORMAT string\. The `'auto'` keyword is case\-sensitive\. For more information, see [Using automatic recognition with DATEFORMAT and TIMEFORMAT](automatic-recognition.md)\.
The date format can include time information \(hour, minutes, seconds\), but this information is ignored\. The AS keyword is optional\. For more information, see [ DATEFORMAT and TIMEFORMAT strings](r_DATEFORMAT_and_TIMEFORMAT_strings.md)\.
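A sketch of the DATEFORMAT usage described above; the table name, S3 path, and IAM role are hypothetical:

```sql
-- Load dates written as MM-DD-YYYY. Alternatively, DATEFORMAT 'auto'
-- lets COPY recognize several date formats automatically.
COPY event_staging
FROM 's3://mybucket/events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
DATEFORMAT AS 'MM-DD-YYYY';
```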
EMPTYASNULL <a name="copy-emptyasnull"></a>
Indicates that Amazon Redshift should load empty CHAR and VARCHAR fields as NULL\. Empty fields for other data types, such as INT, are always loaded with NULL\. Empty fields occur when data contains two delimiters in succession with no characters between the delimiters\. EMPTYASNULL and NULL AS '' \(empty string\) produce the same behavior\.
ENCODING \[AS\] *file\_encoding* <a name="copy-encoding"></a>
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-conversion.md
|
63fc15571cce-4
|
ENCODING \[AS\] *file\_encoding* <a name="copy-encoding"></a>
Specifies the encoding type of the load data\. The COPY command converts the data from the specified encoding into UTF\-8 during loading\.
Valid values for *file\_encoding* are as follows:
+ `UTF8`
+ `UTF16`
+ `UTF16LE`
+ `UTF16BE`
The default is `UTF8`\.
Source file names must use UTF\-8 encoding\.
The following files must use UTF\-8 encoding, even if a different encoding is specified for the load data:
+ Manifest files
+ JSONPaths files
The argument strings provided with the following parameters must use UTF\-8:
+ FIXEDWIDTH '*fixedwidth\_spec*'
+ ACCEPTINVCHARS '*replacement\_char*'
+ DATEFORMAT '*dateformat\_string*'
+ TIMEFORMAT '*timeformat\_string*'
+ NULL AS '*null\_string*'
Fixed\-width data files must use UTF\-8 encoding\. The field widths are based on the number of characters, not the number of bytes\.
All load data must use the specified encoding\. If COPY encounters a different encoding, it skips the file and returns an error\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-conversion.md
|
63fc15571cce-5
|
All load data must use the specified encoding\. If COPY encounters a different encoding, it skips the file and returns an error\.
If you specify `UTF16`, then your data must have a byte order mark \(BOM\)\. If you know whether your UTF\-16 data is little\-endian \(LE\) or big\-endian \(BE\), you can use `UTF16LE` or `UTF16BE`, regardless of the presence of a BOM\.
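A sketch of the ENCODING usage described above; the table name, S3 path, and IAM role are hypothetical:

```sql
-- Load UTF-16 little-endian source files; COPY converts the data
-- to UTF-8 during the load.
COPY mytable
FROM 's3://mybucket/utf16le-data/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
ENCODING AS UTF16LE;
```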
ESCAPE <a name="copy-escape"></a>
When this parameter is specified, the backslash character \(`\`\) in input data is treated as an escape character\. The character that immediately follows the backslash character is loaded into the table as part of the current column value, even if it is a character that normally serves a special purpose\. For example, you can use this parameter to escape the delimiter character, a quotation mark, an embedded newline character, or the escape character itself when any of these characters is a legitimate part of a column value\.
If you specify the ESCAPE parameter in combination with the REMOVEQUOTES parameter, you can escape and retain quotation marks \(`'` or `"`\) that might otherwise be removed\. The default null string, `\N`, works as is, but it can also be escaped in the input data as `\\N`\. As long as you don't specify an alternative null string with the NULL AS parameter, `\N` and `\\N` produce the same results\.
The control character `0x00` \(NUL\) cannot be escaped and should be removed from the input data or converted\. This character is treated as an end of record \(EOR\) marker, causing the remainder of the record to be truncated\.
You cannot use the ESCAPE parameter for FIXEDWIDTH loads, and you cannot specify the escape character itself; the escape character is always the backslash character\. Also, you must ensure that the input data contains the escape character in the appropriate places\.
Here are some examples of input data and the resulting loaded data when the ESCAPE parameter is specified\. The result for row 4 assumes that the REMOVEQUOTES parameter is also specified\. The input data consists of two pipe\-delimited fields:
```
1|The quick brown fox\[newline]
jumped over the lazy dog.
2| A\\B\\C
3| A \| B \| C
4| 'A Midsummer Night\'s Dream'
```
The data loaded into column 2 looks like this:
```
The quick brown fox
jumped over the lazy dog.
A\B\C
A|B|C
A Midsummer Night's Dream
```
Applying the escape character to the input data for a load is the responsibility of the user\. One exception to this requirement is when you reload data that was previously unloaded with the ESCAPE parameter\. In this case, the data will already contain the necessary escape characters\.
The ESCAPE parameter doesn't interpret octal, hex, Unicode, or other escape sequence notation\. For example, if your source data contains the octal line feed value \(`\012`\) and you try to load this data with the ESCAPE parameter, Amazon Redshift loads the value `012` into the table and doesn't interpret this value as a line feed that is being escaped\.
In order to escape newline characters in data that originates from Microsoft Windows platforms, you might need to use two escape characters: one for the carriage return and one for the line feed\. Alternatively, you can remove the carriage returns before loading the file \(for example, by using the dos2unix utility\)\.
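Input like the pipe\-delimited rows shown earlier could be loaded with a command along the following lines \(table and path are placeholders\)\. ESCAPE preserves the escaped delimiters and quotation marks, and REMOVEQUOTES strips the outer quotation marks:

```
copy category
from 's3://mybucket/data/escaped_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
escape
removequotes;
```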
EXPLICIT\_IDS <a name="copy-explicit-ids"></a>
Use EXPLICIT\_IDS with tables that have IDENTITY columns if you want to override the autogenerated values with explicit values from the source data files for the tables\. If the command includes a column list, that list must include the IDENTITY columns to use this parameter\. The data format for EXPLICIT\_IDS values must match the IDENTITY format specified by the CREATE TABLE definition\.
After you run a COPY command against a table with the EXPLICIT\_IDS option, Amazon Redshift no longer checks the uniqueness of IDENTITY columns in the table\.
If a column is defined with GENERATED BY DEFAULT AS IDENTITY, then it can be copied\. Values are generated or updated with values that you supply\. The EXPLICIT\_IDS option isn't required\. COPY doesn't update the identity high watermark\.
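For instance, given a hypothetical table created with an IDENTITY column, a COPY that supplies its own identity values from the source file might look like this:

```
create table category_ids (
  catid int identity(1,1),
  catname varchar(20));

copy category_ids
from 's3://mybucket/data/category_ids.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
explicit_ids;
```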
FILLRECORD <a name="copy-fillrecord"></a>
Allows data files to be loaded when contiguous columns are missing at the end of some of the records\. The missing columns are filled with either zero\-length strings or NULLs, as appropriate for the data types of the columns in question\. If the EMPTYASNULL parameter is present in the COPY command and the missing column is a VARCHAR column, NULLs are loaded; if EMPTYASNULL isn't present and the column is a VARCHAR, zero\-length strings are loaded\. NULL substitution only works if the column definition allows NULLs\.
For example, if the table definition contains four nullable CHAR columns, and a record contains the values `apple, orange, banana, mango`, the COPY command could load and fill in a record that contains only the values `apple, orange`\. The missing CHAR values would be loaded as NULL values\.
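A minimal sketch of such a load \(table and path are placeholders\)\. With FILLRECORD, a record that supplies only the first two of four columns still loads, and the trailing columns are filled:

```
copy fruit
from 's3://mybucket/data/fruit_short_rows.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter ','
fillrecord;
```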
IGNOREBLANKLINES <a name="copy-ignoreblanklines"></a>
Ignores blank lines that only contain a line feed in a data file and does not try to load them\.
IGNOREHEADER \[ AS \] *number\_rows* <a name="copy-ignoreheader"></a>
Treats the specified *number\_rows* as a file header and doesn't load them\. Use IGNOREHEADER to skip file headers in all files in a parallel load\.
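For example, to skip a single header row in every input file \(bucket and role are hypothetical\):

```
copy sales
from 's3://mybucket/data/sales_with_header.csv'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
csv
ignoreheader 1;
```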
NULL AS '*null\_string*' <a name="copy-null-as"></a>
Loads fields that match *null\_string* as NULL, where *null\_string* can be any string\. If your data includes a null terminator, also referred to as NUL \(UTF\-8 0000\) or binary zero \(0x000\), COPY treats it as any other character\. For example, a record containing '1' \|\| NUL \|\| '2' is copied as a string of 3 bytes\. If a field contains only NUL, you can use NULL AS to replace the null terminator with NULL by specifying `'\0'` or `'\000'`—for example, `NULL AS '\0'` or `NULL AS '\000'`\. If a field contains a string that ends with NUL and NULL AS is specified, the string is inserted with NUL at the end\. Do not use '\\n' \(newline\) for the *null\_string* value\. Amazon Redshift reserves '\\n' for use as a line delimiter\. The default *null\_string* is `'\N'`\.
If you attempt to load nulls into a column defined as NOT NULL, the COPY command will fail\.
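As a sketch \(placeholders throughout\), the following command loads fields containing the literal string `null` as SQL NULL:

```
copy users
from 's3://mybucket/data/users_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
null as 'null';
```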
REMOVEQUOTES <a name="copy-removequotes"></a>
Removes surrounding quotation marks from strings in the incoming data\. All characters within the quotation marks, including delimiters, are retained\. If a string has a beginning single or double quotation mark but no corresponding ending mark, the COPY command fails to load that row and returns an error\. The following table shows some simple examples of strings that contain quotes and the resulting loaded values\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-conversion.html)
ROUNDEC <a name="copy-roundec"></a>
Rounds up numeric values when the scale of the input value is greater than the scale of the column\. By default, COPY truncates values when necessary to fit the scale of the column\. For example, if a value of `20.259` is loaded into a DECIMAL\(8,2\) column, COPY truncates the value to `20.25` by default\. If ROUNDEC is specified, COPY rounds the value to `20.26`\. The INSERT command always rounds values when necessary to match the column's scale, so a COPY command with the ROUNDEC parameter behaves the same as an INSERT command\.
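For example, assuming a hypothetical table whose price column is DECIMAL\(8,2\), adding ROUNDEC makes COPY round an input value of `20.259` to `20.26` instead of truncating it to `20.25`:

```
copy prices
from 's3://mybucket/data/prices.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
roundec;
```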
TIMEFORMAT \[AS\] \{'*timeformat\_string*' \| 'auto' \| 'epochsecs' \| 'epochmillisecs' \} <a name="copy-timeformat"></a>