f293f50e8753-0
|
**Note**
If, when running these examples, you encounter an error similar to the following:
```
ERROR: 42601: [Amazon](500310) unterminated dollar-quoted string at or near "$$
```
See [Overview of stored procedures in Amazon Redshift](stored-procedure-create.md).
The following example creates a procedure with two input parameters.
```
CREATE OR REPLACE PROCEDURE test_sp1(f1 int, f2 varchar(20))
AS $$
DECLARE
min_val int;
BEGIN
DROP TABLE IF EXISTS tmp_tbl;
CREATE TEMP TABLE tmp_tbl(id int);
INSERT INTO tmp_tbl values (f1),(10001),(10002);
SELECT INTO min_val MIN(id) FROM tmp_tbl;
RAISE INFO 'min_val = %, f2 = %', min_val, f2;
END;
$$ LANGUAGE plpgsql;
```
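You can run the procedure with CALL. For example (the argument values below are illustrative), passing 10000 as `f1` inserts it alongside the two hard-coded values, so MIN(id) is 10000:

```
call test_sp1(10000, 'example');
```

This should raise an INFO message of the form `min_val = 10000, f2 = example`.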
The following example creates a procedure with one IN parameter, one OUT parameter, and one INOUT parameter.
```
CREATE OR REPLACE PROCEDURE test_sp2(f1 IN int, f2 INOUT varchar(256), out_var OUT varchar(256))
AS $$
DECLARE
loop_var int;
BEGIN
IF f1 is null OR f2 is null THEN
RAISE EXCEPTION 'input cannot be null';
END IF;
DROP TABLE if exists my_etl;
CREATE TEMP TABLE my_etl(a int, b varchar);
FOR loop_var IN 1..f1 LOOP
insert into my_etl values (loop_var, f2);
f2 := f2 || '+' || f2;
END LOOP;
SELECT INTO out_var count(*) from my_etl;
END;
$$ LANGUAGE plpgsql;
```
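OUT and INOUT results are returned by CALL; OUT arguments are omitted from the argument list. For example (values illustrative), calling with `f1 = 2` runs the loop twice, doubling `f2` on each pass and counting 2 rows into `out_var`:

```
call test_sp2(2, '2019');
```

The call should return `f2` as `2019+2019+2019+2019` and the OUT value as `2`.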
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_PROCEDURE.md
|
2eba83ffd539-0
|
Selects rows defined by any query and inserts them into a new table. You can specify whether to create a temporary or a persistent table.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SELECT_INTO.md
|
5c588d47f67b-0
|
```
[ WITH with_subquery [, ...] ]
SELECT
[ TOP number ] [ ALL | DISTINCT ]
* | expression [ AS output_name ] [, ...]
INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table
[ FROM table_reference [, ...] ]
[ WHERE condition ]
[ GROUP BY expression [, ...] ]
[ HAVING condition [, ...] ]
[ { UNION | INTERSECT | { EXCEPT | MINUS } } [ ALL ] query ]
[ ORDER BY expression
[ ASC | DESC ] ]
[ LIMIT { number | ALL } ]
[ OFFSET start ]
```
For details about the parameters of this command, see [SELECT](r_SELECT_synopsis.md).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SELECT_INTO.md
|
eff6708365c1-0
|
Select all of the rows from the EVENT table and create a NEWEVENT table:
```
select * into newevent from event;
```
Select the result of an aggregate query into a temporary table called PROFITS:
```
select username, lastname, sum(pricepaid-commission) as profit
into temp table profits
from sales, users
where sales.sellerid=users.userid
group by 1, 2
order by 3 desc;
```
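SELECT ... INTO is functionally similar to CREATE TABLE AS, so the first example above could typically also be written as:

```
create table newevent as
select * from event;
```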
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SELECT_INTO.md
|
466fd1799c66-0
|
Literals or constants that represent numbers can be integer or floating-point.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_literals201.md
|
e7d8ddb75e0d-0
|
An integer constant is a sequence of the digits 0-9, with an optional positive (+) or negative (-) sign preceding the digits.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_literals201.md
|
408289c7a646-0
|
```
[ + | - ] digit ...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_literals201.md
|
bb05318968f9-0
|
Valid integers include the following:
```
23
-555
+17
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_literals201.md
|
77efff9739ac-0
|
Floating-point literals (also referred to as decimal, numeric, or fractional literals) are sequences of digits that can include a decimal point and, optionally, the exponent marker (e).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_literals201.md
|
b7afafcf3e38-0
|
```
[ + | - ] digit ... [ . ] [ digit ...]
[ e | E [ + | - ] digit ... ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_literals201.md
|
71f21eb687a2-0
|
e | E
e or E indicates that the number is specified in scientific notation.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_literals201.md
|
f4291ffb4b1c-0
|
Valid floating\-point literals include the following:
```
3.14159
-37.
2.0e19
-2E-19
```
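Numeric literals can appear anywhere an expression is allowed. As a quick sketch, the forms above can be combined in a single query:

```
select 23      as int_lit,
       -555    as neg_int,
       3.14159 as float_lit,
       2.0e19  as sci_lit;
```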
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_literals201.md
|
f3941a27721f-0
|
Displays information to identify and resolve transaction conflicts with database tables.
A transaction conflict occurs when two or more users are querying and modifying data rows from tables such that their transactions cannot be serialized. The transaction that executes a statement that would break serializability is aborted and rolled back. Every time a transaction conflict occurs, Amazon Redshift writes a data row to the STL_TR_CONFLICT system table containing details about the aborted transaction. For more information, see [Serializable isolation](c_serial_isolation.md).
This view is visible only to superusers. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_TR_CONFLICT.md
|
91fed4b15be7-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_TR_CONFLICT.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_TR_CONFLICT.md
|
457b1d05f36f-0
|
To return information about conflicts that involved a particular table, run a query that specifies the table ID:
```
select * from stl_tr_conflict where table_id=100234
order by xact_start_ts;
xact_id | process_id |      xact_start_ts       |        abort_time        | table_id
--------+------------+--------------------------+--------------------------+---------
1876 | 8551 |2010-03-30 09:19:15.852326|2010-03-30 09:20:17.582499|100234
1928 | 15034 |2010-03-30 13:20:00.636045|2010-03-30 13:20:47.766817|100234
1991 | 23753 |2010-04-01 13:05:01.220059|2010-04-01 13:06:06.94098 |100234
2002 | 23679 |2010-04-01 13:17:05.173473|2010-04-01 13:18:27.898655|100234
(4 rows)
```
You can get the table ID from the DETAIL section of the error message for serializability violations (error 1023).
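If you know the table name but not its ID, you can typically look the ID up in SVV_TABLE_INFO first (the table name 'sales' below is illustrative):

```
select table_id, "table"
from svv_table_info
where "table" = 'sales';
```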
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_TR_CONFLICT.md
|
5286be7cb40d-0
|
Analyzes merge join execution steps for queries.
This view is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MERGEJOIN.md
|
ad0191fa5152-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_MERGEJOIN.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MERGEJOIN.md
|
df699ea5b6a4-0
|
The following example returns merge join results for the most recent query.
```
select sum(s.qtysold), e.eventname
from event e, listing l, sales s
where e.eventid=l.eventid
and l.listid= s.listid
group by e.eventname;
select * from stl_mergejoin where query=pg_last_query_id();
```
```
userid | query | slice | segment | step | starttime | endtime | tasknum | rows | tbl
--------+-------+-------+---------+------+---------------------+---------------------+---------+------+-----
100 | 27399 | 3 | 4 | 4 | 2013-10-02 16:30:41 | 2013-10-02 16:30:41 | 19 |43428 | 240
100 | 27399 | 0 | 4 | 4 | 2013-10-02 16:30:41 | 2013-10-02 16:30:41 | 19 |43159 | 240
100 | 27399 | 2 | 4 | 4 | 2013-10-02 16:30:41 | 2013-10-02 16:30:41 | 19 |42778 | 240
100 | 27399 | 1 | 4 | 4 | 2013-10-02 16:30:41 | 2013-10-02 16:30:41 | 19 |43091 | 240
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MERGEJOIN.md
|
a35d11ae22fd-0
|
You can use AWS Lake Formation to centrally define and enforce database, table, and column-level access policies to data stored in Amazon S3. After your data is registered with an AWS Glue Data Catalog enabled with Lake Formation, you can query it by using several services, including Redshift Spectrum.
Lake Formation provides the security and governance of the Data Catalog. Within Lake Formation, you can grant and revoke permissions to the Data Catalog objects, such as databases, tables, columns, and underlying Amazon S3 storage.
**Important**
You can only use Redshift Spectrum with a Lake Formation enabled Data Catalog in AWS Regions where Lake Formation is available. For a list of available Regions, see [AWS Lake Formation Endpoints and Quotas](https://docs.aws.amazon.com/general/latest/gr/lake-formation.html) in the AWS General Reference.
By using Redshift Spectrum with Lake Formation, you can do the following:
+ Use Lake Formation as a centralized place where you grant and revoke permissions and access control policies on all of your data in the data lake. Lake Formation provides a hierarchy of permissions to control access to databases and tables in a Data Catalog. For more information, see [Lake Formation Permissions](https://docs.aws.amazon.com/lake-formation/latest/dg/lake-formation-permissions.html).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/spectrum-lake-formation.md
|
a35d11ae22fd-1
|
+ Create external tables and run queries on data in the data lake. Before users in your account can run queries, a data lake account administrator registers your existing Amazon S3 paths containing source data with Lake Formation. The administrator also creates tables and grants permissions to your users. Access can be granted on databases, tables, or columns.
After the data is registered in the Data Catalog, each time users try to run queries, Lake Formation verifies access to the table for that specific principal. Lake Formation vends temporary credentials to Redshift Spectrum, and the query runs.
When you use Redshift Spectrum with a Data Catalog enabled for Lake Formation, an IAM role associated with the cluster must have permission to the Data Catalog.
**Important**
You can't chain IAM roles when using Redshift Spectrum with a Data Catalog enabled for Lake Formation.
To learn more about the steps required to set up AWS Lake Formation to use with Redshift Spectrum, see [Tutorial: Creating a Data Lake from a JDBC Source in Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/getting-started-tutorial.html) in the AWS Lake Formation Developer Guide. Specifically, see [Query the Data in the Data Lake Using Amazon Redshift Spectrum](https://docs.aws.amazon.com/lake-formation/latest/dg/tut-query-redshift.html) for details about integration with Redshift Spectrum. The data and AWS resources used in this topic depend on prior steps in the tutorial.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/spectrum-lake-formation.md
|
d2c003822d4d-0
|
Use the SVL_STATEMENTTEXT view to get a complete record of all of the SQL commands that have been run on the system.
The SVL_STATEMENTTEXT view contains the union of all of the rows in the [STL_DDLTEXT](r_STL_DDLTEXT.md), [STL_QUERYTEXT](r_STL_QUERYTEXT.md), and [STL_UTILITYTEXT](r_STL_UTILITYTEXT.md) tables. This view also includes a join to the STL_QUERY table.
SVL_STATEMENTTEXT is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STATEMENTTEXT.md
|
8d5ae5c09173-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_STATEMENTTEXT.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STATEMENTTEXT.md
|
aa8efff09dd2-0
|
The following query returns DDL statements that were run on June 16th, 2009:
```
select starttime, type, rtrim(text) from svl_statementtext
where starttime like '2009-06-16%' and type='DDL' order by starttime asc;
starttime | type | rtrim
---------------------------|------|--------------------------------
2009-06-16 10:36:50.625097 | DDL | create table ddltest(c1 int);
2009-06-16 15:02:16.006341 | DDL | drop view alltickitjoin;
2009-06-16 15:02:23.65285 | DDL | drop table sales;
2009-06-16 15:02:24.548928 | DDL | drop table listing;
2009-06-16 15:02:25.536655 | DDL | drop table event;
...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STATEMENTTEXT.md
|
da56ec0989a9-0
|
To reconstruct the SQL stored in the `text` column of SVL_STATEMENTTEXT, run a SELECT statement to create SQL from 1 or more parts in the `text` column. Before running the reconstructed SQL, replace any (`\n`) special characters with a new line. The result of the following SELECT statement is rows of reconstructed SQL in the `query_statement` field.
```
select LISTAGG(CASE WHEN LEN(RTRIM(text)) = 0 THEN text ELSE RTRIM(text) END, '') within group (order by sequence) AS query_statement
from SVL_STATEMENTTEXT where pid=pg_backend_pid();
```
For example, the following query selects 3 columns. The query itself is longer than 200 characters and is stored in parts in SVL_STATEMENTTEXT.
```
select
1 AS a0123456789012345678901234567890123456789012345678901234567890,
2 AS b0123456789012345678901234567890123456789012345678901234567890,
3 AS b012345678901234567890123456789012345678901234
FROM stl_querytext;
```
In this example, the query is stored in 2 parts (rows) in the `text` column of SVL_STATEMENTTEXT.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STATEMENTTEXT.md
|
da56ec0989a9-1
|
```
select sequence, text from SVL_STATEMENTTEXT where pid = pg_backend_pid() order by starttime, sequence;
```
```
sequence | text
----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
0 | select\n1 AS a0123456789012345678901234567890123456789012345678901234567890,\n2 AS b0123456789012345678901234567890123456789012345678901234567890,\n3 AS b012345678901234567890123456789012345678901234
1 | \nFROM stl_querytext;
```
To reconstruct the SQL stored in SVL_STATEMENTTEXT, run the following SQL.
```
select LISTAGG(CASE WHEN LEN(RTRIM(text)) = 0 THEN text ELSE RTRIM(text) END, '') within group (order by sequence) AS text
from SVL_STATEMENTTEXT where pid=pg_backend_pid();
```
To use the resulting reconstructed SQL in your client, replace any (`\n`) special characters with a new line.
```
text
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
select\n1 AS a0123456789012345678901234567890123456789012345678901234567890,\n2 AS b0123456789012345678901234567890123456789012345678901234567890,\n3 AS b012345678901234567890123456789012345678901234\nFROM stl_querytext;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_STATEMENTTEXT.md
|
216f996104d7-0
|
Use the STV_RECENTS table to find information about currently active and recently run queries against a database.
All rows in STV_RECENTS, including rows generated by another user, are visible to all users.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_RECENTS.md
|
24d769a5ae7e-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_RECENTS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_RECENTS.md
|
3951466fd3e4-0
|
To determine what queries are currently running against the database, type the following query:
```
select user_name, db_name, pid, query
from stv_recents
where status = 'Running';
```
The sample output below shows a single query running on the TICKIT database:
```
user_name | db_name | pid | query
----------+---------+---------+-------------
dwuser | tickit | 19996 |select venuename, venueseats from
venue where venueseats > 50000 order by venueseats desc;
```
The following example returns a list of queries (if any) that are running or waiting in queue to be executed:
```
select * from stv_recents where status<>'Done';
status | starttime | duration |user_name|db_name| query | pid
-------+---------------------+----------+---------+-------+-----------+------
Running| 2010-04-21 16:11... | 281566454| dwuser |tickit | select ...| 23347
```
This query does not return results unless you are running a number of concurrent queries and some of those queries are in queue.
The following example extends the previous example. In this case, queries that are truly "in flight" (running, not waiting) are excluded from the result:
```
select * from stv_recents where status<>'Done'
and pid not in (select pid from stv_inflight);
...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_RECENTS.md
|
f0cea0318ff0-0
|
Re-sorts rows and reclaims space in either a specified table or all tables in the current database.
**Note**
Only the table owner or a superuser can effectively vacuum a table. If VACUUM is run without the necessary table privileges, the operation completes successfully but has no effect.
Amazon Redshift automatically sorts data and runs VACUUM DELETE in the background. This lessens the need to run the VACUUM command. For more information, see [Vacuuming tables](t_Reclaiming_storage_space202.md).
By default, VACUUM skips the sort phase for any table where more than 95 percent of the table's rows are already sorted. Skipping the sort phase can significantly improve VACUUM performance. To change the default sort or delete threshold for a single table, include the table name and the TO *threshold* PERCENT parameter when you run VACUUM.
Users can access tables while they are being vacuumed. You can perform queries and write operations while a table is being vacuumed, but when data manipulation language (DML) commands and a vacuum run concurrently, both might take longer. If you execute UPDATE and DELETE statements during a vacuum, system performance might be reduced. VACUUM DELETE temporarily blocks update and delete operations.
Amazon Redshift automatically performs a DELETE ONLY vacuum in the background. Automatic vacuum operation pauses when users run data definition language (DDL) operations, such as ALTER TABLE.
**Note**
The Amazon Redshift VACUUM command syntax and behavior are substantially different from the PostgreSQL VACUUM operation. For example, the default VACUUM operation in Amazon Redshift is VACUUM FULL, which reclaims disk space and re-sorts all rows. In contrast, the default VACUUM operation in PostgreSQL simply reclaims space and makes it available for reuse.
For more information, see [Vacuuming tables](t_Reclaiming_storage_space202.md).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VACUUM_command.md
|
f752766169e4-0
|
```
VACUUM [ FULL | SORT ONLY | DELETE ONLY | REINDEX ]
[ [ table_name ] [ TO threshold PERCENT ] [ BOOST ] ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VACUUM_command.md
|
956d4a7a20cf-0
|
FULL <a name="vacuum-full"></a>
Sorts the specified table (or all tables in the current database) and reclaims disk space occupied by rows that were marked for deletion by previous UPDATE and DELETE operations. VACUUM FULL is the default.
A full vacuum doesn't perform a reindex for interleaved tables. To reindex interleaved tables followed by a full vacuum, use the [VACUUM REINDEX](#vacuum-reindex) option.
By default, VACUUM FULL skips the sort phase for any table that is already at least 95 percent sorted. If VACUUM is able to skip the sort phase, it performs a DELETE ONLY and reclaims space in the delete phase such that at least 95 percent of the remaining rows aren't marked for deletion.
If the sort threshold isn't met (for example, if 90 percent of rows are sorted) and VACUUM performs a full sort, then it also performs a complete delete operation, recovering space from 100 percent of deleted rows.
You can change the default vacuum threshold only for a single table; to do so, include the table name and the TO *threshold* PERCENT parameter.
SORT ONLY <a name="vacuum-sort-only"></a>
Sorts the specified table (or all tables in the current database) without reclaiming space freed by deleted rows. This option is useful when reclaiming disk space isn't important but re-sorting new rows is important. A SORT ONLY vacuum reduces the elapsed time for vacuum operations when the unsorted region doesn't contain a large number of deleted rows and doesn't span the entire sorted region. Applications that don't have disk space constraints but do depend on query optimizations associated with keeping table rows sorted can benefit from this kind of vacuum.
By default, VACUUM SORT ONLY skips any table that is already at least 95 percent sorted. To change the default sort threshold for a single table, include the table name and the TO *threshold* PERCENT parameter when you run VACUUM.
DELETE ONLY <a name="vacuum-delete-only"></a>
Amazon Redshift automatically performs a DELETE ONLY vacuum in the background, so you rarely, if ever, need to run a DELETE ONLY vacuum.
A VACUUM DELETE reclaims disk space occupied by rows that were marked for deletion by previous UPDATE and DELETE operations, and compacts the table to free up the consumed space. A DELETE ONLY vacuum operation doesn't sort table data.
This option reduces the elapsed time for vacuum operations when reclaiming disk space is important but re-sorting new rows isn't important. This option can also be useful when your query performance is already optimal, and re-sorting rows to optimize query performance isn't a requirement.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VACUUM_command.md
|
956d4a7a20cf-2
|
By default, VACUUM DELETE ONLY reclaims space such that at least 95 percent of the remaining rows aren't marked for deletion. To change the default delete threshold for a single table, include the table name and the TO *threshold* PERCENT parameter when you run VACUUM.
Some operations, such as `ALTER TABLE APPEND`, can cause tables to be fragmented. When you use the `DELETE ONLY` clause, the vacuum operation reclaims space from fragmented tables. The same threshold value of 95 percent applies to the defragmentation operation.
REINDEX *tablename* <a name="vacuum-reindex"></a>
Analyzes the distribution of the values in interleaved sort key columns, then performs a full VACUUM operation. If REINDEX is used, a table name is required.
VACUUM REINDEX takes significantly longer than VACUUM FULL because it makes an additional pass to analyze the interleaved sort keys. The sort and merge operation can take longer for interleaved tables because the interleaved sort might need to rearrange more rows than a compound sort.
If a VACUUM REINDEX operation terminates before it completes, the next VACUUM resumes the reindex operation before performing the full vacuum operation.
VACUUM REINDEX isn't supported with TO *threshold* PERCENT.
*table_name*
The name of a table to vacuum. If you don't specify a table name, the vacuum operation applies to all tables in the current database. You can specify any permanent or temporary user-created table. The command isn't meaningful for other objects, such as views and system tables.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VACUUM_command.md
|
956d4a7a20cf-3
|
If you include the TO *threshold* PERCENT parameter, a table name is required.
TO *threshold* PERCENT
A clause that specifies the threshold above which VACUUM skips the sort phase and the target threshold for reclaiming space in the delete phase. The *sort threshold* is the percentage of total rows that are already in sort order for the specified table prior to vacuuming. The *delete threshold* is the minimum percentage of total rows not marked for deletion after vacuuming.
Because VACUUM re-sorts the rows only when the percent of sorted rows in a table is less than the sort threshold, Amazon Redshift can often reduce VACUUM times significantly. Similarly, when VACUUM isn't constrained to reclaim space from 100 percent of rows marked for deletion, it is often able to skip rewriting blocks that contain only a few deleted rows.
For example, if you specify 75 for *threshold*, VACUUM skips the sort phase if 75 percent or more of the table's rows are already in sort order. For the delete phase, VACUUM sets a target of reclaiming disk space such that at least 75 percent of the table's rows aren't marked for deletion following the vacuum. The *threshold* value must be an integer between 0 and 100. The default is 95. If you specify a value of 100, VACUUM always sorts the table unless it's already fully sorted, and reclaims space from all rows marked for deletion. If you specify a value of 0, VACUUM never sorts the table and never reclaims space.
If you include the TO *threshold* PERCENT parameter, you must also specify a table name. If a table name is omitted, VACUUM fails.
You can't use the TO *threshold* PERCENT parameter with REINDEX.
BOOST
Runs the VACUUM command with additional resources, such as memory and disk space, as they're available. With the BOOST option, VACUUM operates in one window and blocks concurrent deletes and updates for the duration of the VACUUM operation. Running with the BOOST option contends for system resources, which might affect query performance. Run VACUUM BOOST when the load on the system is light, such as during maintenance operations.
Consider the following when using the BOOST option:
+ When BOOST is specified, the *table_name* value is required.
+ BOOST isn't supported with REINDEX.
+ BOOST is ignored with DELETE ONLY.
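Given these constraints, a boosted full vacuum of a single table might look like the following sketch (table name illustrative):

```
vacuum full sales boost;
```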
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VACUUM_command.md
|
23c447009f3e-0
|
For most Amazon Redshift applications, a full vacuum is recommended. For more information, see [Vacuuming tables](t_Reclaiming_storage_space202.md).
Before running a vacuum operation, note the following behavior:
+ You can't run VACUUM within a transaction block (BEGIN ... END). For more information about transactions, see [Serializable isolation](c_serial_isolation.md).
+ You can run only one VACUUM command on a cluster at any given time. If you attempt to run multiple vacuum operations concurrently, Amazon Redshift returns an error.
+ Some amount of table growth might occur when tables are vacuumed. This behavior is expected when there are no deleted rows to reclaim or the new sort order of the table results in a lower ratio of data compression.
+ During vacuum operations, some degree of query performance degradation is expected. Normal performance resumes as soon as the vacuum operation is complete.
+ Concurrent write operations proceed during vacuum operations, but we don't recommend performing write operations while vacuuming. It's more efficient to complete write operations before running the vacuum. Also, any data that is written after a vacuum operation has been started can't be vacuumed by that operation. In this case, a second vacuum operation is necessary.
+ A vacuum operation might not be able to start if a load or insert operation is already in progress. Vacuum operations temporarily require exclusive access to tables in order to start. This exclusive access is required briefly, so vacuum operations don't block concurrent loads and inserts for any significant period of time.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VACUUM_command.md
|
23c447009f3e-1
|
+ Vacuum operations are skipped when there is no work to do for a particular table; however, there is some overhead associated with discovering that the operation can be skipped. If you know that a table is pristine or doesn't meet the vacuum threshold, don't run a vacuum operation against it.
+ A DELETE ONLY vacuum operation on a small table might not reduce the number of blocks used to store the data, especially when the table has a large number of columns or the cluster uses a large number of slices per node. These vacuum operations add one block per column per slice to account for concurrent inserts into the table, and there is potential for this overhead to outweigh the reduction in block count from the reclaimed disk space. For example, if a 10-column table on an 8-node cluster occupies 1000 blocks before a vacuum, the vacuum doesn't reduce the actual block count unless more than 80 blocks of disk space are reclaimed because of deleted rows. (Each data block uses 1 MB.)
Automatic vacuum operations pause if any of the following conditions are met:
+ A user runs a data definition language (DDL) operation, such as ALTER TABLE, that requires an exclusive lock on a table that automatic vacuum is currently working on.
+ A user triggers VACUUM on any table in the cluster (only one VACUUM can run at a time).
+ A period of high cluster load.
Reclaim space and re\-sort rows in all tables in the current database, based on the default 95 percent vacuum threshold\.
```
vacuum;
```
Reclaim space and re\-sort rows in the SALES table based on the default 95 percent threshold\.
```
vacuum sales;
```
Always reclaim space and re\-sort rows in the SALES table\.
```
vacuum sales to 100 percent;
```
Re\-sort rows in the SALES table only if fewer than 75 percent of rows are already sorted\.
```
vacuum sort only sales to 75 percent;
```
Reclaim space in the SALES table such that at least 75 percent of the remaining rows aren't marked for deletion following the vacuum\.
```
vacuum delete only sales to 75 percent;
```
Reindex and then vacuum the LISTING table\.
```
vacuum reindex listing;
```
The following command returns an error\.
```
vacuum reindex listing to 75 percent;
```
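Before running VACUUM manually, it can help to check how unsorted a table is. The following is a sketch of such a check, assuming the SVV\_TABLE\_INFO system view with its `unsorted` and `vacuum_sort_benefit` columns:

```
-- Show the percentage of unsorted rows and the estimated performance
-- benefit of sorting. Tables already below the vacuum threshold
-- generally don't need a manual VACUUM.
select "table", unsorted, vacuum_sort_benefit
from svv_table_info
where "table" in ('sales', 'listing')
order by unsorted desc;
```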
Use SVV\_QUERY\_STATE to view information about the execution of currently running queries\.
The SVV\_QUERY\_STATE view contains a data subset of the STV\_EXEC\_STATE table\.
SVV\_QUERY\_STATE is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_QUERY_STATE.html)
**Determining the processing time of a query by step**
The following query shows how long each step of the query with query ID 279 took to execute and how many data rows Amazon Redshift processed:
```
select query, seg, step, maxtime, avgtime, rows, label
from svv_query_state
where query = 279
order by query, seg, step;
```
This query retrieves the processing information about query 279, as shown in the following sample output:
```
query | seg | step | maxtime | avgtime | rows | label
------+---------+------+---------+---------+---------+-------------------
279 | 3 | 0 | 1658054 | 1645711 | 1405360 | scan
279 | 3 | 1 | 1658072 | 1645809 | 0 | project
279 | 3 | 2 | 1658074 | 1645812 | 1405434 | insert
279 | 3 | 3 | 1658080 | 1645816 | 1405437 | distribute
279 | 4 | 0 | 1677443 | 1666189 | 1268431 | scan
279 | 4 | 1 | 1677446 | 1666192 | 1268434 | insert
279 | 4 | 2 | 1677451 | 1666195 | 0 | aggr
(7 rows)
```
**Determining if any active queries are currently running on disk**
The following query shows if any active queries are currently running on disk:
```
select query, label, is_diskbased from svv_query_state
where is_diskbased = 't';
```
This sample output shows any active queries currently running on disk:
```
query | label | is_diskbased
-------+--------------+--------------
1025 | hash tbl=142 | t
(1 row)
```
If you decide to manually specify column encodings, you might want to test different encodings with your data\.
**Note**
We recommend that you use the COPY command to load data whenever possible, and allow the COPY command to choose the optimal encodings based on your data\. Alternatively, you can use the [ANALYZE COMPRESSION](r_ANALYZE_COMPRESSION.md) command to view the suggested encodings for existing data\. For details about applying automatic compression, see [Loading tables with automatic compression](c_Loading_tables_auto_compress.md)\.
To perform a meaningful test of data compression, you need a large number of rows\. For this example, we create a table and insert rows by using a statement that selects from two tables, VENUE and LISTING\. We leave out the WHERE clause that would normally join the two tables\. As a result, *each* row in the VENUE table is joined to *all* of the rows in the LISTING table, for a total of over 32 million rows\. This is known as a Cartesian join, and it normally isn't recommended\. But for this purpose, it's a convenient method of creating many rows\. If you have an existing table with data that you want to test, you can skip this step\.
After we have a table with sample data, we create a table with seven columns, each with a different compression encoding: raw, bytedict, lzo, runlength, text255, text32k, and zstd\. We populate each column with exactly the same data by executing an INSERT command that selects the data from the first table\.
To test compression encodings:
1. \(Optional\) First, we'll use a Cartesian join to create a table with a large number of rows\. Skip this step if you want to test an existing table\.
```
create table cartesian_venue(
venueid smallint not null distkey sortkey,
venuename varchar(100),
venuecity varchar(30),
venuestate char(2),
venueseats integer);
insert into cartesian_venue
select venueid, venuename, venuecity, venuestate, venueseats
from venue, listing;
```
1. Next, create a table with the encodings that you want to compare\.
```
create table encodingvenue (
venueraw varchar(100) encode raw,
venuebytedict varchar(100) encode bytedict,
venuelzo varchar(100) encode lzo,
venuerunlength varchar(100) encode runlength,
venuetext255 varchar(100) encode text255,
venuetext32k varchar(100) encode text32k,
venuezstd varchar(100) encode zstd);
```
1. Insert the same data into all of the columns using an INSERT statement with a SELECT clause\.
```
insert into encodingvenue
select venuename as venueraw, venuename as venuebytedict, venuename as venuelzo, venuename as venuerunlength, venuename as venuetext255, venuename as venuetext32k, venuename as venuezstd
from cartesian_venue;
```
1. Verify the number of rows in the new table\.
```
select count(*) from encodingvenue
count
----------
38884394
(1 row)
```
1. Query the [STV\_BLOCKLIST](r_STV_BLOCKLIST.md) system table to compare the number of 1 MB disk blocks used by each column\.
The MAX aggregate function returns the highest block number for each column\. The STV\_BLOCKLIST table includes details for three system\-generated columns\. This example uses `col < 7` in the WHERE clause to exclude the system\-generated columns\.
```
select col, max(blocknum)
from stv_blocklist b, stv_tbl_perm p
where (b.tbl=p.id) and name ='encodingvenue'
and col < 7
group by name, col
order by col;
```
The query returns the following results\. The columns are numbered beginning with zero\. Depending on how your cluster is configured, your result might have different numbers, but the relative sizes should be similar\. You can see that BYTEDICT encoding on the second column produced the best results for this dataset, with a compression ratio of better than 20:1\. LZO and ZSTD encoding also produced excellent results\. Different data sets will produce different results, of course\. When a column contains longer text strings, LZO often produces the best compression results\.
```
col | max
-----+-----
0 | 203
1 | 10
2 | 22
3 | 204
4 | 56
5 | 72
6 | 20
(7 rows)
```
If you have data in an existing table, you can use the [ANALYZE COMPRESSION](r_ANALYZE_COMPRESSION.md) command to view the suggested encodings for the table\. For example, the following example shows the recommended encoding for a copy of the VENUE table, CARTESIAN\_VENUE, that contains 38 million rows\. Notice that ANALYZE COMPRESSION recommends LZO encoding for the VENUENAME column\. ANALYZE COMPRESSION chooses optimal compression based on multiple factors, which include percent of reduction\. In this specific case, BYTEDICT provides better compression, but LZO also produces greater than 90 percent compression\.
```
analyze compression cartesian_venue;
Table           | Column     | Encoding | Est_reduction_pct
----------------+------------+----------+------------------
cartesian_venue | venueid    | lzo      | 97.54
cartesian_venue | venuename  | lzo      | 91.71
cartesian_venue | venuecity  | lzo      | 96.01
cartesian_venue | venuestate | lzo      | 97.68
cartesian_venue | venueseats | lzo      | 98.21
```
Aborts the currently running transaction and discards all updates made by that transaction\. ABORT has no effect on already completed transactions\.
This command performs the same function as the ROLLBACK command\. See [ROLLBACK](r_ROLLBACK.md) for more detailed documentation\.
```
ABORT [ WORK | TRANSACTION ]
```
WORK
Optional keyword\.
TRANSACTION
Optional keyword; WORK and TRANSACTION are synonyms\.
The following example creates a table then starts a transaction where data is inserted into the table\. The ABORT command then rolls back the data insertion to leave the table empty\.
The following command creates an example table called MOVIE\_GROSS:
```
create table movie_gross( name varchar(30), gross bigint );
```
The next set of commands starts a transaction that inserts two data rows into the table:
```
begin;
insert into movie_gross values ( 'Raiders of the Lost Ark', 23400000);
insert into movie_gross values ( 'Star Wars', 10000000 );
```
Next, the following command selects the data from the table to show that it was successfully inserted:
```
select * from movie_gross;
```
The command output shows that both rows are successfully inserted:
```
name | gross
------------------------+----------
Raiders of the Lost Ark | 23400000
Star Wars | 10000000
(2 rows)
```
This command now rolls back the data changes to where the transaction began:
```
abort;
```
Selecting data from the table now shows an empty table:
```
select * from movie_gross;
name | gross
------+-------
(0 rows)
```
This view returns an estimate of how much time it will take to complete a vacuum operation that is currently in progress\.
SVV\_VACUUM\_PROGRESS is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
For information about SVV\_VACUUM\_SUMMARY, see [SVV\_VACUUM\_SUMMARY](r_SVV_VACUUM_SUMMARY.md)\.
For information about SVL\_VACUUM\_PERCENTAGE, see [SVL\_VACUUM\_PERCENTAGE](r_SVL_VACUUM_PERCENTAGE.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_VACUUM_PROGRESS.html)
The following queries, run a few minutes apart, show that a large table named SALESNEW is being vacuumed\.
```
select * from svv_vacuum_progress;
table_name | status | time_remaining_estimate
--------------+-------------------------------+-------------------------
salesnew | Vacuum: initialize salesnew |
(1 row)
...
select * from svv_vacuum_progress;
table_name | status | time_remaining_estimate
-------------+------------------------+-------------------------
salesnew | Vacuum salesnew sort | 33m 21s
(1 row)
```
The following query shows that no vacuum operation is currently in progress\. The last table to be vacuumed was the SALES table\.
```
select * from svv_vacuum_progress;
table_name | status | time_remaining_estimate
-------------+----------+-------------------------
sales | Complete |
(1 row)
```
Logical conditions combine the result of two conditions to produce a single result\. All logical conditions are binary operators with a Boolean return type\.
```
expression
{ AND | OR }
expression
NOT expression
```
Logical conditions use a three\-valued Boolean logic where the null value represents an unknown relationship\. The following table describes the results for logical conditions, where `E1` and `E2` represent expressions:
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_logical_condition.html)
The NOT operator is evaluated before AND, and the AND operator is evaluated before the OR operator\. You can use parentheses to override this default order of evaluation\.
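The three\-valued behavior can be observed directly with Boolean literals\. The following is a minimal sketch; the results follow standard SQL three\-valued logic, where combining NULL with a value that could change the outcome yields NULL:

```
select (true and null::boolean)  as t_and_null,  -- null: outcome depends on the unknown
       (false and null::boolean) as f_and_null,  -- false: false AND anything is false
       (true or null::boolean)   as t_or_null,   -- true: true OR anything is true
       (false or null::boolean)  as f_or_null;   -- null: outcome depends on the unknown
```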
The following example returns USERID and USERNAME from the USERS table where the user likes both Las Vegas and sports:
```
select userid, username from users
where likevegas = 1 and likesports = 1
order by userid;
userid | username
--------+----------
1 | JSG99FHE
67 | TWU10MZT
87 | DUF19VXU
92 | HYP36WEQ
109 | FPL38HZK
120 | DMJ24GUZ
123 | QZR22XGQ
130 | ZQC82ALK
133 | LBN45WCH
144 | UCX04JKN
165 | TEY68OEB
169 | AYQ83HGO
184 | TVX65AZX
...
(2128 rows)
```
The next example returns the USERID and USERNAME from the USERS table where the user likes Las Vegas, or sports, or both\. This query returns all of the output from the previous example plus the users who like only Las Vegas or sports\.
```
select userid, username from users
where likevegas = 1 or likesports = 1
order by userid;
userid | username
--------+----------
1 | JSG99FHE
2 | PGL08LJI
3 | IFT66TXU
5 | AEB55QTM
6 | NDQ15VBM
9 | MSD36KVR
10 | WKW41AIW
13 | QTF33MCG
15 | OWU78MTR
16 | ZMG93CDD
22 | RHT62AGI
27 | KOY02CVE
29 | HUH27PKK
...
(18968 rows)
```
The following query uses parentheses around the `OR` condition to find venues in New York or California where Macbeth was performed:
```
select distinct venuename, venuecity
from venue join event on venue.venueid=event.venueid
where (venuestate = 'NY' or venuestate = 'CA') and eventname='Macbeth'
order by 2,1;
venuename | venuecity
----------------------------------------+---------------
Geffen Playhouse | Los Angeles
Greek Theatre | Los Angeles
Royce Hall | Los Angeles
American Airlines Theatre | New York City
August Wilson Theatre | New York City
Belasco Theatre | New York City
Bernard B. Jacobs Theatre | New York City
...
```
Removing the parentheses in this example changes the logic and results of the query\.
The following example uses the `NOT` operator:
```
select * from category
where not catid=1
order by 1;
catid | catgroup | catname | catdesc
-------+----------+-----------+--------------------------------------------
2 | Sports | NHL | National Hockey League
3 | Sports | NFL | National Football League
4 | Sports | NBA | National Basketball Association
5 | Sports | MLS | Major League Soccer
...
```
The following example uses a `NOT` condition followed by an `AND` condition:
```
select * from category
where (not catid=1) and catgroup='Sports'
order by catid;
catid | catgroup | catname | catdesc
-------+----------+---------+---------------------------------
2 | Sports | NHL | National Hockey League
3 | Sports | NFL | National Football League
4 | Sports | NBA | National Basketball Association
5 | Sports | MLS | Major League Soccer
(4 rows)
```
All external tables must be created in an external schema, which you create using a [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md) statement\.
**Note**
Some applications use the terms *database* and *schema* interchangeably\. In Amazon Redshift, we use the term *schema*\.
An Amazon Redshift external schema references an external database in an external data catalog\. You can create the external database in Amazon Redshift, in [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/catalog.html), in [AWS Glue Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro), or in an Apache Hive metastore, such as [Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-what-is-emr.html)\. If you create an external database in Amazon Redshift, the database resides in the Athena Data Catalog\. To create a database in a Hive metastore, you need to create the database in your Hive application\.
Amazon Redshift needs authorization to access the Data Catalog in Athena and the data files in Amazon S3 on your behalf\. To provide that authorization, you first create an AWS Identity and Access Management \(IAM\) role\. Then you attach the role to your cluster and provide Amazon Resource Name \(ARN\) for the role in the Amazon Redshift `CREATE EXTERNAL SCHEMA` statement\. For more information about authorization, see [IAM policies for Amazon Redshift Spectrum](c-spectrum-iam-policies.md)\.
**Note**
If you currently have Redshift Spectrum external tables in the Athena Data Catalog, you can migrate your Athena Data Catalog to an AWS Glue Data Catalog\. To use an AWS Glue Data Catalog with Redshift Spectrum, you might need to change your IAM policies\. For more information, see [Upgrading to the AWS Glue Data Catalog](https://docs.aws.amazon.com/athena/latest/ug/glue-athena.html#glue-upgrade) in the *Amazon Athena User Guide*\.
To create an external database at the same time you create an external schema, specify `FROM DATA CATALOG` and include the `CREATE EXTERNAL DATABASE` clause in your `CREATE EXTERNAL SCHEMA` statement\.
The following example creates an external schema named `spectrum_schema` using the external database `spectrum_db`\.
```
create external schema spectrum_schema from data catalog
database 'spectrum_db'
iam_role 'arn:aws:iam::123456789012:role/MySpectrumRole'
create external database if not exists;
```
If you manage your data catalog using Athena, specify the Athena database name and the AWS Region in which the Athena Data Catalog is located\.
The following example creates an external schema using the default `sampledb` database in the Athena Data Catalog\.
```
create external schema athena_schema from data catalog
database 'sampledb'
iam_role 'arn:aws:iam::123456789012:role/MySpectrumRole'
region 'us-east-2';
```
**Note**
The `region` parameter references the AWS Region in which the Athena Data Catalog is located, not the location of the data files in Amazon S3\.
If you manage your data catalog using a Hive metastore, such as Amazon EMR, your security groups must be configured to allow traffic between the clusters\.
In the CREATE EXTERNAL SCHEMA statement, specify `FROM HIVE METASTORE` and include the metastore's URI and port number\. The following example creates an external schema using a Hive metastore database named `hive_db`\.
```
create external schema hive_schema
from hive metastore
database 'hive_db'
uri '172.10.10.10' port 99
iam_role 'arn:aws:iam::123456789012:role/MySpectrumRole';
```
To view external schemas for your cluster, query the PG\_EXTERNAL\_SCHEMA catalog table or the SVV\_EXTERNAL\_SCHEMAS view\. The following example queries SVV\_EXTERNAL\_SCHEMAS, which joins PG\_EXTERNAL\_SCHEMA and PG\_NAMESPACE\.
```
select * from svv_external_schemas;
```
For the full command syntax and examples, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\.
The metadata for Amazon Redshift Spectrum external databases and external tables is stored in an external data catalog\. By default, Redshift Spectrum metadata is stored in an Athena Data Catalog\. You can view and manage Redshift Spectrum databases and tables in your Athena console\.
You can also create and manage external databases and external tables using Hive data definition language \(DDL\) using Athena or a Hive metastore, such as Amazon EMR\.
**Note**
We recommend using Amazon Redshift to create and manage external databases and external tables in Redshift Spectrum\.
You can create an external database by including the CREATE EXTERNAL DATABASE IF NOT EXISTS clause as part of your CREATE EXTERNAL SCHEMA statement\. In such cases, the external database metadata is stored in your Athena Data Catalog\. The metadata for external tables that you create qualified by the external schema is also stored in your Athena Data Catalog\.
Athena maintains a Data Catalog for each supported AWS Region\. To view table metadata, log on to the Athena console and choose **Catalog Manager**\. The following example shows the Athena Catalog Manager for the US West \(Oregon\) Region\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/spectrum-athena-catalog.png)
If you create and manage your external tables using Athena, register the database using CREATE EXTERNAL SCHEMA\. For example, the following command registers the Athena database named `sampledb`\.
```
create external schema athena_sample
from data catalog
database 'sampledb'
iam_role 'arn:aws:iam::123456789012:role/mySpectrumRole'
region 'us-east-1';
```
When you query the SVV\_EXTERNAL\_TABLES system view, you see tables in the Athena `sampledb` database and also tables that you created in Amazon Redshift\.
```
select * from svv_external_tables;
```
```
schemaname | tablename | location
--------------+------------------+--------------------------------------------------------
athena_sample | elb_logs | s3://athena-examples/elb/plaintext
athena_sample | lineitem_1t_csv | s3://myspectrum/tpch/1000/lineitem_csv
athena_sample | lineitem_1t_part | s3://myspectrum/tpch/1000/lineitem_partition
spectrum | sales | s3://awssampledbuswest2/tickit/spectrum/sales
spectrum | sales_part | s3://awssampledbuswest2/tickit/spectrum/sales_part
```
If you create external tables in an Apache Hive metastore, you can use CREATE EXTERNAL SCHEMA to register those tables in Redshift Spectrum\.
In the CREATE EXTERNAL SCHEMA statement, specify the FROM HIVE METASTORE clause and provide the Hive metastore URI and port number\. The IAM role must include permission to access Amazon S3 but doesn't need any Athena permissions\. The following example registers a Hive metastore\.
```
create external schema if not exists hive_schema
from hive metastore
database 'hive_database'
uri 'ip-10-0-111-111.us-west-2.compute.internal' port 9083
iam_role 'arn:aws:iam::123456789012:role/mySpectrumRole';
```
If your Hive metastore is in Amazon EMR, you must give your Amazon Redshift cluster access to your Amazon EMR cluster\. To do so, you create an Amazon EC2 security group\. You then allow all inbound traffic to the EC2 security group from your Amazon Redshift cluster's security group and your Amazon EMR cluster's security group\. Then you add the EC2 security group to both your Amazon Redshift cluster and your Amazon EMR cluster\.
**To enable your Amazon Redshift cluster to access your Amazon EMR cluster**
1. In Amazon Redshift, make a note of your cluster's security group name\.
**Note**
A new console is available for Amazon Redshift\. Choose either the **New console** or the **Original console** instructions based on the console that you are using\. The **New console** instructions are open by default\.
**New console**
To display the security group, do the following:
1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.
1. On the navigation menu, choose **CLUSTERS**, then choose the cluster from the list to open its details\. Choose **Properties** and view the **Network and security** section\.
1. Find your security group in **VPC security group**\.
**Original console**
In the Amazon Redshift console, choose your cluster\. Find your cluster security groups in the **Cluster Properties** group\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/spectrum-redshift-security-groups.png)
1. In Amazon EMR, make a note of the EMR master node security group name\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/spectrum-emr-security-groups.png)
1. Create or modify an Amazon EC2 security group to allow connection between Amazon Redshift and Amazon EMR:
1. In the Amazon EC2 dashboard, choose **Security Groups**\.
1. Choose **Create Security Group**\.
1. If using VPC, choose the VPC that both your Amazon Redshift and Amazon EMR clusters are in\.
1. Add an inbound rule\.
1. For **Type**, choose **TCP**\.
1. For **Source**, choose **Custom**\.
1. Enter the name of your Amazon Redshift security group\.
1. Add another inbound rule\.
1. For **Type**, choose **TCP**\.
1. For **Port Range**, enter **9083**\.
**Note**
The default port for an EMR HMS is 9083\. If your HMS uses a different port, specify that port in the inbound rule and in the external schema definition\.
1. For **Source**, choose **Custom**\.
1. Enter the name of your Amazon EMR security group\.
1. Choose **Create**\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/spectrum-ec2-create-security-group.png)
1. Add the Amazon EC2 security group you created in the previous step to your Amazon Redshift cluster and to your Amazon EMR cluster:
1. In Amazon Redshift, choose your cluster\.
1. Choose **Cluster**, **Modify**\.
1. In **VPC Security Groups**, add the new security group by pressing CTRL and choosing the new security group name\.
1. In Amazon EMR, choose your cluster\.
1. Under **Hardware**, choose the link for the Master node\.
1. Choose the link in the **EC2 Instance ID** column\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/spectrum-emr-add-security-group.png)
1. For **Actions**, choose **Networking**, **Change Security Groups**\.
1. Choose the new security group\.
1. Choose **Assign Security Groups**\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/spectrum-emr-assign-security-group.png)
Returns the location of the specified substring within a string\.
Synonym of the [STRPOS](r_STRPOS.md) function\.
```
POSITION(substring IN string)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_POSITION.md
|
8a11efd45f29-0
|
*substring*
The substring to search for within the *string*\.
*string*
The string or column to be searched\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_POSITION.md
|
50b7ac661c35-0
|
The POSITION function returns an integer corresponding to the position of the substring \(one\-based, not zero\-based\)\. The position is based on the number of characters, not bytes, so that multi\-byte characters are counted as single characters\.
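To illustrate that POSITION counts characters rather than bytes, the following query \(an illustrative sketch, not from the original guide\) searches a string that contains the multibyte character `ñ`\. Because positions are character\-based, the result is 2, even though `a` begins at byte offset 3 in the UTF\-8 encoding:
```
select position('a' in 'ñata');

 position
----------
        2
(1 row)
```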
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_POSITION.md
|
0b040edff624-0
|
POSITION returns 0 if the substring is not found within the string:
```
select position('dog' in 'fish');
position
----------
0
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_POSITION.md
|
b8507fea1aa1-0
|
The following example shows the position of the string `fish` within the word `dogfish`:
```
select position('fish' in 'dogfish');
position
----------
4
(1 row)
```
The following example returns the number of sales transactions with a COMMISSION over 999\.00 from the SALES table\. The decimal point falls at position 5 or later only when the integer part of the commission has at least four digits, that is, when the commission exceeds 999\.00:
```
select distinct position('.' in commission), count (position('.' in commission))
from sales where position('.' in commission) > 4 group by position('.' in commission)
order by 1,2;
position | count
---------+-------
5 | 629
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_POSITION.md
|
491905bb393b-0
|
ST\_Within returns true if the first input geometry is within the second input geometry\.
For example, geometry `A` is within geometry `B` if every point in `A` is a point in `B` and their interiors have nonempty intersection\.
ST\_Within\(`A`, `B`\) is equivalent to ST\_Contains\(`B`, `A`\)\.
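The following query sketches this equivalence using a point and a polygon \(an illustrative example, not from the original guide\)\. Both expressions evaluate the same relationship, so both columns return the same result:
```
SELECT ST_Within(ST_GeomFromText('POINT(1 1)'),
                 ST_GeomFromText('POLYGON((0 0,3 0,3 3,0 3,0 0))')) AS within,
       ST_Contains(ST_GeomFromText('POLYGON((0 0,3 0,3 3,0 3,0 0))'),
                   ST_GeomFromText('POINT(1 1)')) AS contains;
```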
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Within-function.md
|
45d26251fed8-0
|
```
ST_Within(geom1, geom2)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Within-function.md
|
8c839acff961-0
|
*geom1*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. This value is compared with *geom2* to determine if it is within *geom2*\.
*geom2*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Within-function.md
|
94b3701eebd9-0
|
`BOOLEAN`
If *geom1* or *geom2* is null, then null is returned\.
If *geom1* and *geom2* don't have the same spatial reference system identifier \(SRID\) value, then an error is returned\.
If *geom1* or *geom2* is a geometry collection, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Within-function.md
|
1976770158a8-0
|
The following SQL checks if the first polygon is within the second polygon\.
```
SELECT ST_Within(ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))'), ST_GeomFromText('POLYGON((-1 3,2 1,0 -3,-1 3))'));
```
```
st_within
-----------
true
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Within-function.md
|
a3c97c11c53d-0
|
The AVG window function returns the average \(arithmetic mean\) of the input expression values\. The AVG function works with numeric values and ignores NULL values\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_AVG.md
|
4160201f899e-0
|
```
AVG ( [ALL ] expression ) OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list
frame_clause ]
)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_AVG.md
|
5a4db57e88eb-0
|
*expression*
The target column or expression that the function operates on\.
ALL
With the argument ALL, the function retains all duplicate values from the expression for counting\. ALL is the default\. DISTINCT is not supported\.
OVER
Specifies the window clauses for the aggregation functions\. The OVER clause distinguishes window aggregation functions from normal set aggregation functions\.
PARTITION BY *expr\_list*
Defines the window for the AVG function in terms of one or more expressions\.
ORDER BY *order\_list*
Sorts the rows within each partition\. If no PARTITION BY is specified, ORDER BY uses the entire table\.
*frame\_clause*
If an ORDER BY clause is used for an aggregate function, an explicit frame clause is required\. The frame clause refines the set of rows in a function's window, including or excluding sets of rows within the ordered result\. The frame clause consists of the ROWS keyword and associated specifiers\. See [Window function syntax summary](r_Window_function_synopsis.md)\.
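As a sketch of an explicit frame clause \(this example is illustrative and assumes the WINSALES table described in [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\), the following query averages each row's quantity with the quantities of the two preceding rows:
```
select salesid, qty,
avg(qty) over
(order by salesid rows between 2 preceding and current row) as moving_avg
from winsales
order by salesid;
```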
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_AVG.md
|
c51c69d0377e-0
|
The argument types supported by the AVG function are SMALLINT, INTEGER, BIGINT, NUMERIC, DECIMAL, REAL, and DOUBLE PRECISION\.
The return types supported by the AVG function are:
+ BIGINT for SMALLINT or INTEGER arguments
+ NUMERIC for BIGINT arguments
+ DOUBLE PRECISION for floating point arguments
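Because AVG of a SMALLINT or INTEGER argument returns BIGINT, the fractional part of the average is truncated\. The following illustrative query \(not from the original guide, and assuming the WINSALES table\) casts the column to DECIMAL to preserve the fraction:
```
select avg(qty) over () as int_avg,
avg(qty::decimal(10,2)) over () as dec_avg
from winsales;
```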
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_AVG.md
|
35bb117c9cd5-0
|
The following example computes a rolling average of quantities sold by date; the results are ordered by date ID and sales ID:
```
select salesid, dateid, sellerid, qty,
avg(qty) over
(order by dateid, salesid rows unbounded preceding) as avg
from winsales
order by 2,1;
salesid | dateid | sellerid | qty | avg
---------+------------+----------+-----+-----
30001 | 2003-08-02 | 3 | 10 | 10
10001 | 2003-12-24 | 1 | 10 | 10
10005 | 2003-12-24 | 1 | 30 | 16
40001 | 2004-01-09 | 4 | 40 | 22
10006 | 2004-01-18 | 1 | 10 | 20
20001 | 2004-02-12 | 2 | 20 | 20
40005 | 2004-02-12 | 4 | 10 | 18
20002 | 2004-02-16 | 2 | 20 | 18
30003 | 2004-04-18 | 3 | 15 | 18
30004 | 2004-04-18 | 3 | 20 | 18
30007 | 2004-09-07 | 3 | 30 | 19
(11 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_AVG.md
|
35bb117c9cd5-1
|
30007 | 2004-09-07 | 3 | 30 | 19
(11 rows)
```
For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_AVG.md
|
b21e01cee999-0
|
ST\_NumPoints returns the number of points in an input geometry\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumPoints-function.md
|
5348f4457149-0
|
```
ST_NumPoints(geom)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumPoints-function.md
|
31fc0f0774f7-0
|
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
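The following query \(an illustrative sketch, not from the original guide\) counts the points in a linestring with three vertices:
```
SELECT ST_NumPoints(ST_GeomFromText('LINESTRING(0 0,10 0,10 10)'));

 st_numpoints
--------------
            3
```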
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumPoints-function.md
|