| id | text | source |
|---|---|---|
7e1ad3b1ca30-0
|
In the previous example you learned how to obtain the query ID and process ID (PID) for a completed query from the SVL_QLOG view.
You might need to find the PID for a query that is still running. For example, you need the PID to cancel a query that is taking too long to run. You can query the STV_RECENTS system table to obtain a list of process IDs for running queries, along with the corresponding query string. If your query returns multiple PIDs, look at the query text to determine which PID you need.
To determine the PID of a running query, issue the following SELECT statement:
```
select pid, user_name, starttime, query
from stv_recents
where status='Running';
```
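Once you have the PID, you can stop the query with the CANCEL command. The PID below is only a placeholder; substitute the value returned by the query above.
```
cancel 18764;
```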
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/determine_pid.md
|
97e4cf688bbe-0
|
Sets the value of a server configuration parameter. Use the SET command to override a setting for the duration of the current session or transaction only.
Use the [RESET](r_RESET.md) command to return a parameter to its default value.
You can change the server configuration parameters in several ways. For more information, see [Modifying the server configuration](t_Modifying_the_default_settings.md).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET.md
|
f342a52ac88a-0
|
```
SET { [ SESSION | LOCAL ]
{ SEED | parameter_name } { TO | = }
{ value | 'value' | DEFAULT } |
SEED TO value }
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET.md
|
7ea815b59c44-0
|
SESSION
Specifies that the setting is valid for the current session. This is the default.
LOCAL
Specifies that the setting is valid for the current transaction.
SEED TO *value*
Sets an internal seed to be used by the RANDOM function for random number generation.
SET SEED takes a numeric *value* between 0 and 1, and multiplies this number by (2^31 - 1) for use with the [RANDOM function](r_RANDOM.md). If you use SET SEED before making multiple RANDOM calls, RANDOM generates numbers in a predictable sequence.
*parameter_name*
Name of the parameter to set. See [Modifying the server configuration](t_Modifying_the_default_settings.md) for information about parameters.
*value*
New parameter value. Use single quotation marks to set the value to a specific string. If using SET SEED, this parameter contains the SEED value.
DEFAULT
Sets the parameter to the default value.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET.md
|
79698206a82a-0
|
**Changing a parameter for the current session**
The following example sets the datestyle:
```
set datestyle to 'SQL,DMY';
```
**Setting a query group for workload management**
If query groups are listed in a queue definition as part of the cluster's WLM configuration, you can set the QUERY_GROUP parameter to a listed query group name. Subsequent queries are assigned to the associated query queue. The QUERY_GROUP setting remains in effect for the duration of the session or until a RESET QUERY_GROUP command is encountered.
This example runs two queries as part of the query group 'priority', then resets the query group.
```
set query_group to 'priority';
select tbl, count(*) from stv_blocklist;
select query, elapsed, substring from svl_qlog order by query desc limit 5;
reset query_group;
```
See [Implementing workload management](cm-c-implementing-workload-management.md).
**Setting a label for a group of queries**
The QUERY_GROUP parameter defines a label for one or more queries that are executed in the same session after a SET command. In turn, this label is logged when queries are executed and can be used to constrain results returned from the STL_QUERY and STV_INFLIGHT system tables and the SVL_QLOG view.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET.md
|
79698206a82a-1
|
```
show query_group;
query_group
-------------
unset
(1 row)
set query_group to '6 p.m.';
show query_group;
query_group
-------------
6 p.m.
(1 row)
select * from sales where salesid=500;
salesid | listid | sellerid | buyerid | eventid | dateid | ...
---------+--------+----------+---------+---------+--------+-----
500 | 504 | 3858 | 2123 | 5871 | 2052 | ...
(1 row)
reset query_group;
select query, trim(label) querygroup, pid, trim(querytxt) sql
from stl_query
where label ='6 p.m.';
query | querygroup | pid | sql
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET.md
|
79698206a82a-2
|
from stl_query
where label ='6 p.m.';
query | querygroup | pid | sql
-------+------------+-------+----------------------------------------
57 | 6 p.m. | 30711 | select * from sales where salesid=500;
(1 row)
```
Query group labels are a useful mechanism for isolating individual queries or groups of queries that are run as part of scripts. You don't need to identify and track queries by their IDs; you can track them by their labels.
**Setting a seed value for random number generation**
The following example uses the SEED option with SET to cause the RANDOM function to generate numbers in a predictable sequence.
First, return three RANDOM integers without setting the SEED value:
```
select cast (random() * 100 as int);
int4
------
6
(1 row)
select cast (random() * 100 as int);
int4
------
68
(1 row)
select cast (random() * 100 as int);
int4
------
56
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET.md
|
79698206a82a-3
|
select cast (random() * 100 as int);
int4
------
56
(1 row)
```
Now, set the SEED value to `.25`, and return three more RANDOM numbers:
```
set seed to .25;
select cast (random() * 100 as int);
int4
------
21
(1 row)
select cast (random() * 100 as int);
int4
------
79
(1 row)
select cast (random() * 100 as int);
int4
------
12
(1 row)
```
Finally, reset the SEED value to `.25`, and verify that RANDOM returns the same results as the previous three calls:
```
set seed to .25;
select cast (random() * 100 as int);
int4
------
21
(1 row)
select cast (random() * 100 as int);
int4
------
79
(1 row)
select cast (random() * 100 as int);
int4
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET.md
|
79698206a82a-4
|
79
(1 row)
select cast (random() * 100 as int);
int4
------
12
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET.md
|
c7d3adafae3c-0
|
The following examples show different ways in which subqueries fit into SELECT queries. See [Join examples](r_Join_examples.md) for another example of the use of subqueries.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Subquery_examples.md
|
bdab0a4036a2-0
|
The following example contains a subquery in the SELECT list. This subquery is *scalar*: it returns only one column and one value, which is repeated in the result for each row that is returned from the outer query. The query compares the Q1SALES value that the subquery computes with sales values for two other quarters (2 and 3) in 2008, as defined by the outer query.
```
select qtr, sum(pricepaid) as qtrsales,
(select sum(pricepaid)
from sales join date on sales.dateid=date.dateid
where qtr='1' and year=2008) as q1sales
from sales join date on sales.dateid=date.dateid
where qtr in ('2','3') and year=2008
group by qtr
order by qtr;
qtr | qtrsales | q1sales
-------+-------------+-------------
2 | 30560050.00 | 24742065.00
3 | 31170237.00 | 24742065.00
(2 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Subquery_examples.md
|
3c558e08afa5-0
|
The following example contains a table subquery in the WHERE clause. This subquery produces multiple rows. In this case, the rows contain only one column, but table subqueries can contain multiple columns and rows, just like any other table.
The query finds the top 10 sellers in terms of maximum tickets sold. The top 10 list is restricted by the subquery, which removes users who live in cities where there are ticket venues. This query can be written in different ways; for example, the subquery could be rewritten as a join within the main query.
```
select firstname, lastname, city, max(qtysold) as maxsold
from users join sales on users.userid=sales.sellerid
where users.city not in (select venuecity from venue)
group by firstname, lastname, city
order by maxsold desc, city desc
limit 10;
firstname | lastname | city | maxsold
-----------+-----------+----------------+---------
Noah | Guerrero | Worcester | 8
Isadora | Moss | Winooski | 8
Kieran | Harrison | Westminster | 8
Heidi | Davis | Warwick | 8
Sara | Anthony | Waco | 8
Bree | Buck | Valdez | 8
Evangeline | Sampson | Trenton | 8
Kendall | Keith | Stillwater | 8
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Subquery_examples.md
|
3c558e08afa5-1
|
Bree | Buck | Valdez | 8
Evangeline | Sampson | Trenton | 8
Kendall | Keith | Stillwater | 8
Bertha | Bishop | Stevens Point | 8
Patricia | Anderson | South Portland | 8
(10 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Subquery_examples.md
|
d4b49d8fced3-0
|
See [WITH clause](r_WITH_clause.md).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Subquery_examples.md
|
d444693b8e27-0
|
Analyzes merge execution steps for queries. These steps occur when the results of parallel operations (such as sorts and joins) are merged for subsequent processing.
This view is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MERGE.md
|
fb7e1853c3d0-0
|
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_MERGE.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MERGE.md
|
2f0e3e9e51ed-0
|
The following example returns 10 merge execution results.
```
select query, step, starttime, endtime, tasknum, rows
from stl_merge
limit 10;
```
```
query | step | starttime | endtime | tasknum | rows
-------+------+---------------------+---------------------+---------+------
9 | 0 | 2013-08-12 20:08:14 | 2013-08-12 20:08:14 | 0 | 0
12 | 0 | 2013-08-12 20:09:10 | 2013-08-12 20:09:10 | 0 | 0
15 | 0 | 2013-08-12 20:10:24 | 2013-08-12 20:10:24 | 0 | 0
20 | 0 | 2013-08-12 20:11:27 | 2013-08-12 20:11:27 | 0 | 0
26 | 0 | 2013-08-12 20:12:28 | 2013-08-12 20:12:28 | 0 | 0
32 | 0 | 2013-08-12 20:14:33 | 2013-08-12 20:14:33 | 0 | 0
38 | 0 | 2013-08-12 20:16:43 | 2013-08-12 20:16:43 | 0 | 0
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MERGE.md
|
2f0e3e9e51ed-1
|
38 | 0 | 2013-08-12 20:16:43 | 2013-08-12 20:16:43 | 0 | 0
44 | 0 | 2013-08-12 20:17:05 | 2013-08-12 20:17:05 | 0 | 0
50 | 0 | 2013-08-12 20:18:48 | 2013-08-12 20:18:48 | 0 | 0
56 | 0 | 2013-08-12 20:20:48 | 2013-08-12 20:20:48 | 0 | 0
(10 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_MERGE.md
|
5b9831b3366b-0
|
A scalar subquery is a regular SELECT query in parentheses that returns exactly one value: one row with one column. The query is executed and the returned value is used in the outer query. If the subquery returns zero rows, the value of the subquery expression is null. If it returns more than one row, Amazon Redshift returns an error. The subquery can refer to variables from the parent query, which act as constants during any one invocation of the subquery.
You can use scalar subqueries in most statements that call for an expression. Scalar subqueries are not valid expressions in the following cases:
+ As default values for expressions
+ In GROUP BY and HAVING clauses
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_scalar_subqueries.md
|
14578034986d-0
|
The following subquery computes the average price paid per sale across the entire year of 2008, then the outer query uses that value in the output to compare against the average price per sale per quarter:
```
select qtr, avg(pricepaid) as avg_saleprice_per_qtr,
(select avg(pricepaid)
from sales join date on sales.dateid=date.dateid
where year = 2008) as avg_saleprice_yearly
from sales join date on sales.dateid=date.dateid
where year = 2008
group by qtr
order by qtr;
qtr | avg_saleprice_per_qtr | avg_saleprice_yearly
-------+-----------------------+----------------------
1 | 647.64 | 642.28
2 | 646.86 | 642.28
3 | 636.79 | 642.28
4 | 638.26 | 642.28
(4 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_scalar_subqueries.md
|
e24de28bcc29-0
|
Amazon Redshift can automatically sort and perform a VACUUM DELETE operation on tables in the background. To clean up tables after a load or a series of incremental updates, you can also run the [VACUUM](r_VACUUM_command.md) command, either against the entire database or against individual tables.
**Note**
Only the table owner or a superuser can effectively vacuum a table. If you don't have owner or superuser privileges for a table, a VACUUM operation that specifies a single table fails. If you run a VACUUM of the entire database without specifying a table name, the operation completes successfully. However, the operation has no effect on tables for which you don't have owner or superuser privileges.
For this reason, we recommend vacuuming individual tables as needed. We also recommend this approach because vacuuming the entire database is potentially an expensive operation,
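Vacuuming a single table is as simple as naming it in the command; the SALES table from the sample database is used here for illustration:
```
vacuum sales;
```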
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Reclaiming_storage_space202.md
|
b763ede86f09-0
|
Amazon Redshift automatically sorts data in the background to maintain table data in the order of its sort key. Amazon Redshift keeps track of your scan queries to determine which sections of the table will benefit from sorting.
Depending on the load on the system, Amazon Redshift automatically initiates the sort. This automatic sort lessens the need to run the VACUUM command to keep data in sort key order. If you need data fully sorted in sort key order, for example after a large data load, then you can still manually run the VACUUM command. To determine whether your table will benefit by running VACUUM SORT, monitor the `vacuum_sort_benefit` column in [SVV_TABLE_INFO](r_SVV_TABLE_INFO.md).
Amazon Redshift tracks scan queries that use the sort key on each table. Amazon Redshift estimates the maximum percentage of improvement in scanning and filtering of data for each table (if the table was fully sorted). This estimate is visible in the `vacuum_sort_benefit` column in [SVV_TABLE_INFO](r_SVV_TABLE_INFO.md). You can use this column, along with the `unsorted` column, to determine when queries can benefit from manually running VACUUM SORT on a table. The `unsorted` column reflects the physical sort order of a table. The `vacuum_sort_benefit` column specifies the impact of sorting a table by manually running VACUUM SORT.
For example, consider the following query:
```
select "table", unsorted, vacuum_sort_benefit from svv_table_info order by 1;
```
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Reclaiming_storage_space202.md
|
b763ede86f09-1
|
```
table | unsorted | vacuum_sort_benefit
-------+----------+---------------------
sales | 85.71 | 5.00
event | 45.24 | 67.00
```
For the table "sales", even though the table is ~86% physically unsorted, the query performance impact of that is only 5%. This might be because only a small portion of the table is accessed by queries, or because very few queries accessed the table. For the table "event", the table is ~45% physically unsorted, but the query performance impact of 67% indicates that either a larger portion of the table was accessed by queries, or the number of queries accessing the table was large. The table "event" can potentially benefit from running VACUUM SORT.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Reclaiming_storage_space202.md
|
37fe18798bd8-0
|
When you perform a delete, the rows are marked for deletion, but not removed. Amazon Redshift automatically runs a VACUUM DELETE operation in the background based on the number of deleted rows in database tables. Amazon Redshift schedules the VACUUM DELETE to run during periods of reduced load and pauses the operation during periods of high load.
**Topics**
+ [Automatic table sort](#automatic-table-sort)
+ [Automatic vacuum delete](#automatic-table-delete)
+ [VACUUM frequency](#vacuum-frequency)
+ [Sort stage and merge stage](#vacuum-stages)
+ [Vacuum threshold](#vacuum-sort-threshold)
+ [Vacuum types](#vacuum-types)
+ [Managing vacuum times](vacuum-managing-vacuum-times.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Reclaiming_storage_space202.md
|
defd7c6e3138-0
|
Vacuum as often as necessary to maintain consistent query performance. Consider these factors when determining how often to run your VACUUM command.
+ Run VACUUM during time periods when you expect minimal activity on the cluster, such as evenings or during designated database administration windows.
+ A large unsorted region results in longer vacuum times. If you delay vacuuming, the vacuum will take longer because more data has to be reorganized.
+ VACUUM is an I/O-intensive operation, so the longer it takes to complete, the more impact it has on concurrent queries and other database operations running on your cluster.
+ VACUUM takes longer for tables that use interleaved sorting. To evaluate whether interleaved tables need to be re-sorted, query the [SVV_INTERLEAVED_COLUMNS](r_SVV_INTERLEAVED_COLUMNS.md) view.
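For example, a query along the following lines surfaces interleaved columns whose skew suggests a reindex might help; the 1.4 cutoff is illustrative, not a hard rule:
```
select tbl, col, interleaved_skew, last_reindex
from svv_interleaved_columns
where interleaved_skew > 1.4;
```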
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Reclaiming_storage_space202.md
|
fca9d71725ae-0
|
Amazon Redshift performs a vacuum operation in two stages: first, it sorts the rows in the unsorted region; then, if necessary, it merges the newly sorted rows at the end of the table with the existing rows. When vacuuming a large table, the vacuum operation proceeds in a series of steps consisting of incremental sorts followed by merges. If the operation fails, or if Amazon Redshift goes offline during the vacuum, the partially vacuumed table or database will be in a consistent state, but you will need to manually restart the vacuum operation. Incremental sorts are lost, but merged rows that were committed before the failure do not need to be vacuumed again. If the unsorted region is large, the lost time might be significant. For more information about the sort and merge stages, see [Managing the volume of merged rows](vacuum-managing-volume-of-unmerged-rows.md).
Users can access tables while they are being vacuumed. You can perform queries and write operations while a table is being vacuumed, but when DML and a vacuum run concurrently, both might take longer. If you run UPDATE and DELETE statements during a vacuum, system performance might be reduced. Incremental merges temporarily block concurrent UPDATE and DELETE operations, and UPDATE and DELETE operations in turn temporarily block incremental merge steps on the affected tables. DDL operations, such as ALTER TABLE, are blocked until the vacuum operation finishes with the table.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Reclaiming_storage_space202.md
|
ff2b5c4fd958-0
|
By default, VACUUM skips the sort phase for any table where more than 95 percent of the table's rows are already sorted. Skipping the sort phase can significantly improve VACUUM performance. To change the default sort threshold for a single table, include the table name and the TO *threshold* PERCENT parameter when you run the VACUUM command.
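For example, to make the sort phase run unless at least 75 percent of the rows in the sample SALES table are already sorted:
```
vacuum sales to 75 percent;
```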
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Reclaiming_storage_space202.md
|
5d13c6afe773-0
|
You can run a full vacuum, a DELETE ONLY vacuum, a SORT ONLY vacuum, or a REINDEX with full vacuum.
+ **VACUUM FULL**
VACUUM FULL re-sorts rows and reclaims space from deleted rows. Amazon Redshift automatically performs VACUUM DELETE ONLY operations in the background, so for most applications, VACUUM FULL and VACUUM SORT ONLY are equivalent. VACUUM FULL is the same as VACUUM. A full vacuum is the default vacuum operation.
+ **VACUUM DELETE ONLY**
A DELETE ONLY vacuum is the same as a full vacuum except that it skips the sort. Amazon Redshift automatically performs a DELETE ONLY vacuum in the background, so you rarely, if ever, need to run a DELETE ONLY vacuum.
+ **VACUUM SORT ONLY**
A SORT ONLY vacuum doesn't reclaim disk space. In most cases there is little benefit compared to a full vacuum.
+ **VACUUM REINDEX**
Use VACUUM REINDEX for tables that use interleaved sort keys. For more information about interleaved sort keys, see [Interleaved sort key](t_Sorting_data.md#t_Sorting_data-interleaved).
When you initially load an empty interleaved table using COPY or CREATE TABLE AS, Amazon Redshift automatically builds the interleaved index. If you initially load an interleaved table using INSERT, you need to run VACUUM REINDEX afterward to initialize the interleaved index.
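The four variants share the same basic syntax. Each of the following statements vacuums the sample SALES table; the REINDEX form is only meaningful if the table uses an interleaved sort key:
```
vacuum full sales;        -- equivalent to: vacuum sales;
vacuum delete only sales;
vacuum sort only sales;
vacuum reindex sales;
```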
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Reclaiming_storage_space202.md
|
5d13c6afe773-1
|
REINDEX reanalyzes the distribution of the values in the table's sort key columns, then performs a full VACUUM operation. VACUUM REINDEX takes significantly longer than VACUUM FULL because it needs to take an extra analysis pass over the data, and because merging in new interleaved data can involve touching all the data blocks.
If a VACUUM REINDEX operation terminates before it completes, the next VACUUM resumes the reindex operation before performing the vacuum.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Reclaiming_storage_space202.md
|
1f23e34f1532-0
|
Returns the length of the specified string as the number of characters.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LEN.md
|
75ab0b343868-0
|
LEN is a synonym of the [LENGTH function](r_LENGTH.md), [CHAR_LENGTH function](r_CHAR_LENGTH.md), [CHARACTER_LENGTH function](r_CHARACTER_LENGTH.md), and [TEXTLEN function](r_TEXTLEN.md).
```
LEN(expression)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LEN.md
|
826c0ae3ebbc-0
|
*expression*
The input parameter is a CHAR or VARCHAR text string.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LEN.md
|
d8dbab2c7ce0-0
|
The LEN function returns an integer indicating the number of characters in the input string. The LEN function returns the actual number of characters in multi-byte strings, not the number of bytes. For example, a VARCHAR(12) column is required to store three four-byte Chinese characters; the LEN function returns 3 for that same string. To get the length of a string in bytes, use the [OCTET_LENGTH](r_OCTET_LENGTH.md) function.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LEN.md
|
5d4cb4e18bea-0
|
Length calculations do not count trailing spaces for fixed-length character strings but do count them for variable-length strings.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LEN.md
|
c9e98df79896-0
|
The following example returns the number of bytes and the number of characters in the string `français`.
```
select octet_length('français'),
len('français');
octet_length | len
--------------+-----
9 | 8
(1 row)
```
The following example returns the number of characters in the strings `cat` with no trailing spaces and `cat   ` with three trailing spaces:
```
select len('cat'), len('cat   ');
len | len
-----+-----
3 | 6
(1 row)
```
The following example returns the ten longest VENUENAME entries in the VENUE table:
```
select venuename, len(venuename)
from venue
order by 2 desc, 1
limit 10;
venuename | len
----------------------------------------+-----
Saratoga Springs Performing Arts Center | 39
Lincoln Center for the Performing Arts | 38
Nassau Veterans Memorial Coliseum | 33
Jacksonville Municipal Stadium | 30
Rangers BallPark in Arlington | 29
University of Phoenix Stadium | 29
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LEN.md
|
c9e98df79896-1
|
Nassau Veterans Memorial Coliseum | 33
Jacksonville Municipal Stadium | 30
Rangers BallPark in Arlington | 29
University of Phoenix Stadium | 29
Circle in the Square Theatre | 28
Hubert H. Humphrey Metrodome | 28
Oriole Park at Camden Yards | 27
Dick's Sporting Goods Park | 26
(10 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LEN.md
|
481c32bd884b-0
|
Returns the name of the schema at the front of the search path. This schema will be used for any tables or other named objects that are created without specifying a target schema.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SCHEMA.md
|
9c97c176ec77-0
|
**Note**
This is a leader-node function. This function returns an error if it references a user-created table, an STL or STV system table, or an SVV or SVL system view.
```
current_schema()
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SCHEMA.md
|
d436cc26980e-0
|
CURRENT_SCHEMA returns a CHAR or VARCHAR string.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SCHEMA.md
|
889600473684-0
|
The following query returns the current schema:
```
select current_schema();
current_schema
----------------
public
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SCHEMA.md
|
2437abf67fbe-0
|
Prepare a statement for execution.
PREPARE creates a prepared statement. When the PREPARE statement is executed, the specified statement (SELECT, INSERT, UPDATE, or DELETE) is parsed, rewritten, and planned. When an EXECUTE command is then issued for the prepared statement, Amazon Redshift may revise the query execution plan (to improve performance based on the specified parameter values) before executing the prepared statement.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PREPARE.md
|
469741322199-0
|
```
PREPARE plan_name [ (datatype [, ...] ) ] AS statement
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PREPARE.md
|
27ae48bcc7aa-0
|
*plan_name*
An arbitrary name given to this particular prepared statement. It must be unique within a single session and is subsequently used to execute or deallocate a previously prepared statement.
*datatype*
The data type of a parameter to the prepared statement. To refer to the parameters in the prepared statement itself, use $1, $2, and so on.
*statement*
Any SELECT, INSERT, UPDATE, or DELETE statement.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PREPARE.md
|
fefcee028ea8-0
|
Prepared statements can take parameters: values that are substituted into the statement when it is executed. To include parameters in a prepared statement, supply a list of data types in the PREPARE statement and, in the statement to be prepared itself, refer to the parameters by position using the notation $1, $2, and so on. When executing the statement, specify the actual values for these parameters in the EXECUTE statement. See [EXECUTE](r_EXECUTE.md) for more details.
Prepared statements last only for the duration of the current session. When the session ends, the prepared statement is discarded, so it must be re-created before being used again. This also means that a single prepared statement can't be used by multiple simultaneous database clients; however, each client can create its own prepared statement to use. The prepared statement can be manually removed using the DEALLOCATE command.
Prepared statements have the largest performance advantage when a single session is being used to execute a large number of similar statements. As mentioned, for each new execution of a prepared statement, Amazon Redshift may revise the query execution plan to improve performance based on the specified parameter values. To examine the query execution plan that Amazon Redshift has chosen for any specific EXECUTE statement, use the [EXPLAIN](r_EXPLAIN.md) command.
For more information on query planning and the statistics collected by Amazon Redshift for query optimization, see the [ANALYZE](r_ANALYZE.md) command.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PREPARE.md
|
c5a5bba8eea1-0
|
Create a table, prepare an INSERT statement, and then execute it:
```
DROP TABLE IF EXISTS prep1;
CREATE TABLE prep1 (c1 int, c2 char(20));
PREPARE prep_insert_plan (int, char)
AS insert into prep1 values ($1, $2);
EXECUTE prep_insert_plan (1, 'one');
EXECUTE prep_insert_plan (2, 'two');
EXECUTE prep_insert_plan (3, 'three');
DEALLOCATE prep_insert_plan;
```
Prepare a SELECT statement and then execute it:
```
PREPARE prep_select_plan (int)
AS select * from prep1 where c1 = $1;
EXECUTE prep_select_plan (2);
EXECUTE prep_select_plan (3);
DEALLOCATE prep_select_plan;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PREPARE.md
|
8421ece7ec6b-0
|
[DEALLOCATE](r_DEALLOCATE.md), [EXECUTE](r_EXECUTE.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PREPARE.md
|
e11367ea2fb9-0
|
**Topics**
+ [Examples of catalog queries](c_join_PG_examples.md)
In general, you can join catalog tables and views (relations whose names begin with **PG_**) to Amazon Redshift tables and views.
The catalog tables use a number of data types that Amazon Redshift does not support. The following data types are supported when queries join catalog tables to Amazon Redshift tables:
+ bool
+ "char"
+ float4
+ int2
+ int4
+ int8
+ name
+ oid
+ text
+ varchar
If you write a join query that explicitly or implicitly references a column that has an unsupported data type, the query returns an error\. The SQL functions that are used in some of the catalog tables are also unsupported, except for those used by the PG\_SETTINGS and PG\_LOCKS tables\.
For example, the PG\_STATS table cannot be queried in a join with an Amazon Redshift table because of unsupported functions\.
The following catalog tables and views provide useful information that can be joined to information in Amazon Redshift tables\. Some of these tables allow only partial access because of data type and function restrictions\. When you query the partially accessible tables, select or reference their columns carefully\.
The following tables are completely accessible and contain no unsupported types or functions:
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG.md
|
e11367ea2fb9-1
|
The following tables are completely accessible and contain no unsupported types or functions:
+ [pg\_attribute](https://www.postgresql.org/docs/8.0/static/catalog-pg-attribute.html)
+ [pg\_cast ](https://www.postgresql.org/docs/8.0/static/catalog-pg-cast.html)
+ [pg\_depend](https://www.postgresql.org/docs/8.0/static/catalog-pg-depend.html)
+ [pg\_description ](https://www.postgresql.org/docs/8.0/static/catalog-pg-description.html)
+ [pg\_locks ](https://www.postgresql.org/docs/8.0/static/view-pg-locks.html)
+ [pg\_opclass ](https://www.postgresql.org/docs/8.0/static/catalog-pg-opclass.html)
The following tables are partially accessible and contain some unsupported types, functions, and truncated text columns\. Values in text columns are truncated to varchar\(256\) values\.
+ [pg\_class](https://www.postgresql.org/docs/8.0/static/catalog-pg-class.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG.md
|
e11367ea2fb9-2
|
+ [pg\_constraint](https://www.postgresql.org/docs/8.0/static/catalog-pg-constraint.html)
+ [pg\_database](https://www.postgresql.org/docs/8.0/static/catalog-pg-database.html)
+ [pg\_group](https://www.postgresql.org/docs/8.0/static/catalog-pg-group.html)
+ [pg\_language ](https://www.postgresql.org/docs/8.0/static/catalog-pg-language.html)
+ [pg\_namespace](https://www.postgresql.org/docs/8.0/static/catalog-pg-namespace.html)
+ [pg\_operator](https://www.postgresql.org/docs/8.0/static/catalog-pg-operator.html)
+ [pg\_proc](https://www.postgresql.org/docs/8.0/static/catalog-pg-proc.html)
+ [pg\_settings](https://www.postgresql.org/docs/8.0/static/view-pg-settings.html)
+ [pg\_statistic](https://www.postgresql.org/docs/8.0/static/catalog-pg-statistic.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG.md
|
e11367ea2fb9-3
|
+ [pg\_tables](https://www.postgresql.org/docs/8.0/static/view-pg-tables.html)
+ [pg\_type ](https://www.postgresql.org/docs/8.0/static/catalog-pg-type.html)
+ [pg\_user](https://www.postgresql.org/docs/8.0/static/view-pg-user.html)
+ [pg\_views](https://www.postgresql.org/docs/8.0/static/view-pg-views.html)
The catalog tables that are not listed here are either inaccessible or unlikely to be useful to Amazon Redshift administrators\. However, you can query any catalog table or view openly if your query does not involve a join to an Amazon Redshift table\.
You can use the OID columns in the PostgreSQL catalog tables as joining columns\. For example, the join condition `pg_database.oid = stv_tbl_perm.db_id` matches the internal database object ID for each PG\_DATABASE row with the visible DB\_ID column in the STV\_TBL\_PERM table\. The OID columns are internal primary keys that are not visible when you select from the table\. The catalog views do not have OID columns\.
Some Amazon Redshift functions must execute only on the compute nodes\. If a query references a user\-created table, the SQL runs on the compute nodes\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG.md
|
e11367ea2fb9-4
|
Some Amazon Redshift functions must execute only on the compute nodes\. If a query references a user\-created table, the SQL runs on the compute nodes\.
A query that references only catalog tables \(tables with a PG prefix, such as PG\_TABLE\_DEF\) or that does not reference any tables, runs exclusively on the leader node\.
A query that uses a compute\-node function but doesn't reference a user\-defined table or Amazon Redshift system table returns the following error\.
```
[Amazon](500310) Invalid operation: One or more of the used functions must be applied on at least one user created table.
```
The following Amazon Redshift functions are compute\-node only functions:
+ LISTAGG
+ MEDIAN
+ PERCENTILE\_CONT
+ PERCENTILE\_DISC and APPROXIMATE PERCENTILE\_DISC
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG.md
|
429a34811eb1-0
|
The SVV\_VACUUM\_SUMMARY view joins the STL\_VACUUM, STL\_QUERY, and STV\_TBL\_PERM tables to summarize information about vacuum operations logged by the system\. The view returns one row per table per vacuum transaction\. The view records the elapsed time of the operation, the number of sort partitions created, the number of merge increments required, and deltas in row and block counts before and after the operation was performed\.
SVV\_VACUUM\_SUMMARY is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
For information about SVV\_VACUUM\_PROGRESS, see [SVV\_VACUUM\_PROGRESS](r_SVV_VACUUM_PROGRESS.md)\.
For information about SVL\_VACUUM\_PERCENTAGE, see [SVL\_VACUUM\_PERCENTAGE](r_SVL_VACUUM_PERCENTAGE.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_VACUUM_SUMMARY.md
|
dc579a1d956d-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_VACUUM_SUMMARY.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_VACUUM_SUMMARY.md
|
ca9011f307bd-0
|
The following query returns statistics for vacuum operations on three different tables\. The SALES table was vacuumed twice\.
```
select table_name, xid, sort_partitions as parts, merge_increments as merges,
elapsed_time, row_delta, sortedrow_delta as sorted_delta, block_delta
from svv_vacuum_summary
order by xid;
table_ | xid |parts|merges| elapsed_ | row_ | sorted_ | block_
name | | | | time | delta | delta | delta
--------+------+-----+------+----------+---------+---------+--------
users | 2985 | 1 | 1 | 61919653 | 0 | 49990 | 20
category| 3982 | 1 | 1 | 24136484 | 0 | 11 | 0
sales | 3992 | 2 | 1 | 71736163 | 0 | 1207192 | 32
sales | 4000 | 1 | 1 | 15363010 | -851648 | -851648 | -140
(4 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_VACUUM_SUMMARY.md
|
998be0f67753-0
|
LZO encoding provides a very high compression ratio with good performance\. It works especially well for CHAR and VARCHAR columns that store very long character strings, such as free\-form text in product descriptions, user comments, or JSON strings\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/lzo-encoding.md
|
c862a1d73ce2-0
|
Restores the value of a configuration parameter to its default value\.
You can reset either a single specified parameter or all parameters at once\. To set a parameter to a specific value, use the [SET](r_SET.md) command\. To display the current value of a parameter, use the [SHOW](r_SHOW.md) command\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RESET.md
|
5ca29641e2bd-0
|
```
RESET { parameter_name | ALL }
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RESET.md
|
ac01ba677148-0
|
*parameter\_name*
Name of the parameter to reset\. See [Modifying the server configuration](t_Modifying_the_default_settings.md) for more information about parameters\.
ALL
Resets all runtime parameters\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RESET.md
|
9f7e0f2ebdbd-0
|
The following example resets the `query_group` parameter to its default value:
```
reset query_group;
```
The following example resets all runtime parameters to their default values\.
```
reset all;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RESET.md
|
f77910fdbffc-0
|
**0**, \-15 to 2 \(default in bold\)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_extra_float_digits.md
|
64b7ae5bff22-0
|
Sets the number of digits displayed for floating\-point values, including float4 and float8\. The value is added to the standard number of digits \(FLT\_DIG or DBL\_DIG, as appropriate\)\. The value can be set as high as 2 to include partially significant digits; this is especially useful for outputting float data that needs to be restored exactly\. The value can also be set negative to suppress unwanted digits\.
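The effect of this parameter can be sketched in Python, taking 15 as DBL\_DIG for double precision \(the parameter itself is server\-side; this only illustrates the digit counts\):

```python
import math

DBL_DIG = 15  # standard number of significant digits for double precision

# extra_float_digits = 2 corresponds to 17 significant digits, which is
# enough to round-trip any IEEE 754 double exactly.
text = f"{math.pi:.{DBL_DIG + 2}g}"
print(text)                      # 3.1415926535897931
assert float(text) == math.pi    # exact round trip

# The default (0) keeps the standard 15 digits.
print(f"{math.pi:.{DBL_DIG}g}")  # 3.14159265358979
```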
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_extra_float_digits.md
|
3cfd44bac811-0
|
In this tutorial, you walk through the process of loading data into your Amazon Redshift database tables from data files in an Amazon S3 bucket from beginning to end\.
In this tutorial, you do the following:
+ Download data files that use comma\-separated value \(CSV\), character\-delimited, and fixed width formats\.
+ Create an Amazon S3 bucket and then upload the data files to the bucket\.
+ Launch an Amazon Redshift cluster and create database tables\.
+ Use COPY commands to load the tables from the data files on Amazon S3\.
+ Troubleshoot load errors and modify your COPY commands to correct the errors\.
**Estimated time:** 60 minutes
**Estimated cost:** $1\.00 per hour for the cluster
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data.md
|
9ca645dc5c5c-0
|
You need the following prerequisites:
+ An AWS account to launch an Amazon Redshift cluster and to create a bucket in Amazon S3\.
+ Your AWS credentials \(an access key ID and secret access key\) to load test data from Amazon S3\. If you need to create new access keys, go to [Administering access keys for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html)\.
+ An SQL client such as the Amazon Redshift console query editor\.
This tutorial is designed so that it can be taken by itself\. In addition to this tutorial, we recommend completing the following tutorials to gain a more complete understanding of how to design and use Amazon Redshift databases:
+ [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/) walks you through the process of creating an Amazon Redshift cluster and loading sample data\.
+ [Tutorial: Tuning table design](tutorial-tuning-tables.md) walks you step by step through the process of designing and tuning tables, including choosing sort keys, distribution styles, and compression encodings, and evaluating system performance before and after tuning\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data.md
|
b3cb2523fa58-0
|
You can add data to your Amazon Redshift tables either by using an INSERT command or by using a COPY command\. At the scale and speed of an Amazon Redshift data warehouse, the COPY command is many times faster and more efficient than INSERT commands\.
The COPY command uses the Amazon Redshift massively parallel processing \(MPP\) architecture to read and load data in parallel from multiple data sources\. You can load from data files on Amazon S3, Amazon EMR, or any remote host accessible through a Secure Shell \(SSH\) connection\. Or you can load directly from an Amazon DynamoDB table\.
In this tutorial, you use the COPY command to load data from Amazon S3\. Many of the principles presented here apply to loading from other data sources as well\.
To learn more about using the COPY command, see these resources:
+ [Amazon Redshift best practices for loading data](c_loading-data-best-practices.md)
+ [Loading data from Amazon EMR](loading-data-from-emr.md)
+ [Loading data from remote hosts](loading-data-from-remote-hosts.md)
+ [Loading data from an Amazon DynamoDB table](t_Loading-data-from-dynamodb.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data.md
|
766f88d10ed6-0
|
+ [Step 1: Create a cluster](tutorial-loading-data-launch-cluster.md)
+ [Step 2: Download the data files](tutorial-loading-data-download-files.md)
+ [Step 3: Upload the files to an Amazon S3 bucket](tutorial-loading-data-upload-files.md)
+ [Step 4: Create the sample tables](tutorial-loading-data-create-tables.md)
+ [Step 5: Run the COPY commands](tutorial-loading-run-copy.md)
+ [Step 6: Vacuum and analyze the database](tutorial-loading-data-vacuum.md)
+ [Step 7: Clean up your resources](tutorial-loading-data-clean-up.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data.md
|
d71995595ae7-0
|
ST\_Distance returns the Euclidean distance between the two input geometry values\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Distance-function.md
|
ce2c69ffce2a-0
|
```
ST_Distance(geom1, geom2)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Distance-function.md
|
1c6e384adb17-0
|
*geom1*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
*geom2*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Distance-function.md
|
a3a26cd52fe5-0
|
`DOUBLE PRECISION` in the same units as the input geometries\.
If *geom1* or *geom2* is null or empty, then null is returned\.
If *geom1* and *geom2* don't have the same value for the spatial reference system identifier \(SRID\), then an error is returned\.
If *geom1* or *geom2* is a geometry collection, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Distance-function.md
|
176fe670f11a-0
|
The following SQL returns the distance between two polygons\.
```
SELECT ST_Distance(ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))'), ST_GeomFromText('POLYGON((-1 -3,-2 -1,0 -3,-1 -3))'));
```
```
st_distance
-----------
1.4142135623731
```
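The result above can be checked with a short Python sketch\. By inspection, the closest approach is between vertex `(0, -1)` of the first polygon and edge `(-2, -1)`–`(0, -3)` of the second \(an assumption for this example; a full implementation would compare all vertex–edge pairs\):

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to the endpoints.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

d = point_segment_distance((0, -1), (-2, -1), (0, -3))
print(round(d, 13))   # 1.4142135623731, matching the SQL output
```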
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Distance-function.md
|
a978b96e3232-0
|
ACOS is a trigonometric function that returns the arc cosine of a number\. The return value is in radians and is between 0 and PI\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ACOS.md
|
a6750a60f127-0
|
```
ACOS(number)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ACOS.md
|
1505b6b0277e-0
|
*number*
The input parameter is a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ACOS.md
|
936f5836883c-0
|
The ACOS function returns a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ACOS.md
|
cce35a6211fa-0
|
The following example returns the arc cosine of \-1:
```
select acos(-1);
acos
------------------
3.14159265358979
(1 row)
```
The following example converts the arc cosine of \.5 to the equivalent number of degrees:
```
select (acos(.5) * 180/(select pi())) as degrees;
degrees
---------
60
(1 row)
```
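The same values can be reproduced with Python's `math` module, mirroring the radians\-to\-degrees conversion in the second example:

```python
import math

print(math.acos(-1))    # 3.141592653589793 (pi radians)

# Convert acos(.5) to degrees, as the SQL example does with 180/pi.
degrees = math.acos(0.5) * 180 / math.pi
print(round(degrees))   # 60
```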
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ACOS.md
|
6eef6744b956-0
|
Changes a user group\. Use this command to add users to the group, drop users from the group, or rename the group\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_GROUP.md
|
0c285c74dca7-0
|
```
ALTER GROUP group_name
{
ADD USER username [, ... ] |
DROP USER username [, ... ] |
RENAME TO new_name
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_GROUP.md
|
bcc4e3877a43-0
|
*group\_name*
Name of the user group to modify\.
ADD
Adds a user to a user group\.
DROP
Removes a user from a user group\.
*username*
Name of the user to add to the group or drop from the group\.
RENAME TO
Renames the user group\. Group names beginning with two underscores are reserved for Amazon Redshift internal use\. For more information about valid names, see [Names and identifiers](r_names.md)\.
*new\_name*
New name of the user group\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_GROUP.md
|
7b03c7186302-0
|
The following example adds a user named DWUSER to the ADMIN\_GROUP group:
```
alter group admin_group
add user dwuser;
```
The following example renames the group ADMIN\_GROUP to ADMINISTRATORS:
```
alter group admin_group
rename to administrators;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_GROUP.md
|
dbdc678ecfcf-0
|
**Topics**
+ [Raw encoding](c_Raw_encoding.md)
+ [AZ64](az64-encoding.md)
+ [Byte\-dictionary encoding](c_Byte_dictionary_encoding.md)
+ [Delta encoding](c_Delta_encoding.md)
+ [LZO](lzo-encoding.md)
+ [Mostly encoding](c_MostlyN_encoding.md)
+ [Runlength encoding](c_Runlength_encoding.md)
+ [Text255 and Text32k encodings](c_Text255_encoding.md)
+ [ZSTD](zstd-encoding.md)
<a name="compression-encoding-list"></a>A compression encoding specifies the type of compression that is applied to a column of data values as rows are added to a table\.
If no compression is specified in a CREATE TABLE or ALTER TABLE statement, Amazon Redshift automatically assigns compression encoding as follows:
+ Columns that are defined as sort keys are assigned RAW compression\.
+ Columns that are defined as BOOLEAN, REAL, or DOUBLE PRECISION data types are assigned RAW compression\.
+ Columns that are defined as SMALLINT, INTEGER, BIGINT, DECIMAL, DATE, TIMESTAMP, or TIMESTAMPTZ data types are assigned AZ64 compression\.
+ Columns that are defined as CHAR or VARCHAR data types are assigned LZO compression\.
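The assignment rules above can be sketched as a small lookup function \(a hypothetical helper for illustration, not part of any Amazon Redshift API\):

```python
def default_encoding(data_type, is_sort_key=False):
    """Return the compression encoding Amazon Redshift assigns by default."""
    if is_sort_key:
        return "RAW"                      # sort key columns are always RAW
    t = data_type.upper()
    if t in ("BOOLEAN", "REAL", "DOUBLE PRECISION"):
        return "RAW"
    if t in ("SMALLINT", "INTEGER", "BIGINT", "DECIMAL",
             "DATE", "TIMESTAMP", "TIMESTAMPTZ"):
        return "AZ64"
    if t in ("CHAR", "VARCHAR"):
        return "LZO"
    return "RAW"                          # assumed fallback for other types

print(default_encoding("TIMESTAMP"))                  # AZ64
print(default_encoding("VARCHAR"))                    # LZO
print(default_encoding("DECIMAL", is_sort_key=True))  # RAW
```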
The following table identifies the supported compression encodings and the data types that support the encoding\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Compression_encodings.md
|
dbdc678ecfcf-1
|
The following table identifies the supported compression encodings and the data types that support the encoding\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Compression_encodings.md
|
6cd3ae71bbb9-0
|
The PI function returns the value of PI to 14 decimal places\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PI.md
|
d2337330847e-0
|
```
PI()
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PI.md
|
aa88b7fa9d93-0
|
PI returns a DOUBLE PRECISION number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PI.md
|
934d12c0f325-0
|
Return the value of pi:
```
select pi();
pi
------------------
3.14159265358979
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PI.md
|
9300db011d77-0
|
This section introduces the elements of the Amazon Redshift data warehouse architecture as shown in the following figure\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/02-NodeRelationships.png)
**Client applications**
Amazon Redshift integrates with various data loading and ETL \(extract, transform, and load\) tools and business intelligence \(BI\) reporting, data mining, and analytics tools\. Amazon Redshift is based on industry\-standard PostgreSQL, so most existing SQL client applications will work with only minimal changes\. For information about important differences between Amazon Redshift SQL and PostgreSQL, see [Amazon Redshift and PostgreSQL](c_redshift-and-postgres-sql.md)\.
**Connections**
Amazon Redshift communicates with client applications by using industry\-standard JDBC and ODBC drivers for PostgreSQL\. For more information, see [Amazon Redshift and PostgreSQL JDBC and ODBC](c_redshift-postgres-jdbc.md)\.
**Clusters**
The core infrastructure component of an Amazon Redshift data warehouse is a *cluster*\.
A cluster is composed of one or more *compute nodes*\. If a cluster is provisioned with two or more compute nodes, an additional *leader node* coordinates the compute nodes and handles external communication\. Your client application interacts directly only with the leader node\. The compute nodes are transparent to external applications\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_high_level_system_architecture.md
|
9300db011d77-1
|
**Leader node**
The leader node manages communications with client programs and all communication with compute nodes\. It parses and develops execution plans to carry out database operations, in particular, the series of steps necessary to obtain results for complex queries\. Based on the execution plan, the leader node compiles code, distributes the compiled code to the compute nodes, and assigns a portion of the data to each compute node\.
The leader node distributes SQL statements to the compute nodes only when a query references tables that are stored on the compute nodes\. All other queries run exclusively on the leader node\. Amazon Redshift is designed to implement certain SQL functions only on the leader node\. A query that uses any of these functions will return an error if it references tables that reside on the compute nodes\. For more information, see [SQL functions supported on the leader node](c_sql-functions-leader-node.md)\.
**Compute nodes**
The leader node compiles code for individual elements of the execution plan and assigns the code to individual compute nodes\. The compute nodes execute the compiled code and send intermediate results back to the leader node for final aggregation\.
Each compute node has its own dedicated CPU, memory, and attached disk storage, which are determined by the node type\. As your workload grows, you can increase the compute capacity and storage capacity of a cluster by increasing the number of nodes, upgrading the node type, or both\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_high_level_system_architecture.md
|
9300db011d77-2
|
Amazon Redshift provides several node types for your compute and storage needs\. For details of each node type, see [Amazon Redshift clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html) in the *Amazon Redshift Cluster Management Guide*\.
**Node slices**
A compute node is partitioned into slices\. Each slice is allocated a portion of the node's memory and disk space, where it processes a portion of the workload assigned to the node\. The leader node manages distributing data to the slices and apportions the workload for any queries or other database operations to the slices\. The slices then work in parallel to complete the operation\.
The number of slices per node is determined by the node size of the cluster\. For more information about the number of slices for each node size, go to [About clusters and nodes](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-about-clusters-and-nodes) in the *Amazon Redshift Cluster Management Guide*\.
When you create a table, you can optionally specify one column as the distribution key\. When the table is loaded with data, the rows are distributed to the node slices according to the distribution key that is defined for a table\. Choosing a good distribution key enables Amazon Redshift to use parallel processing to load data and execute queries efficiently\. For information about choosing a distribution key, see [Choose the best distribution style](c_best-practices-best-dist-key.md)\.
**Internal network**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_high_level_system_architecture.md
|
9300db011d77-3
|
**Internal network**
Amazon Redshift takes advantage of high\-bandwidth connections, close proximity, and custom communication protocols to provide private, very high\-speed network communication between the leader node and compute nodes\. The compute nodes run on a separate, isolated network that client applications never access directly\.
**Databases**
A cluster contains one or more databases\. User data is stored on the compute nodes\. Your SQL client communicates with the leader node, which in turn coordinates query execution with the compute nodes\.
Amazon Redshift is a relational database management system \(RDBMS\), so it is compatible with other RDBMS applications\. Although it provides the same functionality as a typical RDBMS, including online transaction processing \(OLTP\) functions such as inserting and deleting data, Amazon Redshift is optimized for high\-performance analysis and reporting of very large datasets\.
Amazon Redshift is based on PostgreSQL 8\.0\.2\. Amazon Redshift and PostgreSQL have a number of very important differences that you need to take into account as you design and develop your data warehouse applications\. For information about how Amazon Redshift SQL differs from PostgreSQL, see [Amazon Redshift and PostgreSQL](c_redshift-and-postgres-sql.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_high_level_system_architecture.md
|
228009e22ee3-0
|
Compare the following example with the first UNION example\. The only difference between the two examples is the set operator that is used, but the results are very different\. Only one of the rows is the same:
```
235494 | 23875 | 8771
```
This is the only row in the limited result of 5 rows that was found in both tables\.
```
select listid, sellerid, eventid from listing
intersect
select listid, sellerid, eventid from sales
order by listid desc, sellerid, eventid
limit 5;
listid | sellerid | eventid
--------+----------+---------
235494 | 23875 | 8771
235482 | 1067 | 2667
235479 | 1589 | 7303
235476 | 15550 | 793
235475 | 22306 | 7848
(5 rows)
```
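The set semantics at work here can be illustrated with Python sets, using hypothetical rows; INTERSECT keeps only the rows that appear in both inputs:

```python
# Hypothetical subsets of the LISTING and SALES rows from the example.
listing = {(235494, 23875, 8771), (235482, 1067, 2667)}
sales   = {(235494, 23875, 8771), (100, 200, 300)}

print(listing & sales)   # {(235494, 23875, 8771)}
```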
The following query finds events \(for which tickets were sold\) that occurred at venues in both New York City and Los Angeles in March\. The difference between the two query expressions is the constraint on the VENUECITY column\.
```
select distinct eventname from event, sales, venue
where event.eventid=sales.eventid and event.venueid=venue.venueid
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_example_intersect_query.md
|
228009e22ee3-1
|
select distinct eventname from event, sales, venue
where event.eventid=sales.eventid and event.venueid=venue.venueid
and date_part(month,starttime)=3 and venuecity='Los Angeles'
intersect
select distinct eventname from event, sales, venue
where event.eventid=sales.eventid and event.venueid=venue.venueid
and date_part(month,starttime)=3 and venuecity='New York City'
order by eventname asc;
eventname
----------------------------
A Streetcar Named Desire
Dirty Dancing
Electra
Running with Annalise
Hairspray
Mary Poppins
November
Oliver!
Return To Forever
Rhinoceros
South Pacific
The 39 Steps
The Bacchae
The Caucasian Chalk Circle
The Country Girl
Wicked
Woyzeck
(16 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_example_intersect_query.md
|
ace983f75e2e-0
|
ST\_YMin returns the minimum second coordinate of an input geometry\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_YMin-function.md
|
fc55ce83855a-0
|
```
ST_YMin(geom)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_YMin-function.md
|
41c93b9f30a7-0
|
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_YMin-function.md
|
d623d915b112-0
|
`DOUBLE PRECISION` value of the minimum second coordinate\.
If *geom* is empty, then null is returned\.
If *geom* is null, then null is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_YMin-function.md
|
0be91005e83b-0
|
The following SQL returns the smallest second coordinate of a linestring\.
```
SELECT ST_YMin(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'));
```
```
st_ymin
-----------
29.07
```
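For this linestring, the result can be mirrored with a one\-line Python reduction over its vertices, taking the minimum of the second coordinates:

```python
# Vertices of the linestring from the example.
points = [(77.29, 29.07), (77.42, 29.26), (77.27, 29.31), (77.29, 29.07)]

print(min(y for _, y in points))   # 29.07
```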
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_YMin-function.md
|
1db6bb77d870-0
|
You can load table data from a single file, or you can split the data for each table into multiple files\. The COPY command can load data from multiple files in parallel\. You can load multiple files by specifying a common prefix, or *prefix key*, for the set, or by explicitly listing the files in a manifest file\.
**Note**
We strongly recommend that you divide your data into multiple files to take advantage of parallel processing\.
Split your data into files so that the number of files is a multiple of the number of slices in your cluster\. That way Amazon Redshift can divide the data evenly among the slices\. The number of slices per node depends on the node size of the cluster\. For example, each DS2\.XL compute node has two slices, and each DS2\.8XL compute node has 32 slices\. For more information about the number of slices that each node size has, go to [About Clusters and Nodes](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-about-clusters-and-nodes) in the *Amazon Redshift Cluster Management Guide*\.
The nodes all participate in parallel query execution, working on data that is distributed as evenly as possible across the slices\. If you have a cluster with two DS2\.XL nodes, you might split your data into four files or some multiple of four\. Amazon Redshift does not take file size into account when dividing the workload, so you need to ensure that the files are roughly the same size, between 1 MB and 1 GB after compression\.
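A sketch of this sizing rule in Python \(the function name and the 1 GB ceiling are illustrative; tune them for your workload\):

```python
def pick_file_count(total_bytes, nodes, slices_per_node, max_size=1 << 30):
    """Choose a file count that is a multiple of the cluster's slice count,
    keeping each file at or under max_size bytes."""
    slices = nodes * slices_per_node
    count = slices
    while total_bytes / count > max_size:
        count += slices               # stay a multiple of the slice count
    return count

# Two DS2.XL nodes, two slices each: 10 GiB of data -> 12 files (~0.83 GiB).
print(pick_file_count(10 * (1 << 30), 2, 2))   # 12
```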
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_splitting-data-files.md
|
1db6bb77d870-1
|
If you intend to use object prefixes to identify the load files, name each file with a common prefix\. For example, the `venue.txt` file might be split into four files, as follows:
```
venue.txt.1
venue.txt.2
venue.txt.3
venue.txt.4
```
If you put multiple files in a folder in your bucket, you can specify the folder name as the prefix and COPY will load all of the files in the folder\. If you explicitly list the files to be loaded by using a manifest file, the files can reside in different buckets or folders\.
For more information about manifest files, see [Using a manifest to specify data files](r_COPY_command_examples.md#copy-command-examples-manifest)\.
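As a sketch of the alternative, a manifest can be generated programmatically\. This Python snippet assumes the `{"entries": [...]}` layout that the linked manifest documentation describes; the bucket and file names here are hypothetical:

```python
import json

# Sketch only: build a COPY manifest listing the four split files.
# Bucket name and paths are hypothetical placeholders.
files = [f"s3://mybucket/load/venue.txt.{i}" for i in range(1, 5)]
manifest = {"entries": [{"url": url, "mandatory": True} for url in files]}
print(json.dumps(manifest, indent=2))
```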
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_splitting-data-files.md
|
197c917dadb5-0
|
The BIT\_OR function runs bit\-wise OR operations on all of the values in a single integer column or expression\. This function aggregates the binary representations of the integer values in the expression, bit by bit\.
For example, suppose that your table contains four integer values in a column: 3, 7, 10, and 22\. These integers are represented in binary form as follows\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_BIT_OR.html)
If you apply the BIT\_OR function to this set of integer values, the operation checks each bit position for a `1` in any of the values\. In this case, at least one of the values has a `1` in each of the last five positions, yielding a binary result of `00011111`; therefore, the function returns `31` \(or `16 + 8 + 4 + 2 + 1`\)\.
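The same computation can be reproduced outside the database\. This illustrative Python snippet \(not Amazon Redshift code\) folds bit\-wise OR over the example column values:

```python
from functools import reduce
from operator import or_

# Illustration only: bit-wise OR over the column values from the example.
values = [3, 7, 10, 22]
result = reduce(or_, values)
print(f"{result} = {result:08b}")  # 31 = 00011111
```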
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BIT_OR.md
|
37845662d120-0
|
```
BIT_OR ( [DISTINCT | ALL] expression )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BIT_OR.md
|