| id | text | source |
|---|---|---|
9de6b40888c7-0
|
The following two examples install the [urlparse](https://docs.python.org/2/library/urlparse.html#module-urlparse) Python module, which is packaged in a file named `urlparse3-1.0.3.zip`\.
The following command installs a UDF library named `f_urlparse` from a package that has been uploaded to an Amazon S3 bucket located in the US East region\.
```
create library f_urlparse
language plpythonu
from 's3://mybucket/urlparse3-1.0.3.zip'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'
region as 'us-east-1';
```
The following example installs a library named `f_urlparse` from a library file on a website\.
```
create library f_urlparse
language plpythonu
from 'https://example.com/packages/urlparse3-1.0.3.zip';
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_LIBRARY.md
|
edc356763b2e-0
|
ABS calculates the absolute value of a number, where that number can be a literal or an expression that evaluates to a number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ABS.md
|
52ece8237b37-0
|
```
ABS (number)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ABS.md
|
17748b85f8a7-0
|
*number*
Number or expression that evaluates to a number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ABS.md
|
fa9ece94ba34-0
|
ABS returns the same data type as its argument\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ABS.md
|
902954b5a853-0
|
Calculate the absolute value of \-38:
```
select abs (-38);
abs
-------
38
(1 row)
```
Calculate the absolute value of \(14\-76\):
```
select abs (14-76);
abs
-------
62
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ABS.md
|
9eb385b91ece-0
|
Sets the user name for the current session\.
You can use the SET SESSION AUTHORIZATION command, for example, to test database access by temporarily running a session or transaction as an unprivileged user\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET_SESSION_AUTHORIZATION.md
|
bac7aea45064-0
|
```
SET [ SESSION | LOCAL ] SESSION AUTHORIZATION { user_name | DEFAULT }
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET_SESSION_AUTHORIZATION.md
|
a2744665a71e-0
|
SESSION
Specifies that the setting is valid for the current session\. This is the default value\.
LOCAL
Specifies that the setting is valid for the current transaction\.
*user\_name*
Name of the user to set\. The user name may be written as an identifier or a string literal\.
DEFAULT
Sets the session user name to the default value\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET_SESSION_AUTHORIZATION.md
|
5e779ee5bd56-0
|
The following example sets the user name for the current session to `dwuser`:
```
SET SESSION AUTHORIZATION 'dwuser';
```
The following example sets the user name for the current transaction to `dwuser`:
```
SET LOCAL SESSION AUTHORIZATION 'dwuser';
```
This example sets the user name for the current session to the default user name:
```
SET SESSION AUTHORIZATION DEFAULT;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SET_SESSION_AUTHORIZATION.md
|
d229e64c6941-0
|
**Topics**
+ [Unloading data to Amazon S3](t_Unloading_tables.md)
+ [Unloading encrypted data files](t_unloading_encrypted_files.md)
+ [Unloading data in delimited or fixed\-width format](t_unloading_fixed_width_data.md)
+ [Reloading unloaded data](t_Reloading_unload_files.md)
To unload data from database tables to a set of files in an Amazon S3 bucket, you can use the [UNLOAD](r_UNLOAD.md) command with a SELECT statement\. You can unload text data in either delimited format or fixed\-width format, regardless of the data format that was used to load it\. You can also specify whether to create compressed GZIP files\.
You can limit the access users have to your Amazon S3 bucket by using temporary security credentials\.
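An UNLOAD command of the kind described above can be sketched as follows \(the bucket name and IAM role ARN are placeholders, not values from this guide\):
```
unload ('select * from sales')
to 's3://mybucket/unload/sales_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
gzip;
```
Here `delimiter '|'` selects delimited output and `gzip` compresses the output files; a `fixedwidth` specification could be used instead of `delimiter` to produce fixed\-width output\.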
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_unloading_data.md
|
35ddc7dc9b47-0
|
In addition to the tables that you create, your database contains a number of system tables\. These system tables contain information about your installation and about the various queries and processes that are running on the system\. You can query these system tables to collect information about your database\.
**Note**
The description for each table in the System Tables Reference indicates whether a table is visible to all users or visible only to superusers\. You must be logged in as a superuser to query tables that are visible only to superusers\.
Amazon Redshift provides access to the following types of system tables:
+ [STL views for logging](c_intro_STL_tables.md)
These system tables are generated from Amazon Redshift log files to provide a history of the system\. Logging tables have an STL prefix\.
+ [STV tables for snapshot data](c_intro_STV_tables.md)
These tables are virtual system tables that contain snapshots of the current system data\. Snapshot tables have an STV prefix\.
+ [System views](c_intro_system_views.md)
System views contain a subset of data found in several of the STL and STV system tables\. System views have an SVV or SVL prefix\.
+ [System catalog tables](c_intro_catalog_views.md)
The system catalog tables store schema metadata, such as information about tables and columns\. System catalog tables have a PG prefix\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_querying_redshift_system_tables.md
|
35ddc7dc9b47-1
|
The system catalog tables store schema metadata, such as information about tables and columns\. System catalog tables have a PG prefix\.
You may need to specify the process ID associated with a query to retrieve system table information about that query\. For information, see [Determine the process ID of a running query](determine_pid.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_querying_redshift_system_tables.md
|
f0ec8f02df8e-0
|
For example, to view a list of all tables in the public schema, you can query the PG\_TABLE\_DEF system catalog table\.
```
select distinct(tablename) from pg_table_def where schemaname = 'public';
```
The result will look something like this:
```
tablename
---------
category
date
event
listing
sales
testtable
users
venue
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_querying_redshift_system_tables.md
|
453e56647e7c-0
|
You can query the PG\_USER catalog to view a list of all database users, along with the user ID \(USESYSID\) and user privileges\.
```
select * from pg_user;
usename | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil | useconfig
------------+----------+-------------+----------+-----------+----------+----------+-----------
rdsdb | 1 | t | t | t | ******** | |
masteruser | 100 | t | t | f | ******** | |
dwuser | 101 | f | f | f | ******** | |
simpleuser | 102 | f | f | f | ******** | |
poweruser | 103 | f | t | f | ******** | |
dbuser | 104 | t | f | f | ******** | |
(6 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_querying_redshift_system_tables.md
|
453e56647e7c-1
|
dbuser | 104 | t | f | f | ******** | |
(6 rows)
```
The user name `rdsdb` is used internally by Amazon Redshift to perform routine administrative and maintenance tasks\. You can filter your query to show only user\-defined user names by adding `where usesysid > 1` to your select statement\.
```
select * from pg_user
where usesysid > 1;
usename | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil | useconfig
------------+----------+-------------+----------+-----------+----------+----------+-----------
masteruser | 100 | t | t | f | ******** | |
dwuser | 101 | f | f | f | ******** | |
simpleuser | 102 | f | f | f | ******** | |
poweruser | 103 | f | t | f | ******** | |
dbuser | 104 | t | f | f | ******** | |
(5 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_querying_redshift_system_tables.md
|
81967e83a6e6-0
|
In the previous example, you found that the user ID \(USESYSID\) for masteruser is 100\. To list the five most recent queries executed by masteruser, you can query the SVL\_QLOG view\. The SVL\_QLOG view is a friendlier subset of information from the STL\_QUERY table\. You can use this view to find the query ID \(QUERY\) or process ID \(PID\) for a recently run query or to see how long it took a query to complete\. SVL\_QLOG includes the first 60 characters of the query string \(SUBSTRING\) to help you locate a specific query\. Use the LIMIT clause with your SELECT statement to limit the results to five rows\.
```
select query, pid, elapsed, substring from svl_qlog
where userid = 100
order by starttime desc
limit 5;
```
The result will look something like this:
```
query | pid | elapsed | substring
--------+-------+----------+--------------------------------------------------------------
187752 | 18921 | 18465685 | select query, elapsed, substring from svl_qlog order by query
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_querying_redshift_system_tables.md
|
81967e83a6e6-1
|
187752 | 18921 | 18465685 | select query, elapsed, substring from svl_qlog order by query
204168 | 5117 | 59603 | insert into testtable values (100);
187561 | 17046 | 1003052 | select * from pg_table_def where tablename = 'testtable';
187549 | 17046 | 1108584 | select * from STV_WLM_SERVICE_CLASS_CONFIG
187468 | 17046 | 5670661 | select * from pg_table_def where schemaname = 'public';
(5 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_querying_redshift_system_tables.md
|
955a107b1b93-0
|
Use the SVL\_QUERY\_SUMMARY view to find general information about the execution of a query\.
The SVL\_QUERY\_SUMMARY view contains a subset of data from the SVL\_QUERY\_REPORT view\. Note that the information in SVL\_QUERY\_SUMMARY is aggregated from all nodes\.
**Note**
The SVL\_QUERY\_SUMMARY view only contains information about queries executed by Amazon Redshift, not other utility and DDL commands\. For a complete listing and information on all statements executed by Amazon Redshift, including DDL and utility commands, you can query the SVL\_STATEMENTTEXT view\.
SVL\_QUERY\_SUMMARY is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
For information about SVCS\_QUERY\_SUMMARY, see [SVCS\_QUERY\_SUMMARY](r_SVCS_QUERY_SUMMARY.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_SUMMARY.md
|
6729c7627fc2-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_QUERY_SUMMARY.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_SUMMARY.md
|
e687248fc1e0-0
|
**Viewing processing information for a query step**
The following query shows basic processing information for each step of query 87:
```
select query, stm, seg, step, rows, bytes
from svl_query_summary
where query = 87
order by query, seg, step;
```
This query retrieves the processing information about query 87, as shown in the following sample output:
```
query | stm | seg | step | rows | bytes
-------+-----+-----+------+--------+---------
87 | 0 | 0 | 0 | 90 | 1890
87 | 0 | 0 | 2 | 90 | 360
87 | 0 | 1 | 0 | 90 | 360
87 | 0 | 1 | 2 | 90 | 1440
87 | 1 | 2 | 0 | 210494 | 4209880
87 | 1 | 2 | 3 | 89500 | 0
87 | 1 | 2 | 6 | 4 | 96
87 | 2 | 3 | 0 | 4 | 96
87 | 2 | 3 | 1 | 4 | 96
87 | 2 | 4 | 0 | 4 | 96
87 | 2 | 4 | 1 | 1 | 24
87 | 3 | 5 | 0 | 1 | 24
87 | 3 | 5 | 4 | 0 | 0
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_SUMMARY.md
|
e687248fc1e0-1
|
87 | 3 | 5 | 0 | 1 | 24
87 | 3 | 5 | 4 | 0 | 0
(13 rows)
```
**Determining whether query steps spilled to disk**
The following query shows whether any of the steps for the query with query ID 1025 spilled to disk or whether the query ran entirely in memory\. To learn how to obtain the query ID for a query, see the [SVL\_QLOG](r_SVL_QLOG.md) view\.
```
select query, step, rows, workmem, label, is_diskbased
from svl_query_summary
where query = 1025
order by workmem desc;
```
This query returns the following sample output:
```
query| step| rows | workmem | label | is_diskbased
-----+-----+--------+-----------+---------------+--------------
1025 | 0 |16000000| 141557760 |scan tbl=9 | f
1025 | 2 |16000000| 135266304 |hash tbl=142 | t
1025 | 0 |16000000| 128974848 |scan tbl=116536| f
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_SUMMARY.md
|
e687248fc1e0-2
|
1025 | 0 |16000000| 128974848 |scan tbl=116536| f
1025 | 2 |16000000| 122683392 |dist | f
(4 rows)
```
By scanning the values for IS\_DISKBASED, you can see which query steps went to disk\. For query 1025, the hash step ran on disk\. Steps that might run on disk include hash, aggr, and sort steps\. To view only disk\-based query steps, add the **and is\_diskbased = 't'** clause to the SQL statement in the previous example\.
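That modified statement can be sketched as:
```
select query, step, rows, workmem, label, is_diskbased
from svl_query_summary
where query = 1025
and is_diskbased = 't'
order by workmem desc;
```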
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_SUMMARY.md
|
fe6e5e75905c-0
|
You can change the server configuration in the following ways:
+ By using a [SET](r_SET.md) command to override a setting for the duration of the current session only\.
For example:
```
set extra_float_digits to 2;
```
+ By modifying the parameter group settings for the cluster\. The parameter group settings include additional parameters that you can configure\. For more information, see [Amazon Redshift Parameter Groups](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html) in the *Amazon Redshift Cluster Management Guide*\.
+ By using the [ALTER USER](r_ALTER_USER.md) command to set a configuration parameter to a new value for all sessions run by the specified user\.
```
ALTER USER username SET parameter { TO | = } { value | DEFAULT }
```
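For example, a statement of the following form \(reusing the `dwuser` name and `extra_float_digits` parameter that appear elsewhere in this guide\) would persist a parameter value for all of that user's sessions:
```
alter user dwuser set extra_float_digits to 2;
```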
Use the SHOW command to view the current parameter settings\. Use SHOW ALL to view all the settings that you can configure by using the [SET](r_SET.md) command\.
```
show all;
```
```
name | setting
--------------------------+--------------
analyze_threshold_percent | 10
datestyle | ISO, MDY
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Modifying_the_default_settings.md
|
fe6e5e75905c-1
|
analyze_threshold_percent | 10
datestyle | ISO, MDY
extra_float_digits | 2
query_group | default
search_path | $user, public
statement_timeout | 0
timezone | UTC
wlm_query_slot_count | 1
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Modifying_the_default_settings.md
|
7444636e96d9-0
|
To maximize query performance, follow these recommendations when creating queries:
+ Design tables according to best practices to provide a solid foundation for query performance\. For more information, see [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md)\.
+ Avoid using `select *`\. Include only the columns you specifically need\.
+ Use a [CASE expression](r_CASE_function.md) to perform complex aggregations instead of selecting from the same table multiple times\.
+ Don't use cross\-joins unless absolutely necessary\. These joins without a join condition result in the Cartesian product of two tables\. Cross\-joins are typically executed as nested\-loop joins, which are the slowest of the possible join types\.
+ Use subqueries in cases where one table in the query is used only for predicate conditions and the subquery returns a small number of rows \(less than about 200\)\. The following example uses a subquery to avoid joining the LISTING table\.
```
select sum(sales.qtysold)
from sales
where salesid in (select listid from listing where listtime > '2008-12-26');
```
+ Use predicates to restrict the dataset as much as possible\.
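The CASE recommendation above can be sketched as a single pass over the SALES table \(`pricepaid` is assumed from the TICKIT sample schema used elsewhere in this guide\):
```
select
  sum(case when qtysold = 1 then pricepaid else 0 end) as single_ticket_sales,
  sum(case when qtysold > 1 then pricepaid else 0 end) as multi_ticket_sales
from sales;
```
One scan of SALES replaces two separate aggregating queries over the same table\.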
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_designing-queries-best-practices.md
|
7444636e96d9-1
|
```
+ Use predicates to restrict the dataset as much as possible\.
+ In the predicate, use the least expensive operators that you can\. [Comparison condition](r_comparison_condition.md) operators are preferable to [LIKE](r_patternmatching_condition_like.md) operators\. LIKE operators are still preferable to [SIMILAR TO](pattern-matching-conditions-similar-to.md) or [POSIX operators](pattern-matching-conditions-posix.md)\.
+ Avoid using functions in query predicates\. Using them can drive up the cost of the query by requiring large numbers of rows to resolve the intermediate steps of the query\.
+ If possible, use a WHERE clause to restrict the dataset\. The query planner can then use row order to help determine which records match the criteria, so it can skip scanning large numbers of disk blocks\. Without this, the query execution engine must scan participating columns entirely\.
+ Add predicates to filter tables that participate in joins, even if the predicates apply the same filters\. The query returns the same result set, but Amazon Redshift is able to filter the join tables before the scan step and can then efficiently skip scanning blocks from those tables\. Redundant filters aren't needed if you filter on a column that's used in the join condition\.
For example, suppose that you want to join `SALES` and `LISTING` to find ticket sales for tickets listed after December, grouped by seller\. Both tables are sorted by date\. The following query joins the tables on their common key and filters for `listing.listtime` values greater than December 1\.
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_designing-queries-best-practices.md
|
7444636e96d9-2
|
```
select listing.sellerid, sum(sales.qtysold)
from sales, listing
where sales.salesid = listing.listid
and listing.listtime > '2008-12-01'
group by 1 order by 1;
```
The WHERE clause doesn't include a predicate for `sales.saletime`, so the execution engine is forced to scan the entire `SALES` table\. If you know the filter would result in fewer rows participating in the join, then add that filter as well\. The following example cuts execution time significantly\.
```
select listing.sellerid, sum(sales.qtysold)
from sales, listing
where sales.salesid = listing.listid
and listing.listtime > '2008-12-01'
and sales.saletime > '2008-12-01'
group by 1 order by 1;
```
+ Use sort keys in the GROUP BY clause so the query planner can use more efficient aggregation\. A query might qualify for one\-phase aggregation when its GROUP BY list contains only sort key columns, one of which is also the distribution key\. The sort key columns in the GROUP BY list must include the first sort key, followed by other sort keys in sort key order\. For example, it is valid to use the first sort key, the first and second sort keys, or the first, second, and third sort keys\. It is not valid to use the first and third sort keys\.
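As an illustration of the sort key ordering rule, suppose SALES had a compound sort key of `(saletime, sellerid)` with `saletime` also serving as the distribution key \(an assumed layout, not one defined in this guide\)\. A query like the following could then qualify for one\-phase aggregation:
```
select saletime, sellerid, sum(qtysold)
from sales
group by saletime, sellerid;
```
Grouping by `sellerid` alone would not qualify, because it skips the first sort key\.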
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_designing-queries-best-practices.md
|
7444636e96d9-3
|
You can confirm the use of one\-phase aggregation by running the [EXPLAIN](r_EXPLAIN.md) command and looking for `XN GroupAggregate` in the aggregation step of the query\.
+ If you use both GROUP BY and ORDER BY clauses, make sure that you put the columns in the same order in both\. That is, use the following approach\.
```
group by a, b, c
order by a, b, c
```
Don't use the following approach\.
```
group by b, c, a
order by a, b, c
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_designing-queries-best-practices.md
|
e71a7dbbf106-0
|
You can view Amazon Redshift Advisor analysis results and recommendations on the AWS Management Console\.
**Note**
A new console is available for Amazon Redshift\. Choose either the **New console** or the **Original console** instructions based on the console that you are using\. The **New console** instructions are open by default\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/access-advisor.md
|
e9d911cb6db0-0
|
**To view Amazon Redshift Advisor recommendations on the console**
1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.
1. On the navigation menu, choose **ADVISOR**\.
1. Expand each recommendation to see more details\. On this page, you can sort and group recommendations\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/access-advisor.md
|
696a51d331a6-0
|
**To view Amazon Redshift Advisor recommendations on the console**
1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.
1. In the navigation pane, choose **Advisor**\.
1. Choose the cluster that you want to get recommendations for\.
1. Expand each recommendation to see more details\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/access-advisor.md
|
8ddcb47902f1-0
|
Python UDFs can use any standard Amazon Redshift data type for the input arguments and the function's return value\. In addition to the standard data types, UDFs support the data type *ANYELEMENT*, which Amazon Redshift automatically converts to a standard data type based on the arguments supplied at run time\. Scalar UDFs can return a data type of ANYELEMENT\. For more information, see [ANYELEMENT data type](udf-creating-a-scalar-udf.md#udf-anyelement-data-type)\.
During execution, Amazon Redshift converts the arguments from Amazon Redshift data types to Python data types for processing, and then converts the return value from the Python data type to the corresponding Amazon Redshift data type\. For more information about Amazon Redshift data types, see [Data types](c_Supported_data_types.md)\.
The following table maps Amazon Redshift data types to Python data types\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/udf-data-types.html)
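This conversion can be sketched with a scalar UDF \(the function name and types here are illustrative\):
```
create function f_multiply (a float, b float)
  returns float
stable
as $$
  # a and b arrive in the Python body as Python floats;
  # the returned Python float is converted back to the
  # Redshift DOUBLE PRECISION (FLOAT) type
  return a * b
$$ language plpythonu;
```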
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-data-types.md
|
c61b9c159883-0
|
Synonym for CURRENT\_USER\. See [CURRENT\_USER](r_CURRENT_USER.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_USER.md
|
6f3757e5410e-0
|
ST\_Disjoint returns true if the two input geometries have no points in common\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Disjoint-function.md
|
4f2ced737c20-0
|
```
ST_Disjoint(geom1, geom2)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Disjoint-function.md
|
aa94172031f7-0
|
*geom1*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
*geom2*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Disjoint-function.md
|
22b8f7571d54-0
|
`BOOLEAN`
If *geom1* or *geom2* is null, then null is returned\.
If *geom1* and *geom2* don't have the same value for the spatial reference system identifier \(SRID\), then an error is returned\.
If *geom1* or *geom2* is a geometry collection, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Disjoint-function.md
|
c3908ebda2b3-0
|
The following SQL checks whether a polygon is disjoint from a point\. Because the point \(4, 4\) lies inside the polygon's interior ring, the two geometries share no points\.
```
SELECT ST_Disjoint(ST_GeomFromText('POLYGON((0 0,10 0,10 10,0 10,0 0),(2 2,2 5,5 5,5 2,2 2))'), ST_Point(4, 4));
```
```
st_disjoint
-----------
true
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Disjoint-function.md
|
2433c0e08dbe-0
|
**Topics**
+ [System tables and views](c_intro_system_tables.md)
+ [Types of system tables and views](c_types-of-system-tables-and-views.md)
+ [Visibility of data in system tables and views](c_visibility-of-data.md)
+ [STV tables for snapshot data](c_intro_STV_tables.md)
+ [System views](c_intro_system_views.md)
+ [System catalog tables](c_intro_catalog_views.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm_chap_system-tables.md
|
9acc12dcff9f-0
|
Contains a record of each attempted execution of a query in a service class handled by WLM\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WLM_QUERY.md
|
9d50b078db11-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_WLM_QUERY.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WLM_QUERY.md
|
daab15f1acd2-0
|
**View average query time in queues and executing**
The following queries report times for queries in service classes greater than 4\. For a list of service class IDs, see [WLM service class IDs](cm-c-wlm-system-tables-and-views.md#wlm-service-class-ids)\.
The following query returns the average time \(in microseconds\) that each query spent in query queues and executing for each service class\.
```
select service_class as svc_class, count(*),
avg(datediff(microseconds, queue_start_time, queue_end_time)) as avg_queue_time,
avg(datediff(microseconds, exec_start_time, exec_end_time )) as avg_exec_time
from stl_wlm_query
where service_class > 4
group by service_class
order by service_class;
```
This query returns the following sample output:
```
svc_class | count | avg_queue_time | avg_exec_time
-----------+-------+----------------+---------------
5 | 20103 | 0 | 80415
5 | 3421 | 34015 | 234015
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WLM_QUERY.md
|
daab15f1acd2-1
|
5 | 20103 | 0 | 80415
5 | 3421 | 34015 | 234015
6 | 42 | 0 | 944266
7 | 196 | 6439 | 1364399
(4 rows)
```
**View maximum query time in queues and executing**
The following query returns the maximum amount of time \(in microseconds\) that a query spent in any query queue and executing for each service class\.
```
select service_class as svc_class, count(*),
max(datediff(microseconds, queue_start_time, queue_end_time)) as max_queue_time,
max(datediff(microseconds, exec_start_time, exec_end_time )) as max_exec_time
from stl_wlm_query
where service_class > 5
group by service_class
order by service_class;
```
```
svc_class | count | max_queue_time | max_exec_time
-----------+-------+----------------+---------------
6 | 42 | 0 | 3775896
7 | 197 | 37947 | 16379473
(2 rows)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WLM_QUERY.md
|
daab15f1acd2-2
|
6 | 42 | 0 | 3775896
7 | 197 | 37947 | 16379473
(2 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_WLM_QUERY.md
|
8ed1f7318f93-0
|
The following statement creates a CUSTOMER table that has columns with various data types\. This CREATE TABLE statement shows one of many possible combinations of compression encodings for these columns\.
```
create table customer(
custkey int encode delta,
custname varchar(30) encode raw,
gender varchar(7) encode text255,
address varchar(200) encode text255,
city varchar(30) encode text255,
state char(2) encode raw,
zipcode char(5) encode bytedict,
start_date date encode delta32k);
```
The following table shows the column encodings that were chosen for the CUSTOMER table and gives an explanation for the choices:
| Column | Data type | Encoding | Explanation |
| --- | --- | --- | --- |
| CUSTKEY | int | delta | CUSTKEY consists of unique, consecutive integer values\. Because the differences between adjacent values fit in a single byte, DELTA is a good choice\. |
| CUSTNAME | varchar\(30\) | raw | CUSTNAME has a large domain with few repeated values\. Any compression encoding would probably be ineffective\. |
| GENDER | varchar\(7\) | text255 | GENDER is a very small domain with many repeated values\. Text255 works well with VARCHAR columns in which the same words recur\. |
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/Examples__compression_encodings_in_CREATE_TABLE_statements.md
|
8ed1f7318f93-1
|
| ADDRESS | varchar\(200\) | text255 | ADDRESS is a large domain, but contains many repeated words, such as Street, Avenue, North, South, and so on\. Text255 and text32k are useful for compressing VARCHAR columns in which the same words recur\. The column length is short, so text255 is a good choice\. |
| CITY | varchar\(30\) | text255 | CITY is a large domain, with some repeated values\. Certain city names are used much more commonly than others\. Text255 is a good choice for the same reasons as ADDRESS\. |
| STATE | char\(2\) | raw | In the United States, STATE is a precise domain of 50 two\-character values\. Bytedict encoding would yield some compression, but because the column size is only two characters, compression might not be worth the overhead of decompressing the data\. |
| ZIPCODE | char\(5\) | bytedict | ZIPCODE is a known domain of fewer than 50,000 unique values\. Certain zip codes occur much more commonly than others\. Bytedict encoding is very effective when a column contains a limited number of unique values\. |
| START\_DATE | date | delta32k | Delta encodings are very useful for datetime columns, especially if the rows are loaded in date order\. |
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/Examples__compression_encodings_in_CREATE_TABLE_statements.md
|
4f6c932490ea-0
|
ST\_AsBinary returns the hexadecimal well\-known binary \(WKB\) representation of an input geometry using ASCII hexadecimal characters \(0–9, A–F\)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsBinary-function.md
|
c78286081d28-0
|
```
ST_AsBinary(geom)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsBinary-function.md
|
bb1fc798a521-0
|
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsBinary-function.md
|
465b8590c429-0
|
`VARCHAR`
If *geom* is null, then null is returned\.
If the result is larger than a 64\-KB `VARCHAR`, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsBinary-function.md
|
65b8f2fdbb2f-0
|
The following SQL returns the hexadecimal WKB representation of a polygon\.
```
SELECT ST_AsBinary(ST_GeomFromText('POLYGON((0 0,0 1,1 1,1 0,0 0))',4326));
```
```
st_asbinary
--------------------------------
01030000000100000005000000000000000000000000000000000000000000000000000000000000000000F03F000000000000F03F000000000000F03F000000000000F03F000000000000000000000000000000000000000000000000
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsBinary-function.md
|
8b72350c6648-0
|
The following examples demonstrate how to use ALTER TABLE to add and then drop a basic table column and also how to drop a column with a dependent object\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_COL_ex-add-drop.md
|
14209cc5b2ea-0
|
The following example adds a standalone FEEDBACK\_SCORE column to the USERS table\. This column simply contains an integer, and the default value for this column is NULL \(no feedback score\)\.
First, query the PG\_TABLE\_DEF catalog table to view the USERS table:
```
column | type | encoding | distkey | sortkey
--------------+------------------------+----------+---------+--------
userid | integer | delta | true | 1
username | character(8) | lzo | false | 0
firstname | character varying(30) | text32k | false | 0
lastname | character varying(30) | text32k | false | 0
city | character varying(30) | text32k | false | 0
state | character(2) | bytedict | false | 0
email | character varying(100) | lzo | false | 0
phone | character(14) | lzo | false | 0
likesports | boolean | none | false | 0
liketheatre | boolean | none | false | 0
likeconcerts | boolean | none | false | 0
likejazz | boolean | none | false | 0
likeclassical | boolean | none | false | 0
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_COL_ex-add-drop.md
|
14209cc5b2ea-1
|
likejazz | boolean | none | false | 0
likeclassical | boolean | none | false | 0
likeopera | boolean | none | false | 0
likerock | boolean | none | false | 0
likevegas | boolean | none | false | 0
likebroadway | boolean | none | false | 0
likemusicals | boolean | none | false | 0
```
Now add the feedback\_score column:
```
alter table users
add column feedback_score int
default NULL;
```
Select the FEEDBACK\_SCORE column from USERS to verify that it was added:
```
select feedback_score from users limit 5;
feedback_score
----------------
(5 rows)
```
Drop the column to reinstate the original DDL:
```
alter table users drop column feedback_score;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_COL_ex-add-drop.md
|
472fd185d30a-0
|
This example drops a column that has a dependent object\. As a result, the dependent object is also dropped\.
To start, add the FEEDBACK\_SCORE column to the USERS table again:
```
alter table users
add column feedback_score int
default NULL;
```
Next, create a view from the USERS table called USERS\_VIEW:
```
create view users_view as select * from users;
```
Now, try to drop the FEEDBACK\_SCORE column from the USERS table\. This DROP statement uses the default behavior \(RESTRICT\):
```
alter table users drop column feedback_score;
```
Amazon Redshift displays an error message that the column can't be dropped because another object depends on it\.
Try dropping the FEEDBACK\_SCORE column again, this time specifying CASCADE to drop all dependent objects:
```
alter table users
drop column feedback_score cascade;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_TABLE_COL_ex-add-drop.md
|
c785d3ff6ce5-0
|
No default; the value can be any character string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_query_group.md
|
f3dd1f64b30d-0
|
This parameter applies a user\-defined label to a group of queries that are run during the same session\. This label is captured in the query logs and can be used to constrain results from the STL\_QUERY and STV\_INFLIGHT tables and the SVL\_QLOG view\. For example, you can apply a separate label to every query that you run to uniquely identify queries without having to look up their IDs\.
This parameter does not exist in the server configuration file and must be set at runtime with a SET command\. Although you can use a long character string as a label, the label is truncated to 30 characters in the LABEL column of the STL\_QUERY table and the SVL\_QLOG view \(and to 15 characters in STV\_INFLIGHT\)\.
In the following example, query\_group is set to **Monday**, then several queries are executed with that label:
```
set query_group to 'Monday';
SET
select * from category limit 1;
...
...
select query, pid, substring, elapsed, label
from svl_qlog where label ='Monday'
order by query;
query | pid | substring | elapsed | label
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_query_group.md
|
f3dd1f64b30d-1
|
from svl_qlog where label ='Monday'
order by query;
query | pid | substring | elapsed | label
------+------+------------------------------------+-----------+--------
789 | 6084 | select * from category limit 1; | 65468 | Monday
790 | 6084 | select query, trim(label) from ... | 1260327 | Monday
791 | 6084 | select * from svl_qlog where .. | 2293547 | Monday
792 | 6084 | select count(*) from bigsales; | 108235617 | Monday
...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_query_group.md
|
2d1a24415c97-0
|
ATAN2 is a trigonometric function that returns the arc tangent of one number divided by another number\. The return value is in radians and is between PI and \-PI\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ATAN2.md
|
baf07dad34fd-0
|
```
ATAN2(number1, number2)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ATAN2.md
|
7d8519915827-0
|
*number1*
The first input parameter is a double precision number\.
*number2*
The second parameter is a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ATAN2.md
|
7ca3b4e1438c-0
|
The ATAN2 function returns a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ATAN2.md
|
3c63016b9bc8-0
|
The following example returns the arc tangent of 2/2 and multiplies it by 4:
```
select atan2(2,2) * 4 as pi;
pi
------------------
3.14159265358979
(1 row)
```
The following example converts the arc tangent of 1/0 to the equivalent number of degrees:
```
select (atan2(1,0) * 180/(select pi())) as degrees;
degrees
---------
90
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ATAN2.md
|
2a3977a6cb17-0
|
With manual WLM, you can manage system performance and your users' experience by modifying your WLM configuration to create separate queues for the long\-running queries and the short\-running queries\.
When users run queries in Amazon Redshift, the queries are routed to query queues\. Each query queue contains a number of query slots\. Each queue is allocated a portion of the cluster's available memory\. A queue's memory is divided among the queue's query slots\. You can enable Amazon Redshift to manage query concurrency with automatic WLM\. For more information, see [Implementing automatic WLM](automatic-wlm.md)\.
Or you can configure WLM properties for each query queue\. You do so to specify the way that memory is allocated among slots and how queries can be routed to specific queues at runtime\. You can also configure WLM properties to cancel long\-running queries\. In addition, you can use the `wlm_query_slot_count` parameter, which is separate from the WLM properties\. This parameter can temporarily enable queries to use more memory by allocating multiple slots\.
By default, Amazon Redshift configures the following query queues:
+ **One superuser queue**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
2a3977a6cb17-1
|
By default, Amazon Redshift configures the following query queues:
+ **One superuser queue**
The superuser queue is reserved for superusers only and it can't be configured\. Use this queue only when you need to run queries that affect the system or for troubleshooting purposes\. For example, use this queue when you need to cancel a user's long\-running query or to add users to the database\. Don't use it to perform routine queries\. The queue doesn't appear in the console, but it does appear in the system tables in the database as the fifth queue\. To run a query in the superuser queue, a user must be logged in as a superuser, and must run the query using the predefined `superuser` query group\.
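For example, a superuser could route a one\-off maintenance command through the superuser queue by setting the predefined `superuser` query group for the session\. This is a sketch; the ANALYZE command here is only an illustration of a query you might run this way:

```sql
set query_group to 'superuser';
analyze;
reset query_group;
```

Resetting the query group afterward ensures that subsequent queries in the session are routed normally\.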
+ **One default user queue**
The default queue is initially configured to run five queries concurrently\. You can change the concurrency, timeout, and memory allocation properties for the default queue, but you cannot specify user groups or query groups\. The default queue must be the last queue in the WLM configuration\. Any queries that are not routed to other queues run in the default queue\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
2a3977a6cb17-2
|
Query queues are defined in the WLM configuration\. The WLM configuration is an editable parameter \(`wlm_json_configuration`\) in a parameter group, which can be associated with one or more clusters\. For more information, see [Configuring Workload Management](https://docs.aws.amazon.com/redshift/latest/mgmt/workload-mgmt-config.html) in the *Amazon Redshift Cluster Management Guide*\.
You can add additional query queues to the default WLM configuration, up to a total of eight user queues\. You can configure the following for each query queue:
+ Concurrency scaling mode
+ Concurrency level
+ User groups
+ Query groups
+ WLM memory percent to use
+ WLM timeout
+ WLM query queue hopping
+ Query monitoring rules
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
dc883634097c-0
|
When concurrency scaling is enabled, Amazon Redshift automatically adds additional cluster capacity when you need it to process an increase in concurrent read queries\. Write operations continue as normal on your main cluster\. Users see the most current data, whether the queries run on the main cluster or on a concurrency scaling cluster\.
You manage which queries are sent to the concurrency scaling cluster by configuring WLM queues\. When you enable concurrency scaling for a queue, eligible queries are sent to the concurrency scaling cluster instead of waiting in line\. For more information, see [Working with concurrency scaling](concurrency-scaling.md)\.
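To check whether, and how much, concurrency scaling capacity your cluster has actually used, you can query the `SVCS_CONCURRENCY_SCALING_USAGE` system view\. This is a sketch; the exact column set is described in the system view reference:

```sql
select start_time, end_time, usage_in_seconds
from svcs_concurrency_scaling_usage
order by start_time;
```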
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
9b0e87d5e5b2-0
|
Queries in a queue run concurrently until they reach the WLM query slot count, or *concurrency* level, defined for that queue\. Subsequent queries then wait in the queue\.
**Note**
WLM concurrency level is different from the number of concurrent user connections that can be made to a cluster\. For more information, see [Connecting to a Cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-to-cluster.html) in the *Amazon Redshift Cluster Management Guide*\.
In an automatic WLM configuration \(recommended\), the concurrency level is set to **Auto**\. For more information, see [Implementing automatic WLM](automatic-wlm.md)\.
In a manual WLM configuration, each queue can be configured with up to 50 query slots\. The maximum WLM query slot count for all user\-defined queues is 50\. The limit includes the default queue, but doesn't include the reserved superuser queue\. By default, Amazon Redshift allocates an equal, fixed share of available memory to each queue\. Amazon Redshift also allocates by default an equal, fixed share of a queue's memory to each query slot in the queue\. The proportion of memory allocated to each queue is defined in the WLM configuration using the `memory_percent_to_use` property\. At runtime, you can temporarily override the amount of memory assigned to a query by setting the `wlm_query_slot_count` parameter to specify the number of slots allocated to the query\.
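One way to inspect the slot count and per\-slot memory that WLM has assigned to each user\-defined queue is to query the `STV_WLM_SERVICE_CLASS_CONFIG` system table\. The following is a sketch; service classes 6 and higher correspond to user\-defined queues:

```sql
select service_class,
       num_query_tasks as slots,
       query_working_mem as mem_per_slot_mb
from stv_wlm_service_class_config
where service_class >= 6;
```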
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
9b0e87d5e5b2-1
|
By default, manual WLM queues have a concurrency level of 5\. Your workload might benefit from a higher concurrency level in certain cases, such as the following:
+ If many small queries are forced to wait for long\-running queries, create a separate queue with a higher slot count and assign the smaller queries to that queue\. A queue with a higher concurrency level has less memory allocated to each query slot, but the smaller queries require less memory\.
**Note**
If you enable short\-query acceleration \(SQA\), WLM automatically prioritizes short queries over longer\-running queries, so you don't need a separate queue for short queries for most workflows\. For more information, see [Working with short query acceleration](wlm-short-query-acceleration.md)\.
+ If you have multiple queries that each access data on a single slice, set up a separate WLM queue to execute those queries concurrently\. Amazon Redshift assigns concurrent queries to separate slices, which allows multiple queries to execute in parallel on multiple slices\. For example, if a query is a simple aggregate with a predicate on the distribution key, the data for the query is located on a single slice\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
9b0e87d5e5b2-2
|
If your workload requires more than 15 queries to run in parallel, then we recommend enabling concurrency scaling\. This is because increasing query slot count above 15 might create contention for system resources and limit the overall throughput of a single cluster\. With concurrency scaling, you can run hundreds of queries in parallel up to a configured number of concurrency scaling clusters\. The number of concurrency scaling clusters that can be used is controlled by [max\_concurrency\_scaling\_clusters](r_max_concurrency_scaling_clusters.md)\. For more information about concurrency scaling, see [Working with concurrency scaling](concurrency-scaling.md)\.
The memory that is allocated to each queue is divided among the query slots in that queue\. The amount of memory available to a query is the memory allocated to the query slot in which the query is running\. This is true regardless of the number of queries that are actually running concurrently\. A query that can run entirely in memory when the slot count is 5 might need to write intermediate results to disk if the slot count is increased to 20\. The additional disk I/O could degrade performance\.
If a specific query needs more memory than is allocated to a single query slot, you can increase the available memory by increasing the [wlm\_query\_slot\_count](r_wlm_query_slot_count.md) parameter\. The following example sets `wlm_query_slot_count` to 10, performs a vacuum, and then resets `wlm_query_slot_count` to 1\.
```
set wlm_query_slot_count to 10;
vacuum;
set wlm_query_slot_count to 1;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
9b0e87d5e5b2-3
|
vacuum;
set wlm_query_slot_count to 1;
```
For more information, see [Improving query performance](query-performance-improvement-opportunities.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
cd3d0d250f31-0
|
You can assign a set of user groups to a queue by specifying each user group name or by using wildcards\. When a member of a listed user group runs a query, that query runs in the corresponding queue\. There is no set limit on the number of user groups that can be assigned to a queue\. For more information, see [Wildcards](#wlm-wildcards)\.
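For example, you can create a database user group and add members to it with the CREATE GROUP and ALTER GROUP commands\. The group and user names below are hypothetical:

```sql
create group dba_group with user danny;
alter group dba_group add user dev_user;
```

Once `dba_group` is listed in a queue's user groups, queries run by `danny` or `dev_user` are routed to that queue\.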
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
b918b37d2852-0
|
You can assign a set of query groups to a queue by specifying each query group name or by using wildcards\. A query group is simply a label\. At runtime, you can assign the query group label to a series of queries\. Any queries that are assigned to a listed query group run in the corresponding queue\. There is no set limit to the number of query groups that can be assigned to a queue\. For more
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
b918b37d2852-1
|
can be assigned to a queue\. For more information, see [Wildcards](#wlm-wildcards)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
0b3e084c9561-0
|
If wildcards are enabled in the WLM queue configuration, you can assign user groups and query groups to a queue either individually or by using Unix shell\-style wildcards\. The pattern matching is case\-insensitive\.
For example, the '\*' wildcard character matches any number of characters\. Thus, if you add `dba_*` to the list of user groups for a queue, any user\-run query that belongs to a group with a name that begins with `dba_` is assigned to that queue\. Examples are `dba_admin` or `DBA_primary`\. The '?' wildcard character matches any single character\. Thus, if the queue includes user\-group `dba?1`, then user groups named `dba11` and `dba21` match, but `dba12` doesn't match\.
Wildcards are disabled by default\.
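In the `wlm_json_configuration` parameter, wildcard matching is turned on per queue\. The fragment below is an illustrative sketch, not a complete configuration; the property names follow the format described in the *Amazon Redshift Cluster Management Guide*, and the `dba_*` group pattern is hypothetical:

```json
[
  {
    "user_group": ["dba_*"],
    "user_group_wild_card": 1,
    "query_concurrency": 5
  },
  {
    "query_group": [],
    "user_group": [],
    "query_concurrency": 5
  }
]
```

The second entry represents the required default queue, which must be last in the configuration\.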
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
854b081b7c88-0
|
In an automatic WLM configuration, memory percent is set to **auto**\. For more information, see [Implementing automatic WLM](automatic-wlm.md)\.
In a manual WLM configuration, to specify the amount of available memory that is allocated to a query, you can set the `WLM Memory Percent to Use` parameter\. By default, each user\-defined queue is allocated an equal portion of the memory that is available for user\-defined queries\. For example, if you have four user\-defined queues, each queue is allocated 25 percent of the available memory\. The superuser queue has its own allocated memory and cannot be modified\. To change the allocation, you assign an integer percentage of memory to each queue, up to a total of 100 percent\. Any unallocated memory is managed by Amazon Redshift and can be temporarily given to a queue if the queue requests additional memory for processing\.
For example, if you configure four queues, you can allocate memory as follows: 20 percent, 30 percent, 15 percent, 15 percent\. The remaining 20 percent is unallocated and managed by the service\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
329b244727eb-0
|
WLM timeout \(`max_execution_time`\) is deprecated\. Instead, create a query monitoring rule \(QMR\) using `query_execution_time` to limit the elapsed execution time for a query\. For more information, see [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md)\.
To limit the amount of time that queries in a given WLM queue are permitted to use, you can set the WLM timeout value for each queue\. The timeout parameter specifies the amount of time, in milliseconds, that Amazon Redshift waits for a query to execute before either canceling or hopping the query\. The timeout is based on query execution time and doesn't include time spent waiting in a queue\.
WLM attempts to hop [CREATE TABLE AS](r_CREATE_TABLE_AS.md) \(CTAS\) statements and read\-only queries, such as SELECT statements\. Queries that can't be hopped are canceled\. For more information, see [WLM query queue hopping](wlm-queue-hopping.md)\.
WLM timeout doesn't apply to a query that has reached the returning state\. To view the state of a query, see the [STV\_WLM\_QUERY\_STATE](r_STV_WLM_QUERY_STATE.md) system table\. COPY statements and maintenance operations, such as ANALYZE and VACUUM, are not subject to WLM timeout\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
329b244727eb-1
|
The function of WLM timeout is similar to the [statement\_timeout](r_statement_timeout.md) configuration parameter\. The difference is that, where the `statement_timeout` configuration parameter applies to the entire cluster, WLM timeout is specific to a single queue in the WLM configuration\.
If [statement\_timeout](r_statement_timeout.md) is also specified, the lower of statement\_timeout and WLM timeout \(max\_execution\_time\) is used\.
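For example, a session\-level `statement_timeout` can be set in milliseconds; the 60\-second value here is only an illustration:

```sql
-- Limit this session's queries to 60 seconds (60,000 ms).
set statement_timeout to 60000;

-- Revert to the cluster default when finished.
reset statement_timeout;
```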
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
27300a605edb-0
|
Query monitoring rules define metrics\-based performance boundaries for WLM queues and specify what action to take when a query goes beyond those boundaries\. For example, for a queue dedicated to short\-running queries, you might create a rule that aborts queries that run for more than 60 seconds\. To track poorly designed queries, you might have another rule that logs queries that contain nested loops\. For more information, see [WLM query monitoring
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
27300a605edb-1
|
For more information, see [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-defining-query-queues.md
|
51b52d9cb921-0
|
Follow the steps in these tutorials to learn about Amazon Redshift features:
+ [Tutorial: Tuning table design](tutorial-tuning-tables.md)
+ [Tutorial: Loading data from Amazon S3](tutorial-loading-data.md)
+ [Tutorial: Querying nested data with Amazon Redshift Spectrum](tutorial-query-nested-data.md)
+ [Tutorial: Configuring manual workload management \(WLM\) queues](tutorial-configuring-workload-management.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorials-redshift.md
|
54f8ab222b00-0
|
Use the STV\_PARTITIONS table to find out the disk speed performance and disk utilization for Amazon Redshift\.
STV\_PARTITIONS contains one row per node per logical disk partition, or slice\.
STV\_PARTITIONS is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_PARTITIONS.md
|
b3cbbe648ac3-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_PARTITIONS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_PARTITIONS.md
|
1c9dec03eb53-0
|
The following query returns the disk space used and capacity, in 1 MB disk blocks, and calculates disk utilization as a percentage of raw disk space\. The raw disk space includes space that is reserved by Amazon Redshift for internal use, so it is larger than the nominal disk capacity, which is the amount of disk space available to the user\. The **Percentage of Disk Space Used** metric on the **Performance** tab of the Amazon Redshift Management Console reports the percentage of nominal disk capacity used by your cluster\. We recommend that you monitor the **Percentage of Disk Space Used** metric to maintain your usage within your cluster's nominal disk capacity\.
**Important**
We strongly recommend that you do not exceed your cluster's nominal disk capacity\. While it might be technically possible under certain circumstances, exceeding your nominal disk capacity decreases your cluster's fault tolerance and increases your risk of losing data\.
This example was run on a two\-node cluster with six logical disk partitions per node\. Space is being used very evenly across the disks, with approximately 25% of each disk in use\.
```
select owner, host, diskno, used, capacity,
(used-tossed)/capacity::numeric *100 as pctused
from stv_partitions order by owner;
owner | host | diskno | used | capacity | pctused
-------+------+--------+--------+----------+---------
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_PARTITIONS.md
|
1c9dec03eb53-1
|
0 | 0 | 0 | 236480 | 949954 | 24.9
0 | 0 | 1 | 236420 | 949954 | 24.9
0 | 0 | 2 | 236440 | 949954 | 24.9
0 | 1 | 2 | 235150 | 949954 | 24.8
0 | 1 | 1 | 237100 | 949954 | 25.0
0 | 1 | 0 | 237090 | 949954 | 25.0
1 | 1 | 0 | 236310 | 949954 | 24.9
1 | 1 | 1 | 236300 | 949954 | 24.9
1 | 1 | 2 | 236320 | 949954 | 24.9
1 | 0 | 2 | 237910 | 949954 | 25.0
1 | 0 | 1 | 235640 | 949954 | 24.8
1 | 0 | 0 | 235380 | 949954 | 24.8
(12 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_PARTITIONS.md
|
025a33df0e5f-0
|
Terminates a session\. You can terminate a session owned by your user\. A superuser can terminate any session\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_TERMINATE_BACKEND.md
|
ce50c028d953-0
|
```
pg_terminate_backend( pid )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_TERMINATE_BACKEND.md
|
ae4bd2032e15-0
|
*pid*
The process ID of the session to be terminated\. Requires an integer value\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_TERMINATE_BACKEND.md
|
4ab88599662b-0
|
None
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_TERMINATE_BACKEND.md
|
f44f1bb6f6b9-0
|
If you are close to reaching the limit for concurrent connections, use PG\_TERMINATE\_BACKEND to terminate idle sessions and free up the connections\. For more information, see [Limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html)\.
If queries in multiple sessions hold locks on the same table, you can use PG\_TERMINATE\_BACKEND to terminate one of the sessions, which forces any currently running transactions in the terminated session to release all locks and roll back the transaction\. Query the PG\_LOCKS catalog table to view currently held locks\.
If a query is not in a transaction block \(BEGIN … END\), you can cancel the query by using the [CANCEL](r_CANCEL.md) command or the [PG\_CANCEL\_BACKEND](PG_CANCEL_BACKEND.md) function\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_TERMINATE_BACKEND.md
|
6bebfc376a56-0
|
The following statement queries the SVV\_TRANSACTIONS table to view all locks in effect for current transactions:
```
select * from svv_transactions;
txn_owner | txn_db | xid | pid | txn_start | lock_mode | lockable_object_type | relation | granted
----------+-----------+-------+------+---------------------+-----------------+----------------------+----------+--------
rsuser | dev | 96178 | 8585 | 2017-04-12 20:13:07 | AccessShareLock | relation | 51940 | true
rsuser | dev | 96178 | 8585 | 2017-04-12 20:13:07 | AccessShareLock | relation | 52000 | true
rsuser | dev | 96178 | 8585 | 2017-04-12 20:13:07 | AccessShareLock | relation | 108623 | true
rsuser | dev | 96178 | 8585 | 2017-04-12 20:13:07 | ExclusiveLock | transactionid | | true
```
The following statement terminates the session holding the locks:
```
select pg_terminate_backend(8585);
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_TERMINATE_BACKEND.md
|
6bebfc376a56-1
|
The following statement terminates the session holding the locks:
```
select pg_terminate_backend(8585);
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_TERMINATE_BACKEND.md
|
5545d1872302-0
|
Use the following queries to identify issues with queries or underlying tables that can affect query performance\. We recommend using these queries in conjunction with the query tuning processes discussed in [Analyzing and improving queries](c-query-tuning.md)\.
**Topics**
+ [Identifying queries that are top candidates for tuning](#identify-queries-that-are-top-candidates-for-tuning)
+ [Identifying tables with data skew or unsorted rows](#identify-tables-with-data-skew-or-unsorted-rows)
+ [Identifying queries with nested loops](#identify-queries-with-nested-loops)
+ [Reviewing queue wait times for queries](#review-queue-wait-times-for-queries)
+ [Reviewing query alerts by table](#review-query-alerts-by-table)
+ [Identifying tables with missing statistics](#identify-tables-with-missing-statistics)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
|
9c279202e22f-0
|
The following query identifies the top 50 most time\-consuming statements that have been executed in the last 7 days\. You can use the results to identify queries that are taking unusually long, and also to identify queries that are run frequently \(those that appear more than once in the result set\)\. These queries are frequently good candidates for tuning to improve system performance\.
This query also provides a count of the alert events associated with each query identified\. These alerts provide details that you can use to improve the query’s performance\. For more information, see [Reviewing query alerts](c-reviewing-query-alerts.md)\.
```
select trim(database) as db, count(query) as n_qry,
max(substring (qrytext,1,80)) as qrytext,
min(run_minutes) as "min" ,
max(run_minutes) as "max",
avg(run_minutes) as "avg", sum(run_minutes) as total,
max(query) as max_query_id,
max(starttime)::date as last_run,
sum(alerts) as alerts, aborted
from (select userid, label, stl_query.query,
trim(database) as database,
trim(querytxt) as qrytext,
md5(trim(querytxt)) as qry_md5,
starttime, endtime,
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
|
9c279202e22f-1
|
```
(datediff(seconds, starttime,endtime)::numeric(12,2))/60 as run_minutes,
alrt.num_events as alerts, aborted
from stl_query
left outer join
(select query, 1 as num_events from stl_alert_event_log group by query ) as alrt
on alrt.query = stl_query.query
where userid <> 1 and starttime >= dateadd(day, -7, current_date))
group by database, label, qry_md5, aborted
order by total desc limit 50;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
|
93a61dbcacb8-0
|
The following query identifies tables that have uneven data distribution \(data skew\) or a high percentage of unsorted rows\.
A low `skew` value indicates that table data is properly distributed\. If a table has a `skew` value of 4\.00 or higher, consider modifying its data distribution style\. For more information, see [Suboptimal data distribution](query-performance-improvement-opportunities.md#suboptimal-data-distribution)\.
If a table has a `pct_unsorted` value greater than 20 percent, consider running the [VACUUM](r_VACUUM_command.md) command\. For more information, see [Unsorted or missorted rows](query-performance-improvement-opportunities.md#unsorted-or-mis-sorted-rows)\.
Also review the `mbytes` and `pct_of_total` values for each table\. These columns identify the size of the table and what percentage of raw disk space the table consumes\. The raw disk space includes space that is reserved by Amazon Redshift for internal use, so it is larger than the nominal disk capacity, which is the amount of disk space available to the user\. Use this information to ensure that you have free disk space equal to at least 2\.5 times the size of your largest table\. Having this space available enables the system to write intermediate results to disk when processing complex queries\.
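The thresholds described above, a `skew` of 4.00 or higher, more than 20 percent unsorted rows, and free space of at least 2.5 times the largest table, reduce to simple arithmetic. A minimal Python sketch; the helper names are hypothetical and not part of the AWS documentation:

```python
def skew_ratio(slice_mbytes):
    # Same ratio the query reports: largest per-slice size / smallest.
    return max(slice_mbytes) / min(slice_mbytes)

def pct_unsorted(rows, sorted_rows):
    # Percentage of rows outside sort order, as in the pct_unsorted column.
    return 0.0 if rows == 0 else (rows - sorted_rows) / rows * 100

def has_disk_headroom(free_mbytes, largest_table_mbytes):
    # Rule of thumb from the text: keep free space >= 2.5x the largest
    # table so complex queries can write intermediate results to disk.
    return free_mbytes >= 2.5 * largest_table_mbytes

# One slice holding 4x the data of the smallest: consider a new diststyle.
assert skew_ratio([100, 120, 110, 400]) == 4.0
# 25 percent unsorted rows (> 20 percent): consider running VACUUM.
assert pct_unsorted(1_000_000, 750_000) == 25.0
# 900 MB free vs. a 400 MB largest table: below the 2.5x guideline.
assert not has_disk_headroom(free_mbytes=900, largest_table_mbytes=400)
```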
```
select trim(pgn.nspname) as schema,
trim(a.name) as table, id as tableid,
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
|
93a61dbcacb8-1
|
```
decode(pgc.reldiststyle,0, 'even',1,det.distkey ,8,'all') as distkey, dist_ratio.ratio::decimal(10,4) as skew,
det.head_sort as "sortkey",
det.n_sortkeys as "#sks", b.mbytes,
decode(b.mbytes,0,0,((b.mbytes/part.total::decimal)*100)::decimal(5,2)) as pct_of_total,
decode(det.max_enc,0,'n','y') as enc, a.rows,
decode( det.n_sortkeys, 0, null, a.unsorted_rows ) as unsorted_rows ,
decode( det.n_sortkeys, 0, null, decode( a.rows,0,0, (a.unsorted_rows::decimal(32)/a.rows)*100) )::decimal(5,2) as pct_unsorted
from (select db_id, id, name, sum(rows) as rows,
sum(rows)-sum(sorted_rows) as unsorted_rows
from stv_tbl_perm a
group by db_id, id, name) as a
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
|
93a61dbcacb8-2
|
```
join pg_class as pgc on pgc.oid = a.id
join pg_namespace as pgn on pgn.oid = pgc.relnamespace
left outer join (select tbl, count(*) as mbytes
from stv_blocklist group by tbl) b on a.id=b.tbl
inner join (select attrelid,
min(case attisdistkey when 't' then attname else null end) as "distkey",
min(case attsortkeyord when 1 then attname else null end ) as head_sort ,
max(attsortkeyord) as n_sortkeys,
max(attencodingtype) as max_enc
from pg_attribute group by 1) as det
on det.attrelid = a.id
inner join ( select tbl, max(mbytes)::decimal(32)/min(mbytes) as ratio
from (select tbl, trim(name) as name, slice, count(*) as mbytes
from svv_diskusage group by tbl, name, slice )
group by tbl, name ) as dist_ratio on a.id = dist_ratio.tbl
join ( select sum(capacity) as total
from stv_partitions where part_begin=0 ) as part on 1=1
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
|
93a61dbcacb8-3
|
```
where mbytes is not null
order by mbytes desc;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
|