| id | text | source |
|---|---|---|
bbb074668eae-0
|
*table\_name*
A temporary or persistent table\. Only the owner of the table or a user with UPDATE privilege on the table may update rows\. If you use the FROM clause or select from tables in an expression or condition, you must have SELECT privilege on those tables\. You can't give the table an alias here; however, you can specify an alias in the FROM clause\.
Amazon Redshift Spectrum external tables are read\-only\. You can't UPDATE an external table\.
SET *column* =
One or more columns that you want to modify\. Columns that aren't listed retain their current values\. Do not include the table name in the specification of a target column\. For example, `UPDATE tab SET tab.col = 1` is invalid\.
*expression*
An expression that defines the new value for the specified column\.
DEFAULT
Updates the column with the default value that was assigned to the column in the CREATE TABLE statement\.
FROM *tablelist*
You can update a table by referencing information in other tables\. List these other tables in the FROM clause or use a subquery as part of the WHERE condition\. Tables listed in the FROM clause can have aliases\. If you need to include the target table of the UPDATE statement in the list, use an alias\.
WHERE *condition*
Optional clause that restricts updates to rows that match a condition\. When the condition returns `true`, the specified SET columns are updated\. The condition can be a simple predicate on a column or a condition based on the result of a subquery\.
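Putting these clauses together, a sketch using the TICKIT sample tables (assuming the standard LISTING and SALES columns; the date literal is illustrative) might look like this:
```
-- Raise the per-ticket price by 10% for listings that sold
-- after December 1, referencing SALES in the FROM clause.
update listing
set priceperticket = priceperticket * 1.10
from sales
where sales.listid = listing.listid
and sales.saletime > '2008-12-01';
```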
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UPDATE.md
|
bbb074668eae-1
|
You can name any table in the subquery, including the target table for the UPDATE\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UPDATE.md
|
84b2be67b30e-0
|
After updating a large number of rows in a table:
+ Vacuum the table to reclaim storage space and re\-sort rows\.
+ Analyze the table to update statistics for the query planner\.
Left, right, and full outer joins aren't supported in the FROM clause of an UPDATE statement; they return the following error:
```
ERROR: Target table must be part of an equijoin predicate
```
If you need to specify an outer join, use a subquery in the WHERE clause of the UPDATE statement\.
If your UPDATE statement requires a self\-join to the target table, you need to specify the join condition as well as the WHERE clause criteria that qualify rows for the update operation\. In general, when the target table is joined to itself or another table, a best practice is to use a subquery that clearly separates the join conditions from the criteria that qualify rows for updates\.
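As a hedged sketch of that best practice (using the TICKIT CATEGORY table; the new description is illustrative), a subquery isolates the self-referencing criteria from the update itself:
```
-- Qualify rows through a subquery instead of joining the
-- target table directly in the FROM clause.
update category
set catdesc = 'Premium sports events'
where catid in
(select catid from category where catgroup = 'Sports');
```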
UPDATE queries with multiple matches per row throw an error when the configuration parameter `error_on_nondeterministic_update` is set to *true*\. For more information, see [error\_on\_nondeterministic\_update](r_error_on_nondeterministic_update.md)\.
You can update a column defined as GENERATED BY DEFAULT AS IDENTITY with values that you supply\. For more information, see [GENERATED BY DEFAULT AS IDENTITY](r_CREATE_TABLE_NEW.md#identity-generated-bydefault-clause)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UPDATE.md
|
b1e2cf20f118-0
|
Stores information about default access privileges\. For more information on default access privileges, see [ALTER DEFAULT PRIVILEGES](r_ALTER_DEFAULT_PRIVILEGES.md)\.
PG\_DEFAULT\_ACL is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_DEFAULT_ACL.md
|
eb3d725a9b7f-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_PG_DEFAULT_ACL.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_DEFAULT_ACL.md
|
77f4b88c1b46-0
|
The following query returns all default privileges defined for the database\.
```
select pg_get_userbyid(d.defacluser) as user,
n.nspname as schema,
case d.defaclobjtype when 'r' then 'tables' when 'f' then 'functions' end
as object_type,
array_to_string(d.defaclacl, ' + ') as default_privileges
from pg_catalog.pg_default_acl d
left join pg_catalog.pg_namespace n on n.oid = d.defaclnamespace;
user | schema | object_type | default_privileges
-------+--------+-------------+-------------------------------------------------------
admin | tickit | tables | user1=r/admin + "group group1=a/admin" + user2=w/admin
```
The result in the preceding example shows that for all new tables created by user `admin` in the `tickit` schema, `admin` grants SELECT privileges to `user1`, INSERT privileges to `group1`, and UPDATE privileges to `user2`\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_DEFAULT_ACL.md
|
983a1e72b729-0
|
Restricts access to a database table\. This command is only meaningful when it is run inside a transaction block\.
The LOCK command obtains a table\-level lock in "ACCESS EXCLUSIVE" mode, waiting if necessary for any conflicting locks to be released\. Explicitly locking a table in this way causes reads and writes on the table to wait when they are attempted from other transactions or sessions\. An explicit table lock created by one user temporarily prevents another user from selecting data from that table or loading data into it\. The lock is released when the transaction that contains the LOCK command completes\.
Less restrictive table locks are acquired implicitly by commands that refer to tables, such as write operations\. For example, if a user tries to read data from a table while another user is updating the table, the data that is read will be a snapshot of the data that has already been committed\. \(In some cases, queries will abort if they violate serializable isolation rules\.\) See [Managing concurrent write operations](c_Concurrent_writes.md)\.
Some DDL operations, such as DROP TABLE and TRUNCATE, create exclusive locks\. These operations prevent data reads\.
If a lock conflict occurs, Amazon Redshift displays an error message to alert the user who started the transaction in conflict\. The transaction that received the lock conflict is aborted\. Every time a lock conflict occurs, Amazon Redshift writes an entry to the [STL\_TR\_CONFLICT](r_STL_TR_CONFLICT.md) table\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LOCK.md
|
8cda9ffc33d8-0
|
```
LOCK [ TABLE ] table_name [, ...]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LOCK.md
|
db332ab55531-0
|
TABLE
Optional keyword\.
*table\_name*
Name of the table to lock\. You can lock more than one table by using a comma\-delimited list of table names\. You can't lock views\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LOCK.md
|
600d02f74bed-0
|
```
begin;
lock event, sales;
...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LOCK.md
|
fff7bac33312-0
|
Use SVV\_COLUMNS to view catalog information about the columns of local and external tables and views, including [late\-binding views](r_CREATE_VIEW.md#r_CREATE_VIEW_late-binding-views)\.
SVV\_COLUMNS is visible to all users\. Superusers can see all rows; regular users can see only metadata to which they have access\.
The SVV\_COLUMNS view joins table metadata from the [System catalog tables](c_intro_catalog_views.md) \(tables with a PG prefix\) and the [SVV\_EXTERNAL\_COLUMNS](r_SVV_EXTERNAL_COLUMNS.md) system view\. The system catalog tables describe Amazon Redshift database tables\. SVV\_EXTERNAL\_COLUMNS describes external tables that are used with Amazon Redshift Spectrum\.
All users can see all rows from the system catalog tables\. Regular users can see column definitions from the SVV\_EXTERNAL\_COLUMNS view only for external tables to which they have been granted access\. Although regular users can see table metadata in the system catalog tables, they can only select data from the user\-defined tables if they own the table or have been granted access\.
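For example, a query similar to the following (column names assume the information-schema-style layout of SVV\_COLUMNS) lists the columns of a single table:
```
select table_schema, table_name, column_name, data_type
from svv_columns
where table_name = 'sales'
order by ordinal_position;
```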
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_COLUMNS.md
|
08855a046b1e-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_COLUMNS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_COLUMNS.md
|
e8ec8bcd160c-0
|
The following steps show how to create a secret and an IAM role to use with federated queries\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/federated-create-secret-iam-role.md
|
dd422691a3af-0
|
Make sure that you have the following prerequisites to create a secret and an IAM role to use with federated queries:
+ An RDS PostgreSQL or Aurora PostgreSQL DB instance with user name and password authentication\.
+ An Amazon Redshift cluster with a cluster maintenance version that supports federated queries\.
**To create a secret \(user name and password\) with AWS Secrets Manager**
1. Sign in to the Secrets Manager console with the account that owns your RDS PostgreSQL or Aurora PostgreSQL instance\.
1. Choose **Store a new secret**\.
1. Choose the **Credentials for RDS database** tile\. For **User name** and **Password**, enter values for your instance\. Confirm or choose a value for **Encryption key**\. Then choose the RDS database that your secret will access\.
**Note**
We recommend using the default encryption key \(`DefaultEncryptionKey`\)\. If you use a custom encryption key, the IAM role that is used to access the secret must be added as a key user\.
1. Enter a name for the secret, continue with the creation steps with the default choices, and then choose **Store**\.
1. View your secret and note the **Secret ARN** value that you created to identify the secret\.
**To create a security policy using the secret**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/federated-create-secret-iam-role.md
|
dd422691a3af-1
|
**To create a security policy using the secret**
1. Sign in to the AWS Management Console and open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\.
1. Create a policy with JSON similar to the following\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AccessSecret",
"Effect": "Allow",
"Action": [
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
],
"Resource": "arn:aws:secretsmanager:us-west-2:123456789012:secret:my-rds-secret-VNenFy"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"secretsmanager:GetRandomPassword",
"secretsmanager:ListSecrets"
],
"Resource": "*"
}
]
}
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/federated-create-secret-iam-role.md
|
dd422691a3af-2
|
],
"Resource": "*"
}
]
}
```
To retrieve the secret, you need list and read actions\. We recommend that you restrict the resource to the specific secret that you created\. To do this, use the Amazon Resource Name \(ARN\) of the secret to limit the resource\. You can also specify the permissions and resources using the visual editor on the IAM console\.
1. Give the policy a name and finish creating it\.
1. Navigate to **IAM roles**\.
1. Create an IAM role for **Redshift \- Customizable**\.
1. Either attach the IAM policy you just created to an existing IAM role, or create a new IAM role and attach the policy\.
1. On the **Trust relationships** tab of your IAM role, confirm that the role contains the trust entity `redshift.amazonaws.com`\.
1. Note the **Role ARN** you created\. This ARN has access to the secret\.
**To attach the IAM role to your Amazon Redshift cluster**
1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/federated-create-secret-iam-role.md
|
dd422691a3af-3
|
1. On the navigation menu, choose **CLUSTERS**\. The clusters for your account in the current AWS Region are listed\.
1. Choose the cluster name in the list to view more details about a cluster\.
1. For **Actions**, choose **Manage IAM roles**\. The **Manage IAM roles** page appears\.
1. Add your IAM role to the cluster\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/federated-create-secret-iam-role.md
|
3b03a3786c30-0
|
ST\_GeomFromWKB constructs a geometry object from a hexadecimal well\-known binary \(WKB\) representation of an input geometry\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeomFromWKB-function.md
|
dc6cd4e23020-0
|
```
ST_GeomFromWKB(wkb_string)
```
```
ST_GeomFromWKB(wkb_string, srid)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeomFromWKB-function.md
|
887ba4574be0-0
|
*wkb\_string*
A value of data type `VARCHAR` that is a hexadecimal WKB representation of a geometry\.
*srid*
A value of data type `INTEGER` that is a spatial reference identifier \(SRID\)\. If an SRID value is provided, the returned geometry has this SRID value\. Otherwise, the SRID value of the returned geometry is set to 0\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeomFromWKB-function.md
|
b0fb510ccc06-0
|
`GEOMETRY`
If *wkb\_string* or *srid* is null, then null is returned\.
If *srid* is negative, then null is returned\.
If *wkb\_string* is not valid, then an error is returned\.
If *srid* is not valid, then an error is returned\.
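As an illustrative sketch (the hexadecimal string encodes POINT\(1 2\) in little\-endian WKB), passing an SRID assigns it to the result:
```
-- Construct a point geometry and assign SRID 4326.
SELECT ST_AsEWKT(ST_GeomFromWKB('0101000000000000000000F03F0000000000000040', 4326));
```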
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeomFromWKB-function.md
|
d6ee4704f452-0
|
The following SQL constructs a polygon from a WKB value and returns the WKT representation of a polygon\.
```
SELECT ST_AsText(ST_GeomFromWKB('01030000000100000005000000000000000000000000000000000000000000000000000000000000000000F03F000000000000F03F000000000000F03F000000000000F03F000000000000000000000000000000000000000000000000'));
```
```
st_astext
--------------------------------
POLYGON((0 0,0 1,1 1,1 0,0 0))
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_GeomFromWKB-function.md
|
667ef9d0a4cc-0
|
Deallocates a prepared statement\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEALLOCATE.md
|
fde24fbfbdbd-0
|
```
DEALLOCATE [PREPARE] plan_name
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEALLOCATE.md
|
01bfce85c6e3-0
|
PREPARE
This keyword is optional and is ignored\.
*plan\_name*
The name of the prepared statement to deallocate\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEALLOCATE.md
|
fa2c965e2f1e-0
|
DEALLOCATE is used to deallocate a previously prepared SQL statement\. If you don't explicitly deallocate a prepared statement, it is deallocated when the current session ends\. For more information on prepared statements, see [PREPARE](r_PREPARE.md)\.
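A minimal round trip might look like the following (the statement name and table are illustrative):
```
prepare prep_select (int) as select * from sales where listid = $1;
execute prep_select (100);
deallocate prep_select;
```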
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEALLOCATE.md
|
bf102fdf5061-0
|
[EXECUTE](r_EXECUTE.md), [PREPARE](r_PREPARE.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEALLOCATE.md
|
d745db31997e-0
|
The GROUP BY clause identifies the grouping columns for the query\. Grouping columns must be declared when the query computes aggregates with standard functions such as SUM, AVG, and COUNT\. For more information, see [Aggregate functions](c_Aggregate_Functions.md)\.
```
GROUP BY expression [, ...]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GROUP_BY_clause.md
|
19dd29521c11-0
|
The list of columns or expressions must match the list of non\-aggregate expressions in the select list of the query\. For example, consider the following simple query:
```
select listid, eventid, sum(pricepaid) as revenue,
count(qtysold) as numtix
from sales
group by listid, eventid
order by 3, 4, 2, 1
limit 5;
listid | eventid | revenue | numtix
--------+---------+---------+--------
89397 | 47 | 20.00 | 1
106590 | 76 | 20.00 | 1
124683 | 393 | 20.00 | 1
103037 | 403 | 20.00 | 1
147685 | 429 | 20.00 | 1
(5 rows)
```
In this query, the select list consists of two aggregate expressions\. The first uses the SUM function and the second uses the COUNT function\. The remaining two columns, LISTID and EVENTID, must be declared as grouping columns\.
Expressions in the GROUP BY clause can also reference the select list by using ordinal numbers\. For example, the previous example could be abbreviated as follows:
```
select listid, eventid, sum(pricepaid) as revenue,
count(qtysold) as numtix
from sales
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GROUP_BY_clause.md
|
19dd29521c11-1
|
select listid, eventid, sum(pricepaid) as revenue,
count(qtysold) as numtix
from sales
group by 1,2
order by 3, 4, 2, 1
limit 5;
listid | eventid | revenue | numtix
--------+---------+---------+--------
89397 | 47 | 20.00 | 1
106590 | 76 | 20.00 | 1
124683 | 393 | 20.00 | 1
103037 | 403 | 20.00 | 1
147685 | 429 | 20.00 | 1
(5 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GROUP_BY_clause.md
|
1b080a2f7948-0
|
The MIN function returns the minimum value in a set of rows\. DISTINCT or ALL may be used but do not affect the result\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MIN.md
|
7a99888ccca0-0
|
```
MIN ( [ DISTINCT | ALL ] expression )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MIN.md
|
9a27863571ea-0
|
*expression*
The target column or expression that the function operates on\.
DISTINCT \| ALL
With the argument DISTINCT, the function eliminates all duplicate values from the specified expression before calculating the minimum\. With the argument ALL, the function retains all duplicate values from the expression for calculating the minimum\. ALL is the default\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MIN.md
|
d541f4ef337b-0
|
Accepts any data type except Boolean as input\. Returns the same data type as *expression*\. The Boolean equivalent of the MIN function is [BOOL\_AND function](r_BOOL_AND.md), and the Boolean equivalent of MAX is [BOOL\_OR function](r_BOOL_OR.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MIN.md
|
ba80b082abdf-0
|
Find the lowest price paid from all sales:
```
select min(pricepaid) from sales;
min
-------
20.00
(1 row)
```
Find the lowest price paid per ticket from all sales:
```
select min(pricepaid/qtysold) as min_ticket_price
from sales;
min_ticket_price
------------------
20.00000000
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MIN.md
|
2c7f2af210e8-0
|
STL system views are generated from Amazon Redshift log files to provide a history of the system\.
These files reside on every node in the data warehouse cluster\. The STL views take the information from the logs and format it into usable views for system administrators\.
To manage disk space, the STL log views only retain approximately two to five days of log history, depending on log usage and available disk space\. If you want to retain the log data, you will need to periodically copy it to other tables or unload it to Amazon S3\.
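A hedged sketch of both retention options follows (the history table, bucket, and role names are placeholders):
```
-- Copy the log data into a persistent table...
create table stl_query_history as select * from stl_query;

-- ...or unload it to Amazon S3.
unload ('select * from stl_query')
to 's3://mybucket/stl_query_'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole';
```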
**Topics**
+ [STL\_AGGR](r_STL_AGGR.md)
+ [STL\_ALERT\_EVENT\_LOG](r_STL_ALERT_EVENT_LOG.md)
+ [STL\_ANALYZE](r_STL_ANALYZE.md)
+ [STL\_ANALYZE\_COMPRESSION](r_STL_ANALYZE_COMPRESSION.md)
+ [STL\_BCAST](r_STL_BCAST.md)
+ [STL\_COMMIT\_STATS](r_STL_COMMIT_STATS.md)
+ [STL\_CONNECTION\_LOG](r_STL_CONNECTION_LOG.md)
+ [STL\_DDLTEXT](r_STL_DDLTEXT.md)
+ [STL\_DELETE](r_STL_DELETE.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_STL_tables.md
|
2c7f2af210e8-1
|
+ [STL\_DELETE](r_STL_DELETE.md)
+ [STL\_DISK\_FULL\_DIAG](r_STL_DISK_FULL_DIAG.md)
+ [STL\_DIST](r_STL_DIST.md)
+ [STL\_ERROR](r_STL_ERROR.md)
+ [STL\_EXPLAIN](r_STL_EXPLAIN.md)
+ [STL\_FILE\_SCAN](r_STL_FILE_SCAN.md)
+ [STL\_HASH](r_STL_HASH.md)
+ [STL\_HASHJOIN](r_STL_HASHJOIN.md)
+ [STL\_INSERT](r_STL_INSERT.md)
+ [STL\_LIMIT](r_STL_LIMIT.md)
+ [STL\_LOAD\_COMMITS](r_STL_LOAD_COMMITS.md)
+ [STL\_LOAD\_ERRORS](r_STL_LOAD_ERRORS.md)
+ [STL\_LOADERROR\_DETAIL](r_STL_LOADERROR_DETAIL.md)
+ [STL\_MERGE](r_STL_MERGE.md)
+ [STL\_MERGEJOIN](r_STL_MERGEJOIN.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_STL_tables.md
|
2c7f2af210e8-2
|
+ [STL\_MERGEJOIN](r_STL_MERGEJOIN.md)
+ [STL\_MV\_STATE](r_STL_MV_STATE.md)
+ [STL\_NESTLOOP](r_STL_NESTLOOP.md)
+ [STL\_PARSE](r_STL_PARSE.md)
+ [STL\_PLAN\_INFO](r_STL_PLAN_INFO.md)
+ [STL\_PROJECT](r_STL_PROJECT.md)
+ [STL\_QUERY](r_STL_QUERY.md)
+ [STL\_QUERY\_METRICS](r_STL_QUERY_METRICS.md)
+ [STL\_QUERYTEXT](r_STL_QUERYTEXT.md)
+ [STL\_REPLACEMENTS](r_STL_REPLACEMENTS.md)
+ [STL\_RESTARTED\_SESSIONS](r_STL_RESTARTED_SESSIONS.md)
+ [STL\_RETURN](r_STL_RETURN.md)
+ [STL\_S3CLIENT](r_STL_S3CLIENT.md)
+ [STL\_S3CLIENT\_ERROR](r_STL_S3CLIENT_ERROR.md)
+ [STL\_SAVE](r_STL_SAVE.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_STL_tables.md
|
2c7f2af210e8-3
|
+ [STL\_SAVE](r_STL_SAVE.md)
+ [STL\_SCAN](r_STL_SCAN.md)
+ [STL\_SCHEMA\_QUOTA\_VIOLATIONS](r_STL_SCHEMA_QUOTA_VIOLATIONS.md)
+ [STL\_SESSIONS](r_STL_SESSIONS.md)
+ [STL\_SORT](r_STL_SORT.md)
+ [STL\_SSHCLIENT\_ERROR](r_STL_SSHCLIENT_ERROR.md)
+ [STL\_STREAM\_SEGS](r_STL_STREAM_SEGS.md)
+ [STL\_TR\_CONFLICT](r_STL_TR_CONFLICT.md)
+ [STL\_UNDONE](r_STL_UNDONE.md)
+ [STL\_UNIQUE](r_STL_UNIQUE.md)
+ [STL\_UNLOAD\_LOG](r_STL_UNLOAD_LOG.md)
+ [STL\_USAGE\_CONTROL](r_STL_USAGE_CONTROL.md)
+ [STL\_USERLOG](r_STL_USERLOG.md)
+ [STL\_UTILITYTEXT](r_STL_UTILITYTEXT.md)
+ [STL\_VACUUM](r_STL_VACUUM.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_STL_tables.md
|
2c7f2af210e8-4
|
+ [STL\_VACUUM](r_STL_VACUUM.md)
+ [STL\_WINDOW](r_STL_WINDOW.md)
+ [STL\_WLM\_ERROR](r_STL_WLM_ERROR.md)
+ [STL\_WLM\_RULE\_ACTION](r_STL_WLM_RULE_ACTION.md)
+ [STL\_WLM\_QUERY](r_STL_WLM_QUERY.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_intro_STL_tables.md
|
066ff367858f-0
|
These PostgreSQL features are not supported in Amazon Redshift\.
**Important**
Do not assume that the semantics of elements that Amazon Redshift and PostgreSQL have in common are identical\. Make sure to consult the *Amazon Redshift Developer Guide* [SQL commands](c_SQL_commands.md) to understand the often subtle differences\.
+ Only the 8\.x version of the PostgreSQL query tool *psql* is supported\.
+ Table partitioning \(range and list partitioning\)
+ Tablespaces
+ Constraints
+ Unique
+ Foreign key
+ Primary key
+ Check constraints
+ Exclusion constraints
Unique, primary key, and foreign key constraints are permitted, but they are informational only\. They are not enforced by the system, but they are used by the query planner\.
+ Database roles
+ Inheritance
+ Postgres system columns
Amazon Redshift SQL does not implicitly define system columns\. However, the PostgreSQL system column names cannot be used as names of user\-defined columns\. See [https://www\.postgresql\.org/docs/8\.0/static/ddl\-system\-columns\.html](https://www.postgresql.org/docs/8.0/static/ddl-system-columns.html)
+ Indexes
+ NULLS clause in Window functions
+ Collations
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_unsupported-postgresql-features.md
|
066ff367858f-1
|
+ Indexes
+ NULLS clause in Window functions
+ Collations
Amazon Redshift does not support locale\-specific or user\-defined collation sequences\. See [Collation sequences](c_collation_sequences.md)\.
+ Value expressions
+ Subscripted expressions
+ Array constructors
+ Row constructors
+ Triggers
+ Management of External Data \(SQL/MED\)
+ Table functions
+ VALUES list used as constant tables
+ Recursive common table expressions
+ Sequences
+ Full text search
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_unsupported-postgresql-features.md
|
7f3047c2a946-0
|
You can create and manage database users using the Amazon Redshift SQL commands CREATE USER and ALTER USER, or you can configure your SQL client with custom Amazon Redshift JDBC or ODBC drivers that manage the process of creating database users and temporary passwords as part of the database logon process\.
The drivers authenticate database users based on AWS Identity and Access Management \(IAM\) authentication\. If you already manage user identities outside of AWS, you can use a SAML 2\.0\-compliant identity provider \(IdP\) to manage access to Amazon Redshift resources\. You use an IAM role to configure your IdP and AWS to permit your federated users to generate temporary database credentials and log on to Amazon Redshift databases\. For more information, see [Using IAM authentication to generate database user credentials](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html)\.
Amazon Redshift user accounts can only be created and dropped by a database superuser\. Users are authenticated when they log in to Amazon Redshift\. They can own databases and database objects \(for example, tables\) and can grant privileges on those objects to users, groups, and schemas to control who has access to which object\. Users with CREATE DATABASE rights can create databases and grant privileges to those databases\. Superusers have database ownership privileges for all databases\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Users.md
|
ff683a4a840e-0
|
Database user accounts are global across a data warehouse cluster \(and not per individual database\)\.
+ To create a user use the [CREATE USER](r_CREATE_USER.md) command\.
+ To create a superuser use the [CREATE USER](r_CREATE_USER.md) command with the CREATEUSER option\.
+ To remove an existing user, use the [DROP USER](r_DROP_USER.md) command\.
+ To make changes to a user account, such as changing a password, use the [ALTER USER](r_ALTER_USER.md) command\.
+ To view a list of users, query the PG\_USER catalog table:
```
select * from pg_user;
usename | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil | useconfig
------------+----------+-------------+----------+-----------+----------+----------+-----------
rdsdb | 1 | t | t | t | ******** | |
masteruser | 100 | t | t | f | ******** | |
dwuser | 101 | f | f | f | ******** | |
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Users.md
|
ff683a4a840e-1
|
dwuser | 101 | f | f | f | ******** | |
simpleuser | 102 | f | f | f | ******** | |
poweruser | 103 | f | t | f | ******** | |
dbuser | 104 | t | f | f | ******** | |
(6 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Users.md
|
6995d84841ed-0
|
The following example shows how to set up a federated query that references an Amazon Redshift database, an Aurora PostgreSQL database, and Amazon S3\. This example illustrates how federated queries work\. To run it in your own environment, change the setup to fit your environment\. For prerequisites for doing this, see [Getting started with using federated queries](getting-started-federated.md)\.
Create an external schema that references an Aurora PostgreSQL database\.
```
CREATE EXTERNAL SCHEMA apg
FROM POSTGRES
DATABASE 'database-1' SCHEMA 'myschema'
URI 'xxx.xx.x.xxx'
IAM_ROLE 'arn:aws:iam::123456789012:role/Redshift-SecretsManager-RO'
SECRET_ARN 'arn:aws:secretsmanager:us-west-2:123456789012:secret:federation/test/dataplane-apg-creds-YbVKQw';
```
Create another external schema that references Amazon S3, which uses Amazon Redshift Spectrum\. Also, grant permission to use the schema to `public`\.
```
CREATE EXTERNAL SCHEMA s3
FROM DATA CATALOG
DATABASE 'default' REGION 'us-west-2'
IAM_ROLE 'arn:aws:iam::123456789012:role/Redshift-S3';
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/federated_query_example.md
|
6995d84841ed-1
|
IAM_ROLE 'arn:aws:iam::123456789012:role/Redshift-S3';
GRANT USAGE ON SCHEMA s3 TO public;
```
Show the count of rows in the Amazon Redshift table\.
```
SELECT count(*) FROM public.lineitem;
count
----------
25075099
```
Show the count of rows in the Aurora PostgreSQL table\.
```
SELECT count(*) FROM apg.lineitem;
count
-------
11760
```
Show the count of rows in Amazon S3\.
```
SELECT count(*) FROM s3.lineitem_1t_part;
count
------------
6144008876
```
Create a view of the tables from Amazon Redshift, Aurora PostgreSQL, and Amazon S3\. This view is used to run your federated query\.
```
CREATE VIEW lineitem_all AS
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/federated_query_example.md
|
6995d84841ed-2
|
```
CREATE VIEW lineitem_all AS
SELECT l_orderkey,l_partkey,l_suppkey,l_linenumber,l_quantity,l_extendedprice,l_discount,l_tax,l_returnflag,l_linestatus,
l_shipdate::date,l_commitdate::date,l_receiptdate::date, l_shipinstruct ,l_shipmode,l_comment
FROM s3.lineitem_1t_part
UNION ALL SELECT * FROM public.lineitem
UNION ALL SELECT * FROM apg.lineitem
with no schema binding;
```
Show the count of rows in the view `lineitem_all` with a predicate to limit the results\.
```
SELECT count(*) from lineitem_all WHERE l_quantity = 10;
count
-----------
123373836
```
Find out how many sales of one item there were in January of each year\.
```
SELECT extract(year from l_shipdate) as year,
extract(month from l_shipdate) as month,
count(*) as orders
FROM lineitem_all
WHERE extract(month from l_shipdate) = 1
AND l_quantity < 2
GROUP BY 1,2
ORDER BY 1,2;
year | month | orders
------+-------+---------
1992 | 1 | 196019
1993 | 1 | 1582034
1994 | 1 | 1583181
1995 | 1 | 1583919
1996 | 1 | 1583622
1997 | 1 | 1586541
1998 | 1 | 1583198
2016 | 1 | 15542
2017 | 1 | 15414
2018 | 1 | 15527
2019 | 1 | 151
```
If any errors occur while loading data from a file, query the [STL\_LOAD\_ERRORS](r_STL_LOAD_ERRORS.md) table to identify the error and determine the possible explanation\. The following table lists all error codes that might occur during data loads:
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_Load_Error_Reference.html)
Amazon Redshift is a fast, scalable data warehouse that makes it simple and cost\-effective to analyze all your data using standard SQL with your existing business intelligence \(BI\) tools\. Amazon Redshift offers fast performance in a low\-cost cloud data warehouse\. It uses sophisticated query optimization, accelerated cache, columnar storage on high\-performance local disks, and massively parallel query execution\.
In the following sections, you can find a framework for building a proof of concept with Amazon Redshift\. The framework helps you to use architectural best practices for designing and operating a secure, high\-performing, and cost\-effective data warehouse\. This guidance is based on reviewing designs of thousands of customer architectures across a wide variety of business types and use cases\. We have compiled customer experiences to develop this set of best practices to help you develop criteria for evaluating your data warehouse workload\.
Conducting a proof of concept is a three\-step process:
1. Identify the goals of the proof of concept – you can work backward from your business requirements and success criteria, and translate them into a technical proof of concept project plan\.
1. Set up the proof of concept environment – most of the setup process takes only a few clicks to create your resources\. Within minutes, you can have a data warehouse environment ready with data loaded\.
1. Execute the proof of concept project plan to ensure that the goals are met\.
In the following sections, we go into the details of each step\.
Identifying the goals of the proof of concept plays a critical role in determining what you want to measure as part of the evaluation process\. The evaluation criteria should include the current scaling challenges, enhancements to improve your customer's experience of the data warehouse, and methods of addressing your current operational pain points\. You can use the following questions to identify the goals of the proof of concept:
+ What are your goals for scaling your data warehouse?
+ What are the specific service\-level agreements whose terms you want to improve?
+ What new datasets do you need to include in your data warehouse?
+ What are the business\-critical SQL queries that you need to test and measure? Make sure to include the full range of SQL complexities, such as the different types of queries \(for example, select, insert, update, and delete\)\.
+ What are the general types of workloads you plan to test? Examples might include extract\-transform\-load \(ETL\) workloads, reporting queries, and batch extracts\.
After you have answered these questions, you should be able to establish SMART goals and success criteria for building your proof of concept\. For information about setting goals, see [SMART criteria](https://en.wikipedia.org/wiki/SMART_criteria) in *Wikipedia*\.
Because Amazon Redshift eliminates the hardware provisioning, networking, and software installation required for an on\-premises data warehouse, trying it with your own dataset has never been easier\. Many of the sizing decisions and estimations that used to be required are now simply a click away\. You can flexibly resize your cluster or adjust the ratio of storage versus compute\.
Broadly, setting up the Amazon Redshift proof of concept environment is a two\-step process\. It involves the launching of a data warehouse and then the conversion of the schema and datasets for evaluation\.
You can choose the node type and number of nodes using the Amazon Redshift console\. We recommend that you also test resizing the cluster as part of your proof of concept plan\. To get the initial sizing for your cluster, take the following steps:
1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.
1. On the navigation pane, choose **Create cluster** to open the configuration page\.
1. For **Cluster identifier**, enter a name for your cluster\.
1. Choose one of the following methods to size your cluster:
**Note**
The following step describes an Amazon Redshift console that is running in an AWS Region that supports RA3 node types\. For a list of AWS Regions that support RA3 node types, see [Overview of RA3 node types](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-ra3-node-types) in the *Amazon Redshift Cluster Management Guide*\.
+ If your AWS Region supports RA3 node types, choose either **Production** or **Free trial** to answer the question **What are you planning to use this cluster for?**
If your organization is eligible, you might be able to create a cluster under the Amazon Redshift free trial program\. To do this, choose **Free trial** to create a configuration with the dc2\.large node type\. For more information about choosing a free trial, see [Amazon Redshift free trial](http://aws.amazon.com/redshift/free-trial/)\.
+ If you don't know how large to size your cluster, choose **Help me choose**\. Doing this starts a sizing calculator that asks you questions about the size and query characteristics of the data that you plan to store in your data warehouse\.
+ If you know the required size of your cluster \(that is, the node type and number of nodes\), choose **I'll choose**\. Then choose the **Node type** and number of **Nodes** to size your cluster for the proof of concept\.
1. After you enter all required cluster properties, choose **Create cluster** to launch your data warehouse\.
For more details about creating clusters with the Amazon Redshift console, see [Creating a cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-console.html#create-cluster) in the *Amazon Redshift Cluster Management Guide*\.
If you don't have an existing data warehouse, skip this section and see [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/)\. *Amazon Redshift Getting Started* provides a tutorial to create a cluster and examples of setting up data in Amazon Redshift\.
When migrating from your existing data warehouse, you can convert schema, code, and data using the AWS Schema Conversion Tool and the AWS Database Migration Service\. Your choice of tools depends on the source of your data and optional ongoing replications\. For more information, see [What Is the AWS Schema Conversion Tool?](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) in the *AWS Schema Conversion Tool User Guide* and [What Is AWS Database Migration Service?](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) in the *AWS Database Migration Service User Guide*\. The following can help you set up your data in Amazon Redshift:
+ [Migrate Your Data Warehouse to Amazon Redshift Using the AWS Schema Conversion Tool](https://aws.amazon.com/blogs/database/how-to-migrate-your-data-warehouse-to-amazon-redshift-using-the-aws-schema-conversion-tool-data-extractors/) – this blog post provides an overview on how you can use the AWS SCT data extractors to migrate your existing data warehouse to Amazon Redshift\. The AWS SCT tool can migrate your data from many legacy platforms \(such as Oracle, Greenplum, Netezza, Teradata, Microsoft SQL Server, or Vertica\)\.
+ Optionally, you can also use the AWS Database Migration Service for ongoing replications of changed data from the source\. For more information, see [Using an Amazon Redshift Database as a Target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html) in the *AWS Database Migration Service User Guide*\.
Amazon Redshift is a relational database management system \(RDBMS\)\. As such, it can run many types of data models including star schemas, snowflake schemas, data vault models, and simple, flat, or normalized tables\. After setting up your schemas in Amazon Redshift, you can take advantage of massively parallel processing and columnar data storage for fast analytical queries out of the box\. For information about types of schemas, see [star schema](https://en.wikipedia.org/wiki/Star_schema), [snowflake schema](https://en.wikipedia.org/wiki/Snowflake_schema), and [data vault modeling](https://en.wikipedia.org/wiki/Data_vault_modeling) in *Wikipedia*\.
Make sure that a complete evaluation meets all your data warehouse needs\. Consider including the following items in your success criteria:
+ **Data load time** – using the `COPY` command is a common way to test how long it takes to load data\. For more information, see [Amazon Redshift best practices for loading data](c_loading-data-best-practices.md)\.
+ **Throughput of the cluster** – measuring queries per hour is a common way to determine throughput\. To do so, set up a test to run typical queries for your workload\.
+ **Data security** – you can easily encrypt data at rest and in transit with Amazon Redshift\. You also have a number of options for managing keys\. Amazon Redshift also supports single sign\-on \(SSO\) integration\. Amazon Redshift pricing includes built\-in security, data compression, backup storage, and data transfer\.
+ **Third\-party tools integration** – you can use either a JDBC or ODBC connection to integrate with business intelligence and other external tools\.
+ **Interoperability with other AWS services** – Amazon Redshift integrates with other AWS services, such as Amazon EMR, Amazon QuickSight, AWS Glue, Amazon S3, and Amazon Kinesis\. You can use this integration when setting up and managing your data warehouse\.
+ **Backups and snapshots** – backups and snapshots are created automatically\. You can also create a point\-in\-time snapshot at any time or on a schedule\. Try using a snapshot and creating a second cluster as part of your evaluation\. Evaluate if your development and testing organizations can use the cluster\.
+ **Resizing** – your evaluation should include increasing the number or types of Amazon Redshift nodes\. Verify that workload throughput before and after a resize can handle the variability in your workload volume\. For more information, see [Resizing clusters in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-operations.html#rs-resize-tutorial) in the *Amazon Redshift Cluster Management Guide*\.
+ **Concurrency scaling** – this feature helps you handle variability of traffic volume in your data warehouse\. With concurrency scaling, you can support virtually unlimited concurrent users and concurrent queries, with consistently fast query performance\. For more information, see [Working with concurrency scaling](concurrency-scaling.md)\.
+ **Automatic workload management \(WLM\)** – prioritize your business critical queries over other queries by using automatic WLM\. Try setting up queues based on your workloads \(for example, a queue for ETL and a queue for reporting\)\. Then enable automatic WLM to allocate the concurrency and memory resources dynamically\. For more information, see [Implementing automatic WLM](automatic-wlm.md)\.
+ **Amazon Redshift Advisor** – the Advisor develops customized recommendations to increase performance and optimize costs by analyzing your workload and usage metrics for your cluster\. Sign in to the Amazon Redshift console to view Advisor recommendations\. For more information, see [Working with recommendations from Amazon Redshift Advisor](advisor.md)\.
+ **Table design** – Amazon Redshift provides great performance out of the box for most workloads\. To get even better performance for your queries, check the Amazon Redshift Advisor in the Amazon Redshift console\. Advisor shows recommendations on setting optimal sort and distribution keys for your tables\. Because Advisor needs time to assess your workload, it usually takes a couple of days to get these recommendations\. For more information about designing tables, see [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md)\.
+ **Support** – we strongly recommend that you evaluate AWS Support as part of your evaluation\. Also, make sure to talk to your account manager about your proof of concept\. AWS can help with technical guidance and credits for the proof of concept if you qualify\. If you don't find the help you're looking for, you can talk directly to the Amazon Redshift team\. For help, submit the form at [Request support for your Amazon Redshift proof\-of\-concept](https://pages.awscloud.com/redshift-proof-of-concept-request.html)\.
+ **Lake house integration** – with built\-in integration, try using the out\-of\-box Amazon Redshift Spectrum feature\. With Redshift Spectrum, you can extend the data warehouse into your data lake and run queries against petabytes of data in Amazon S3 using your existing cluster\. For more information, see [Querying external data using Amazon Redshift Spectrum](c-using-spectrum.md)\.
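One way to quantify the cluster throughput criterion above is to bucket completed queries by hour using the STL\_QUERY system table\. The following is a sketch only; the date range is a placeholder that you replace with your own evaluation window\.

```
SELECT date_trunc('hour', starttime) AS hour,
       count(*) AS queries_completed
FROM stl_query
WHERE starttime >= '2018-04-15 00:00'
  AND endtime < '2018-04-16 00:00'
GROUP BY 1
ORDER BY 1;
```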
Some of the following techniques for creating query benchmarks might help support your Amazon Redshift evaluation:
+ Assemble a list of queries for each runtime category\. Having a sufficient number \(for example, 30 per category\) helps ensure that your evaluation reflects a real\-world data warehouse implementation\. Add a unique identifier to associate each query that you include in your evaluation with one of the categories you establish for your evaluation\. You can then use these unique identifiers to determine throughput from the system tables\.
You can also create a query group to organize your evaluation queries\. For example, if you have established a "Reporting" category for your evaluation, you might create a coding system to tag your evaluation queries with the word "Report\." You can then identify individual queries within reporting as R1, R2, and so on\. The following example demonstrates this approach\.
```
SELECT 'Reporting' AS query_category, 'R1' as query_id, * FROM customers;
```
```
SELECT query, datediff(seconds, starttime, endtime)
FROM stl_query
WHERE
querytxt LIKE '%Reporting%'
and starttime >= '2018-04-15 00:00'
and endtime < '2018-04-15 23:59';
```
When you have associated a query with an evaluation category, you can use a unique identifier to determine throughput from the system tables for each category\.
+ Test throughput with historical user or ETL queries that have a variety of runtimes in your existing data warehouse\. You might use a load testing utility, such as the open\-source [JMeter](https://jmeter.apache.org/) or a custom utility\. If so, make sure that your utility does the following:
+ It can take the network transmission time into account\.
+ It evaluates execution time based on throughput of the internal system tables\. For information about how to do this, see [Analyzing the query summary](c-analyzing-the-query-summary.md)\.
+ Identify all the various permutations that you plan to test during your evaluation\. The following list provides some common variables:
+ Cluster size
+ Node type
+ Load testing duration
+ Concurrency settings
+ Reduce the cost of your proof of concept by pausing your cluster during off\-hours and weekends\. When a cluster is paused, on\-demand compute billing is suspended\. When you resume the cluster to run tests, per\-second billing resumes as well\. You can also create a schedule to pause and resume your cluster automatically\. For more information, see [Pausing and resuming clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-operations.html#rs-mgmt-pause-resume-cluster) in the *Amazon Redshift Cluster Management Guide*\.
At this stage, you're ready to execute on your project plan and evaluate results\.
To help your Amazon Redshift evaluation, see the following:
+ [Service highlights and pricing](https://aws.amazon.com/redshift/) – this product detail page provides the Amazon Redshift value proposition, service highlights, and pricing\.
+ [Amazon Redshift Getting Started](http://docs.aws.amazon.com/redshift/latest/gsg/) – this guide provides a tutorial of using Amazon Redshift to create a sample cluster and work with sample data\.
+ [Getting started with Amazon Redshift Spectrum](c-getting-started-using-spectrum.md) – in this tutorial, you learn how to use Redshift Spectrum to query data directly from files on Amazon S3\.
+ [Amazon Redshift management overview](http://docs.aws.amazon.com/redshift/latest/mgmt/overview.html) – this topic in the *Amazon Redshift Cluster Management Guide* provides an overview of Amazon Redshift\.
+ Optimize Amazon Redshift for performance with BI tools – consider integration with tools such as [Tableau](https://www.tableau.com/learn/whitepapers/optimize-tableau-redshift-better-deployment), [Power BI](https://aws.amazon.com/blogs/big-data/integrate-power-bi-with-amazon-redshift-for-insights-and-analytics/), and others\.
+ [Amazon Redshift Advisor recommendations](advisor-recommendations.md) – contains explanations and details for each Advisor recommendation\.
+ [What's new in Amazon Redshift](https://aws.amazon.com/redshift/whats-new/) – announcements that help you keep track of new features and enhancements\.
+ [Improved speed and scalability](https://aws.amazon.com/blogs/big-data/improved-speed-and-scalability-in-amazon-redshift/) – this blog post summarizes recent Amazon Redshift improvements\.
Make sure to talk to your account manager to let them know about your proof of concept\. AWS can help with technical guidance and credits for the proof of concept if you qualify\. If you don't find the help you are looking for, you can talk directly to the Amazon Redshift team\. For help, submit the form at [Request support for your Amazon Redshift proof\-of\-concept](https://pages.awscloud.com/redshift-proof-of-concept-request.html)\.
This command is deprecated\.
The AVG function returns the average \(arithmetic mean\) of the input expression values\. The AVG function works with numeric values and ignores NULL values\.
```
AVG ( [ DISTINCT | ALL ] expression )
```
*expression*
The target column or expression that the function operates on\.
DISTINCT \| ALL
With the argument DISTINCT, the function eliminates all duplicate values from the specified expression before calculating the average\. With the argument ALL, the function retains all duplicate values from the expression for calculating the average\. ALL is the default\.
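As a sketch of the difference, suppose a hypothetical table `t` has a DECIMAL\(5,1\) column `col` containing the values 2\.0, 2\.0, and 5\.0:

```
-- ALL (the default) averages every value: (2.0 + 2.0 + 5.0) / 3 = 3.0
select avg(col) from t;

-- DISTINCT removes duplicates first: (2.0 + 5.0) / 2 = 3.5
select avg(distinct col) from t;
```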
The argument types supported by the AVG function are SMALLINT, INTEGER, BIGINT, NUMERIC, DECIMAL, REAL, and DOUBLE PRECISION\.
The return types supported by the AVG function are:
+ NUMERIC for any integer type argument
+ DOUBLE PRECISION for a floating point argument
The default precision for an AVG function result with a 64\-bit NUMERIC or DECIMAL argument is 19\. The default precision for a result with a 128\-bit NUMERIC or DECIMAL argument is 38\. The scale of the result is the same as the scale of the argument\. For example, an AVG of a DEC\(5,2\) column returns a DEC\(19,2\) data type\.
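The following sketch illustrates the scale rule with a hypothetical DECIMAL\(5,2\) column\. The AVG result is a DECIMAL\(19,2\) value: the default precision of 19, with the scale of 2 preserved\.

```
create temp table pricing (price decimal(5,2));
insert into pricing values (10.50), (20.25);

-- The result type is DECIMAL(19,2)
select avg(price) from pricing;
```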
Find the average quantity sold per transaction from the SALES table:
```
select avg(qtysold) from sales;
avg
-----
2
(1 row)
```
Find the average total price listed for all listings:
```
select avg(numtickets*priceperticket) as avg_total_price from listing;
avg_total_price
-----------------
3034.41
(1 row)
```
Find the average price paid, grouped by month in descending order:
```
select avg(pricepaid) as avg_price, month
from sales, date
where sales.dateid = date.dateid
group by month
order by avg_price desc;
avg_price | month
-----------+-------
659.34 | MAR
655.06 | APR
645.82 | JAN
643.10 | MAY
642.72 | JUN
642.37 | SEP
640.72 | OCT
640.57 | DEC
635.34 | JUL
635.24 | FEB
634.24 | NOV
632.78 | AUG
(12 rows)
```
Amazon Redshift routes a submitted SQL query through the parser and optimizer to develop a query plan\. The execution engine then translates the query plan into code and sends that code to the compute nodes for execution\.
**Topics**
+ [Query planning and execution workflow](c-query-planning.md)
+ [Query plan](c-the-query-plan.md)
+ [Reviewing query plan steps](reviewing-query-plan-steps.md)
+ [Factors affecting query performance](c-query-performance.md)
This section explains the conventions that are used to write the syntax for the SQL expressions, commands, and functions described in the SQL reference section\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c_SQL_reference_conventions.html)
PG\_ATTRIBUTE\_INFO is an Amazon Redshift system view built on the PostgreSQL catalog table PG\_ATTRIBUTE and the internal catalog table PG\_ATTRIBUTE\_ACL\. PG\_ATTRIBUTE\_INFO includes details about columns of a table or view, including column access control lists, if any\.
PG\_ATTRIBUTE\_INFO shows the following column in addition to the columns in PG\_ATTRIBUTE\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_PG_ATTRIBUTE_INFO.html)
Concatenates two strings on either side of the \|\| symbol and returns the concatenated string\.
Similar to [CONCAT](r_CONCAT.md)\.
**Note**
For both the CONCAT function and the concatenation operator, if one or both strings is null, the result of the concatenation is null\.
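A minimal sketch of this null behavior follows; the literal values are arbitrary\.

```
select 'abc' || 'def';        -- returns 'abcdef'
select 'abc' || null;         -- returns NULL
select concat('abc', null);   -- also returns NULL
```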
```
string1 || string2
```
*string1*, *string2*
Both arguments can be fixed\-length or variable\-length character strings or expressions\.
The \|\| operator returns a string\. The type of string is the same as the input arguments\.
The following example concatenates the FIRSTNAME and LASTNAME fields from the USERS table:
```
select firstname || ' ' || lastname
from users
order by 1
limit 10;
?column?
-----------------
Aaron Banks
Aaron Booth
Aaron Browning
Aaron Burnett
Aaron Casey
Aaron Cash
Aaron Castro
Aaron Dickerson
Aaron Dixon
Aaron Dotson
(10 rows)
```
To concatenate columns that might contain nulls, use the [NVL](r_NVL_function.md) expression\. The following example uses NVL to return a 0 whenever NULL is encountered\.
```
select venuename || ' seats ' || nvl(venueseats, 0) as seating
from venue where venuestate = 'NV' or venuestate = 'NC'
order by 1
limit 10;
seating
-----------------------------------
Ballys Hotel seats 0
Bank of America Stadium seats 73298
Bellagio Hotel seats 0
Caesars Palace seats 0
Harrahs Hotel seats 0
Hilton Hotel seats 0
Luxor Hotel seats 0
Mandalay Bay Hotel seats 0
Mirage Hotel seats 0
New York New York seats 0
```
The CHR function returns the character that matches the ASCII code point value specified by the input parameter\.
```
CHR(number)
```
*number*
The input parameter is an integer that represents an ASCII code point value\.
The CHR function returns a CHAR string if an ASCII character matches the input value\. If the input number has no ASCII match, the function returns null\.
The following example returns event names that begin with a capital A \(ASCII code point 65\):
```
select distinct eventname from event
where substring(eventname, 1, 1)=chr(65);
eventname
---------------------------
Adriana Lecouvreur
A Man For All Seasons
A Bronx Tale
A Christmas Carol
Allman Brothers Band
...
```
-----
*****Copyright © 2020 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.*****
-----
Amazon's trademarks and trade dress may not be used in
connection with any product or service that is not Amazon's,
in any manner that is likely to cause confusion among customers,
or in any manner that disparages or discredits Amazon. All other
trademarks not owned by Amazon are the property of their respective
owners, who may or may not be affiliated with, connected to, or
sponsored by Amazon.
-----
+ [Amazon Redshift system overview](welcome.md)
+ [Are you a first-time Amazon Redshift user?](c-first-time-user.md)
+ [Are you a database developer?](c-who-should-use-this-guide.md)
+ [Prerequisites](c-dev-guide-prereqs.md)
+ [System and architecture overview](c_redshift_system_overview.md)
+ [Data warehouse system architecture](c_high_level_system_architecture.md)
+ [Performance](c_challenges_achieving_high_performance_queries.md)
+ [Columnar storage](c_columnar_storage_disk_mem_mgmnt.md)
+ [Workload management](c_workload_mngmt_classification.md)
+ [Using Amazon Redshift with other services](using-redshift-with-other-services.md)
+ [Getting started using databases](c_intro_to_admin.md)
+ [Step 1: Create a database](t_creating_database.md)
+ [Step 2: Create a database user](t_adding_redshift_user_cmd.md)
+ [Delete a database user](t_deleting_redshift_user_cmd.md)
+ [Step 3: Create a database table](t_creating_table.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/index.md
|
9bfab894e9d0-1
|
+ [Step 3: Create a database table](t_creating_table.md)
+ [Insert data rows into a table](t_inserting_data_into_table.md)
+ [Select data from a table](t_selecting_data.md)
+ [Step 4: Load sample data](cm-dev-t-load-sample-data.md)
+ [Step 5: Query the system tables](t_querying_redshift_system_tables.md)
+ [Determine the process ID of a running query](determine_pid.md)
+ [Step 6: Cancel a query](cancel_query.md)
+ [Step 7: Clean up your resources](cm-dev-t-clean-up-resources.md)
+ [Amazon Redshift best practices](best-practices.md)
+ [Conducting a proof of concept for Amazon Redshift](proof-of-concept-playbook.md)
+ [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md)
+ [Take the tuning table design tutorial](c_best-practices-tutorial-tuning-tables.md)
+ [Choose the best sort key](c_best-practices-sort-key.md)
+ [Choose the best distribution style](c_best-practices-best-dist-key.md)
+ [Let COPY choose compression encodings](c_best-practices-use-auto-compression.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/index.md
|
9bfab894e9d0-2
|
+ [Let COPY choose compression encodings](c_best-practices-use-auto-compression.md)
+ [Define primary key and foreign key constraints](c_best-practices-defining-constraints.md)
+ [Use the smallest possible column size](c_best-practices-smallest-column-size.md)
+ [Use date/time data types for date columns](c_best-practices-timestamp-date-columns.md)
+ [Amazon Redshift best practices for loading data](c_loading-data-best-practices.md)
+ [Take the loading data tutorial](c_best-practices-loading-take-loading-data-tutorial.md)
+ [Take the tuning table design tutorial](c_best-practices-loading-take-table-design-tutorial.md)
+ [Use a COPY command to load data](c_best-practices-use-copy.md)
+ [Use a single COPY command to load from multiple files](c_best-practices-single-copy-command.md)
+ [Split your load data into multiple files](c_best-practices-use-multiple-files.md)
+ [Compress your data files](c_best-practices-compress-data-files.md)
+ [Use a manifest file](best-practices-preventing-load-data-errors.md)
+ [Verify data files before and after a load](c_best-practices-verifying-data-files.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/index.md
|
9bfab894e9d0-3
|
+ [Verify data files before and after a load](c_best-practices-verifying-data-files.md)
+ [Use a multi-row insert](c_best-practices-multi-row-inserts.md)
+ [Use a bulk insert](c_best-practices-bulk-inserts.md)
+ [Load data in sort key order](c_best-practices-sort-key-order.md)
+ [Load data in sequential blocks](c_best-practices-load-data-in-sequential-blocks.md)
+ [Use time-series tables](c_best-practices-time-series-tables.md)
+ [Use a staging table to perform a merge (upsert)](c_best-practices-upsert.md)
+ [Schedule around maintenance windows](c_best-practices-avoid-maintenance.md)
+ [Amazon Redshift best practices for designing queries](c_designing-queries-best-practices.md)
+ [Working with recommendations from Amazon Redshift Advisor](advisor.md)
+ [Viewing Amazon Redshift Advisor recommendations on the console](access-advisor.md)
+ [Amazon Redshift Advisor recommendations](advisor-recommendations.md)
+ [Tutorials for Amazon Redshift](tutorials-redshift.md)
+ [Designing tables](t_Creating_tables.md)
+ [Choosing a column compression type](t_Compressing_data_on_disk.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/index.md
|
9bfab894e9d0-4
|
+ [Choosing a column compression type](t_Compressing_data_on_disk.md)
+ [Compression encodings](c_Compression_encodings.md)
+ [Raw encoding](c_Raw_encoding.md)
+ [AZ64 encoding](az64-encoding.md)
+ [Byte-dictionary encoding](c_Byte_dictionary_encoding.md)
+ [Delta encoding](c_Delta_encoding.md)
+ [LZO encoding](lzo-encoding.md)
+ [Mostly encoding](c_MostlyN_encoding.md)
+ [Runlength encoding](c_Runlength_encoding.md)
+ [Text255 and Text32k encodings](c_Text255_encoding.md)
+ [Zstandard encoding](zstd-encoding.md)
+ [Testing compression encodings](t_Verifying_data_compression.md)
+ [Example: Choosing compression encodings for the CUSTOMER table](Examples__compression_encodings_in_CREATE_TABLE_statements.md)
+ [Choosing a data distribution style](t_Distributing_data.md)
+ [Distribution styles](c_choosing_dist_sort.md)
+ [Viewing distribution styles](viewing-distribution-styles.md)
+ [Evaluating query patterns](t_evaluating_query_patterns.md)
+ [Designating distribution styles](t_designating_distribution_styles.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/index.md
|
9bfab894e9d0-5
|
+ [Designating distribution styles](t_designating_distribution_styles.md)
+ [Evaluating the query plan](c_data_redistribution.md)
+ [Query plan example](t_explain_plan_example.md)
+ [Distribution examples](c_Distribution_examples.md)
+ [Choosing sort keys](t_Sorting_data.md)
+ [Comparing sort styles](t_Sorting_data-compare-sort-styles.md)
+ [Defining constraints](t_Defining_constraints.md)
+ [Analyzing table design](c_analyzing-table-design.md)
+ [Tutorial: Tuning table design](tutorial-tuning-tables.md)
+ [Step 1: Create a test data set](tutorial-tuning-tables-create-test-data.md)
+ [Step 2: Test system performance to establish a baseline](tutorial-tuning-tables-test-performance.md)
+ [Step 3: Select sort keys](tutorial-tuning-tables-sort-keys.md)
+ [Step 4: Select distribution styles](tutorial-tuning-tables-distribution.md)
+ [Step 5: Review compression encodings](tutorial-tuning-tables-compression.md)
+ [Step 6: Recreate the test data set](tutorial-tuning-tables-recreate-test-data.md)
+ [Step 7: Retest system performance after tuning](tutorial-tuning-tables-retest.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/index.md
|
9bfab894e9d0-6
|
+ [Step 7: Retest system performance after tuning](tutorial-tuning-tables-retest.md)
+ [Step 8: Evaluate the results](tutorial-tuning-tables-evaluate.md)
+ [Step 9: Clean up your resources](tutorial-tuning-tables-clean-up.md)
+ [Summary](tutorial-tuning-tables-summary.md)
+ [Loading data](t_Loading_data.md)
+ [Using a COPY command to load data](t_Loading_tables_with_the_COPY_command.md)
+ [Credentials and access permissions](loading-data-access-permissions.md)
+ [Preparing your input data](t_preparing-input-data.md)
+ [Loading data from Amazon S3](t_Loading-data-from-S3.md)
+ [Splitting your data into multiple files](t_splitting-data-files.md)
+ [Uploading files to Amazon S3](t_uploading-data-to-S3.md)
+ [Managing data consistency](managing-data-consistency.md)
+ [Uploading encrypted data to Amazon S3](t_uploading-encrypted-data.md)
+ [Verifying that the correct files are present in your bucket](verifying-that-correct-files-are-present.md)
+ [Using the COPY command to load from Amazon S3](t_loading-tables-from-s3.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/index.md
|