The following example searches for the `@` character that begins a domain name and returns the starting position of the first match.
```
select email, regexp_instr(email,'@[^.]*')
from users
limit 5;
email | regexp_instr
--------------------------------------+-------------
Cum@accumsan.com | 4
lorem.ipsum@Vestibulumante.com | 12
non.justo.Proin@ametconsectetuer.edu | 16
non.ante.bibendum@porttitortellus.org | 18
eros@blanditatnisi.org | 5
(5 rows)
```
The following example searches for variants of the word `Center` and returns the starting position of the first match.
```
select venuename, regexp_instr(venuename,'[cC]ent(er|re)$')
from venue
where regexp_instr(venuename,'[cC]ent(er|re)$') > 0
limit 5;
venuename | regexp_instr
----------------------+-------------
The Home Depot Center | 16
Izod Center | 6
Wachovia Center | 10
Air Canada Centre | 12
United Center | 8
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_INSTR.md
The TRUNC function truncates a number and right-fills it with zeros from the position specified. This function also truncates a time stamp and returns a date.
```
TRUNC(number [ , integer ] |
timestamp )
```
*number*
Numeric data type to be truncated. SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, and DOUBLE PRECISION data types are supported.
*integer* (optional)
An integer that indicates the number of decimal places of precision, in either direction. If no integer is provided, the number is truncated as a whole number; if an integer is specified, the number is truncated to the specified decimal place.
*timestamp*
The function can also return the date from a time stamp. (To return a time stamp value with `00:00:00` as the time, cast the function result to a time stamp.)
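The cast described above can be sketched as follows; `SYSDATE` stands in for any timestamp expression:

```
select cast(trunc(sysdate) as timestamp);
```

This returns the current date as a time stamp whose time portion is `00:00:00`.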
TRUNC returns the same numeric data type as the first input argument. For time stamps, TRUNC returns a date.
Truncate the commission paid for a given sales transaction.
```
select commission, trunc(commission)
from sales where salesid=784;
commission | trunc
-----------+-------
111.15 | 111
(1 row)
```
Truncate the same commission value to the first decimal place.
```
select commission, trunc(commission,1)
from sales where salesid=784;
commission | trunc
-----------+-------
111.15 | 111.1
(1 row)
```
Truncate the commission with a negative value for the second argument; `111.15` is truncated to `110`.
```
select commission, trunc(commission,-1)
from sales where salesid=784;
commission | trunc
-----------+-------
111.15 | 110
(1 row)
```
Return the date portion from the result of the SYSDATE function (which returns a time stamp):
```
select sysdate;
timestamp
----------------------------
2011-07-21 10:32:38.248109
(1 row)
select trunc(sysdate);
trunc
------------
2011-07-21
(1 row)
```
Apply the TRUNC function to a TIMESTAMP column. The return type is a date.
```
select trunc(starttime) from event
order by eventid limit 1;
trunc
------------
2008-01-25
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNC.md
The LAG window function returns the values for a row at a given offset above (before) the current row in the partition.
```
LAG (value_expr [, offset ])
[ IGNORE NULLS | RESPECT NULLS ]
OVER ( [ PARTITION BY window_partition ] ORDER BY window_ordering )
```
*value_expr*
The target column or expression that the function operates on.
*offset*
An optional parameter that specifies the number of rows before the current row to return values for. The offset can be a constant integer or an expression that evaluates to an integer. If you do not specify an offset, Amazon Redshift uses `1` as the default value. An offset of `0` indicates the current row.
IGNORE NULLS
An optional specification that indicates that Amazon Redshift should skip null values in the determination of which row to use. Null values are included if IGNORE NULLS is not specified.
You can use an NVL or COALESCE expression to replace the null values with another value. For more information, see [NVL expression](r_NVL_function.md).
RESPECT NULLS
Indicates that Amazon Redshift should include null values in the determination of which row to use. RESPECT NULLS is the default behavior if you do not specify IGNORE NULLS.
OVER
Specifies the window partitioning and ordering. The OVER clause cannot contain a window frame specification.
PARTITION BY *window_partition*
An optional argument that sets the range of records for each group in the OVER clause.
ORDER BY *window_ordering*
Sorts the rows within each partition.
The LAG window function supports expressions that use any of the Amazon Redshift data types. The return type is the same as the type of the *value_expr*.
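As a sketch of the IGNORE NULLS behavior (assuming the target column can contain null values), the following query returns the most recent non-null price rather than a null when the immediately preceding row has no value:

```
select saletime, pricepaid,
lag(pricepaid, 1) ignore nulls
over (order by saletime) as prev_pricepaid
from sales;
```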
The following example shows the quantity of tickets sold to the buyer with a buyer ID of 3 and the time that buyer 3 bought the tickets. To compare each sale with the previous sale for buyer 3, the query returns the previous quantity sold for each sale. Because there is no purchase before 1/16/2008, the first previous quantity sold value is null:
```
select buyerid, saletime, qtysold,
lag(qtysold,1) over (order by buyerid, saletime) as prev_qtysold
from sales where buyerid = 3 order by buyerid, saletime;
buyerid | saletime | qtysold | prev_qtysold
---------+---------------------+---------+--------------
3 | 2008-01-16 01:06:09 | 1 |
3 | 2008-01-28 02:10:01 | 1 | 1
3 | 2008-03-12 10:39:53 | 1 | 1
3 | 2008-03-13 02:56:07 | 1 | 1
3 | 2008-03-29 08:21:39 | 2 | 1
3 | 2008-04-27 02:39:01 | 1 | 2
3 | 2008-08-16 07:04:37 | 2 | 1
3 | 2008-08-22 11:45:26 | 2 | 2
3 | 2008-09-12 09:11:25 | 1 | 2
3 | 2008-10-01 06:22:37 | 1 | 1
3 | 2008-10-20 01:55:51 | 2 | 1
3 | 2008-10-28 01:30:40 | 1 | 2
(12 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_LAG.md
Using Amazon Redshift Spectrum, you can efficiently query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables. Redshift Spectrum queries employ massive parallelism to run very fast against large datasets. Much of the processing occurs in the Redshift Spectrum layer, and most of the data remains in Amazon S3. Multiple clusters can concurrently query the same dataset in Amazon S3 without the need to make copies of the data for each cluster.
**Topics**
+ [Amazon Redshift Spectrum overview](#c-spectrum-overview)
+ [Getting started with Amazon Redshift Spectrum](c-getting-started-using-spectrum.md)
+ [IAM policies for Amazon Redshift Spectrum](c-spectrum-iam-policies.md)
+ [Using Redshift Spectrum with AWS Lake Formation](spectrum-lake-formation.md)
+ [Creating data files for queries in Amazon Redshift Spectrum](c-spectrum-data-files.md)
+ [Creating external schemas for Amazon Redshift Spectrum](c-spectrum-external-schemas.md)
+ [Creating external tables for Amazon Redshift Spectrum](c-spectrum-external-tables.md)
+ [Improving Amazon Redshift Spectrum query performance](c-spectrum-external-performance.md)
+ [Monitoring metrics in Amazon Redshift Spectrum](c-spectrum-metrics.md)
+ [Troubleshooting queries in Amazon Redshift Spectrum](c-spectrum-troubleshooting.md)
+ [Tutorial: Querying nested data with Amazon Redshift Spectrum](tutorial-query-nested-data.md)
Amazon Redshift Spectrum resides on dedicated Amazon Redshift servers that are independent of your cluster. Redshift Spectrum pushes many compute-intensive tasks, such as predicate filtering and aggregation, down to the Redshift Spectrum layer. Thus, Redshift Spectrum queries use much less of your cluster's processing capacity than other queries. Redshift Spectrum also scales intelligently. Based on the demands of your queries, Redshift Spectrum can potentially use thousands of instances to take advantage of massively parallel processing.
You create Redshift Spectrum tables by defining the structure for your files and registering them as tables in an external data catalog. The external data catalog can be AWS Glue, the data catalog that comes with Amazon Athena, or your own Apache Hive metastore. You can create and manage external tables either from Amazon Redshift using data definition language (DDL) commands or using any other tool that connects to the external data catalog. Changes to the external data catalog are immediately available to any of your Amazon Redshift clusters.
Optionally, you can partition the external tables on one or more columns. Defining partitions as part of the external table can improve performance. The improvement occurs because the Amazon Redshift query optimizer eliminates partitions that don't contain data for the query.
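As a hedged sketch of the DDL involved (the schema name, column list, and Amazon S3 location here are hypothetical, and an external schema must already exist), an external table partitioned on a date column might be defined as follows:

```
create external table spectrum_schema.sales_part(
salesid integer,
qtysold smallint,
pricepaid decimal(8,2))
partitioned by (saledate date)
row format delimited
fields terminated by '|'
stored as textfile
location 's3://mybucket/tickit/spectrum/sales_partition/';
```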
After your Redshift Spectrum tables have been defined, you can query and join the tables just as you do any other Amazon Redshift table. Redshift Spectrum doesn't support update operations on external tables. You can add Redshift Spectrum tables to multiple Amazon Redshift clusters and query the same data on Amazon S3 from any cluster in the same AWS Region. When you update Amazon S3 data files, the data is immediately available for query from any of your Amazon Redshift clusters.
The AWS Glue Data Catalog that you access might be encrypted to increase security. If the AWS Glue catalog is encrypted, you need the AWS Key Management Service (AWS KMS) key for AWS Glue to access the AWS Glue catalog. AWS Glue catalog encryption is not available in all AWS Regions. For a list of supported AWS Regions, see [Encryption and Secure Access for AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/encryption-glue-resources.html) in the *AWS Glue Developer Guide*. For more information about AWS Glue Data Catalog encryption, see [Encrypting Your AWS Glue Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/encrypt-glue-data-catalog.html) in the *AWS Glue Developer Guide*.
**Note**
You can't view details for Redshift Spectrum tables using the same resources that you use for standard Amazon Redshift tables, such as [PG_TABLE_DEF](r_PG_TABLE_DEF.md), [STV_TBL_PERM](r_STV_TBL_PERM.md), PG_CLASS, or information_schema. If your business intelligence or analytics tool doesn't recognize Redshift Spectrum external tables, configure your application to query [SVV_EXTERNAL_TABLES](r_SVV_EXTERNAL_TABLES.md) and [SVV_EXTERNAL_COLUMNS](r_SVV_EXTERNAL_COLUMNS.md).
Redshift Spectrum is available only in the following AWS Regions:
+ US East (N. Virginia) Region (us-east-1)
+ US East (Ohio) Region (us-east-2)
+ US West (N. California) Region (us-west-1)
+ US West (Oregon) Region (us-west-2)
+ Asia Pacific (Hong Kong) Region (ap-east-1)
+ Asia Pacific (Mumbai) Region (ap-south-1)
+ Asia Pacific (Seoul) Region (ap-northeast-2)
+ Asia Pacific (Singapore) Region (ap-southeast-1)
+ Asia Pacific (Sydney) Region (ap-southeast-2)
+ Asia Pacific (Tokyo) Region (ap-northeast-1)
+ Canada (Central) Region (ca-central-1)
+ China (Beijing) Region (cn-north-1)
+ China (Ningxia) Region (cn-northwest-1)
+ Europe (Frankfurt) Region (eu-central-1)
+ Europe (Ireland) Region (eu-west-1)
+ Europe (London) Region (eu-west-2)
+ Europe (Paris) Region (eu-west-3)
+ Europe (Stockholm) Region (eu-north-1)
+ Middle East (Bahrain) Region (me-south-1)
+ South America (São Paulo) Region (sa-east-1)
+ AWS GovCloud (US-West) (us-gov-west-1)
Note the following considerations when you use Amazon Redshift Spectrum:
+ The Amazon Redshift cluster and the Amazon S3 bucket must be in the same AWS Region.
+ If your cluster uses Enhanced VPC Routing, you might need to perform additional configuration steps. For more information, see [Using Amazon Redshift Spectrum with Enhanced VPC Routing](https://docs.aws.amazon.com/redshift/latest/mgmt/spectrum-enhanced-vpc.html).
+ You can't perform update or delete operations on external tables. To create a new external table in the specified schema, use [CREATE EXTERNAL TABLE](r_CREATE_EXTERNAL_TABLE.md). To insert the results of a SELECT query into existing external tables on external catalogs, use [INSERT (external table)](r_INSERT_external_table.md).
+ Unless you are using an AWS Glue Data Catalog that is enabled for AWS Lake Formation, you can't control user permissions on an external table. Instead, you can grant and revoke permissions on the external schema. For more information about working with Lake Formation, see [Using Redshift Spectrum with AWS Lake Formation](spectrum-lake-formation.md).
+ To run Redshift Spectrum queries, the database user must have permission to create temporary tables in the database. The following example grants temporary permission on the database `spectrumdb` to the `spectrumusers` user group.
```
grant temp on database spectrumdb to group spectrumusers;
```
For more information, see [GRANT](r_GRANT.md).
+ When using the Athena Data Catalog or AWS Glue Data Catalog as a metadata store, see [Quotas and Limits](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Cluster Management Guide*.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-using-spectrum.md
Creates a new database.
You can't run CREATE DATABASE within a transaction block (BEGIN ... END). For more information about transactions, see [Serializable isolation](c_serial_isolation.md).
```
CREATE DATABASE database_name [ WITH ]
[ OWNER [=] db_owner ]
[ CONNECTION LIMIT { limit | UNLIMITED } ]
```
*database_name*
Name of the new database. For more information about valid names, see [Names and identifiers](r_names.md).
WITH
Optional keyword.
OWNER
Specifies a database owner.
=
Optional character.
*db_owner*
Username for the database owner.
CONNECTION LIMIT { *limit* | UNLIMITED }
The maximum number of database connections users are permitted to have open concurrently. The limit isn't enforced for superusers. Use the UNLIMITED keyword to permit the maximum number of concurrent connections. A limit on the number of connections for each user might also apply. For more information, see [CREATE USER](r_CREATE_USER.md). The default is UNLIMITED. To view current connections, query the [STV_SESSIONS](r_STV_SESSIONS.md) system view.
If both user and database connection limits apply, an unused connection slot must be available that is within both limits when a user attempts to connect.
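For example, a statement along the following lines (the database name and limit are illustrative) combines the OWNER and CONNECTION LIMIT options:

```
create database sales_db
with owner dwuser
connection limit 50;
```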
Amazon Redshift enforces these limits for databases:
+ Maximum of 60 user-defined databases per cluster.
+ Maximum of 127 bytes for a database name.
+ A database name can't be a reserved word.
The following example creates a database named TICKIT and gives ownership to the user DWUSER:
```
create database tickit
with owner dwuser;
```
Query the PG_DATABASE_INFO catalog table to view details about databases.
```
select datname, datdba, datconnlimit
from pg_database_info
where datdba > 1;
datname | datdba | datconnlimit
-------------+--------+-------------
admin | 100 | UNLIMITED
reports | 100 | 100
tickit | 100 | 100
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_DATABASE.md
For this tutorial, you use a set of five tables based on the Star Schema Benchmark (SSB) schema. The following diagram shows the SSB data model.
![SSB data model](http://docs.aws.amazon.com/redshift/latest/dg/images/tutorial-optimize-tables-ssb-data-model.png)
The SSB tables might already exist in the current database. If so, drop the tables to remove them from the database before you create them using the CREATE TABLE commands in the next step. The tables used in this tutorial might have different attributes than the existing tables.
**To create the sample tables**
1. To drop the SSB tables, execute the following commands in your SQL client.
```
drop table part cascade;
drop table supplier;
drop table customer;
drop table dwdate;
drop table lineorder;
```
1. Execute the following CREATE TABLE commands in your SQL client.
```
CREATE TABLE part
(
p_partkey INTEGER NOT NULL,
p_name VARCHAR(22) NOT NULL,
p_mfgr VARCHAR(6),
p_category VARCHAR(7) NOT NULL,
p_brand1 VARCHAR(9) NOT NULL,
p_color VARCHAR(11) NOT NULL,
p_type VARCHAR(25) NOT NULL,
p_size INTEGER NOT NULL,
p_container VARCHAR(10) NOT NULL
);
CREATE TABLE supplier
(
s_suppkey INTEGER NOT NULL,
s_name VARCHAR(25) NOT NULL,
s_address VARCHAR(25) NOT NULL,
s_city VARCHAR(10) NOT NULL,
s_nation VARCHAR(15) NOT NULL,
s_region VARCHAR(12) NOT NULL,
s_phone VARCHAR(15) NOT NULL
);
CREATE TABLE customer
(
c_custkey INTEGER NOT NULL,
c_name VARCHAR(25) NOT NULL,
c_address VARCHAR(25) NOT NULL,
c_city VARCHAR(10) NOT NULL,
c_nation VARCHAR(15) NOT NULL,
c_region VARCHAR(12) NOT NULL,
c_phone VARCHAR(15) NOT NULL,
c_mktsegment VARCHAR(10) NOT NULL
);
CREATE TABLE dwdate
(
d_datekey INTEGER NOT NULL,
d_date VARCHAR(19) NOT NULL,
d_dayofweek VARCHAR(10) NOT NULL,
d_month VARCHAR(10) NOT NULL,
d_year INTEGER NOT NULL,
d_yearmonthnum INTEGER NOT NULL,
d_yearmonth VARCHAR(8) NOT NULL,
d_daynuminweek INTEGER NOT NULL,
d_daynuminmonth INTEGER NOT NULL,
d_daynuminyear INTEGER NOT NULL,
d_monthnuminyear INTEGER NOT NULL,
d_weeknuminyear INTEGER NOT NULL,
d_sellingseason VARCHAR(13) NOT NULL,
d_lastdayinweekfl VARCHAR(1) NOT NULL,
d_lastdayinmonthfl VARCHAR(1) NOT NULL,
d_holidayfl VARCHAR(1) NOT NULL,
d_weekdayfl VARCHAR(1) NOT NULL
);
CREATE TABLE lineorder
(
lo_orderkey INTEGER NOT NULL,
lo_linenumber INTEGER NOT NULL,
lo_custkey INTEGER NOT NULL,
lo_partkey INTEGER NOT NULL,
lo_suppkey INTEGER NOT NULL,
lo_orderdate INTEGER NOT NULL,
lo_orderpriority VARCHAR(15) NOT NULL,
lo_shippriority VARCHAR(1) NOT NULL,
lo_quantity INTEGER NOT NULL,
lo_extendedprice INTEGER NOT NULL,
lo_ordertotalprice INTEGER NOT NULL,
lo_discount INTEGER NOT NULL,
lo_revenue INTEGER NOT NULL,
lo_supplycost INTEGER NOT NULL,
lo_tax INTEGER NOT NULL,
lo_commitdate INTEGER NOT NULL,
lo_shipmode VARCHAR(10) NOT NULL
);
```
[Step 5: Run the COPY commands](tutorial-loading-run-copy.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-create-tables.md
Mostly encodings are useful when the data type for a column is larger than most of the stored values require. By specifying a mostly encoding for this type of column, you can compress the majority of the values in the column to a smaller standard storage size. The remaining values that cannot be compressed are stored in their raw form. For example, you can compress a 16-bit column, such as an INT2 column, to 8-bit storage.
In general, the mostly encodings work with the following data types:
+ SMALLINT/INT2 (16-bit)
+ INTEGER/INT (32-bit)
+ BIGINT/INT8 (64-bit)
+ DECIMAL/NUMERIC (64-bit)
Choose the appropriate variation of the mostly encoding to suit the size of the data type for the column. For example, apply MOSTLY8 to a column that is defined as a 16-bit integer column. Applying MOSTLY16 to a column with a 16-bit data type or MOSTLY32 to a column with a 32-bit data type is disallowed.
Mostly encodings might be less effective than no compression when a relatively high number of the values in the column cannot be compressed. Before applying one of these encodings to a column, check that *most* of the values that you are going to load now (and are likely to load in the future) fit into the ranges shown in the following table.
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/c_MostlyN_encoding.html)
**Note**
For decimal values, ignore the decimal point to determine whether the value fits into the range. For example, 1,234.56 is treated as 123,456 and can be compressed in a MOSTLY32 column.
For example, the VENUEID column in the VENUE table is defined as a raw integer column, which means that its values consume 4 bytes of storage. However, the current range of values in the column is **0** to **309**. Therefore, re-creating and reloading this table with MOSTLY16 encoding for VENUEID would reduce the storage of every value in that column to 2 bytes.
If the VENUEID values referenced in another table were mostly in the range of 0 to 127, it might make sense to encode that foreign-key column as MOSTLY8. Before making the choice, you would have to run some queries against the referencing table data to find out whether the values mostly fall into the 8-bit, 16-bit, or 32-bit range.
The following table shows compressed sizes for specific numeric values when the MOSTLY8, MOSTLY16, and MOSTLY32 encodings are used:
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/c_MostlyN_encoding.html)
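To apply a mostly encoding, declare it in the column definition. The following sketch (the table name and column list are illustrative) re-creates a venue table with MOSTLY16 on VENUEID:

```
create table venue_mostly16(
venueid integer encode mostly16,
venuename varchar(100),
venuecity varchar(30));
```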
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_MostlyN_encoding.md
**Topics**
+ [Splitting your data into multiple files](t_splitting-data-files.md)
+ [Uploading files to Amazon S3](t_uploading-data-to-S3.md)
+ [Using the COPY command to load from Amazon S3](t_loading-tables-from-s3.md)
The COPY command leverages the Amazon Redshift massively parallel processing (MPP) architecture to read and load data in parallel from files in an Amazon S3 bucket. You can take maximum advantage of parallel processing by splitting your data into multiple files and by setting distribution keys on your tables. For more information about distribution keys, see [Choosing a data distribution style](t_Distributing_data.md).
Data from the files is loaded into the target table, one line per row. The fields in the data file are matched to table columns in order, left to right. Fields in the data files can be fixed-width or character delimited; the default delimiter is a pipe (|). By default, all the table columns are loaded, but you can optionally define a comma-separated list of columns. If a table column is not included in the column list specified in the COPY command, it is loaded with a default value. For more information, see [Loading default column values](c_loading_default_values.md).
Follow this general process to load data from Amazon S3:
1. Split your data into multiple files.
1. Upload your files to Amazon S3.
1. Run a COPY command to load the table.
1. Verify that the data was loaded correctly.
The rest of this section explains these steps in detail.
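A minimal COPY command for step 3 might look like the following; the table name, bucket, key prefix, and IAM role ARN are placeholders:

```
copy lineorder
from 's3://mybucket/load/lo_'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
delimiter '|';
```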
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Loading-data-from-S3.md
CHANGE_SESSION_PRIORITY enables superusers to immediately change the priority of any session in the system. Only one session, user, or query can run with the priority `CRITICAL`.
```
CHANGE_SESSION_PRIORITY(pid, priority)
```
*pid*
The process identifier of the session whose priority is changed. The value `-1` refers to the current session.
*priority*
The new priority to be assigned to the session. This argument must be a string with the value `CRITICAL`, `HIGHEST`, `HIGH`, `NORMAL`, `LOW`, or `LOWEST`.
None
The following example returns the process identifier of the server process handling the current session.
```
select pg_backend_pid();
pg_backend_pid
----------------
30311
(1 row)
```
In this example, the priority is changed for the current session to `LOWEST`.
```
select change_session_priority(30311, 'Lowest');
change_session_priority
---------------------------------------------------------
Succeeded to change session priority. Changed session (pid:30311) priority to lowest.
(1 row)
```
In this example, the priority is changed for the current session to `HIGH`.
```
select change_session_priority(-1, 'High');
change_session_priority
---------------------------------------------------------------------
Succeeded to change session priority. Changed session (pid:30311) priority from lowest to high.
(1 row)
```
In the following example, a stored procedure is created to change a session priority. Permission to run this stored procedure is granted to the database user `test_user`.
```
CREATE OR REPLACE PROCEDURE sp_priority_low(pid IN int, result OUT varchar)
AS $$
BEGIN
select change_session_priority(pid, 'low') into result;
END;
$$ LANGUAGE plpgsql
SECURITY DEFINER;
GRANT EXECUTE ON PROCEDURE sp_priority_low(int) TO test_user;
```
Then the database user named `test_user` calls the procedure.
```
call sp_priority_low(pg_backend_pid());
result
-------------------------------------------------------
Success. Change session (pid:13155) priority to low.
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_SESSION_PRIORITY.md
Contains the current state of service class query tasks.
STV_WLM_QUERY_TASK_STATE is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_WLM_QUERY_TASK_STATE.html)
The following query displays the current state of queries in service classes greater than 4. For a list of service class IDs, see [WLM service class IDs](cm-c-wlm-system-tables-and-views.md#wlm-service-class-ids).
```
select * from stv_wlm_query_task_state
where service_class > 4;
```
This query returns the following sample output:
```
service_class | task | query | start_time | exec_time
--------------+------+-------+----------------------------+-----------
5 | 466 | 491 | 2010-10-06 13:29:23.063787 | 357618748
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_QUERY_TASK_STATE.md
|
8f6036e73070-0
|
ST\_DWithin returns true if the Euclidean distance between two input geometry values is not larger than a threshold value\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_DWithin-function.md
|
e71a919ccbd0-0
|
```
ST_DWithin(geom1, geom2, threshold)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_DWithin-function.md
|
8403b9d1235b-0
|
*geom1*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
*geom2*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
*threshold*
A value of data type `DOUBLE PRECISION`\. This value is in the units of the input arguments\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_DWithin-function.md
|
6aeaaae0a0e3-0
|
`BOOLEAN`
If *geom1* or *geom2* is null, then null is returned\.
If *threshold* is negative, then an error is returned\.
If *geom1* and *geom2* don't have the same value for the spatial reference system identifier \(SRID\), then an error is returned\.
If *geom1* or *geom2* is a geometry collection, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_DWithin-function.md
|
77223ea68f44-0
|
The following SQL checks if the distance between two polygons is within five units\.
```
SELECT ST_DWithin(ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))'), ST_GeomFromText('POLYGON((-1 3,2 1,0 -3,-1 3))'),5);
```
```
st_dwithin
-----------
true
```
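Geometries separated by more than the threshold return false\. The following check \(a hypothetical example using two points that are 10 units apart\) returns false for a threshold of 5; a threshold of 10 or more would return true\.
```
SELECT ST_DWithin(ST_GeomFromText('POINT(0 0)'), ST_GeomFromText('POINT(10 0)'), 5);
```
```
st_dwithin
-----------
false
```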
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_DWithin-function.md
|
5337a3296e2f-0
|
PG\_CLASS\_INFO is an Amazon Redshift system view built on the PostgreSQL catalog tables PG\_CLASS and PG\_CLASS\_EXTENDED\. PG\_CLASS\_INFO includes details about table creation time and the current distribution style\. For more information, see [Choosing a data distribution style](t_Distributing_data.md)\.
PG\_CLASS\_INFO is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_CLASS_INFO.md
|
fd7d78f4cd8d-0
|
PG\_CLASS\_INFO shows the following columns in addition to the columns in PG\_CLASS\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_PG_CLASS_INFO.html)
The RELEFFECTIVEDISTSTYLE column in PG\_CLASS\_INFO indicates the current distribution style for the table\. If the table uses automatic distribution, RELEFFECTIVEDISTSTYLE is 10 or 11, which indicates whether the effective distribution style is AUTO \(ALL\) or AUTO \(EVEN\)\. Such a table might initially show AUTO \(ALL\), then change to AUTO \(EVEN\) as the table grows\.
The following table gives the distribution style for each value in the RELEFFECTIVEDISTSTYLE column:
| RELEFFECTIVEDISTSTYLE | Current distribution style |
| --- | --- |
| 0 | EVEN |
| 1 | KEY |
| 8 | ALL |
| 10 | AUTO \(ALL\) |
| 11 | AUTO \(EVEN\) |
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_CLASS_INFO.md
|
0cd912a206e5-0
|
The following query returns the current distribution style of tables in the catalog\.
```
select reloid as tableid,trim(nspname) as schemaname,trim(relname) as tablename,reldiststyle,releffectivediststyle,
CASE WHEN "reldiststyle" = 0 THEN 'EVEN'::text
WHEN "reldiststyle" = 1 THEN 'KEY'::text
WHEN "reldiststyle" = 8 THEN 'ALL'::text
WHEN "releffectivediststyle" = 10 THEN 'AUTO(ALL)'::text
WHEN "releffectivediststyle" = 11 THEN 'AUTO(EVEN)'::text ELSE '<<UNKNOWN>>'::text END as diststyle,relcreationtime
from pg_class_info a left join pg_namespace b on a.relnamespace=b.oid;
```
```
tableid | schemaname | tablename | reldiststyle | releffectivediststyle | diststyle | relcreationtime
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_CLASS_INFO.md
|
0cd912a206e5-1
|
tableid | schemaname | tablename | reldiststyle | releffectivediststyle | diststyle | relcreationtime
---------+------------+-----------+--------------+-----------------------+------------+----------------------------
3638033 | public | customer | 0 | 0 | EVEN | 2019-06-13 15:02:50.666718
3638037 | public | sales | 1 | 1 | KEY | 2019-06-13 15:03:29.595007
3638035 | public | lineitem | 8 | 8 | ALL | 2019-06-13 15:03:01.378538
3638039 | public | product | 9 | 10 | AUTO(ALL) | 2019-06-13 15:03:42.691611
3638041 | public | shipping | 9 | 11 | AUTO(EVEN) | 2019-06-13 15:03:53.69192
(5 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_CLASS_INFO.md
|
80c63b47185a-0
|
The CATEGORY table in the TICKIT database contains the following 11 rows:
```
catid | catgroup | catname | catdesc
-------+----------+-----------+--------------------------------------------
1 | Sports | MLB | Major League Baseball
2 | Sports | NHL | National Hockey League
3 | Sports | NFL | National Football League
4 | Sports | NBA | National Basketball Association
5 | Sports | MLS | Major League Soccer
6 | Shows | Musicals | Musical theatre
7 | Shows | Plays | All non-musical theatre
8 | Shows | Opera | All opera and light opera
9 | Concerts | Pop | All rock and pop music concerts
10 | Concerts | Jazz | All jazz singers and bands
11 | Concerts | Classical | All symphony, concerto, and choir concerts
(11 rows)
```
Assume that a CATEGORY\_STAGE table \(a staging table\) contains one additional row:
```
catid | catgroup | catname | catdesc
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Example_MINUS_query.md
|
80c63b47185a-1
|
```
catid | catgroup | catname | catdesc
-------+----------+-----------+--------------------------------------------
1 | Sports | MLB | Major League Baseball
2 | Sports | NHL | National Hockey League
3 | Sports | NFL | National Football League
4 | Sports | NBA | National Basketball Association
5 | Sports | MLS | Major League Soccer
6 | Shows | Musicals | Musical theatre
7 | Shows | Plays | All non-musical theatre
8 | Shows | Opera | All opera and light opera
9 | Concerts | Pop | All rock and pop music concerts
10 | Concerts | Jazz | All jazz singers and bands
11 | Concerts | Classical | All symphony, concerto, and choir concerts
12 | Concerts | Comedy | All stand up comedy performances
(12 rows)
```
Return the difference between the two tables\. In other words, return rows that are in the CATEGORY\_STAGE table but not in the CATEGORY table:
```
select * from category_stage
except
select * from category;
catid | catgroup | catname | catdesc
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Example_MINUS_query.md
|
80c63b47185a-2
|
select * from category_stage
except
select * from category;
catid | catgroup | catname | catdesc
-------+----------+---------+----------------------------------
12 | Concerts | Comedy | All stand up comedy performances
(1 row)
```
The following equivalent query uses the synonym MINUS\.
```
select * from category_stage
minus
select * from category;
catid | catgroup | catname | catdesc
-------+----------+---------+----------------------------------
12 | Concerts | Comedy | All stand up comedy performances
(1 row)
```
If you reverse the order of the SELECT expressions, the query returns no rows\.
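For instance, running the queries above in reverse order returns an empty result, because every row in the CATEGORY table also exists in the CATEGORY\_STAGE table:
```
select * from category
except
select * from category_stage;

catid | catgroup | catname | catdesc
-------+----------+---------+---------
(0 rows)
```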
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Example_MINUS_query.md
|
1be47d5eaf88-0
|
You can find a reference for datetime format strings following\.
The following format strings apply to functions such as TO\_CHAR\. These strings can contain datetime separators \(such as '`-`', '`/`', or '`:`'\) and the following "dateparts" and "timeparts"\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_FORMAT_strings.html)
**Note**
You must surround datetime separators \(such as '\-', '/' or ':'\) with single quotation marks, but you must surround the "dateparts" and "timeparts" listed in the preceding table with double quotation marks\.
The following example shows formatting for seconds, milliseconds, and microseconds\.
```
select sysdate,
to_char(sysdate, 'HH24:MI:SS') as seconds,
to_char(sysdate, 'HH24:MI:SS.MS') as milliseconds,
to_char(sysdate, 'HH24:MI:SS:US') as microseconds;
timestamp | seconds | milliseconds | microseconds
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FORMAT_strings.md
|
1be47d5eaf88-1
|
timestamp | seconds | milliseconds | microseconds
--------------------+----------+--------------+----------------
2015-04-10 18:45:09 | 18:45:09 | 18:45:09.325 | 18:45:09:325143
```
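Dateparts combine the same way as timeparts\. The following hypothetical example formats a fixed timestamp using the month name, 12\-hour clock, and meridian indicator; the output should look similar to the following\.
```
select to_char(timestamp '2015-04-10 18:45:09', 'Mon DD, YYYY HH12:MI PM') as formatted;

formatted
----------------------
Apr 10, 2015 06:45 PM
```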
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FORMAT_strings.md
|
ea64c752a191-0
|
AZ64 is Amazon's proprietary compression encoding algorithm designed to achieve a high compression ratio and improved query processing\. At its core, the AZ64 algorithm compresses smaller groups of data values and uses single instruction, multiple data \(SIMD\) instructions for parallel processing\. Use AZ64 to achieve significant storage savings and high performance for numeric, date, and time data types\. You can use AZ64 as the compression encoding when defining columns using CREATE TABLE and ALTER TABLE statements with the following data types:
+ SMALLINT
+ INTEGER
+ BIGINT
+ DECIMAL
+ DATE
+ TIMESTAMP
+ TIMESTAMPTZ
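For example, the following DDL \(using a hypothetical `sales_fact` table\) applies AZ64 encoding to supported columns at table creation, then changes the encoding of an existing column with ALTER TABLE\.
```
create table sales_fact (
sale_id bigint encode az64,
sale_date date encode az64,
quantity integer encode az64,
price decimal(8,2));

alter table sales_fact alter column price encode az64;
```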
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/az64-encoding.md
|
636e8dc1e4f3-0
|
By default, Amazon Redshift Spectrum uses the AWS Glue Data Catalog in AWS Regions that support AWS Glue\. In other AWS Regions, Redshift Spectrum uses the Athena Data Catalog\. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3\. You provide that authorization by referencing an AWS Identity and Access Management \(IAM\) role that is attached to your cluster\. If you use an Apache Hive metastore to manage your data catalog, you don't need to provide access to Athena\.
You can chain roles so that your cluster can assume other roles not attached to the cluster\. For more information, see [Chaining IAM roles in Amazon Redshift Spectrum](#c-spectrum-chaining-roles)\.
The AWS Glue catalog that you access might be encrypted to increase security\. If the AWS Glue catalog is encrypted, you need the AWS KMS key for AWS Glue to access the AWS Glue Data Catalog\. For more information, see [Encrypting Your AWS Glue Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/encrypt-glue-data-catalog.html) in the *[AWS Glue Developer Guide](https://docs.aws.amazon.com/glue/latest/dg/)\.*
**Topics**
+ [Amazon S3 permissions](#spectrum-iam-policies-s3)
+ [Cross\-account Amazon S3 permissions](#spectrum-iam-policies-cross-account)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
636e8dc1e4f3-1
|
+ [Cross\-account Amazon S3 permissions](#spectrum-iam-policies-cross-account)
+ [Policies to grant or restrict access using Redshift Spectrum](#spectrum-iam-policies-spectrum-only)
+ [Policies to grant minimum permissions](#spectrum-iam-policies-minimum-permissions)
+ [Chaining IAM roles in Amazon Redshift Spectrum](#c-spectrum-chaining-roles)
+ [Controlling access to the AWS Glue Data Catalog](#c-spectrum-glue-acess)
**Note**
If you currently have Redshift Spectrum external tables in the Athena Data Catalog, you can migrate your Athena Data Catalog to an AWS Glue Data Catalog\. To use the AWS Glue Data Catalog with Redshift Spectrum, you might need to change your IAM policies\. For more information, see [Upgrading to the AWS Glue Data Catalog](https://docs.aws.amazon.com/athena/latest/ug/glue-athena.html#glue-upgrade) in the *Athena User Guide*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
506187debff7-0
|
At a minimum, your cluster needs GET and LIST access to your Amazon S3 bucket\. If your bucket is not in the same AWS account as your cluster, your bucket must also authorize your cluster to access the data\. For more information, see [Authorizing Amazon Redshift to Access Other AWS Services on Your Behalf](https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html)\.
**Note**
The Amazon S3 bucket can't use a bucket policy that restricts access only from specific VPC endpoints\.
The following policy grants GET and LIST access to any Amazon S3 bucket\. The policy allows access to Amazon S3 buckets for Redshift Spectrum as well as COPY operations\.
```
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["s3:Get*", "s3:List*"],
"Resource": "*"
}]
}
```
The following policy grants GET and LIST access to your Amazon S3 bucket named `myBucket`\.
```
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["s3:Get*", "s3:List*"],
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
506187debff7-1
|
"Effect": "Allow",
"Action": ["s3:Get*", "s3:List*"],
"Resource": "arn:aws:s3:::myBucket/*"
}]
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
e4bc53342641-0
|
To grant Redshift Spectrum permission to access data in an Amazon S3 bucket that belongs to another AWS account, add the following policy to the Amazon S3 bucket\. For more information, see [Granting Cross\-Account Bucket Permissions](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html)\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Example permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::redshift-account:role/spectrumrole"
},
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListMultipartUploadParts",
"s3:ListBucket",
"s3:ListBucketMultipartUploads"
],
"Resource": [
"arn:aws:s3:::bucketname",
"arn:aws:s3:::bucketname/*"
]
}
]
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
52846e26364d-0
|
To grant access to an Amazon S3 bucket only using Redshift Spectrum, include a condition that allows access for the user agent `AWS Redshift/Spectrum`\. The following policy allows access to Amazon S3 buckets only for Redshift Spectrum\. It excludes other access, such as COPY operations\.
```
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["s3:Get*", "s3:List*"],
"Resource": "arn:aws:s3:::myBucket/*",
"Condition": {"StringEquals": {"aws:UserAgent": "AWS Redshift/Spectrum"}}
}]
}
```
Similarly, you might want to create an IAM role that allows access for COPY operations, but excludes Redshift Spectrum access\. To do so, include a condition that denies access for the user agent "AWS Redshift/Spectrum"\. The following policy allows access to an Amazon S3 bucket with the exception of Redshift Spectrum\.
```
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["s3:Get*", "s3:List*"],
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
52846e26364d-1
|
"Effect": "Allow",
"Action": ["s3:Get*", "s3:List*"],
"Resource": "arn:aws:s3:::myBucket/*",
"Condition": {"StringNotEquals": {"aws:UserAgent": "AWS Redshift/Spectrum"}}
}]
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
d8319d765bbd-0
|
The following policy grants the minimum permissions required to use Redshift Spectrum with Amazon S3, AWS Glue, and Athena\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListMultipartUploadParts",
"s3:ListBucket",
"s3:ListBucketMultipartUploads"
],
"Resource": [
"arn:aws:s3:::bucketname",
"arn:aws:s3:::bucketname/folder1/folder2/*"
]
},
{
"Effect": "Allow",
"Action": [
"glue:CreateDatabase",
"glue:DeleteDatabase",
"glue:GetDatabase",
"glue:GetDatabases",
"glue:UpdateDatabase",
"glue:CreateTable",
"glue:DeleteTable",
"glue:BatchDeleteTable",
"glue:UpdateTable",
"glue:GetTable",
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
d8319d765bbd-1
|
"glue:BatchDeleteTable",
"glue:UpdateTable",
"glue:GetTable",
"glue:GetTables",
"glue:BatchCreatePartition",
"glue:CreatePartition",
"glue:DeletePartition",
"glue:BatchDeletePartition",
"glue:UpdatePartition",
"glue:GetPartition",
"glue:GetPartitions",
"glue:BatchGetPartition"
],
"Resource": [
"*"
]
}
]
}
```
If you use Athena for your data catalog instead of AWS Glue, the policy requires full Athena access\. The following policy grants access to Athena resources\. If your external database is in a Hive metastore, you don't need Athena access\.
```
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["athena:*"],
"Resource": ["*"]
}]
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
8dd13ce34a07-0
|
When you attach a role to your cluster, your cluster can assume that role to access Amazon S3, Athena, and AWS Glue on your behalf\. If a role attached to your cluster doesn't have access to the necessary resources, you can chain another role, possibly belonging to another account\. Your cluster then temporarily assumes the chained role to access the data\. You can also grant cross\-account access by chaining roles\. You can chain a maximum of 10 roles\. Each role in the chain assumes the next role in the chain, until the cluster assumes the role at the end of the chain\.
To chain roles, you establish a trust relationship between the roles\. A role that assumes another role must have a permissions policy that allows it to assume the specified role\. In turn, the role that passes permissions must have a trust policy that allows it to pass its permissions to another role\. For more information, see [Chaining IAM Roles in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html#authorizing-redshift-service-chaining-roles)\.
When you run the CREATE EXTERNAL SCHEMA command, you can chain roles by including a comma\-separated list of role ARNs\.
**Note**
The list of chained roles must not include spaces\.
In the following example, `MyRedshiftRole` is attached to the cluster\. `MyRedshiftRole` assumes the role `AcmeData`, which belongs to account `111122223333`\.
```
create external schema acme from data catalog
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
8dd13ce34a07-1
|
```
create external schema acme from data catalog
database 'acmedb' region 'us-west-2'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole,arn:aws:iam::111122223333:role/AcmeData';
```
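For this chain to work, `MyRedshiftRole` must be allowed to assume `AcmeData`, and `AcmeData` must trust `MyRedshiftRole`\. The following is a minimal sketch of the permissions policy on `MyRedshiftRole`, using the hypothetical role ARNs from the example\.
```
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::111122223333:role/AcmeData"
    }]
}
```
The trust policy on `AcmeData` must in turn list `arn:aws:iam::123456789012:role/MyRedshiftRole` as a principal allowed to call `sts:AssumeRole`\.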
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
578d6e3ae225-0
|
If you use AWS Glue for your data catalog, you can apply fine\-grained access control to the AWS Glue Data Catalog with your IAM policy\. For example, you might want to expose only a few databases and tables to a specific IAM role\.
The following sections describe the IAM policies for various levels of access to data stored in the AWS Glue Data Catalog\.
**Topics**
+ [Policy for database operations](#c-spectrum-glue-acess-database)
+ [Policy for table operations](#c-spectrum-glue-acess-tables)
+ [Policy for partition operations](#c-spectrum-glue-acess-partitions)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
6a4b99346821-0
|
If you want to give users permissions to view and create a database, they need access rights to both the database and the AWS Glue Data Catalog\.
The following example query creates a database\.
```
CREATE EXTERNAL SCHEMA example_db
FROM DATA CATALOG DATABASE 'example_db' region 'us-west-2'
IAM_ROLE 'arn:aws:iam::redshift-account:role/spectrumrole'
CREATE EXTERNAL DATABASE IF NOT EXISTS
```
The following IAM policy gives the minimum permissions required for creating a database\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:GetDatabase",
"glue:CreateDatabase"
],
"Resource": [
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
"arn:aws:glue:us-west-2:redshift-account:catalog"
]
}
]
}
```
The following example query lists the current databases\.
```
SELECT * FROM SVV_EXTERNAL_DATABASES WHERE
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
6a4b99346821-1
|
```
The following example query lists the current databases\.
```
SELECT * FROM SVV_EXTERNAL_DATABASES WHERE
databasename = 'example_db1' or databasename = 'example_db2';
```
The following IAM policy gives the minimum permissions required to list the current databases\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:GetDatabases"
],
"Resource": [
"arn:aws:glue:us-west-2:redshift-account:database/example_db1",
"arn:aws:glue:us-west-2:redshift-account:database/example_db2",
"arn:aws:glue:us-west-2:redshift-account:catalog"
]
}
]
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
0cfb9bf51c7d-0
|
If you want to give users permissions to view, create, drop, alter, or take other actions on tables, they need several types of access\. They need access to the tables themselves, the databases they belong to, and the catalog\.
The following example query creates an external table\.
```
CREATE EXTERNAL TABLE example_db.example_tbl0(
col0 INT,
col1 VARCHAR(255)
) PARTITIONED BY (part INT) STORED AS TEXTFILE
LOCATION 's3://test/s3/location/';
```
The following IAM policy gives the minimum permissions required to create an external table\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:CreateTable"
],
"Resource": [
"arn:aws:glue:us-west-2:redshift-account:catalog",
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
"arn:aws:glue:us-west-2:redshift-account:table/example_db/example_tbl0"
]
}
]
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
0cfb9bf51c7d-1
|
]
}
]
}
```
Each of the following example queries lists the current external tables\.
```
SELECT * FROM svv_external_tables
WHERE tablename = 'example_tbl0' OR
tablename = 'example_tbl1';
```
```
SELECT * FROM svv_external_columns
WHERE tablename = 'example_tbl0' OR
tablename = 'example_tbl1';
```
```
SELECT parameters FROM svv_external_tables
WHERE tablename = 'example_tbl0' OR
tablename = 'example_tbl1';
```
The following IAM policy gives the minimum permissions required to list the current external tables\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:GetTables"
],
"Resource": [
"arn:aws:glue:us-west-2:redshift-account:catalog",
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
0cfb9bf51c7d-2
|
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
"arn:aws:glue:us-west-2:redshift-account:table/example_db/example_tbl0",
"arn:aws:glue:us-west-2:redshift-account:table/example_db/example_tbl1"
]
}
]
}
```
The following example query alters an existing table\.
```
ALTER TABLE example_db.example_tbl0
SET TABLE PROPERTIES ('numRows' = '100');
```
The following IAM policy gives the minimum permissions required to alter an existing table\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:GetTable",
"glue:UpdateTable"
],
"Resource": [
"arn:aws:glue:us-west-2:redshift-account:catalog",
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
0cfb9bf51c7d-3
|
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
"arn:aws:glue:us-west-2:redshift-account:table/example_db/example_tbl0"
]
}
]
}
```
The following example query drops an existing table\.
```
DROP TABLE example_db.example_tbl0;
```
The following IAM policy gives the minimum permissions required to drop an existing table\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:DeleteTable"
],
"Resource": [
"arn:aws:glue:us-west-2:redshift-account:catalog",
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
"arn:aws:glue:us-west-2:redshift-account:table/example_db/example_tbl0"
]
}
]
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
8fe379a046fe-0
|
If you want to give users permissions to perform partition\-level operations \(view, create, drop, alter, and so on\), they need permissions to the tables that the partitions belong to\. They also need permissions to the related databases and the AWS Glue Data Catalog\.
The following example query creates a partition\.
```
ALTER TABLE example_db.example_tbl0
ADD PARTITION (part=0) LOCATION 's3://test/s3/location/part=0/';
ALTER TABLE example_db.example_tbl0
ADD PARTITION (part=1) LOCATION 's3://test/s3/location/part=1/';
```
The following IAM policy gives the minimum permissions required to create a partition\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:GetTable",
"glue:BatchCreatePartition"
],
"Resource": [
"arn:aws:glue:us-west-2:redshift-account:catalog",
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
8fe379a046fe-1
|
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
"arn:aws:glue:us-west-2:redshift-account:table/example_db/example_tbl0"
]
}
]
}
```
The following example query lists the current partitions\.
```
SELECT * FROM svv_external_partitions
WHERE schemaname = 'example_db' AND
tablename = 'example_tbl0';
```
The following IAM policy gives the minimum permissions required to list the current partitions\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:GetPartitions",
"glue:GetTables",
"glue:GetTable"
],
"Resource": [
"arn:aws:glue:us-west-2:redshift-account:catalog",
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
"arn:aws:glue:us-west-2:redshift-account:table/example_db/example_tbl0"
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
8fe379a046fe-2
|
"arn:aws:glue:us-west-2:redshift-account:table/example_db/example_tbl0"
]
}
]
}
```
The following example query alters an existing partition\.
```
ALTER TABLE example_db.example_tbl0 PARTITION(part='0')
SET LOCATION 's3://test/s3/new/location/part=0/';
```
The following IAM policy gives the minimum permissions required to alter an existing partition\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:GetPartition",
"glue:UpdatePartition"
],
"Resource": [
"arn:aws:glue:us-west-2:redshift-account:catalog",
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
"arn:aws:glue:us-west-2:redshift-account:table/example_db/example_tbl0"
]
}
]
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
8fe379a046fe-3
|
]
}
]
}
```
The following example query drops an existing partition\.
```
ALTER TABLE example_db.example_tbl0 DROP PARTITION(part='0');
```
The following IAM policy gives the minimum permissions required to drop an existing partition\.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:DeletePartition"
],
"Resource": [
"arn:aws:glue:us-west-2:redshift-account:catalog",
"arn:aws:glue:us-west-2:redshift-account:database/example_db",
"arn:aws:glue:us-west-2:redshift-account:table/example_db/example_tbl0"
]
}
]
}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-spectrum-iam-policies.md
|
d1e4c4036237-0
|
After recreating the test data set with the selected sort keys, distribution styles, and compression encodings, you will retest the system performance\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-retest.md
|
7f63fbc5feba-0
|
1. Record storage use\.
Determine how many 1 MB blocks of disk space are used for each table by querying the STV\_BLOCKLIST table and record the results in your benchmarks table\.
```
select stv_tbl_perm.name as "table", count(*) as "blocks (mb)"
from stv_blocklist, stv_tbl_perm
where stv_blocklist.tbl = stv_tbl_perm.id
and stv_blocklist.slice = stv_tbl_perm.slice
and stv_tbl_perm.name in ('customer', 'part', 'supplier', 'dwdate', 'lineorder')
group by stv_tbl_perm.name
order by 1 asc;
```
Your results will look similar to this:
```
table | blocks (mb)
-----------+-----------------
customer   |   604
dwdate     |   160
lineorder  | 27152
part       |   200
supplier   |   236
```
1. Check for distribution skew\.
Uneven distribution, or data distribution skew, forces some nodes to do more work than others, which limits query performance\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-retest.md
|
7f63fbc5feba-1
|
1. Check for distribution skew\.
Uneven distribution, or data distribution skew, forces some nodes to do more work than others, which limits query performance\.
To check for distribution skew, query the SVV\_DISKUSAGE system view\. Each row in SVV\_DISKUSAGE records the statistics for one disk block\. The `num_values` column gives the number of rows in that disk block, so `sum(num_values)` returns the number of rows on each slice\.
Execute the following query to see the distribution for all of the tables in the SSB database\.
```
select trim(name) as table, slice, sum(num_values) as rows, min(minvalue), max(maxvalue)
from svv_diskusage
where name in ('customer', 'part', 'supplier', 'dwdate', 'lineorder')
and col = 0
group by name, slice
order by name, slice;
```
Your results will look something like this:
```
table | slice | rows | min | max
-----------+-------+----------+----------+-----------
customer | 0 | 3000000 | 1 | 3000000
customer | 2 | 3000000 | 1 | 3000000
customer | 4 | 3000000 | 1 | 3000000
customer | 6 | 3000000 | 1 | 3000000
dwdate | 0 | 2556 | 19920101 | 19981230
dwdate | 2 | 2556 | 19920101 | 19981230
dwdate | 4 | 2556 | 19920101 | 19981230
dwdate | 6 | 2556 | 19920101 | 19981230
lineorder | 0 | 75029991 | 3 | 599999975
lineorder | 1 | 75059242 | 7 | 600000000
lineorder | 2 | 75238172 | 1 | 599999975
lineorder | 3 | 75065416 | 1 | 599999973
lineorder | 4 | 74801845 | 3 | 599999975
lineorder | 5 | 75177053 | 1 | 599999975
lineorder | 6 | 74631775 | 1 | 600000000
lineorder | 7 | 75034408 | 1 | 599999974
part | 0 | 175006 | 15 | 1399997
part | 1 | 175199 | 1 | 1399999
part | 2 | 175441 | 4 | 1399989
part | 3 | 175000 | 3 | 1399995
part | 4 | 175018 | 5 | 1399979
part | 5 | 175091 | 11 | 1400000
part | 6 | 174253 | 2 | 1399969
part | 7 | 174992 | 13 | 1399996
supplier | 0 | 1000000 | 1 | 1000000
supplier | 2 | 1000000 | 1 | 1000000
supplier | 4 | 1000000 | 1 | 1000000
supplier | 6 | 1000000 | 1 | 1000000
(28 rows)
```
The following chart illustrates the distribution of the three largest tables\. \(The columns are not to scale\.\) Notice that because CUSTOMER uses ALL distribution, it was distributed to only one slice per node\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/tutorial-optimize-tables-compression-chart.png)
The distribution is relatively even, so you don't need to adjust for distribution skew\.
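If you want a single number per table instead of eyeballing the per-slice counts, you can compare the most- and least-populated slices directly. The following query is a sketch built on the same SVV\_DISKUSAGE data; the `skew_ratio` alias is illustrative and not part of the tutorial\.
```
-- Approximate skew per table: ratio of the largest to the smallest
-- per-slice row count. Values near 1.0 indicate even distribution.
select trim(name) as "table",
       max(slice_rows)::float / min(slice_rows) as skew_ratio
from (select name, slice, sum(num_values) as slice_rows
      from svv_diskusage
      where name in ('customer', 'part', 'supplier', 'dwdate', 'lineorder')
      and col = 0
      group by name, slice) as t
group by name
order by skew_ratio desc;
```
Note that tables with ALL distribution (such as CUSTOMER here) hold a full copy on one slice per node, so their ratio reflects the distribution style rather than skew\.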
1. Run an EXPLAIN command with each query to view the query plans\.
The following example shows the EXPLAIN command with Query 2\.
```
explain
select sum(lo_revenue), d_year, p_brand1
from lineorder, dwdate, part, supplier
where lo_orderdate = d_datekey
and lo_partkey = p_partkey
and lo_suppkey = s_suppkey
and p_category = 'MFGR#12'
and s_region = 'AMERICA'
group by d_year, p_brand1
order by d_year, p_brand1;
```
In the EXPLAIN plan for Query 2, notice that the DS\_BCAST\_INNER labels have been replaced by DS\_DIST\_ALL\_NONE and DS\_DIST\_NONE, which means that no redistribution was required for those steps, and the query should run much more quickly\.
```
QUERY PLAN
XN Merge (cost=1000014243538.45..1000014243539.15 rows=280 width=20)
Merge Key: dwdate.d_year, part.p_brand1
-> XN Network (cost=1000014243538.45..1000014243539.15 rows=280 width=20)
Send to leader
-> XN Sort (cost=1000014243538.45..1000014243539.15 rows=280 width=20)
Sort Key: dwdate.d_year, part.p_brand1
-> XN HashAggregate (cost=14243526.37..14243527.07 rows=280 width=20)
-> XN Hash Join DS_DIST_ALL_NONE (cost=30643.30..14211277.03 rows=4299912
Hash Cond: ("outer".lo_orderdate = "inner".d_datekey)
-> XN Hash Join DS_DIST_ALL_NONE (cost=30611.35..14114497.06
Hash Cond: ("outer".lo_suppkey = "inner".s_suppkey)
-> XN Hash Join DS_DIST_NONE (cost=17640.00..13758507.64
Hash Cond: ("outer".lo_partkey = "inner".p_partkey)
-> XN Seq Scan on lineorder (cost=0.00..6000378.88
-> XN Hash (cost=17500.00..17500.00 rows=56000 width=16)
-> XN Seq Scan on part (cost=0.00..17500.00
Filter: ((p_category)::text = 'MFGR#12'::text)
-> XN Hash (cost=12500.00..12500.00 rows=188541 width=4)
-> XN Seq Scan on supplier (cost=0.00..12500.00
Filter: ((s_region)::text = 'AMERICA'::text)
-> XN Hash (cost=25.56..25.56 rows=2556 width=8)
-> XN Seq Scan on dwdate (cost=0.00..25.56 rows=2556 width=8)
```
1. Run the same test queries again\.
If you reconnected to the database since your first set of tests, disable result caching for this session\. To disable result caching for the current session, set the [enable\_result\_cache\_for\_session](r_enable_result_cache_for_session.md) parameter to `off`, as shown following\.
```
set enable_result_cache_for_session to off;
```
As you did earlier, run the following queries twice to eliminate compile time\. Record the second time for each query in the benchmarks table\.
```
-- Query 1
-- Restrictions on only one dimension.
select sum(lo_extendedprice*lo_discount) as revenue
from lineorder, dwdate
where lo_orderdate = d_datekey
and d_year = 1997
and lo_discount between 1 and 3
and lo_quantity < 24;
-- Query 2
-- Restrictions on two dimensions
select sum(lo_revenue), d_year, p_brand1
from lineorder, dwdate, part, supplier
where lo_orderdate = d_datekey
and lo_partkey = p_partkey
and lo_suppkey = s_suppkey
and p_category = 'MFGR#12'
and s_region = 'AMERICA'
group by d_year, p_brand1
order by d_year, p_brand1;
-- Query 3
-- Drill down in time to just one month
select c_city, s_city, d_year, sum(lo_revenue) as revenue
from customer, lineorder, supplier, dwdate
where lo_custkey = c_custkey
and lo_suppkey = s_suppkey
and lo_orderdate = d_datekey
and (c_city='UNITED KI1' or
c_city='UNITED KI5')
and (s_city='UNITED KI1' or
s_city='UNITED KI5')
and d_yearmonth = 'Dec1997'
group by c_city, s_city, d_year
order by d_year asc, revenue desc;
```
The following benchmarks table shows the results based on the cluster used in this example\. Your results will vary based on a number of factors, but the relative results should be similar\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables-retest.html)
[Step 8: Evaluate the results](tutorial-tuning-tables-evaluate.md)
The NTH\_VALUE window function returns the expression value of the specified row of the window frame relative to the first row of the window\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_NTH.md
```
NTH_VALUE (expr, offset)
[ IGNORE NULLS | RESPECT NULLS ]
OVER
( [ PARTITION BY window_partition ]
[ ORDER BY window_ordering
frame_clause ] )
```
*expr*
The target column or expression that the function operates on\.
*offset*
Determines the row number relative to the first row in the window for which to return the expression\. The *offset* can be a constant or an expression and must be a positive integer greater than 0\.
IGNORE NULLS
An optional specification that indicates that Amazon Redshift should skip null values in the determination of which row to use\. Null values are included if IGNORE NULLS is not listed\.
RESPECT NULLS
Indicates that Amazon Redshift should include null values in the determination of which row to use\. RESPECT NULLS is supported by default if you do not specify IGNORE NULLS\.
OVER
Specifies the window partitioning, ordering, and window frame\.
PARTITION BY *window\_partition*
Sets the range of records for each group in the OVER clause\.
ORDER BY *window\_ordering*
Sorts the rows within each partition\. If ORDER BY is omitted, the default frame consists of all rows in the partition\.
*frame\_clause*
If an ORDER BY clause is used for an aggregate function, an explicit frame clause is required\. The frame clause refines the set of rows in a function's window, including or excluding sets of rows in the ordered result\. The frame clause consists of the ROWS keyword and associated specifiers\. See [Window function syntax summary](r_Window_function_synopsis.md)\.
The NTH\_VALUE window function supports expressions that use any of the Amazon Redshift data types\. The return type is the same as the type of the *expr*\.
The following example shows the number of seats in the third largest venue in California, Florida, and New York compared to the number of seats in the other venues in those states:
```
select venuestate, venuename, venueseats,
nth_value(venueseats, 3)
ignore nulls
over(partition by venuestate order by venueseats desc
rows between unbounded preceding and unbounded following)
as third_most_seats
from (select * from venue where venueseats > 0 and
venuestate in('CA', 'FL', 'NY'))
order by venuestate;
venuestate | venuename | venueseats | third_most_seats
------------+--------------------------------+------------+------------------
CA | Qualcomm Stadium | 70561 | 63026
CA | Monster Park | 69843 | 63026
CA | McAfee Coliseum | 63026 | 63026
CA | Dodger Stadium | 56000 | 63026
CA | Angel Stadium of Anaheim | 45050 | 63026
CA | PETCO Park | 42445 | 63026
CA | AT&T Park | 41503 | 63026
CA | Shoreline Amphitheatre | 22000 | 63026
FL | Dolphin Stadium | 74916 | 65647
FL | Jacksonville Municipal Stadium | 73800 | 65647
FL | Raymond James Stadium | 65647 | 65647
FL | Tropicana Field | 36048 | 65647
NY | Ralph Wilson Stadium | 73967 | 20000
NY | Yankee Stadium | 52325 | 20000
NY | Madison Square Garden | 20000 | 20000
(15 rows)
```
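To see the effect of the NULL-handling options, consider a minimal sketch\. The table `t(seq, x)` and its values are hypothetical, not part of the sample database\.
```
-- Hypothetical table t(seq int, x int) with rows
-- (1, 10), (2, null), (3, 30), (4, 40), ordered by seq.
-- With IGNORE NULLS, the 2nd value considered is 30 (the null is skipped);
-- with RESPECT NULLS (the default), the 2nd row is the null itself.
select seq, x,
       nth_value(x, 2) ignore nulls
         over (order by seq
               rows between unbounded preceding and unbounded following)
         as second_non_null,
       nth_value(x, 2)
         over (order by seq
               rows between unbounded preceding and unbounded following)
         as second_row
from t;
```
Every row in the window would show `second_non_null = 30` and `second_row = NULL` under these assumptions\.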
Synonym of the LEN function\.
See [LEN function](r_LEN.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHAR_LENGTH.md