d736be3bc197-0
Stores information about table columns\. PG\_TABLE\_DEF only returns information about tables that are visible to the user\. If PG\_TABLE\_DEF does not return the expected results, verify that the [search\_path](r_search_path.md) parameter is set correctly to include the relevant schemas\. You can use [SVV\_TABLE\_INFO](r_SVV_TABLE_INFO.md) to view more comprehensive information about a table, including data distribution skew, key distribution skew, table size, and statistics\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_TABLE_DEF.md
42b2f2f0a04d-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_PG_TABLE_DEF.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_TABLE_DEF.md
9c53bbf16e85-0
The following example shows the compound sort key columns for the LINEORDER\_COMPOUND table\.

```
select "column", type, encoding, distkey, sortkey, "notnull"
from pg_table_def
where tablename = 'lineorder_compound' and sortkey <> 0;

column       | type    | encoding | distkey | sortkey | notnull
-------------+---------+----------+---------+---------+--------
lo_orderkey  | integer | delta32k | false   |       1 | true
lo_custkey   | integer | none     | false   |       2 | true
lo_partkey   | integer | none     | true    |       3 | true
lo_suppkey   | integer | delta32k | false   |       4 | true
lo_orderdate | integer | delta    | false   |       5 | true
(5 rows)
```

The following example shows the interleaved sort key columns for the LINEORDER\_INTERLEAVED table\.

```
select "column", type, encoding, distkey, sortkey, "notnull"
from pg_table_def
where tablename = 'lineorder_interleaved' and sortkey <> 0;

column       | type    | encoding | distkey | sortkey | notnull
-------------+---------+----------+---------+---------+--------
lo_orderkey  | integer | delta32k | false   |      -1 | true
lo_custkey   | integer | none     | false   |       2 | true
lo_partkey   | integer | none     | true    |      -3 | true
lo_suppkey   | integer | delta32k | false   |       4 | true
lo_orderdate | integer | delta    | false   |      -5 | true
(5 rows)
```

PG\_TABLE\_DEF returns only information for tables in schemas that are included in the search path\. For more information, see [search\_path](r_search_path.md)\. For example, suppose you create a new schema and a new table, then query PG\_TABLE\_DEF\.

```
create schema demo;
create table demo.demotable (one int);
select * from pg_table_def where tablename = 'demotable';

schemaname|tablename|column| type | encoding | distkey | sortkey | notnull
----------+---------+------+------+----------+---------+---------+--------
```

The query returns no rows for the new table\. Examine the setting for `search_path`\.

```
show search_path;

 search_path
---------------
 $user, public
(1 row)
```

Add the `demo` schema to the search path and run the query again\.

```
set search_path to '$user', 'public', 'demo';

select * from pg_table_def where tablename = 'demotable';

schemaname| tablename |column| type    | encoding |distkey|sortkey| notnull
----------+-----------+------+---------+----------+-------+-------+--------
demo      | demotable | one  | integer | none     | f     | 0     | f
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_TABLE_DEF.md
7fbf47924ad4-0
You can manage the specific behavior of concurrent write operations by deciding when and how to run different types of commands\. The following commands are relevant to this discussion:
+ COPY commands, which perform loads \(initial or incremental\)
+ INSERT commands, which append one or more rows at a time
+ UPDATE commands, which modify existing rows
+ DELETE commands, which remove rows

COPY and INSERT operations are pure write operations, but DELETE and UPDATE operations are read\-write operations\. \(For rows to be deleted or updated, they have to be read first\.\)

The results of concurrent write operations depend on the specific commands that are being run concurrently\. COPY and INSERT operations against the same table are held in a wait state until the lock is released, then they proceed as normal\.

UPDATE and DELETE operations behave differently because they rely on an initial table read before they do any writes\. Given that concurrent transactions are invisible to each other, both UPDATEs and DELETEs have to read a snapshot of the data from the last commit\. When the first UPDATE or DELETE releases its lock, the second UPDATE or DELETE needs to determine whether the data that it is going to work with is potentially stale\. It will not be stale, because the second transaction does not obtain its snapshot of data until after the first transaction has released its lock\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_write_readwrite.md
b4f2fe514f3a-0
Whenever transactions involve updates of more than one table, there is always the possibility of concurrently running transactions becoming deadlocked when they both try to write to the same set of tables\. A transaction releases all of its table locks at once when it either commits or rolls back; it does not relinquish locks one at a time\.

For example, suppose that transactions T1 and T2 start at roughly the same time\. If T1 starts writing to table A and T2 starts writing to table B, both transactions can proceed without conflict; however, if T1 finishes writing to table A and needs to start writing to table B, it will not be able to proceed because T2 still holds the lock on B\. Conversely, if T2 finishes writing to table B and needs to start writing to table A, it will not be able to proceed either because T1 still holds the lock on A\. Because neither transaction can release its locks until all its write operations are committed, neither transaction can proceed\.

In order to avoid this kind of deadlock, you need to schedule concurrent write operations carefully\. For example, you should always update tables in the same order in transactions and, if specifying locks, lock tables in the same order before you perform any DML operations\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_write_readwrite.md
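The ordering discipline described above can be sketched outside of SQL\. The following Python threading sketch is purely illustrative \(the lock names and the `transaction` helper are hypothetical, not a Redshift API\): because both transactions take the "table" locks in the same fixed order, neither can hold one lock while waiting for the other, so no deadlock cycle can form\.

```python
import threading

# Two locks standing in for the table locks on A and B (illustrative only).
lock_a = threading.Lock()
lock_b = threading.Lock()

committed = []

def transaction(name):
    # Every transaction writes to A before B, mirroring the advice to
    # update (or lock) tables in the same order in every transaction.
    # With one global lock order, no "T1 waits on T2 waits on T1" cycle
    # is possible; the later transaction simply waits its turn.
    with lock_a:
        with lock_b:
            committed.append(name)

threads = [threading.Thread(target=transaction, args=(n,)) for n in ("T1", "T2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(committed))  # both transactions complete: ['T1', 'T2']
```

If one thread instead acquired `lock_b` before `lock_a`, the two could each hold one lock while waiting for the other, which is exactly the T1/T2 stalemate in the example above\.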
caedaba189d4-0
When you load data from Amazon S3, first upload your files to your Amazon S3 bucket, then verify that the bucket contains all the correct files, and only those files\. For more information, see [Verifying that the correct files are present in your bucket](verifying-that-correct-files-are-present.md)\. After the load operation is complete, query the [STL\_LOAD\_COMMITS](r_STL_LOAD_COMMITS.md) system table to verify that the expected files were loaded\. For more information, see [Verifying that the data loaded correctly](verifying-that-data-loaded-correctly.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-verifying-data-files.md
fb9fe406d976-0
Capitalizes the first letter of each word in a specified string\. INITCAP supports UTF\-8 multibyte characters, up to a maximum of four bytes per character\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INITCAP.md
a04e912d9b90-0
```
INITCAP(string)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INITCAP.md
93cf647ce642-0
*string* The input parameter is a CHAR or VARCHAR string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INITCAP.md
f688137ba287-0
The INITCAP function returns a VARCHAR string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INITCAP.md
0adf8e5d8d9d-0
The INITCAP function makes the first letter of each word in a string uppercase, and any subsequent letters are made \(or left\) lowercase\. Therefore, it is important to understand which characters \(other than space characters\) function as word separators\. A *word separator* character is any non\-alphanumeric character, including punctuation marks, symbols, and control characters\. All of the following characters are word separators:

```
! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~
```

Tabs, newline characters, form feeds, line feeds, and carriage returns are also word separators\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INITCAP.md
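The separator rule can be modeled in a few lines\. The following Python sketch \(a hypothetical `initcap_like` helper, not how Redshift implements the function, and using Python's Unicode notion of alphanumeric\) treats every non\-alphanumeric character as a word boundary:

```python
def initcap_like(s: str) -> str:
    """Approximate INITCAP: uppercase the first letter of each word and
    lowercase the rest; any non-alphanumeric character is a separator."""
    out = []
    new_word = True  # the start of the string begins a word
    for ch in s:
        if not ch.isalnum():
            out.append(ch)        # separators pass through unchanged
            new_word = True
        elif new_word:
            out.append(ch.upper())
            new_word = False
        else:
            out.append(ch.lower())
    return "".join(out)

print(initcap_like("all non-musical theatre"))  # All Non-Musical Theatre
print(initcap_like("MLB"))                      # Mlb
```

Note how `-` and `.` start new words, which is why `non-musical` becomes `Non-Musical` and email addresses get multiple capitals in the examples that follow\.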
f40668590e76-0
The following example capitalizes the initials of each word in the CATDESC column:

```
select catid, catdesc, initcap(catdesc)
from category
order by 1, 2, 3;

 catid | catdesc                                    | initcap
-------+--------------------------------------------+--------------------------------------------
 1     | Major League Baseball                      | Major League Baseball
 2     | National Hockey League                     | National Hockey League
 3     | National Football League                   | National Football League
 4     | National Basketball Association            | National Basketball Association
 5     | Major League Soccer                        | Major League Soccer
 6     | Musical theatre                            | Musical Theatre
 7     | All non-musical theatre                    | All Non-Musical Theatre
 8     | All opera and light opera                  | All Opera And Light Opera
 9     | All rock and pop music concerts            | All Rock And Pop Music Concerts
 10    | All jazz singers and bands                 | All Jazz Singers And Bands
 11    | All symphony, concerto, and choir concerts | All Symphony, Concerto, And Choir Concerts
(11 rows)
```

The following example shows that the INITCAP function does not preserve uppercase characters when they do not begin words\. For example, MLB becomes Mlb\.

```
select initcap(catname)
from category
order by catname;

 initcap
-----------
 Classical
 Jazz
 Mlb
 Mls
 Musicals
 Nba
 Nfl
 Nhl
 Opera
 Plays
 Pop
(11 rows)
```

The following example shows that non\-alphanumeric characters other than spaces function as word separators, causing uppercase characters to be applied to several letters in each string:

```
select email, initcap(email)
from users
order by userid desc limit 5;

 email                              | initcap
------------------------------------+------------------------------------
 urna.Ut@egetdictumplacerat.edu     | Urna.Ut@Egetdictumplacerat.Edu
 nibh.enim@egestas.ca               | Nibh.Enim@Egestas.Ca
 in@Donecat.ca                      | In@Donecat.Ca
 sodales@blanditviverraDonec.ca     | Sodales@Blanditviverradonec.Ca
 sociis.natoque.penatibus@vitae.org | Sociis.Natoque.Penatibus@Vitae.Org
(5 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INITCAP.md
d0aa30b35003-0
The SVL\_QLOG view contains a log of all queries run against the database\. Amazon Redshift creates the SVL\_QLOG view as a readable subset of information from the [STL\_QUERY](r_STL_QUERY.md) table\. Use this view to find the query ID for a recently run query or to see how long it took a query to complete\. SVL\_QLOG is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QLOG.md
d47ffb31f617-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_QLOG.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QLOG.md
0ccea1258e84-0
The following example returns the query ID, execution time, and truncated query text for the five most recent database queries executed by the user with `userid = 100`\.

```
select query, pid, elapsed, substring from svl_qlog
where userid = 100
order by starttime desc
limit 5;

 query  |  pid  | elapsed  | substring
--------+-------+----------+-----------------------------------------------
 187752 | 18921 | 18465685 | select query, elapsed, substring from svl_...
 204168 |  5117 |    59603 | insert into testtable values (100);
 187561 | 17046 |  1003052 | select * from pg_table_def where tablename...
 187549 | 17046 |  1108584 | select * from STV_WLM_SERVICE_CLASS_CONFIG
 187468 | 17046 |  5670661 | select * from pg_table_def where schemaname...
(5 rows)
```

The following example returns the SQL script name \(LABEL column\) and elapsed time for a query that was cancelled \(**aborted=1**\):

```
select query, elapsed, trim(label) querylabel
from svl_qlog where aborted=1;

 query | elapsed  | querylabel
-------+----------+-------------------------
 16    |  6935292 | alltickittablesjoin.sql
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QLOG.md
c92ee3c2d4bd-0
To get further details about the operations in the query plan, map them to the steps \(identified by the label field values\) in the query summary: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/query-plan-summary-map.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-plan-summary-map.md
c7d0b46700d9-0
ST\_Dimension returns the inherent dimension of an input geometry\. The *inherent dimension* is the dimension value of the subtype that is defined in the geometry\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Dimension-function.md
fb3f0344989e-0
```
ST_Dimension(geom)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Dimension-function.md
2117068e109d-0
*geom* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Dimension-function.md
0e69112c679c-0
`INTEGER` representing the inherent dimension of *geom*\. If *geom* is null, then null is returned\. The values returned are as follows\.

| Returned value | Geometry subtype |
| --- | --- |
| 0 | Returned if *geom* is a `POINT` or `MULTIPOINT` subtype |
| 1 | Returned if *geom* is a `LINESTRING` or `MULTILINESTRING` subtype |
| 2 | Returned if *geom* is a `POLYGON` or `MULTIPOLYGON` subtype |
| 0 | Returned if *geom* is an empty `GEOMETRYCOLLECTION` subtype |
| Largest dimension of the components of the collection | Returned if *geom* is a `GEOMETRYCOLLECTION` subtype |
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Dimension-function.md
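The lookup above, including the recursive collection case, can be expressed compactly\. The following Python sketch is illustrative only \(the `inherent_dimension` helper and the subtype strings are assumptions for the sketch, not a Redshift API\):

```python
def inherent_dimension(subtype, components=()):
    # Dimension per subtype, matching the table above.
    dims = {
        "POINT": 0, "MULTIPOINT": 0,
        "LINESTRING": 1, "MULTILINESTRING": 1,
        "POLYGON": 2, "MULTIPOLYGON": 2,
    }
    if subtype == "GEOMETRYCOLLECTION":
        # An empty collection has dimension 0; otherwise the collection
        # takes the largest dimension among its components.
        if not components:
            return 0
        return max(inherent_dimension(c) for c in components)
    return dims[subtype]

print(inherent_dimension("LINESTRING"))                              # 1
print(inherent_dimension("GEOMETRYCOLLECTION", ("POINT", "POLYGON")))  # 2
```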
ee86df101ff5-0
The following SQL converts a well\-known text \(WKT\) representation of a four\-point LINESTRING to a GEOMETRY object and returns the dimension of the linestring\.

```
SELECT ST_Dimension(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'));
```

```
st_dimension
-------------
1
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Dimension-function.md
e1a376123a08-0
SVL views are system views that contain references to STL tables and logs for more detailed information\. These views provide quicker and easier access to commonly queried data found in those tables\.

**Note**
The SVL\_QUERY\_SUMMARY view only contains information about queries executed by Amazon Redshift, not other utility and DDL commands\. For a complete listing and information on all statements executed by Amazon Redshift, including DDL and utility commands, you can query the SVL\_STATEMENTTEXT view\.

**Topics**
+ [SVL\_COMPILE](r_SVL_COMPILE.md)
+ [SVL\_FEDERATED\_QUERY](r_SVL_FEDERATED_QUERY.md)
+ [SVL\_MULTI\_STATEMENT\_VIOLATIONS](r_SVL_MULTI_STATEMENT_VIOLATIONS.md)
+ [SVL\_MV\_REFRESH\_STATUS](r_SVL_MV_REFRESH_STATUS.md)
+ [SVL\_QERROR](r_SVL_QERROR.md)
+ [SVL\_QLOG](r_SVL_QLOG.md)
+ [SVL\_QUERY\_METRICS](r_SVL_QUERY_METRICS.md)
+ [SVL\_QUERY\_METRICS\_SUMMARY](r_SVL_QUERY_METRICS_SUMMARY.md)
+ [SVL\_QUERY\_QUEUE\_INFO](r_SVL_QUERY_QUEUE_INFO.md)
+ [SVL\_QUERY\_REPORT](r_SVL_QUERY_REPORT.md)
+ [SVL\_QUERY\_SUMMARY](r_SVL_QUERY_SUMMARY.md)
+ [SVL\_S3LIST](r_SVL_S3LIST.md)
+ [SVL\_S3LOG](r_SVL_S3LOG.md)
+ [SVL\_S3PARTITION](r_SVL_S3PARTITION.md)
+ [SVL\_S3PARTITION\_SUMMARY](r_SVL_S3PARTITION_SUMMARY.md)
+ [SVL\_S3QUERY](r_SVL_S3QUERY.md)
+ [SVL\_S3QUERY\_SUMMARY](r_SVL_S3QUERY_SUMMARY.md)
+ [SVL\_S3RETRIES](r_SVL_S3RETRIES.md)
+ [SVL\_STATEMENTTEXT](r_SVL_STATEMENTTEXT.md)
+ [SVL\_STORED\_PROC\_CALL](r_SVL_STORED_PROC_CALL.md)
+ [SVL\_STORED\_PROC\_MESSAGES](r_SVL_STORED_PROC_MESSAGES.md)
+ [SVL\_TERMINATE](r_SVL_TERMINATE.md)
+ [SVL\_UDF\_LOG](r_SVL_UDF_LOG.md)
+ [SVL\_USER\_INFO](r_SVL_USER_INFO.md)
+ [SVL\_VACUUM\_PERCENTAGE](r_SVL_VACUUM_PERCENTAGE.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/svl_views.md
56e2b398fc63-0
The SVL\_MV\_REFRESH\_STATUS view contains a row for the refresh activity of materialized views\. For more information about materialized views, see [Creating materialized views in Amazon Redshift](materialized-view-overview.md)\. SVL\_MV\_REFRESH\_STATUS is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_MV_REFRESH_STATUS.md
6431d696e11d-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_MV_REFRESH_STATUS.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_MV_REFRESH_STATUS.md
e469495e0382-0
To view the refresh status of materialized views, run the following query\.

```
select * from svl_mv_refresh_status;
```

This query returns the following sample output:

```
 db_name | userid |  schema   |  name   |  xid  |         starttime          |          endtime           |                     status
---------+--------+-----------+---------+-------+----------------------------+----------------------------+------------------------------------------------------
 dev     |    169 | mv_schema | mv_test |  6640 | 2020-02-14 02:26:53.497935 | 2020-02-14 02:26:53.556156 | Refresh successfully recomputed MV from scratch
 dev     |    166 | mv_schema | mv_test |  6517 | 2020-02-14 02:26:39.287438 | 2020-02-14 02:26:39.349539 | Refresh successfully updated MV incrementally
 dev     |    162 | mv_schema | mv_test |  6388 | 2020-02-14 02:26:27.863426 | 2020-02-14 02:26:27.918307 | Refresh successfully recomputed MV from scratch
 dev     |    161 | mv_schema | mv_test |  6323 | 2020-02-14 02:26:20.020717 | 2020-02-14 02:26:20.080002 | Refresh successfully updated MV incrementally
 dev     |    161 | mv_schema | mv_test |  6301 | 2020-02-14 02:26:05.796146 | 2020-02-14 02:26:07.853986 | Refresh successfully recomputed MV from scratch
 dev     |    153 | mv_schema | mv_test |  6024 | 2020-02-14 02:25:18.762335 | 2020-02-14 02:25:20.043462 | MV was already updated
 dev     |    143 | mv_schema | mv_test |  5557 | 2020-02-14 02:24:23.100601 | 2020-02-14 02:24:23.100633 | MV was already updated
 dev     |    141 | mv_schema | mv_test |  5447 | 2020-02-14 02:23:54.102837 | 2020-02-14 02:24:00.310166 | Refresh successfully updated MV incrementally
 dev     |      1 | mv_schema | mv_test |  5329 | 2020-02-14 02:22:26.328481 | 2020-02-14 02:22:28.369217 | Refresh successfully recomputed MV from scratch
 dev     |    138 | mv_schema | mv_test |  5290 | 2020-02-14 02:21:56.885093 | 2020-02-14 02:21:56.885098 | Refresh failed. MV was not found
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_MV_REFRESH_STATUS.md
42978e5f12c2-0
DATEDIFF returns the difference between the date parts of two date or time expressions\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEDIFF_function.md
2466a17453bd-0
```
DATEDIFF ( datepart, {date|timestamp}, {date|timestamp} )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEDIFF_function.md
2638df1bcedc-0
*datepart*
The specific part of the date value \(year, month, or day, for example\) that the function operates on\. For more information, see [Dateparts for Date or Time Stamp functions](r_Dateparts_for_datetime_functions.md)\.
Specifically, DATEDIFF determines the number of *datepart boundaries* that are crossed between two expressions\. For example, if you are calculating the difference in years between two dates, `12-31-2008` and `01-01-2009`, the function returns 1 year despite the fact that these dates are only one day apart\. If you are finding the difference in hours between two time stamps, `01-01-2009 8:30:00` and `01-01-2009 10:00:00`, the result is 2 hours\.
*date*\|*timestamp*
Date or timestamp columns, or expressions that implicitly convert to a date or timestamp\. The expressions must both contain the specified date part\. If the second date or time is later than the first, the result is positive\. If the second date or time is earlier than the first, the result is negative\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEDIFF_function.md
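The boundary\-crossing rule can be made concrete with a sketch\. The following Python helpers are illustrative of the rule described above \(an assumption modeled on the documented behavior, not Redshift's implementation\): truncate both inputs to the datepart, then count whole units between the truncated values\.

```python
from datetime import date, datetime

def datediff_year(d1: date, d2: date) -> int:
    # Year boundaries crossed, regardless of how far apart the days are.
    return d2.year - d1.year

def datediff_hour(t1: datetime, t2: datetime) -> int:
    # Truncate both timestamps to the hour, then count whole hours between.
    def trunc(t):
        return t.replace(minute=0, second=0, microsecond=0)
    return int((trunc(t2) - trunc(t1)).total_seconds()) // 3600

# One day apart, but one year boundary (2008 -> 2009) is crossed:
print(datediff_year(date(2008, 12, 31), date(2009, 1, 1)))  # 1
# 8:30 to 10:00 crosses two hour boundaries (9:00 and 10:00):
print(datediff_hour(datetime(2009, 1, 1, 8, 30),
                    datetime(2009, 1, 1, 10, 0)))           # 2
```

The same truncate\-then\-count idea explains why a one\-day gap can register as a full year: only the boundary crossings matter, not the elapsed time\.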
31fafe7e3e5d-0
BIGINT
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEDIFF_function.md
873dcf1e7a6a-0
Find the difference, in number of weeks, between two literal date values:

```
select datediff(week,'2009-01-01','2009-12-31') as numweeks;

numweeks
----------
52
(1 row)
```

Find the difference, in number of quarters, between a literal value in the past and today's date\. This example assumes that the current date is June 5, 2008\. You can name dateparts in full or abbreviate them\. The default column name for the DATEDIFF function is DATE\_DIFF\.

```
select datediff(qtr, '1998-07-01', current_date);

date_diff
-----------
40
(1 row)
```

This example joins the SALES and LISTING tables to calculate how many days after they were listed any tickets were sold for listings 1000 through 1005\. The longest wait for sales of these listings was 15 days, and the shortest was less than one day \(0 days\)\.

```
select priceperticket, datediff(day, listtime, saletime) as wait
from sales, listing
where sales.listid = listing.listid
and sales.listid between 1000 and 1005
order by wait desc, priceperticket desc;

priceperticket | wait
---------------+------
     96.00     |  15
    123.00     |  11
    131.00     |   9
    123.00     |   6
    129.00     |   4
     96.00     |   4
     96.00     |   0
(7 rows)
```

This example calculates the average number of hours sellers waited for all ticket sales\.

```
select avg(datediff(hours, listtime, saletime)) as avgwait
from sales, listing
where sales.listid = listing.listid;

avgwait
---------
465
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATEDIFF_function.md
fe6623ec1405-0
ST\_YMax returns the maximum second coordinate of an input geometry\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_YMax-function.md
425cccf0ab88-0
```
ST_YMax(geom)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_YMax-function.md
c4820a7f2450-0
*geom* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_YMax-function.md
1dda03a8aabb-0
`DOUBLE PRECISION` value of the maximum second coordinate\. If *geom* is empty or null, then null is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_YMax-function.md
0cd5a878b3be-0
The following SQL returns the largest second coordinate of a linestring\.

```
SELECT ST_YMax(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'));
```

```
st_ymax
-----------
29.31
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_YMax-function.md
ff4fd96c79ed-0
The VAR\_SAMP and VAR\_POP functions return the sample and population variance of a set of numeric values \(integer, decimal, or floating\-point\)\. The result of the VAR\_SAMP function is equivalent to the squared sample standard deviation of the same set of values\. VAR\_SAMP and VARIANCE are synonyms for the same function\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VARIANCE_functions.md
0da2115d523d-0
```
VAR_SAMP | VARIANCE ( [ DISTINCT | ALL ] expression)
VAR_POP ( [ DISTINCT | ALL ] expression)
```

The expression must have an integer, decimal, or floating\-point data type\. Regardless of the data type of the expression, the return type of this function is a double precision number\.

**Note**
The results of these functions might vary across data warehouse clusters, depending on the configuration of the cluster in each case\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VARIANCE_functions.md
ac4ff88240cc-0
When the sample variance \(VARIANCE or VAR\_SAMP\) is calculated for an expression that consists of a single value, the result of the function is NULL, not 0\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VARIANCE_functions.md
8bd90df30eeb-0
The following query returns the rounded sample and population variance of the NUMTICKETS column in the LISTING table\.

```
select avg(numtickets),
round(var_samp(numtickets)) varsamp,
round(var_pop(numtickets)) varpop
from listing;

avg | varsamp | varpop
-----+---------+--------
 10 |      54 |     54
(1 row)
```

The following query runs the same calculations but casts the results to decimal values\.

```
select avg(numtickets),
cast(var_samp(numtickets) as dec(10,4)) varsamp,
cast(var_pop(numtickets) as dec(10,4)) varpop
from listing;

avg | varsamp | varpop
-----+---------+---------
 10 | 53.6291 | 53.6288
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_VARIANCE_functions.md
6b84b34a7926-0
Amazon S3 provides eventual consistency for some operations\. Thus, it's possible that new data won't be available immediately after the upload, which can result in an incomplete data load or loading stale data\. You can manage data consistency by using a manifest file to load data\. For more information, see [Managing data consistency](managing-data-consistency.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/best-practices-preventing-load-data-errors.md
ac863a9651de-0
Renames a procedure or changes the owner\. Both the procedure name and data types, or signature, are required\. Only the owner or a superuser can rename a procedure\. Only a superuser can change the owner of a procedure\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_PROCEDURE.md
35ac5de050b0-0
```
ALTER PROCEDURE sp_name [ ( [ [ argname ] [ argmode ] argtype [, ...] ] ) ]
RENAME TO new_name
```

```
ALTER PROCEDURE sp_name [ ( [ [ argname ] [ argmode ] argtype [, ...] ] ) ]
OWNER TO { new_owner | CURRENT_USER | SESSION_USER }
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_PROCEDURE.md
28ae4c483d3e-0
*sp\_name*
The name of the procedure to be altered\. Either specify just the name of the procedure in the current search path, or use the format `schema_name.sp_procedure_name` to use a specific schema\.
*\[argname\] \[argmode\] argtype*
A list of argument names, argument modes, and data types\. Only the input data types are required, which are used to identify the stored procedure\. Alternatively, you can provide the full signature used to create the procedure, including the input and output parameters with their modes\.
*new\_name*
A new name for the stored procedure\.
*new\_owner* \| CURRENT\_USER \| SESSION\_USER
A new owner for the stored procedure\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_PROCEDURE.md
3fedffa0fbb5-0
The following example changes the name of a procedure from `first_quarter_revenue` to `quarterly_revenue`\.

```
ALTER PROCEDURE first_quarter_revenue(volume INOUT bigint, at_price IN numeric, result OUT int)
RENAME TO quarterly_revenue;
```

This example is equivalent to the following\.

```
ALTER PROCEDURE first_quarter_revenue(bigint, numeric)
RENAME TO quarterly_revenue;
```

The following example changes the owner of a procedure to `etl_user`\.

```
ALTER PROCEDURE quarterly_revenue(bigint, numeric)
OWNER TO etl_user;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_PROCEDURE.md
514f51099916-0
After you create your new database, you create tables to hold your database data\. You specify any column information for the table when you create the table\.

For example, to create a table named **testtable** with a single column named **testcol** for an integer data type, issue the following command:

```
create table testtable (testcol int);
```

The PG\_TABLE\_DEF system table contains information about all the tables in the cluster\. To verify the result, issue the following SELECT command to query the PG\_TABLE\_DEF system table\.

```
select * from pg_table_def where tablename = 'testtable';
```

The query result should look something like this:

```
schemaname|tablename|column |  type |encoding|distkey|sortkey | notnull
----------+---------+-------+-------+--------+-------+--------+---------
public    |testtable|testcol|integer|none    |f      | 0      | f
(1 row)
```

By default, new database objects, such as tables, are created in a schema named "public"\. For more information about schemas, see [Schemas](r_Schemas_and_tables.md) in the Managing Database Security section\.

The `encoding`, `distkey`, and `sortkey` columns are used by Amazon Redshift for parallel processing\. For more information about designing tables that incorporate these elements, see [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_creating_table.md
293f2831f478-0
Synonym of SHA1 function\. See [SHA1 function](SHA1.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/FUNC_SHA1.md
cc97abd2a31b-0
Some Amazon Redshift queries are distributed and executed on the compute nodes, and other queries execute exclusively on the leader node\. The leader node distributes SQL to the compute nodes whenever a query references user\-created tables or system tables \(tables with an STL or STV prefix and system views with an SVL or SVV prefix\)\. A query that references only catalog tables \(tables with a PG prefix, such as PG\_TABLE\_DEF, which reside on the leader node\) or that does not reference any tables, runs exclusively on the leader node\. Some Amazon Redshift SQL functions are supported only on the leader node and are not supported on the compute nodes\. A query that uses a leader\-node function must execute exclusively on the leader node, not on the compute nodes, or it will return an error\. The documentation for each function that must run exclusively on the leader node includes a note stating that the function will return an error if it references user\-defined tables or Amazon Redshift system tables\. See [Leader node–only functions](c_SQL_functions_leader_node_only.md) for a list of functions that run exclusively on the leader node\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_sql-functions-leader-node.md
ead19557e854-0
The CURRENT\_SCHEMA function is a leader\-node only function\. In this example, the query does not reference a table, so it runs exclusively on the leader node\. ``` select current_schema(); ``` The result is as follows\. ``` current_schema --------------- public (1 row) ``` In the next example, the query references a system catalog table, so it runs exclusively on the leader node\. ``` select * from pg_table_def where schemaname = current_schema() limit 1; schemaname | tablename | column | type | encoding | distkey | sortkey | notnull ------------+-----------+--------+----------+----------+---------+---------+--------- public | category | catid | smallint | none | t | 1 | t (1 row) ``` In the next example, the query references an Amazon Redshift system table that resides on the compute nodes, so it returns an error\. ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_sql-functions-leader-node.md
ead19557e854-1
In the next example, the query references an Amazon Redshift system table that resides on the compute nodes, so it returns an error\. ``` select current_schema(), userid from users; INFO: Function "current_schema()" not supported. ERROR: Specified types or functions (one per INFO message) not supported on Amazon Redshift tables. ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_sql-functions-leader-node.md
11c1da59a56b-0
Contains the current classification rules for WLM\. STV\_WLM\_CLASSIFICATION\_CONFIG is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_CLASSIFICATION_CONFIG.md
cb660c70df38-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_WLM_CLASSIFICATION_CONFIG.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_CLASSIFICATION_CONFIG.md
ad007b2a9e1c-0
``` select * from STV_WLM_CLASSIFICATION_CONFIG; id | condition | action_seq | action | action_service_class ---+---------------------------------------------+------------+--------+--------------------- 1 | (system user) and (query group: health) | 0 | assign | 1 2 | (system user) and (query group: metrics) | 0 | assign | 2 3 | (system user) and (query group: cmstats) | 0 | assign | 3 4 | (system user) | 0 | assign | 4 5 | (super user) and (query group: superuser) | 0 | assign | 5 6 | (query group: querygroup1) | 0 | assign | 6 7 | (user group: usergroup1) | 0 | assign | 6 8 | (user group: usergroup2) | 0 | assign | 7 9 | (query group: querygroup3) | 0 | assign | 8 10 | (query group: querygroup4) | 0 | assign | 9 11 | (user group: usergroup4) | 0 | assign | 9 12 | (query group: querygroup*) | 0 | assign | 10
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_CLASSIFICATION_CONFIG.md
ad007b2a9e1c-1
11 | (user group: usergroup4)                    | 0          | assign | 9
12 | (query group: querygroup*)                  | 0          | assign | 10
13 | (user group: usergroup*)                    | 0          | assign | 10
14 | (querytype: any)                            | 0          | assign | 11
(14 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_CLASSIFICATION_CONFIG.md
9fe9c2059246-0
Searches a string for a regular expression pattern and returns an integer that indicates the number of times the pattern occurs in the string\. If no match is found, then the function returns 0\. For more information about regular expressions, see [POSIX operators](pattern-matching-conditions-posix.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_COUNT.md
e0daaf4c4a7d-0
``` REGEXP_COUNT ( source_string, pattern [, position ] ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_COUNT.md
07e8f47727e6-0
*source\_string*
A string expression, such as a column name, to be searched\.

*pattern*
A string literal that represents a SQL standard regular expression pattern\.

*position*
A positive integer that indicates the position within *source\_string* to begin searching\. The position is based on the number of characters, not bytes, so that multibyte characters are counted as single characters\. The default is 1\. If *position* is less than 1, the search begins at the first character of *source\_string*\. If *position* is greater than the number of characters in *source\_string*, the result is 0\.
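The character\-based counting and *position* handling described above can be sketched with an illustrative Python analogue\. The function name and clamping behavior here are modeled on the description, not on Redshift's implementation\.

```python
import re

def regexp_count(source_string: str, pattern: str, position: int = 1) -> int:
    """Illustrative analogue of Redshift's REGEXP_COUNT.

    position is 1-based and counted in characters; values below 1
    start at the first character, and values past the end yield 0.
    """
    start = max(position, 1) - 1          # clamp, then convert to 0-based
    if start >= len(source_string):
        return 0
    # count non-overlapping matches, as the SQL function does
    return len(re.findall(pattern, source_string[start:]))

# 26 letters consumed three at a time -> 8 non-overlapping matches
print(regexp_count('abcdefghijklmnopqrstuvwxyz', '[a-z]{3}'))  # 8
```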
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_COUNT.md
33f40fa24675-0
Integer
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_COUNT.md
7aa058953997-0
The following example counts the number of times a three\-letter sequence occurs\. ``` select regexp_count('abcdefghijklmnopqrstuvwxyz', '[a-z]{3}'); regexp_count -------------- 8 (1 row) ``` The following example counts the number of times the top\-level domain name is either `org` or `edu`\. ``` select email, regexp_count(email,'@[^.]*\\.(org|edu)') from users limit 5; email | regexp_count --------------------------------------------+-------------- elementum@semperpretiumneque.ca | 0 Integer.mollis.Integer@tristiquealiquet.org | 1 lorem.ipsum@Vestibulumante.com | 0 euismod@turpis.org | 1 non.justo.Proin@ametconsectetuer.edu | 1 ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_COUNT.md
a9bd0bc20827-0
**Topics**
+ [Choosing a column compression type](t_Compressing_data_on_disk.md)
+ [Choosing a data distribution style](t_Distributing_data.md)
+ [Choosing sort keys](t_Sorting_data.md)
+ [Defining constraints](t_Defining_constraints.md)
+ [Analyzing table design](c_analyzing-table-design.md)
+ [Tutorial: Tuning table design](tutorial-tuning-tables.md)

A data warehouse system has very different design goals compared to a typical transaction\-oriented relational database system\. An online transaction processing \(OLTP\) application is focused primarily on single row transactions, inserts, and updates\. Amazon Redshift is optimized for very fast execution of complex analytic queries against very large data sets\. Because of the massive amount of data involved in data warehousing, you must specifically design your database to take full advantage of every available performance optimization\.

This section explains how to choose and implement compression encodings, data distribution keys, sort keys, and table constraints, and it presents best practices for making these design decisions\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Creating_tables.md
68141e04937e-0
**Topics**
+ [CASE expression](r_CASE_function.md)
+ [COALESCE](r_COALESCE.md)
+ [DECODE expression](r_DECODE_expression.md)
+ [GREATEST and LEAST](r_GREATEST_LEAST.md)
+ [NVL expression](r_NVL_function.md)
+ [NVL2 expression](r_NVL2.md)
+ [NULLIF expression](r_NULLIF_function.md)

Amazon Redshift supports some conditional expressions that are extensions to the SQL standard\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_conditional_expressions.md
398780e63f8a-0
Records the configuration for WLM query monitoring rules \(QMR\)\. For more information, see [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md)\. STV\_WLM\_QMR\_CONFIG is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_QMR_CONFIG.md
95b2d0a513fd-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_WLM_QMR_CONFIG.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_QMR_CONFIG.md
6477b63aa86a-0
To view the QMR rule definitions for all service classes greater than 5 \(which includes user\-defined queues\), run the following query\. For a list of service class IDs, see [WLM service class IDs](cm-c-wlm-system-tables-and-views.md#wlm-service-class-ids)\. ``` Select * from stv_wlm_qmr_config where service_class > 5 order by service_class; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_QMR_CONFIG.md
4120b02fdbe9-0
The DENSE\_RANK window function determines the rank of a value in a group of values, based on the ORDER BY expression in the OVER clause\. If the optional PARTITION BY clause is present, the rankings are reset for each group of rows\. Rows with equal values for the ranking criteria receive the same rank\. The DENSE\_RANK function differs from RANK in one respect: If two or more rows tie, there is no gap in the sequence of ranked values\. For example, if two rows are ranked 1, the next rank is 2\. You can have ranking functions with different PARTITION BY and ORDER BY clauses in the same query\.
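The gap behavior described above can be illustrated outside SQL with a small Python sketch\. This is an analogue of the two ranking functions over a single descending ordering, not Redshift's implementation; the function name is hypothetical\.

```python
def rank_values(values):
    """Return (value, dense_rank, rank) per row, ordered descending.
    Ties share a rank; RANK then skips numbers (position of the first
    tied row), while DENSE_RANK stays consecutive."""
    ordered = sorted(values, reverse=True)
    rank, dense = {}, {}
    prev, d = None, 0
    for i, v in enumerate(ordered, start=1):
        if v != prev:
            d += 1
            rank[v] = i        # RANK: 1-based position of first tied row
            dense[v] = d       # DENSE_RANK: consecutive group number
            prev = v
    return [(v, dense[v], rank[v]) for v in ordered]

# the qty values from the WINSALES examples below
print(rank_values([40, 30, 30, 20, 20, 20, 15, 10, 10, 10, 10]))
```

Here the four rows tied at 10 get dense rank 5 but regular rank 8, matching the side\-by\-side example later in this section\.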
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_DENSE_RANK.md
d1889f94aff0-0
``` DENSE_RANK () OVER ( [ PARTITION BY expr_list ] [ ORDER BY order_list ] ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_DENSE_RANK.md
60cbccd54372-0
\( \)
The function takes no arguments, but the empty parentheses are required\.

OVER
The window clauses for the DENSE\_RANK function\.

PARTITION BY *expr\_list*
Optional\. One or more expressions that define the window\.

ORDER BY *order\_list*
Optional\. The expression on which the ranking values are based\. If no PARTITION BY is specified, ORDER BY uses the entire table\. If ORDER BY is omitted, the return value is 1 for all rows\. If ORDER BY doesn't produce a unique ordering, the order of the rows is nondeterministic\. For more information, see [Unique ordering of data for window functions](r_Examples_order_by_WF.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_DENSE_RANK.md
04bc30c5c73a-0
INTEGER
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_DENSE_RANK.md
9206e30c4a29-0
The following example orders the table by the quantity sold \(in descending order\) and assigns both a dense rank and a regular rank to each row\. The results are sorted after the window function results are applied\.

```
select salesid, qty,
dense_rank() over(order by qty desc) as d_rnk,
rank() over(order by qty desc) as rnk
from winsales
order by 2,1;

salesid | qty | d_rnk | rnk
---------+-----+-------+-----
10001 |  10 |     5 |   8
10006 |  10 |     5 |   8
30001 |  10 |     5 |   8
40005 |  10 |     5 |   8
30003 |  15 |     4 |   7
20001 |  20 |     3 |   4
20002 |  20 |     3 |   4
30004 |  20 |     3 |   4
10005 |  30 |     2 |   2
30007 |  30 |     2 |   2
40001 |  40 |     1 |   1
(11 rows)
```

Note the difference in rankings assigned to the same set of rows when the DENSE\_RANK and RANK functions are used side by side in the same query\. For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_DENSE_RANK.md
9206e30c4a29-1
The following example partitions the table by SELLERID, orders each partition by the quantity \(in descending order\), and assigns a dense rank to each row\. The results are sorted after the window function results are applied\.

```
select salesid, sellerid, qty,
dense_rank() over(partition by sellerid order by qty desc) as d_rnk
from winsales
order by 2,3,1;

salesid | sellerid | qty | d_rnk
---------+----------+-----+-------
10001 |        1 |  10 |     2
10006 |        1 |  10 |     2
10005 |        1 |  30 |     1
20001 |        2 |  20 |     1
20002 |        2 |  20 |     1
30001 |        3 |  10 |     4
30003 |        3 |  15 |     3
30004 |        3 |  20 |     2
30007 |        3 |  30 |     1
40005 |        4 |  10 |     2
40001 |        4 |  40 |     1
(11 rows)
```

For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_DENSE_RANK.md
abd50503e814-0
Commits the current transaction to the database\. This command makes the database updates from the transaction permanent\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COMMIT.md
cbc7c20be639-0
``` COMMIT [ WORK | TRANSACTION ] ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COMMIT.md
1e31f628c9c6-0
WORK
Optional keyword\. This keyword isn't supported within a stored procedure\.

TRANSACTION
Optional keyword\. WORK and TRANSACTION are synonyms\. Neither is supported within a stored procedure\.

For information about using COMMIT within a stored procedure, see [Managing transactions](stored-procedure-transaction-management.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COMMIT.md
14bd7f983deb-0
Each of the following examples commits the current transaction to the database: ``` commit; ``` ``` commit work; ``` ``` commit transaction; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COMMIT.md
b429c7a013a5-0
You can use query plans to identify candidates for optimizing the distribution style\.

After making your initial design decisions, create your tables, load them with data, and test them\. Use a test dataset that is as close as possible to the real data\. Measure load times to use as a baseline for comparisons\.

Evaluate queries that are representative of the most costly queries you expect to execute; specifically, queries that use joins and aggregations\. Compare execution times for various design options\. When you compare execution times, do not count the first time the query is executed, because the first run time includes the compilation time\.

**DS\_DIST\_NONE**
No redistribution is required, because corresponding slices are collocated on the compute nodes\. You will typically have only one DS\_DIST\_NONE step, the join between the fact table and one dimension table\.

**DS\_DIST\_ALL\_NONE**
No redistribution is required, because the inner join table used DISTSTYLE ALL\. The entire table is located on every node\.

**DS\_DIST\_INNER**
The inner table is redistributed\.

**DS\_DIST\_OUTER**
The outer table is redistributed\.

**DS\_BCAST\_INNER**
A copy of the entire inner table is broadcast to all the compute nodes\.

**DS\_DIST\_ALL\_INNER**
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_data_redistribution.md
b429c7a013a5-1
A copy of the entire inner table is broadcast to all the compute nodes\.

**DS\_DIST\_ALL\_INNER**
The entire inner table is redistributed to a single slice because the outer table uses DISTSTYLE ALL\.

**DS\_DIST\_BOTH**
Both tables are redistributed\.

DS\_DIST\_NONE and DS\_DIST\_ALL\_NONE are good\. They indicate that no distribution was required for that step because all of the joins are collocated\.

DS\_DIST\_INNER means that the step will probably have a relatively high cost because the inner table is being redistributed to the nodes\. DS\_DIST\_INNER indicates that the outer table is already properly distributed on the join key\. Set the inner table's distribution key to the join key to convert this to DS\_DIST\_NONE\. If distributing the inner table on the join key is not possible because the outer table is not distributed on the join key, evaluate whether to use ALL distribution for the inner table\. If the table is relatively slow moving, that is, it is not updated frequently or extensively, and it is large enough to carry a high redistribution cost, change the distribution style to ALL and test again\. ALL distribution causes increased load times, so when you retest, include the load time in your evaluation factors\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_data_redistribution.md
b429c7a013a5-2
DS\_DIST\_ALL\_INNER is not good\. It means the entire inner table is redistributed to a single slice because the outer table uses DISTSTYLE ALL, so that a copy of the entire outer table is located on each node\. This results in inefficient serial execution of the join on a single node instead of taking advantage of parallel execution using all of the nodes\. DISTSTYLE ALL is meant to be used only for the inner join table\. Instead, specify a distribution key or use even distribution for the outer table\.

DS\_BCAST\_INNER and DS\_DIST\_BOTH are not good\. Usually these redistributions occur because the tables are not joined on their distribution keys\. If the fact table does not already have a distribution key, specify the joining column as the distribution key for both tables\. If the fact table already has a distribution key on another column, you should evaluate whether changing the distribution key to collocate this join will improve overall performance\. If changing the distribution key of the outer table is not an optimal choice, you can achieve collocation by specifying DISTSTYLE ALL for the inner table\.

The following example shows a portion of a query plan with DS\_BCAST\_INNER and DS\_DIST\_NONE labels\.

```
-> XN Hash Join DS_BCAST_INNER (cost=112.50..3272334142.59 rows=170771 width=84)
        Hash Cond: ("outer".venueid = "inner".venueid)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_data_redistribution.md
b429c7a013a5-3
Hash Cond: ("outer".venueid = "inner".venueid) -> XN Hash Join DS_BCAST_INNER (cost=109.98..3167290276.71 rows=172456 width=47) Hash Cond: ("outer".eventid = "inner".eventid) -> XN Merge Join DS_DIST_NONE (cost=0.00..6286.47 rows=172456 width=30) Merge Cond: ("outer".listid = "inner".listid) -> XN Seq Scan on listing (cost=0.00..1924.97 rows=192497 width=14) -> XN Seq Scan on sales (cost=0.00..1724.56 rows=172456 width=24) ``` After changing the dimension tables to use DISTSTYLE ALL, the query plan for the same query shows DS\_DIST\_ALL\_NONE in place of DS\_BCAST\_INNER\. Also, there is a dramatic change in the relative cost for the join steps\. ``` -> XN Hash Join DS_DIST_ALL_NONE (cost=112.50..14142.59 rows=170771 width=84) Hash Cond: ("outer".venueid = "inner".venueid) -> XN Hash Join DS_DIST_ALL_NONE (cost=109.98..10276.71 rows=172456 width=47)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_data_redistribution.md
b429c7a013a5-4
-> XN Hash Join DS_DIST_ALL_NONE (cost=109.98..10276.71 rows=172456 width=47) Hash Cond: ("outer".eventid = "inner".eventid) -> XN Merge Join DS_DIST_NONE (cost=0.00..6286.47 rows=172456 width=30) Merge Cond: ("outer".listid = "inner".listid) -> XN Seq Scan on listing (cost=0.00..1924.97 rows=192497 width=14) -> XN Seq Scan on sales (cost=0.00..1724.56 rows=172456 width=24) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_data_redistribution.md
0905bd9a190d-0
**Topics**
+ [CURRENT\_DATABASE](r_CURRENT_DATABASE.md)
+ [CURRENT\_SCHEMA](r_CURRENT_SCHEMA.md)
+ [CURRENT\_SCHEMAS](r_CURRENT_SCHEMAS.md)
+ [CURRENT\_USER](r_CURRENT_USER.md)
+ [CURRENT\_USER\_ID](r_CURRENT_USER_ID.md)
+ [HAS\_DATABASE\_PRIVILEGE](r_HAS_DATABASE_PRIVILEGE.md)
+ [HAS\_SCHEMA\_PRIVILEGE](r_HAS_SCHEMA_PRIVILEGE.md)
+ [HAS\_TABLE\_PRIVILEGE](r_HAS_TABLE_PRIVILEGE.md)
+ [PG\_BACKEND\_PID](PG_BACKEND_PID.md)
+ [PG\_GET\_COLS](PG_GET_COLS.md)
+ [PG\_GET\_LATE\_BINDING\_VIEW\_COLS](PG_GET_LATE_BINDING_VIEW_COLS.md)
+ [PG\_LAST\_COPY\_COUNT](PG_LAST_COPY_COUNT.md)
+ [PG\_LAST\_COPY\_ID](PG_LAST_COPY_ID.md)
+ [PG\_LAST\_UNLOAD\_ID](PG_LAST_UNLOAD_ID.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_System_information_functions.md
0905bd9a190d-1
+ [PG\_LAST\_UNLOAD\_ID](PG_LAST_UNLOAD_ID.md)
+ [PG\_LAST\_QUERY\_ID](PG_LAST_QUERY_ID.md)
+ [PG\_LAST\_UNLOAD\_COUNT](PG_LAST_UNLOAD_COUNT.md)
+ [SESSION\_USER](r_SESSION_USER.md)
+ [SLICE\_NUM Function](r_SLICE_NUM.md)
+ [USER](r_USER.md)
+ [VERSION](r_VERSION.md)

Amazon Redshift supports numerous system information functions\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_System_information_functions.md
5bd97fa1ed2d-0
Amazon S3 supports both server\-side encryption and client\-side encryption\. This topic discusses the differences between server\-side and client\-side encryption and describes the steps to use client\-side encryption with Amazon Redshift\. Server\-side encryption is transparent to Amazon Redshift\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_uploading-encrypted-data.md
e0fce4f4abbf-0
Server\-side encryption is data encryption at rest—that is, Amazon S3 encrypts your data as it uploads it and decrypts it for you when you access it\. When you load tables using a COPY command, there is no difference in the way you load from server\-side encrypted or unencrypted objects on Amazon S3\. For more information about server\-side encryption, see [Using Server\-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html) in the *Amazon Simple Storage Service Developer Guide*\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_uploading-encrypted-data.md
e0fce4f4abbf-1
in the *Amazon Simple Storage Service Developer Guide*\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_uploading-encrypted-data.md
271903155e69-0
In client\-side encryption, your client application manages encryption of your data, the encryption keys, and related tools\. You can upload data to an Amazon S3 bucket using client\-side encryption, and then load the data using the COPY command with the ENCRYPTED option and a private encryption key to provide greater security\.

You encrypt your data using envelope encryption\. With *envelope encryption*, your application handles all encryption exclusively\. Your private encryption keys and your unencrypted data are never sent to AWS, so it's very important that you safely manage your encryption keys\. If you lose your encryption keys, you won't be able to decrypt your data, and you can't recover your encryption keys from AWS\.

Envelope encryption combines the performance of fast symmetric encryption while maintaining the greater security that key management with asymmetric keys provides\. A one\-time\-use symmetric key \(the envelope symmetric key\) is generated by your Amazon S3 encryption client to encrypt your data, then that key is encrypted by your master key and stored alongside your data in Amazon S3\. When Amazon Redshift accesses your data during a load, the encrypted symmetric key is retrieved and decrypted with your master key, then the data is decrypted\.

To work with Amazon S3 client\-side encrypted data in Amazon Redshift, follow the steps outlined in [Protecting Data Using Client\-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html) in the *Amazon Simple Storage Service Developer Guide*, with the additional requirements that you use:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_uploading-encrypted-data.md
271903155e69-1
+ **Symmetric encryption –** The AWS SDK for Java `AmazonS3EncryptionClient` class uses envelope encryption, described preceding, which is based on symmetric key encryption\. Use this class to create an Amazon S3 client to upload client\-side encrypted data\.
+ **A 256\-bit AES master symmetric key –** A master key encrypts the envelope key\. You pass the master key to your instance of the `AmazonS3EncryptionClient` class\. Save this key, because you will need it to copy data into Amazon Redshift\.
+ **Object metadata to store encrypted envelope key –** By default, Amazon S3 stores the envelope key as object metadata for the `AmazonS3EncryptionClient` class\. The encrypted envelope key that is stored as object metadata is used during the decryption process\.

**Note**
If you get a cipher encryption error message when you use the encryption API for the first time, your version of the JDK may have a Java Cryptography Extension \(JCE\) jurisdiction policy file that limits the maximum key length for encryption and decryption transformations to 128 bits\. For information about addressing this issue, go to [Specifying Client\-Side Encryption Using the AWS SDK for Java](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryptionUpload.html) in the *Amazon Simple Storage Service Developer Guide*\.
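The envelope flow described above can be sketched in Python\. This is a structural illustration only: `toy_cipher` is a stand\-in for AES \(a keyed XOR keystream, **not** real cryptography\), used so the generate/wrap/unwrap sequence is runnable; all names are hypothetical and nothing here reflects the actual `AmazonS3EncryptionClient` API\.

```python
import os
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Placeholder symmetric cipher: XOR with a SHAKE-256 keystream.
    NOT real cryptography -- it only stands in for AES so the envelope
    flow below runs. The same call both encrypts and decrypts."""
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

# --- client side: encrypt before upload ---
master_key = os.urandom(32)        # your 256-bit master key, kept by you
envelope_key = os.urandom(32)      # one-time-use envelope (data) key
plaintext = b"rows destined for Amazon S3"
ciphertext = toy_cipher(envelope_key, plaintext)
# the wrapped envelope key travels with the object (as S3 object metadata)
wrapped_key = toy_cipher(master_key, envelope_key)

# --- load side: what happens during decryption ---
unwrapped = toy_cipher(master_key, wrapped_key)   # recover envelope key
recovered = toy_cipher(unwrapped, ciphertext)     # recover the data
assert recovered == plaintext
```

The point of the structure is that only the small envelope key is ever wrapped by the master key; the master key itself never leaves your side\.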
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_uploading-encrypted-data.md
271903155e69-2
For information about loading client\-side encrypted files into your Amazon Redshift tables using the COPY command, see [Loading encrypted data files from Amazon S3](c_loading-encrypted-files.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_uploading-encrypted-data.md
6bb06c5b259b-0
For an example of how to use the AWS SDK for Java to upload client\-side encrypted data, go to [Example 1: Encrypt and Upload a File Using a Client\-Side Symmetric Master Key](https://docs.aws.amazon.com/AmazonS3/latest/dev/encrypt-client-side-symmetric-master-key.html) in the *Amazon Simple Storage Service Developer Guide*\. The example shows the choices you must make during client\-side encryption so that the data can be loaded in Amazon Redshift\. Specifically, the example shows using object metadata to store the encrypted envelope key and the use of a 256\-bit AES master symmetric key\. The example provides code that uses the AWS SDK for Java to create a 256\-bit AES symmetric master key and save it to a file\. Then the example uploads an object to Amazon S3 using an S3 encryption client that first encrypts sample data on the client side\. The example also downloads the object and verifies that the data is the same\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_uploading-encrypted-data.md
f140bbb9f4a3-0
These functions return the specified number of leftmost or rightmost characters from a character string\. The number is based on the number of characters, not bytes, so that multibyte characters are counted as single characters\.
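Because Python strings are sequences of characters rather than bytes, a simple slicing sketch mirrors this character\-based behavior\. The function names are illustrative analogues, not Redshift's implementation\.

```python
def left(s: str, n: int) -> str:
    """Illustrative analogue of LEFT: the first n characters."""
    return s[:max(n, 0)]

def right(s: str, n: int) -> str:
    """Illustrative analogue of RIGHT: the last n characters."""
    return s[-n:] if n > 0 else ""

# Multibyte characters count as single characters, not bytes:
city = "Düsseldorf"        # 'ü' is one character but two UTF-8 bytes
print(left(city, 4))       # Düss
print(right(city, 4))      # dorf
```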
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LEFT.md