The following query returns the number of rows loaded by the latest COPY command in the current session.

```
select pg_last_copy_count();

pg_last_copy_count
--------------------
192497
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_COPY_COUNT.md
Following, you can find a discussion about how type conversion rules and data type compatibility work in Amazon Redshift.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Type_conversion.md
Data type matching and matching of literal values and constants to data types occurs during various database operations, including the following:
+ Data manipulation language (DML) operations on tables
+ UNION, INTERSECT, and EXCEPT queries
+ CASE expressions
+ Evaluation of predicates, such as LIKE and IN
+ Evaluation of SQL functions that do comparisons or extractions of data
+ Comparisons with mathematical operators

The results of these operations depend on type conversion rules and data type compatibility. *Compatibility* implies that a one-to-one matching of a certain value and a certain data type is not always required. Because some data types are *compatible*, an implicit conversion, or *coercion*, is possible (for more information, see [Implicit conversion types](#implicit-conversion-types)). When data types are incompatible, you can sometimes convert a value from one data type to another by using an explicit conversion function.
Note the following compatibility and conversion rules:
+ In general, data types that fall into the same type category (such as different numeric data types) are compatible and can be implicitly converted. For example, with implicit conversion you can insert a decimal value into an integer column. The decimal is rounded to produce a whole number. Or you can extract a numeric value, such as `2008`, from a date and insert that value into an integer column.
+ Numeric data types enforce overflow conditions that occur when you attempt to insert out-of-range values. For example, a decimal value with a precision of 5 does not fit into a decimal column that was defined with a precision of 4. An integer or the whole part of a decimal is never truncated; however, the fractional part of a decimal can be rounded up or down, as appropriate. However, results of explicit casts of values selected from tables are not rounded.
+ Different types of character strings are compatible; VARCHAR column strings containing single-byte data and CHAR column strings are comparable and implicitly convertible. VARCHAR strings that contain multibyte data are not comparable. Also, you can convert a character string to a date, timestamp, or numeric value if the string is an appropriate literal value; any leading or trailing spaces are ignored. Conversely, you can convert a date, timestamp, or numeric value to a fixed-length or variable-length character string.
**Note**
A character string that you want to cast to a numeric type must contain a character representation of a number. For example, you can cast the strings `'1.0'` or `'5.9'` to decimal values, but you cannot cast the string `'ABC'` to any numeric type.

+ If you compare numeric values with character strings, the numeric values are converted to character strings. To enforce the opposite conversion (converting character strings to numeric values), use an explicit function, such as [CAST and CONVERT](r_CAST_function.md).
+ To convert 64-bit DECIMAL or NUMERIC values to a higher precision, you must use an explicit conversion function such as the CAST or CONVERT functions.
+ When converting DATE or TIMESTAMP to TIMESTAMPTZ, DATE or TIMESTAMP are assumed to use the current session time zone. The session time zone is UTC by default. For more information about setting the session time zone, see [timezone](r_timezone_config.md).
+ Similarly, TIMESTAMPTZ is converted to DATE or TIMESTAMP based on the current session time zone. The session time zone is UTC by default. After the conversion, time zone information is dropped.
+ Character strings that represent a time stamp with time zone specified are converted to TIMESTAMPTZ using the specified time zone. If the time zone is omitted, the current session time zone is used, which is UTC by default.
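The rules above can be illustrated with a minimal sketch (the table and column names here are hypothetical, not from the guide):

```
-- Hypothetical table for illustration
create table conversion_demo (qty integer);

-- Implicit conversion in an assignment: the decimal is rounded to a whole number
insert into conversion_demo values (3.6);

-- Explicit conversion of a numeric string succeeds...
select cast('5.9' as decimal(4,1));

-- ...but a non-numeric string cannot be cast to any numeric type (this errors)
select cast('ABC' as decimal(4,1));

-- Comparing a number with a string converts the number to a string;
-- use CAST to force the numeric comparison instead
select cast('100' as integer) > 99;
```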
There are two types of implicit conversions:
+ Implicit conversions in assignments, such as setting values in INSERT or UPDATE commands.
+ Implicit conversions in expressions, such as performing comparisons in the WHERE clause.

The table following lists the data types that can be converted implicitly in assignments or expressions. You can also use an explicit conversion function to perform these conversions. [See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_Type_conversion.html)

**Note**
Implicit conversions between TIMESTAMPTZ, TIMESTAMP, DATE, or character strings use the current session time zone. For information about setting the current time zone, see [timezone](r_timezone_config.md).
The GEOMETRY data type can't be implicitly converted to any other data type. For more information, see [CAST and CONVERT functions](r_CAST_function.md).
When you load data into a table, Amazon Redshift distributes the rows of the table to each of the node slices according to the table's distribution style. The number of slices per node depends on the node size of the cluster. For example, the dc2.large cluster that you are using in this tutorial has four nodes with two slices each, so the cluster has a total of eight slices. The nodes all participate in parallel query execution, working on data that is distributed across the slices. When you execute a query, the query optimizer redistributes the rows to the compute nodes as needed to perform any joins and aggregations. Redistribution might involve either sending specific rows to nodes for joining or broadcasting an entire table to all of the nodes.

You should assign distribution styles to achieve these goals:
+ Collocate the rows from joining tables. When the rows for joining columns are on the same slices, less data needs to be moved during query execution.
+ Distribute data evenly among the slices in a cluster. If data is distributed evenly, workload can be allocated evenly to all the slices.

These goals may conflict in some cases, and you will need to evaluate which strategy is the best choice for overall system performance. For example, even distribution might place all matching values for a column on the same slice. If a query uses an equality filter on that column, the slice with those values will carry a disproportionate share of the workload. If tables are collocated based on a distribution key, the rows might be distributed unevenly to the slices because the keys are distributed unevenly through the table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-distribution.md
In this step, you evaluate the distribution of the SSB tables with respect to the goals of data distribution, and then select the optimum distribution styles for the tables.
When you create a table, you designate one of three distribution styles: KEY, ALL, or EVEN.

**KEY distribution**
The rows are distributed according to the values in one column. The leader node will attempt to place matching values on the same node slice. If you distribute a pair of tables on the joining keys, the leader node collocates the rows on the slices according to the values in the joining columns so that matching values from the common columns are physically stored together.

**ALL distribution**
A copy of the entire table is distributed to every node. Where EVEN distribution or KEY distribution place only a portion of a table's rows on each node, ALL distribution ensures that every row is collocated for every join that the table participates in.

**EVEN distribution**
The rows are distributed across the slices in a round-robin fashion, regardless of the values in any particular column. EVEN distribution is appropriate when a table does not participate in joins or when there is not a clear choice between KEY distribution and ALL distribution. EVEN distribution is the default distribution style.

For more information, see [Distribution styles](c_choosing_dist_sort.md).
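As a sketch, the three styles are declared in the CREATE TABLE statement (the table and column names here are hypothetical):

```
-- KEY distribution: rows are hashed on the distribution key column
create table orders_key (order_id int, part_id int)
diststyle key distkey (part_id);

-- ALL distribution: a full copy of the table is placed on every node
create table region_all (region_id int, name varchar(25))
diststyle all;

-- EVEN distribution (the default): round-robin across slices
create table events_even (event_id int, payload varchar(100))
diststyle even;
```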
When you execute a query, the query optimizer redistributes the rows to the compute nodes as needed to perform any joins and aggregations. By locating the data where it needs to be before the query is executed, you can minimize the impact of the redistribution step.

The first goal is to distribute the data so that the matching rows from joining tables are collocated, which means that the matching rows from joining tables are located on the same node slice.

1. To look for redistribution steps in the query plan, execute an EXPLAIN command followed by the query. This example uses Query 2 from our set of test queries.

   ```
   explain
   select sum(lo_revenue), d_year, p_brand1
   from lineorder, dwdate, part, supplier
   where lo_orderdate = d_datekey
   and lo_partkey = p_partkey
   and lo_suppkey = s_suppkey
   and p_category = 'MFGR#12'
   and s_region = 'AMERICA'
   group by d_year, p_brand1
   order by d_year, p_brand1;
   ```

   The following shows a portion of the query plan. Look for labels that begin with *DS_BCAST* or *DS_DIST*.
   ```
   XN Merge  (cost=1038007224737.84..1038007224738.54 rows=280 width=20)
     Merge Key: dwdate.d_year, part.p_brand1
     ->  XN Network  (cost=1038007224737.84..1038007224738.54 rows=280 width=20)
           Send to leader
           ->  XN Sort  (cost=1038007224737.84..1038007224738.54 rows=280 width=20)
                 Sort Key: dwdate.d_year, part.p_brand1
                 ->  XN HashAggregate  (cost=38007224725.76..38007224726.46 rows=280
                       ->  XN Hash Join DS_BCAST_INNER  (cost=30674.95..38007188507.46
                             Hash Cond: ("outer".lo_orderdate = "inner".d_datekey)
                             ->  XN Hash Join DS_BCAST_INNER  (cost=30643.00..37598119820.65
                                   Hash Cond: ("outer".lo_suppkey = "inner".s_suppkey)
                                   ->  XN Hash Join DS_BCAST_INNER
                                         Hash Cond: ("outer".lo_partkey = "inner".p_partkey)
                                         ->  XN Seq Scan on lineorder
   ```
   ```
                                         ->  XN Hash  (cost=17500.00..17500.00 rows=56000
                                               ->  XN Seq Scan on part  (cost=0.00..17500.00
                                                     Filter: ((p_category)::text =
                                   ->  XN Hash  (cost=12500.00..12500.00 rows=201200
                                         ->  XN Seq Scan on supplier  (cost=0.00..12500.00
                                               Filter: ((s_region)::text = 'AMERICA'::text)
                             ->  XN Hash  (cost=25.56..25.56 rows=2556 width=8)
                                   ->  XN Seq Scan on dwdate  (cost=0.00..25.56 rows=2556
   ```

   DS_BCAST_INNER indicates that the inner join table was broadcast to every slice. A *DS_DIST_BOTH* label, if present, would indicate that both the outer join table and the inner join table were redistributed across the slices. Broadcasting and redistribution can be expensive steps in terms of query performance. You want to select distribution strategies that reduce or eliminate broadcast and distribution steps. For more information about evaluating the EXPLAIN plan, see [Evaluating query patterns](t_evaluating_query_patterns.md).
1. Distribute the fact table and one dimension table on their common columns.

   The following diagram shows the relationships between the fact table, LINEORDER, and the dimension tables in the SSB schema.

   ![Image NOT FOUND](http://docs.aws.amazon.com/redshift/latest/dg/images/tutorial-optimize-tables-ssb-data-model-join-keys.png)

   Each table can have only one distribution key, which means that only one pair of tables in the schema can be collocated on their common columns. The central fact table is the clear first choice. For the second table in the pair, choose the largest dimension that commonly joins the fact table. In this design, LINEORDER is the fact table, and PART is the largest dimension. PART joins LINEORDER on its primary key, `p_partkey`.

   Designate `lo_partkey` as the distribution key for LINEORDER and `p_partkey` as the distribution key for PART so that the matching values for the joining keys will be collocated on the same slices when the data is loaded.

1. Change some dimension tables to use ALL distribution.

   If a dimension table cannot be collocated with the fact table or other important joining tables, you can often improve query performance significantly by distributing the entire table to all of the nodes. ALL distribution guarantees that the joining rows will be collocated on every slice. You should weigh all factors before choosing ALL distribution. Using ALL distribution multiplies storage space requirements and increases load times and maintenance operations.
   CUSTOMER, SUPPLIER, and DWDATE also join the LINEORDER table on their primary keys; however, LINEORDER will be collocated with PART, so you will set the remaining tables to use DISTSTYLE ALL. Because the tables are relatively small and are not updated frequently, using ALL distribution will have minimal impact on storage and load times.

1. Use EVEN distribution for the remaining tables.

   All of the tables have been assigned with DISTKEY or ALL distribution styles, so you won't assign EVEN to any tables. After evaluating your performance results, you might decide to change some tables from ALL to EVEN distribution.

The following tuning table shows the chosen distribution styles. [See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables-distribution.html)

You can find the steps for setting the distribution style in [Step 6: Recreate the test data set](tutorial-tuning-tables-recreate-test-data.md). For more information, see [Choose the best distribution style](c_best-practices-best-dist-key.md).
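An abbreviated sketch of the resulting DDL (column lists are elided here; the full statements are in Step 6):

```
-- Fact table distributed on the column it joins to PART
create table lineorder (
  lo_orderkey integer not null,
  lo_partkey  integer not null distkey
  -- ... remaining columns ...
);

-- Largest dimension, collocated with the fact table on the join key
create table part (
  p_partkey integer not null distkey
  -- ... remaining columns ...
);

-- Smaller dimensions copied to every node
create table supplier (
  s_suppkey integer not null
  -- ... remaining columns ...
) diststyle all;
```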
[Step 5: Review compression encodings](tutorial-tuning-tables-compression.md)
Amazon Redshift enforces a quota on the number of tables per cluster by node type. The maximum number of characters for a table name is 127. The maximum number of columns you can define in a single table is 1,600.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CTAS_usage_notes.md
CREATE TABLE AS (CTAS) tables don't inherit constraints, identity columns, default column values, or the primary key from the table that they were created from.

You can't specify column compression encodings for CTAS tables. Amazon Redshift automatically assigns compression encoding as follows:
+ Columns that are defined as sort keys are assigned RAW compression.
+ Columns that are defined as BOOLEAN, REAL, DOUBLE PRECISION, or GEOMETRY data type are assigned RAW compression.
+ Columns that are defined as SMALLINT, INTEGER, BIGINT, DECIMAL, DATE, TIMESTAMP, or TIMESTAMPTZ are assigned AZ64 compression.
+ Columns that are defined as CHAR or VARCHAR are assigned LZO compression.

For more information, see [Compression encodings](c_Compression_encodings.md) and [Data types](c_Supported_data_types.md). To explicitly assign column encodings, use [CREATE TABLE](r_CREATE_TABLE_NEW.md).

CTAS determines the distribution style and sort key for the new table based on the query plan for the SELECT clause. If the SELECT clause is a simple select operation from a single table, without a limit clause, order by clause, or group by clause, then CTAS uses the source table's distribution style and sort key. For complex queries, such as queries that include joins, aggregations, an order by clause, or a limit clause, CTAS makes a best effort to choose the optimal distribution style and sort key based on the query plan.
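To control encodings yourself, one option is to create the table explicitly and then fill it, as in this sketch (the `sales_copy` table and its columns are hypothetical, loosely modeled on the guide's SALES examples):

```
-- CREATE TABLE lets you pick encodings explicitly; CTAS does not
create table sales_copy (
  salesid  integer encode az64,
  saletime timestamp encode az64,
  qtysold  smallint encode az64
);

-- Then populate it from the source table
insert into sales_copy
select salesid, saletime, qtysold from sales;
```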
**Note**
For best performance with large data sets or complex queries, we recommend testing using typical data sets.

You can often predict which distribution key and sort key CTAS chooses by examining the query plan to see which columns, if any, the query optimizer chooses for sorting and distributing data. If the top node of the query plan is a simple sequential scan from a single table (XN Seq Scan), then CTAS generally uses the source table's distribution style and sort key. If the top node of the query plan is anything other than a sequential scan (such as XN Limit, XN Sort, XN HashAggregate, and so on), CTAS makes a best effort to choose the optimal distribution style and sort key based on the query plan.

For example, suppose you create five tables using the following types of SELECT clauses:
+ A simple select statement
+ A limit clause
+ An order by clause using LISTID
+ An order by clause using QTYSOLD
+ A SUM aggregate function with a group by clause

The following examples show the query plan for each CTAS statement.

```
explain create table sales1_simple as
select listid, dateid, qtysold from sales;

QUERY PLAN
----------------------------------------------------------------
```
```
XN Seq Scan on sales  (cost=0.00..1724.56 rows=172456 width=8)
(1 row)

explain create table sales2_limit as
select listid, dateid, qtysold from sales limit 100;

QUERY PLAN
----------------------------------------------------------------------
XN Limit  (cost=0.00..1.00 rows=100 width=8)
  ->  XN Seq Scan on sales  (cost=0.00..1724.56 rows=172456 width=8)
(2 rows)

explain create table sales3_orderbylistid as
select listid, dateid, qtysold from sales order by listid;

QUERY PLAN
------------------------------------------------------------------------
XN Sort  (cost=1000000016724.67..1000000017155.81 rows=172456 width=8)
  Sort Key: listid
```
```
  ->  XN Seq Scan on sales  (cost=0.00..1724.56 rows=172456 width=8)
(3 rows)

explain create table sales4_orderbyqty as
select listid, dateid, qtysold from sales order by qtysold;

QUERY PLAN
------------------------------------------------------------------------
XN Sort  (cost=1000000016724.67..1000000017155.81 rows=172456 width=8)
  Sort Key: qtysold
  ->  XN Seq Scan on sales  (cost=0.00..1724.56 rows=172456 width=8)
(3 rows)

explain create table sales5_groupby as
select listid, dateid, sum(qtysold)
from sales group by listid, dateid;

QUERY PLAN
----------------------------------------------------------------------
XN HashAggregate  (cost=3017.98..3226.75 rows=83509 width=8)
```
```
  ->  XN Seq Scan on sales  (cost=0.00..1724.56 rows=172456 width=8)
(2 rows)
```

To view the distribution key and sortkey for each table, query the PG_TABLE_DEF system catalog table, as shown following.

```
select * from pg_table_def where tablename like 'sales%';

tablename             | column     | distkey | sortkey
----------------------+------------+---------+---------
sales                 | salesid    | f       |       0
sales                 | listid     | t       |       0
sales                 | sellerid   | f       |       0
sales                 | buyerid    | f       |       0
sales                 | eventid    | f       |       0
sales                 | dateid     | f       |       1
sales                 | qtysold    | f       |       0
sales                 | pricepaid  | f       |       0
sales                 | commission | f       |       0
sales                 | saletime   | f       |       0
sales1_simple         | listid     | t       |       0
sales1_simple         | dateid     | f       |       1
sales1_simple         | qtysold    | f       |       0
sales2_limit          | listid     | f       |       0
```
```
sales2_limit          | dateid     | f       |       0
sales2_limit          | qtysold    | f       |       0
sales3_orderbylistid  | listid     | t       |       1
sales3_orderbylistid  | dateid     | f       |       0
sales3_orderbylistid  | qtysold    | f       |       0
sales4_orderbyqty     | listid     | t       |       0
sales4_orderbyqty     | dateid     | f       |       0
sales4_orderbyqty     | qtysold    | f       |       1
sales5_groupby        | listid     | f       |       0
sales5_groupby        | dateid     | f       |       0
sales5_groupby        | sum        | f       |       0
```

The following table summarizes the results. For simplicity, we omit cost, rows, and width details from the explain plan. [See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_CTAS_usage_notes.html)

You can explicitly specify distribution style and sort key in the CTAS statement. For example, the following statement creates a table using EVEN distribution and specifies SALESID as the sort key.
```
create table sales_disteven
diststyle even
sortkey (salesid)
as
select eventid, venueid, dateid, eventname
from event;
```
When the hash distribution scheme of the incoming data matches that of the target table, no physical distribution of the data is actually necessary when the data is loaded. For example, if a distribution key is set for the new table and the data is being inserted from another table that is distributed on the same key column, the data is loaded in place, using the same nodes and slices.
However, if the source and target tables are both set to EVEN distribution, data is redistributed into the target table.
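A minimal sketch of such an in-place load (the target table name is hypothetical; `sales` is assumed to be distributed on `listid`, as in the PG_TABLE_DEF output above):

```
-- Source and target share the same distribution key column,
-- so rows stay on their current slices during the load
create table sales_archive
distkey (listid)
as
select * from sales;
```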
Amazon Redshift automatically analyzes tables that you create with CTAS commands. You do not need to run the ANALYZE command on these tables when they are first created. If you modify them, you should analyze them in the same way as other tables.
Defines access privileges for a user or user group. Privileges include access options such as being able to read data in tables and views, write data, and create tables. Use this command to give specific privileges for a table, database, schema, function, procedure, language, or column. To revoke privileges from a database object, use the [REVOKE](r_REVOKE.md) command.

You can only GRANT or REVOKE USAGE permissions on an external schema to database users and user groups using the ON SCHEMA syntax. When using ON EXTERNAL SCHEMA with AWS Lake Formation, you can only GRANT and REVOKE privileges to an AWS Identity and Access Management (IAM) role. For the list of privileges, see the syntax.

For stored procedures, the only privilege that you can grant is EXECUTE. You can't run GRANT (on an external resource) within a transaction block (BEGIN ... END). For more information about transactions, see [Serializable isolation](c_serial_isolation.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GRANT.md
```
GRANT { { SELECT | INSERT | UPDATE | DELETE | REFERENCES } [,...] | ALL [ PRIVILEGES ] }
    ON { [ TABLE ] table_name [, ...] | ALL TABLES IN SCHEMA schema_name [, ...] }
    TO { username [ WITH GRANT OPTION ] | GROUP group_name | PUBLIC } [, ...]

GRANT { { CREATE | TEMPORARY | TEMP } [,...] | ALL [ PRIVILEGES ] }
    ON DATABASE db_name [, ...]
    TO { username [ WITH GRANT OPTION ] | GROUP group_name | PUBLIC } [, ...]

GRANT { { CREATE | USAGE } [,...] | ALL [ PRIVILEGES ] }
    ON SCHEMA schema_name [, ...]
    TO { username [ WITH GRANT OPTION ] | GROUP group_name | PUBLIC } [, ...]

GRANT { EXECUTE | ALL [ PRIVILEGES ] }
    ON { FUNCTION function_name ( [ [ argname ] argtype [, ...] ] ) [, ...] | ALL FUNCTIONS IN SCHEMA schema_name [, ...] }
    TO { username [ WITH GRANT OPTION ] | GROUP group_name | PUBLIC } [, ...]

GRANT { EXECUTE | ALL [ PRIVILEGES ] }
    ON { PROCEDURE procedure_name ( [ [ argname ] argtype [, ...] ] ) [, ...] | ALL PROCEDURES IN SCHEMA schema_name [, ...] }
```
```
    TO { username [ WITH GRANT OPTION ] | GROUP group_name | PUBLIC } [, ...]

GRANT USAGE
    ON LANGUAGE language_name [, ...]
    TO { username [ WITH GRANT OPTION ] | GROUP group_name | PUBLIC } [, ...]
```

The following is the syntax for column-level privileges on Amazon Redshift tables and views.

```
GRANT { { SELECT | UPDATE } ( column_name [, ...] ) [, ...] | ALL [ PRIVILEGES ] ( column_name [,...] ) }
    ON { [ TABLE ] table_name [, ...] }
    TO { username | GROUP group_name | PUBLIC } [, ...]
```

The following is the syntax for Redshift Spectrum integration with Lake Formation.

```
GRANT { SELECT | ALL [ PRIVILEGES ] } ( column_list )
    ON EXTERNAL TABLE schema_name.table_name
    TO { IAM_ROLE iam_role } [, ...] [ WITH GRANT OPTION ]

GRANT { { SELECT | ALTER | DROP | DELETE | INSERT } [, ...] | ALL [ PRIVILEGES ] }
    ON EXTERNAL TABLE schema_name.table_name [, ...]
    TO { { IAM_ROLE iam_role } [, ...] | PUBLIC } [ WITH GRANT OPTION ]
```
```
GRANT { { CREATE | ALTER | DROP } [, ...] | ALL [ PRIVILEGES ] }
    ON EXTERNAL SCHEMA schema_name [, ...]
    TO { IAM_ROLE iam_role } [, ...] [ WITH GRANT OPTION ]
```
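A few examples of the syntax above (the user, group, schema, and table names here are hypothetical):

```
-- Table-level: read access for a group; read/write for a user who may re-grant
grant select on table sales to group analysts;
grant select, insert on table sales to dbuser with grant option;

-- Schema-level: let a user create and reference objects in a schema
grant create, usage on schema reporting to dbuser;

-- Column-level: restrict updates to specific columns
grant update (pricepaid, commission) on table sales to dbuser;
```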
SELECT <a name="grant-select"></a>
Grants privilege to select data from a table or view using a SELECT statement. The SELECT privilege is also required to reference existing column values for UPDATE or DELETE operations.

INSERT <a name="grant-insert"></a>
Grants privilege to load data into a table using an INSERT statement or a COPY statement.

UPDATE <a name="grant-update"></a>
Grants privilege to update a table column using an UPDATE statement. UPDATE operations also require the SELECT privilege, because they must reference table columns to determine which rows to update, or to compute new values for columns.

DELETE <a name="grant-delete"></a>
Grants privilege to delete a data row from a table. DELETE operations also require the SELECT privilege, because they must reference table columns to determine which rows to delete.

REFERENCES <a name="grant-references"></a>
Grants privilege to create a foreign key constraint. You need to grant this privilege on both the referenced table and the referencing table; otherwise, the user can't create the constraint.

ALL [ PRIVILEGES ] <a name="grant-all"></a>
Grants all available privileges at once to the specified user or user group. The PRIVILEGES keyword is optional.
GRANT ALL ON SCHEMA doesn't grant CREATE privileges for external schemas. You can grant ALL privilege to a table in an AWS Glue Data Catalog that is enabled for Lake Formation. In this case, individual privileges (such as SELECT, ALTER, and so on) are recorded in the Data Catalog.

ALTER <a name="grant-alter"></a>
Grants privilege to alter a table in an AWS Glue Data Catalog that is enabled for Lake Formation. This privilege only applies when using Lake Formation.

DROP <a name="grant-drop"></a>
Grants privilege to drop a table in an AWS Glue Data Catalog that is enabled for Lake Formation. This privilege only applies when using Lake Formation.

ON [ TABLE ] *table_name* <a name="grant-on-table"></a>
Grants the specified privileges on a table or a view. The TABLE keyword is optional. You can list multiple tables and views in one statement.

ON ALL TABLES IN SCHEMA *schema_name* <a name="grant-all-tables"></a>
Grants the specified privileges on all tables and views in the referenced schema.

( *column_name* [,...] ) ON TABLE *table_name* <a name="grant-column-level-privileges"></a>
Grants the specified privileges to users, groups, or PUBLIC on the specified columns of the Amazon Redshift table or view.
( *column_list* ) ON EXTERNAL TABLE *schema_name.table_name* <a name="grant-external-table-column"></a>
Grants the specified privileges to an IAM role on the specified columns of the Lake Formation table in the referenced schema.

ON EXTERNAL TABLE *schema_name.table_name* <a name="grant-external-table"></a>
Grants the specified privileges to an IAM role on the specified Lake Formation tables in the referenced schema.

ON EXTERNAL SCHEMA *schema_name* <a name="grant-external-schema"></a>
Grants the specified privileges to an IAM role on the referenced schema.

TO *username* <a name="grant-to"></a>
Indicates the user receiving the privileges.

TO IAM_ROLE *iam_role* <a name="grant-to-iam-role"></a>
Indicates the IAM role receiving the privileges.

WITH GRANT OPTION <a name="grant-with-grant"></a>
Indicates that the user receiving the privileges can in turn grant the same privileges to others. WITH GRANT OPTION cannot be granted to a group or to PUBLIC.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GRANT.md
Grants the privileges to a user group\.
PUBLIC <a name="grant-public"></a>
Grants the specified privileges to all users, including users created later\. PUBLIC represents a group that always includes all users\. An individual user's privileges consist of the sum of privileges granted to PUBLIC, privileges granted to any groups that the user belongs to, and any privileges granted to the user individually\.
Granting PUBLIC to a Lake Formation EXTERNAL TABLE results in granting the privilege to the Lake Formation *everyone* group\.
CREATE <a name="grant-create"></a>
Depending on the database object, grants the following privileges to the user or user group:
+ For databases, CREATE allows users to create schemas within the database\.
+ For schemas, CREATE allows users to create objects within a schema\. To rename an object, the user must have the CREATE privilege and own the object to be renamed\.
+ CREATE ON SCHEMA isn't supported for Amazon Redshift Spectrum external schemas\. To grant usage of external tables in an external schema, grant USAGE ON SCHEMA to the users that need access\. Only the owner of an external schema or a superuser is permitted to create external tables in the external schema\. To transfer ownership of an external schema, use [ALTER SCHEMA](r_ALTER_SCHEMA.md) to change the owner\.
TEMPORARY \| TEMP <a name="grant-temporary"></a>
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GRANT.md
Grants the privilege to create temporary tables in the specified database\. To run Amazon Redshift Spectrum queries, the database user must have permission to create temporary tables in the database\.
By default, users are granted permission to create temporary tables by their automatic membership in the PUBLIC group\. To remove the privilege for any users to create temporary tables, revoke the TEMP permission from the PUBLIC group\. Then explicitly grant the permission to create temporary tables to specific users or groups of users\.
ON DATABASE *db\_name* <a name="grant-database"></a>
Grants the specified privileges on a database\.
USAGE <a name="grant-usage"></a>
Grants USAGE privilege on a specific schema, which makes objects in that schema accessible to users\. Specific actions on these objects must be granted separately \(for example, SELECT or UPDATE privileges on tables\)\. By default, all users have CREATE and USAGE privileges on the PUBLIC schema\.
ON SCHEMA *schema\_name* <a name="grant-schema"></a>
Grants the specified privileges on a schema\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GRANT.md
GRANT CREATE ON SCHEMA and the CREATE privilege in GRANT ALL ON SCHEMA aren't supported for Amazon Redshift Spectrum external schemas\. To grant usage of external tables in an external schema, grant USAGE ON SCHEMA to the users that need access\. Only the owner of an external schema or a superuser is permitted to create external tables in the external schema\. To transfer ownership of an external schema, use [ALTER SCHEMA](r_ALTER_SCHEMA.md) to change the owner\.
EXECUTE ON FUNCTION *function\_name* <a name="grant-function"></a>
Grants the EXECUTE privilege on a specific function\. Because function names can be overloaded, you must include the argument list for the function\. For more information, see [Naming UDFs](udf-naming-udfs.md)\.
EXECUTE ON ALL FUNCTIONS IN SCHEMA *schema\_name* <a name="grant-all-functions"></a>
Grants the specified privileges on all functions in the referenced schema\.
EXECUTE ON PROCEDURE *procedure\_name* <a name="grant-procedure"></a>
Grants the EXECUTE privilege on a specific stored procedure\. Because stored procedure names can be overloaded, you must include the argument list for the procedure\. For more information, see [Naming stored procedures](stored-procedure-naming.md)\.
EXECUTE ON ALL PROCEDURES IN SCHEMA *schema\_name* <a name="grant-all-procedures"></a>
Grants the specified privileges on all stored procedures in the referenced schema\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GRANT.md
USAGE ON LANGUAGE *language\_name*
Grants the USAGE privilege on a language\.
The USAGE ON LANGUAGE privilege is required to create user\-defined functions \(UDFs\) by running the [CREATE FUNCTION](r_CREATE_FUNCTION.md) command\. For more information, see [UDF security and privileges](udf-security-and-privileges.md)\.
The USAGE ON LANGUAGE privilege is required to create stored procedures by running the [CREATE PROCEDURE](r_CREATE_PROCEDURE.md) command\. For more information, see [Security and privileges for stored procedures ](stored-procedure-security-and-privileges.md)\.
For Python UDFs, use `plpythonu`\. For SQL UDFs, use `sql`\. For stored procedures, use `plpgsql`\.
To grant privileges on an object, you must meet one of the following criteria:
+ Be the object owner\.
+ Be a superuser\.
+ Have a grant privilege for that object and privilege\.

For example, the following command enables the user HR both to perform SELECT commands on the employees table and to grant and revoke the same privilege for other users\.

```
grant select on table employees to HR with grant option;
```

HR can't grant privileges for any operation other than SELECT, or on any table other than employees\.

Having privileges granted on a view doesn't imply having privileges on the underlying tables\. Similarly, having privileges granted on a schema doesn't imply having privileges on the tables in the schema\. Instead, grant access to the underlying tables explicitly\.

To grant privileges to an AWS Lake Formation table, the IAM role associated with the table's external schema must have permission to grant privileges to the external table\. The following example creates an external schema with an associated IAM role `myGrantor`\. The IAM role `myGrantor` has the permission to grant permissions to others\. The GRANT command uses the permission of the IAM role `myGrantor` that is associated with the external schema to grant permission to the IAM role `myGrantee`\.

```
create external schema mySchema
from data catalog
database 'spectrum_db'
iam_role 'arn:aws:iam::123456789012:role/myGrantor'
create external database if not exists;
```

```
grant select on external table mySchema.mytable
to iam_role 'arn:aws:iam::123456789012:role/myGrantee';
```

If you GRANT ALL privileges to an IAM role, individual privileges are granted in the related Lake Formation–enabled Data Catalog\. For example, the following GRANT ALL results in the granted individual privileges \(SELECT, ALTER, DROP, DELETE, and INSERT\) showing in the Lake Formation console\.

```
grant all on external table mySchema.mytable
to iam_role 'arn:aws:iam::123456789012:role/myGrantee';
```

Superusers can access all objects regardless of GRANT and REVOKE commands that set object privileges\.
The following usage notes apply to column\-level privileges on Amazon Redshift tables and views\. These notes describe tables; the same notes apply to views unless we explicitly note an exception\. For an Amazon Redshift table, you can grant only the SELECT and UPDATE privileges at the column level\. For an Amazon Redshift view, you can grant only the SELECT privilege at the column level\. The ALL keyword is a synonym for SELECT and UPDATE privileges combined when used in the context of a column\-level GRANT on a table\. If you don't have SELECT privilege on all columns in a table, performing a SELECT operation for all columns \(`SELECT *`\) fails\. If you have SELECT or UPDATE privilege on a table or view and add a column, you still have the same privileges on the table or view and thus all its columns\. Only a table's owner or a superuser can grant column\-level privileges\. The WITH GRANT OPTION clause isn't supported for column\-level privileges\. You can't hold the same privilege at both the table level and the column level\. For example, the user `data_scientist` can't have both SELECT privilege on the table `employee` and SELECT privilege on the column `employee.department`\. Consider the following results when granting the same privilege to a table and a column within the table: + If a user has a table\-level privilege on a table, then granting the same privilege at the column level has no effect\.
+ If a user has a table\-level privilege on a table, then revoking the same privilege for one or more columns of the table returns an error\. Instead, revoke the privilege at the table level\.
+ If a user has a column\-level privilege, then granting the same privilege at the table level returns an error\.
+ If a user has a column\-level privilege, then revoking the same privilege at the table level revokes both column and table privileges for all columns on the table\.

You can't grant column\-level privileges on late\-binding views\.

You must have table\-level SELECT privilege on the base tables to create a materialized view\. Even if you have column\-level privileges on specific columns, you can't create a materialized view on only those columns\. However, you can grant SELECT privilege to columns of a materialized view, similar to regular views\.

To look up grants of column\-level privileges, use the [PG\_ATTRIBUTE\_INFO](r_PG_ATTRIBUTE_INFO.md) view\.
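As an illustration only, the grant/revoke outcomes above can be modeled as a small state machine\. The Python sketch below is hypothetical \(Amazon Redshift enforces these rules itself; the class and method names are invented for the example\) and restates the interaction of table\-level and column\-level grants of the same privilege:

```python
# Hypothetical model of how table-level and column-level grants of the
# same privilege (for example, SELECT) interact. Illustrative only.
class PrivilegeState:
    def __init__(self):
        self.table_level = False
        self.columns = set()

    def grant_column(self, column):
        if self.table_level:
            return "no effect"   # table-level grant already covers the column
        self.columns.add(column)
        return "granted"

    def grant_table(self):
        if self.columns:
            return "error"       # column-level grant exists; table-level grant fails
        self.table_level = True
        return "granted"

    def revoke_column(self, column):
        if self.table_level:
            return "error"       # revoke at the table level instead
        self.columns.discard(column)
        return "revoked"

    def revoke_table(self):
        # Revoking at the table level clears both table and column grants.
        self.table_level = False
        self.columns.clear()
        return "revoked"

state = PrivilegeState()
state.grant_column("cust_name")
print(state.grant_table())       # error: can't hold both levels at once
```

Each branch corresponds to one bullet above; the model makes it clear why revoking at the table level is the way out of a mixed state.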
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GRANT.md
47f06a44dc1b-0
The following example grants the SELECT privilege on the SALES table to the user `fred`\. ``` grant select on table sales to fred; ``` The following example grants the SELECT privilege on all tables in the QA\_TICKIT schema to the user `fred`\. ``` grant select on all tables in schema qa_tickit to fred; ``` The following example grants all schema privileges on the schema QA\_TICKIT to the user group QA\_USERS\. Schema privileges are CREATE and USAGE\. USAGE grants users access to the objects in the schema, but doesn't grant privileges such as INSERT or SELECT on those objects\. Grant privileges on each object separately\. ``` create group qa_users; grant all on schema qa_tickit to group qa_users; ``` The following example grants all privileges on the SALES table in the QA\_TICKIT schema to all users in the group QA\_USERS\. ``` grant all on table qa_tickit.sales to group qa_users; ``` The following sequence of commands shows how access to a schema doesn't grant privileges on a table in the schema\. ``` create user schema_user in group qa_users password 'Abcd1234'; create schema qa_tickit;
create table qa_tickit.test (col1 int);
grant all on schema qa_tickit to schema_user;
set session authorization schema_user;
select current_user;

current_user
--------------
schema_user
(1 row)

select count(*) from qa_tickit.test;

ERROR: permission denied for relation test [SQL State=42501]

set session authorization dw_user;
grant select on table qa_tickit.test to schema_user;
set session authorization schema_user;
select count(*) from qa_tickit.test;

count
-------
0
(1 row)
```

The following sequence of commands shows how access to a view doesn't imply access to its underlying tables\. The user called VIEW\_USER can't select from the DATE table, although this user has been granted all privileges on VIEW\_DATE\.

```
create user view_user password 'Abcd1234';
create view view_date as select * from date;
grant all on view_date to view_user;
set session authorization view_user;
select current_user;

current_user
--------------
view_user
(1 row)

select count(*) from view_date;

count
-------
365
(1 row)

select count(*) from date;

ERROR: permission denied for relation date
```

The following example grants SELECT privilege on the `cust_name` and `cust_phone` columns of the `cust_profile` table to the user `user1`\.

```
grant select(cust_name, cust_phone) on cust_profile to user1;
```

The following example grants SELECT privilege on the `cust_name` and `cust_phone` columns and UPDATE privilege on the `cust_contact_preference` column of the `cust_profile` table to the `sales_group` group\.

```
grant select(cust_name, cust_phone), update(cust_contact_preference) on cust_profile to group sales_group;
```
The following example shows the usage of the ALL keyword to grant both SELECT and UPDATE privileges on three columns of the table `cust_profile` to the `sales_admin` group\.

```
grant ALL(cust_name, cust_phone,cust_contact_preference) on cust_profile to group sales_admin;
```

The following example grants the SELECT privilege on the `cust_name` column of the `cust_profile_vw` view to the `user2` user\.

```
grant select(cust_name) on cust_profile_vw to user2;
```
To analyze query summary information by stream, do the following:

1. Run the following query to determine your query ID:

   ```
   select query, elapsed, substring
   from svl_qlog
   order by query desc
   limit 5;
   ```

   Examine the truncated query text in the `substring` field to determine which `query` value represents your query\. If you have run the query more than once, use the `query` value from the row with the lower `elapsed` value\. That is the row for the compiled version\. If you have been running many queries, you can increase the value in the LIMIT clause to make sure that your query is included\.

1. Select rows from SVL\_QUERY\_SUMMARY for your query\. Order the results by stream, segment, and step:

   ```
   select * from svl_query_summary where query = MyQueryID order by stm, seg, step;
   ```

   ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/svl_query_summary_results.png)
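The rule in step 1 — when a query ran more than once, the row with the lower `elapsed` value is the compiled run — can be sketched outside the database\. The Python fragment below uses made\-up sample rows in the shape returned by the `svl_qlog` query above:

```python
# Sample (query, elapsed, substring) rows in the shape returned by the
# svl_qlog query above; the values here are invented for illustration.
rows = [
    (4002, 1580247, "select eventname, sum(pricepaid) from sales"),
    (3995,  947213, "select eventname, sum(pricepaid) from sales"),
    (3990,   12045, "select * from stv_recents"),
]

def compiled_query_id(rows, fragment):
    """Among rows whose truncated text contains fragment, return the query
    ID of the run with the lower elapsed value (the compiled version)."""
    matches = [r for r in rows if fragment in r[2]]
    return min(matches, key=lambda r: r[1])[0]

print(compiled_query_id(rows, "sum(pricepaid)"))  # 3995: the faster of the two runs
```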
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/using-SVL-Query-Summary.md
1. Map the steps to the operations in the query plan using the information in [Mapping the query plan to the query summary](query-plan-summary-map.md)\. They should have approximately the same values for rows and bytes \(rows \* width from the query plan\)\. If they don’t, see [Table statistics missing or out of date](query-performance-improvement-opportunities.md#table-statistics-missing-or-out-of-date) for recommended solutions\. 1. See if the `is_diskbased` field has a value of `t` \(true\) for any step\. Hashes, aggregates, and sorts are the operators that are likely to write data to disk if the system doesn't have enough memory allocated for query processing\. If `is_diskbased` is true, see [Insufficient memory allocated to the query](query-performance-improvement-opportunities.md#insufficient-memory-allocated-to-the-query) for recommended solutions\. 1. Review the `label` field values and see if there is an AGG\-DIST\-AGG sequence anywhere in the steps\. Its presence indicates two\-step aggregation, which is expensive\. To fix this, change the GROUP BY clause to use the distribution key \(the first key, if there are multiple ones\)\. 1. Review the `maxtime` value for each segment \(it is the same across all steps in the segment\)\. Identify the segment with the highest `maxtime` value and review the steps in this segment for the following operators\. **Note**
A high `maxtime` value doesn't necessarily indicate a problem with the segment\. Despite a high value, the segment might not have taken a long time to process\. All segments in a stream start getting timed in unison\. However, some downstream segments might not be able to run until they get data from upstream ones\. This effect might make them seem to have taken a long time because their `maxtime` value includes both their waiting time and their processing time\.
   + **BCAST or DIST**: In these cases, the high `maxtime` value might be the result of redistributing a large number of rows\. For recommended solutions, see [Suboptimal data distribution](query-performance-improvement-opportunities.md#suboptimal-data-distribution)\.
   + **HJOIN \(hash join\)**: If the step in question has a very high value in the `rows` field compared to the `rows` value in the final RETURN step in the query, see [Hash join](query-performance-improvement-opportunities.md#hash-join) for recommended solutions\.
   + **SCAN/SORT**: Look for a SCAN, SORT, SCAN, MERGE sequence of steps just prior to a join step\. This pattern indicates that unsorted data is being scanned, sorted, and then merged with the sorted area of the table\.
See if the rows value for the SCAN step has a very high value compared to the rows value in the final RETURN step in the query\. This pattern indicates that the execution engine is scanning rows that are later discarded, which is inefficient\. For recommended solutions, see [Insufficiently restrictive predicate](query-performance-improvement-opportunities.md#insufficiently-restrictive-predicate)\. If the `maxtime` value for the SCAN step is high, see [Suboptimal WHERE clause](query-performance-improvement-opportunities.md#suboptimal-WHERE-clause) for recommended solutions\. If the `rows` value for the SORT step is not zero, see [Unsorted or missorted rows](query-performance-improvement-opportunities.md#unsorted-or-mis-sorted-rows) for recommended solutions\. 1. Review the `rows` and `bytes` values for the 5–10 steps that precede the final RETURN step to get an idea of the amount of data that is being returned to the client\. This process can be a bit of an art\. For example, in the following query summary, you can see that the third PROJECT step provides a `rows` value but not a `bytes` value\. By looking through the preceding steps for one with the same `rows` value, you find the SCAN step that provides both rows and bytes information: ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/rows_and_bytes.png)
If you are returning an unusually large volume of data, see [Very large result set](query-performance-improvement-opportunities.md#very-large-result-set) for recommended solutions\. 1. See if the `bytes` value is high relative to the `rows` value for any step, in comparison to other steps\. This pattern can indicate that you are selecting a lot of columns\. For recommended solutions, see [Large SELECT list](query-performance-improvement-opportunities.md#large-SELECT-list)\.
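The disk\-spill and wide\-row checks above can be expressed as a couple of filters\. This Python sketch uses invented sample rows whose fields mirror columns of SVL\_QUERY\_SUMMARY \(`label`, `is_diskbased`, `rows`, `bytes`\); the 100 bytes\-per\-row threshold is an arbitrary choice for the example:

```python
# Invented sample steps; fields mirror svl_query_summary columns.
steps = [
    {"label": "scan",   "is_diskbased": "f", "rows": 2_000_000, "bytes": 1_280_000_000},
    {"label": "hash",   "is_diskbased": "t", "rows": 2_000_000, "bytes":    16_000_000},
    {"label": "return", "is_diskbased": "f", "rows":       500, "bytes":        12_000},
]

# Steps that spilled to disk (is_diskbased = 't') point at insufficient memory.
spilled = [s["label"] for s in steps if s["is_diskbased"] == "t"]

# A high bytes-per-row figure relative to other steps can indicate a large
# SELECT list; 100 bytes/row is an arbitrary threshold for this example.
wide = [s["label"] for s in steps if s["bytes"] / s["rows"] > 100]

print("spilled to disk:", spilled)   # ['hash']
print("wide rows:", wide)            # ['scan']
```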
SIN is a trigonometric function that returns the sine of a number\. The return value is between `-1` and `1`\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SIN.md
``` SIN(number) ```
*number* A double precision number in radians\.
The SIN function returns a double precision number\.
The following example returns the sine of `-PI`: ``` select sin(-pi()); sin ----------------------- -1.22464679914735e-16 (1 row) ```
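The tiny nonzero result is ordinary floating\-point behavior, not something specific to Amazon Redshift; any IEEE 754 double implementation gives the same kind of answer\. For example, in Python:

```python
import math

# pi cannot be represented exactly as a double, so sin(-pi) evaluates to a
# value on the order of -1.2e-16 rather than exactly 0 -- the same result
# Amazon Redshift returns above.
result = math.sin(-math.pi)
print(abs(result) < 1e-15)   # True: effectively zero
print(-1 <= result <= 1)     # True: SIN always returns a value in [-1, 1]
```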
Use an interval literal to identify specific periods of time, such as `12 hours` or `6 weeks`\. You can use these interval literals in conditions and calculations that involve datetime expressions\. **Note** You cannot use the INTERVAL data type for columns in Amazon Redshift tables\. An interval is expressed as a combination of the INTERVAL keyword with a numeric quantity and a supported datepart; for example: `INTERVAL '7 days'` or `INTERVAL '59 minutes'`\. Several quantities and units can be connected to form a more precise interval; for example: `INTERVAL '7 days, 3 hours, 59 minutes'`\. Abbreviations and plurals of each unit are also supported; for example: `5 s`, `5 second`, and `5 seconds` are equivalent intervals\. If you do not specify a datepart, the interval value represents seconds\. You can specify the quantity value as a fraction \(for example: `0.5 days`\)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_interval_literals.md
The following examples show a series of calculations with different interval values\. Add 1 second to the specified date: ``` select caldate + interval '1 second' as dateplus from date where caldate='12-31-2008'; dateplus --------------------- 2008-12-31 00:00:01 (1 row) ``` Add 1 minute to the specified date: ``` select caldate + interval '1 minute' as dateplus from date where caldate='12-31-2008'; dateplus --------------------- 2008-12-31 00:01:00 (1 row) ``` Add 3 hours and 35 minutes to the specified date: ``` select caldate + interval '3 hours, 35 minutes' as dateplus from date where caldate='12-31-2008'; dateplus --------------------- 2008-12-31 03:35:00 (1 row) ``` Add 52 weeks to the specified date: ```
select caldate + interval '52 weeks' as dateplus from date
where caldate='12-31-2008';

dateplus
---------------------
2009-12-30 00:00:00
(1 row)
```

Add 1 week, 1 hour, 1 minute, and 1 second to the specified date:

```
select caldate + interval '1w, 1h, 1m, 1s' as dateplus from date
where caldate='12-31-2008';

dateplus
---------------------
2009-01-07 01:01:01
(1 row)
```

Add 12 hours \(half a day\) to the specified date:

```
select caldate + interval '0.5 days' as dateplus from date
where caldate='12-31-2008';

dateplus
---------------------
2008-12-31 12:00:00
(1 row)
```
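The interval arithmetic above can be cross\-checked outside the database with Python's `datetime` module \(an illustration of the interval semantics, not of Amazon Redshift itself\):

```python
from datetime import datetime, timedelta

caldate = datetime(2008, 12, 31)

# interval '3 hours, 35 minutes'
print(caldate + timedelta(hours=3, minutes=35))  # 2008-12-31 03:35:00

# interval '52 weeks' is 364 days, so it lands one day short of a full year
print(caldate + timedelta(weeks=52))             # 2009-12-30 00:00:00

# interval '0.5 days' -- fractional quantities are allowed
print(caldate + timedelta(days=0.5))             # 2008-12-31 12:00:00
```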
The considerations and recommendations for designating distribution styles in this section use a star schema as an example\. Your database design might be based on a star schema, some variant of a star schema, or an entirely different schema\. Amazon Redshift is designed to work effectively with whatever schema design you choose\. The principles in this section can be applied to any design schema\. 1. **Specify the primary key and foreign keys for all your tables\.** Amazon Redshift does not enforce primary key and foreign key constraints, but the query optimizer uses them when it generates query plans\. If you set primary keys and foreign keys, your application must maintain the validity of the keys\. 1. **Distribute the fact table and its largest dimension table on their common columns\.** Choose the largest dimension based on the size of dataset that participates in the most common join, not just the size of the table\. If a table is commonly filtered, using a WHERE clause, only a portion of its rows participate in the join\. Such a table has less impact on redistribution than a smaller table that contributes more data\. Designate both the dimension table's primary key and the fact table's corresponding foreign key as DISTKEY\. If multiple tables use the same distribution key, they will also be collocated with the fact table\. Your fact table can have only one distribution key\. Any tables that join on another key will not be collocated with the fact table\. 1. **Designate distribution keys for the other dimension tables\.** Distribute the tables on their primary keys or their foreign keys, depending on how they most commonly join with other tables\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_designating_distribution_styles.md
1. **Evaluate whether to change some of the dimension tables to use ALL distribution\.**

   If a dimension table cannot be collocated with the fact table or other important joining tables, you can improve query performance significantly by distributing the entire table to all of the nodes\. Using ALL distribution multiplies storage space requirements and increases load times and maintenance operations, so you should weigh all factors before choosing ALL distribution\. The following section explains how to identify candidates for ALL distribution by evaluating the EXPLAIN plan\.

1. **Use EVEN distribution for the remaining tables\.**

   If a table is largely denormalized and does not participate in joins, or if you don't have a clear choice for another distribution style, use EVEN distribution\.

To let Amazon Redshift choose the appropriate distribution style, don't explicitly specify a distribution style\.

You cannot change the distribution style of a table after it is created\. To use a different distribution style, you can recreate the table and populate the new table with a deep copy\. For more information, see [Performing a deep copy](performing-a-deep-copy.md)\.
The following examples show how data is distributed according to the options that you define in the CREATE TABLE statement\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Distribution_examples.md
Look at the schema of the USERS table in the TICKIT database\. USERID is defined as the SORTKEY column and the DISTKEY column: ``` select "column", type, encoding, distkey, sortkey from pg_table_def where tablename = 'users'; column | type | encoding | distkey | sortkey ---------------+------------------------+----------+---------+--------- userid | integer | none | t | 1 username | character(8) | none | f | 0 firstname | character varying(30) | text32k | f | 0 ... ``` USERID is a good choice for the distribution column on this table\. If you query the SVV\_DISKUSAGE system view, you can see that the table is very evenly distributed\. Column numbers are zero\-based, so USERID is column 0\. ``` select slice, col, num_values as rows, minvalue, maxvalue from svv_diskusage where name='users' and col=0 and rows>0 order by slice, col; slice| col | rows | minvalue | maxvalue
-----+-----+-------+----------+----------
0    | 0   | 12496 | 4        | 49987
1    | 0   | 12498 | 1        | 49988
2    | 0   | 12497 | 2        | 49989
3    | 0   | 12499 | 3        | 49990
(4 rows)
```

The table contains 49,990 rows\. The rows \(num\_values\) column shows that each slice contains about the same number of rows\. The minvalue and maxvalue columns show the range of values on each slice\. Each slice includes nearly the entire range of values, so there's a good chance that every slice will participate in executing a query that filters for a range of user IDs\.

This example demonstrates distribution on a small test system\. The total number of slices is typically much higher\.

If you commonly join or group using the STATE column, you might choose to distribute on the STATE column\. The following example shows that if you create a new table with the same data as the USERS table, but you set the DISTKEY to the STATE column, the distribution will not be as even\. Slice 0 \(13,587 rows\) holds approximately 34% more rows than slice 3 \(10,150 rows\)\. In a much larger table, this amount of distribution skew could have an adverse impact on query processing\.
``` create table userskey distkey(state) as select * from users; select slice, col, num_values as rows, minvalue, maxvalue from svv_diskusage where name = 'userskey' and col=0 and rows>0 order by slice, col; slice | col | rows | minvalue | maxvalue ------+-----+-------+----------+---------- 0 | 0 | 13587 | 5 | 49989 1 | 0 | 11245 | 2 | 49990 2 | 0 | 15008 | 1 | 49976 3 | 0 | 10150 | 4 | 49986 (4 rows) ```
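The skew is easy to quantify from the SVV\_DISKUSAGE output\. The following Python snippet simply redoes the arithmetic on the row counts shown above:

```python
# Rows per slice for the userskey table (DISTKEY state), copied from the
# query result above.
rows_per_slice = {0: 13587, 1: 11245, 2: 15008, 3: 10150}

fullest = max(rows_per_slice.values())
emptiest = min(rows_per_slice.values())

# Ratio of the fullest slice to the emptiest; 1.0 would be perfectly even.
print(round(fullest / emptiest, 2))  # 1.48

# Slice 0 relative to slice 3, the comparison made in the text above.
print(round((rows_per_slice[0] / rows_per_slice[3] - 1) * 100))  # 34 (percent)
```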
If you create a new table with the same data as the USERS table but set the DISTSTYLE to EVEN, rows are always evenly distributed across slices\. ``` create table userseven diststyle even as select * from users; select slice, col, num_values as rows, minvalue, maxvalue from svv_diskusage where name = 'userseven' and col=0 and rows>0 order by slice, col; slice | col | rows | minvalue | maxvalue ------+-----+-------+----------+---------- 0 | 0 | 12497 | 4 | 49990 1 | 0 | 12498 | 8 | 49984 2 | 0 | 12498 | 2 | 49988 3 | 0 | 12497 | 1 | 49989 (4 rows) ``` However, because distribution is not based on a specific column, query processing can be degraded, especially if the table is joined to other tables\. The lack of distribution on a joining column often influences the type of join operation that can be performed efficiently\. Joins, aggregations, and grouping operations are optimized when both tables are distributed and sorted on their respective joining columns\.
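The near\-identical slice counts fall out of round\-robin assignment\. This sketch illustrates the idea with a plain modulo scheme \(Amazon Redshift's internal mechanics are more involved, but the evenness property is the same\):

```python
from collections import Counter

NUM_SLICES = 4
TOTAL_ROWS = 49990  # row count of the USERS table in the example above

# Round-robin: row i lands on slice i mod NUM_SLICES.
counts = Counter(i % NUM_SLICES for i in range(TOTAL_ROWS))

print(sorted(counts.values()))                      # [12497, 12497, 12498, 12498]
print(max(counts.values()) - min(counts.values()))  # 1: slices differ by at most one row
```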
If you create a new table with the same data as the USERS table but set the DISTSTYLE to ALL, all the rows are distributed to the first slice of each node\. ``` select slice, col, num_values as rows, minvalue, maxvalue from svv_diskusage where name = 'usersall' and col=0 and rows > 0 order by slice, col; slice | col | rows | minvalue | maxvalue ------+-----+-------+----------+---------- 0 | 0 | 49990 | 4 | 49990 2 | 0 | 49990 | 2 | 49990 (2 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Distribution_examples.md
7866f5ab368e-0
Stored procedures are commonly used to encapsulate logic for data transformation, data validation, and business\-specific logic\. By combining multiple SQL steps into a stored procedure, you can reduce round trips between your applications and the database\. For fine\-grained access control, you can create stored procedures to perform functions without giving a user access to the underlying tables\. For example, only the owner or a superuser can truncate a table, and a user needs write permission to insert data into a table\. Instead of granting a user permissions on the underlying tables, you can create a stored procedure that performs the task\. You then give the user permission to run the stored procedure\. A stored procedure with the DEFINER security attribute runs with the privileges of the stored procedure's owner\. By default, a stored procedure has INVOKER security, which means the procedure uses the permissions of the user that calls the procedure\. To create a stored procedure, use the [CREATE PROCEDURE](r_CREATE_PROCEDURE.md) command\. To run a procedure, use the [CALL](r_CALL_procedure.md) command\. Examples follow later in this section\. **Note** Some clients might throw the following error when creating an Amazon Redshift stored procedure\. ``` ERROR: 42601: [Amazon](500310) unterminated dollar-quoted string at or near "$$ ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-create.md
7866f5ab368e-1
ERROR: 42601: [Amazon](500310) unterminated dollar-quoted string at or near "$$ ``` This error occurs because the client can't correctly parse the CREATE PROCEDURE statement, which contains both semicolons delimiting individual statements and dollar sign \($\) quoting\. As a result, only part of the statement is sent to the Amazon Redshift server\. You can often work around this error by using the `Run as batch` or `Execute selected` option of the client\. For example, when using an Aginity client, use the `Run entire script as batch` option\. When using SQL Workbench/J, we recommend version 124\. When using SQL Workbench/J version 125, consider specifying an alternate delimiter as a workaround\. Because the CREATE PROCEDURE statement contains SQL statements delimited with semicolons \(;\), defining an alternate delimiter such as a forward slash \(/\) and placing it at the end of the CREATE PROCEDURE statement sends the entire statement to the Amazon Redshift server for processing, as shown in the following example\. ``` CREATE OR REPLACE PROCEDURE test() AS $$ BEGIN SELECT 1 a; END; $$ LANGUAGE plpgsql ; / ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-create.md
7866f5ab368e-2
END; $$ LANGUAGE plpgsql ; / ``` For more information, see [Alternate delimiter](http://www.sql-workbench.net/manual/profiles.html#profile-alternate-delimiter) in the SQL Workbench/J documentation\. Or use a client with better support for parsing CREATE PROCEDURE statements, such as the [Query editor in the Amazon Redshift console](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor.html) or TablePlus\. **Topics** + [Naming stored procedures](stored-procedure-naming.md) + [Security and privileges for stored procedures](stored-procedure-security-and-privileges.md) + [Returning a result set](stored-procedure-result-set.md) + [Managing transactions](stored-procedure-transaction-management.md) + [Trapping errors](stored-procedure-trapping-errors.md) + [Logging stored procedures](c_PLpgSQL-logging.md) + [Limits and differences for stored procedure support](stored-procedure-constraints.md) The following example shows a procedure with no output arguments\. By default, arguments are input \(IN\) arguments\. ``` CREATE OR REPLACE PROCEDURE test_sp1(f1 int, f2 varchar) AS $$ BEGIN
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-create.md
7866f5ab368e-3
``` CREATE OR REPLACE PROCEDURE test_sp1(f1 int, f2 varchar) AS $$ BEGIN RAISE INFO 'f1 = %, f2 = %', f1, f2; END; $$ LANGUAGE plpgsql; call test_sp1(5, 'abc'); INFO: f1 = 5, f2 = abc CALL ``` The following example shows a procedure with output arguments\. Arguments are input \(IN\), input and output \(INOUT\), and output \(OUT\)\. ``` CREATE OR REPLACE PROCEDURE test_sp2(f1 IN int, f2 INOUT varchar(256), out_var OUT varchar(256)) AS $$ DECLARE loop_var int; BEGIN IF f1 is null OR f2 is null THEN RAISE EXCEPTION 'input cannot be null'; END IF; DROP TABLE if exists my_etl; CREATE TEMP TABLE my_etl(a int, b varchar); FOR loop_var IN 1..f1 LOOP insert into my_etl values (loop_var, f2); f2 := f2 || '+' || f2; END LOOP; SELECT INTO out_var count(*) from my_etl; END;
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-create.md
7866f5ab368e-4
END LOOP; SELECT INTO out_var count(*) from my_etl; END; $$ LANGUAGE plpgsql; call test_sp2(2,'2019'); f2 | column2 ---------------------+--------- 2019+2019+2019+2019 | 2 (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-create.md
c6395bc5d85a-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_categorytable.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_categorytable.md
f1233f3dd9d2-0
Shows summary information for tables in the database\. The view filters system tables and shows only user\-defined tables\. You can use the SVV\_TABLE\_INFO view to diagnose and address table design issues that can influence query performance, including issues with compression encoding, distribution keys, sort style, data distribution skew, table size, and statistics\. The SVV\_TABLE\_INFO view doesn't return any information for empty tables\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_TABLE_INFO.md
f1233f3dd9d2-1
The SVV\_TABLE\_INFO view summarizes information from the [STV\_BLOCKLIST](r_STV_BLOCKLIST.md), [STV\_PARTITIONS](r_STV_PARTITIONS.md), [STV\_TBL\_PERM](r_STV_TBL_PERM.md), and [STV\_SLICES](r_STV_SLICES.md) system tables and from the [PG\_DATABASE](https://www.postgresql.org/docs/8.0/static/catalog-pg-database.html), [PG\_ATTRIBUTE](https://www.postgresql.org/docs/8.0/static/catalog-pg-attribute.html), [PG\_CLASS](https://www.postgresql.org/docs/8.0/static/catalog-pg-class.html), [PG\_NAMESPACE](https://www.postgresql.org/docs/8.0/static/catalog-pg-namespace.html), and [PG\_TYPE](https://www.postgresql.org/docs/8.0/static/catalog-pg-type.html) catalog tables\. SVV\_TABLE\_INFO is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\. To permit a user to query the view, grant SELECT privilege on SVV\_TABLE\_INFO to the user\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_TABLE_INFO.md
abe58f68d985-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_TABLE_INFO.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_TABLE_INFO.md
0121bc281d54-0
The following example shows encoding, distribution style, sorting, and data skew for all user\-defined tables in the database\. Note that "table" must be enclosed in double quotes because it is a reserved word\. ``` select "table", encoded, diststyle, sortkey1, skew_sortkey1, skew_rows from svv_table_info order by 1; table | encoded | diststyle | sortkey1 | skew_sortkey1 | skew_rows ---------------+---------+-----------------+--------------+---------------+---------- category | N | EVEN | | | date | N | ALL | dateid | 1.00 | event | Y | KEY(eventid) | dateid | 1.00 | 1.02 listing | Y | KEY(listid) | dateid | 1.00 | 1.01 sales | Y | KEY(listid) | dateid | 1.00 | 1.02 users | Y | KEY(userid) | userid | 1.00 | 1.01 venue | N | ALL | venueid | 1.00 | (7 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_TABLE_INFO.md
6ff0c43b02f3-0
Return any 10 rows from the SALES table\. Because no ORDER BY clause is specified, the set of rows that this query returns is unpredictable\. ``` select top 10 * from sales; ``` The following query is functionally equivalent, but uses a LIMIT clause instead of a TOP clause: ``` select * from sales limit 10; ``` Return the first 10 rows from the SALES table, ordered by the QTYSOLD column in descending order\. ``` select top 10 qtysold, sellerid from sales order by qtysold desc, sellerid; qtysold | sellerid --------+---------- 8 | 518 8 | 520 8 | 574 8 | 718 8 | 868 8 | 2663 8 | 3396 8 | 3726 8 | 5250 8 | 6216 (10 rows) ``` Return the first two QTYSOLD and SELLERID values from the SALES table, ordered by the QTYSOLD column: ``` select top 2 qtysold, sellerid from sales order by qtysold desc, sellerid; qtysold | sellerid
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_TOP.md
6ff0c43b02f3-1
from sales order by qtysold desc, sellerid; qtysold | sellerid --------+---------- 8 | 518 8 | 520 (2 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_TOP.md
a4b37d0d3d2e-0
Use the SVL\_S3QUERY\_SUMMARY view to get a summary of all Amazon Redshift Spectrum queries \(S3 queries\) that have been run on the system\. SVL\_S3QUERY\_SUMMARY aggregates detail from SVL\_S3QUERY at the segment level\. SVL\_S3QUERY\_SUMMARY is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\. For SVCS\_S3QUERY\_SUMMARY, see [SVCS\_S3QUERY\_SUMMARY](r_SVCS_S3QUERY_SUMMARY.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3QUERY_SUMMARY.md
d640c2862d18-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_S3QUERY_SUMMARY.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3QUERY_SUMMARY.md
8a2e62624bf1-0
The following example gets the scan step details for the last query executed\. ``` select query, segment, elapsed, s3_scanned_rows, s3_scanned_bytes, s3query_returned_rows, s3query_returned_bytes, files from svl_s3query_summary where query = pg_last_query_id() order by query,segment; ``` ``` query | segment | elapsed | s3_scanned_rows | s3_scanned_bytes | s3query_returned_rows | s3query_returned_bytes | files ------+---------+---------+-----------------+------------------+-----------------------+------------------------+------ 4587 | 2 | 67811 | 0 | 0 | 0 | 0 | 0 4587 | 2 | 591568 | 172462 | 11260097 | 8513 | 170260 | 1 4587 | 2 | 216849 | 0 | 0 | 0 | 0 | 0 4587 | 2 | 216671 | 0 | 0 | 0 | 0 | 0
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3QUERY_SUMMARY.md
8a2e62624bf1-1
4587 | 2 | 216671 | 0 | 0 | 0 | 0 | 0 ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3QUERY_SUMMARY.md
c70d40e6b416-0
Records the state of tables that are temporarily locked during cluster restart operations\. Amazon Redshift places a temporary lock on tables while they are being processed to resolve stale transactions following a cluster restart\. STV\_STARTUP\_RECOVERY\_STATE is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_STARTUP_RECOVERY_STATE.md
e375acfbbb3a-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_STARTUP_RECOVERY_STATE.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_STARTUP_RECOVERY_STATE.md
1516185e0bc1-0
To monitor which tables are temporarily locked, execute the following query after a cluster restart\. ``` select * from STV_STARTUP_RECOVERY_STATE; db_id | tbl_id | table_name --------+--------+------------ 100044 | 100058 | lineorder 100044 | 100068 | part 100044 | 100072 | customer 100044 | 100192 | supplier (4 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_STARTUP_RECOVERY_STATE.md
1e6530336988-0
Runlength encoding replaces a value that is repeated consecutively with a token that consists of the value and a count of the number of consecutive occurrences \(the length of the run\)\. A separate dictionary of unique values is created for each block of column values on disk\. \(An Amazon Redshift disk block occupies 1 MB\.\) This encoding is best suited to a table in which data values are often repeated consecutively, for example, when the table is sorted by those values\. For example, if a column in a large dimension table has a predictably small domain, such as a COLOR column with fewer than 10 possible values, these values are likely to fall in long sequences throughout the table, even if the data is not sorted\. We do not recommend applying runlength encoding on any column that is designated as a sort key\. Range\-restricted scans perform better when blocks contain similar numbers of rows\. If sort key columns are compressed much more highly than other columns in the same query, range\-restricted scans might perform poorly\. The following table uses the COLOR column example to show how runlength encoding works: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c_Runlength_encoding.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Runlength_encoding.md
abce4dfc94cc-0
For more information about the tables used in the following examples, see [Sample database](c_sampledb.md)\. The CATEGORY table in the TICKIT database contains the following rows: ``` catid | catgroup | catname | catdesc -------+----------+-----------+----------------------------------------- 1 | Sports | MLB | Major League Baseball 2 | Sports | NHL | National Hockey League 3 | Sports | NFL | National Football League 4 | Sports | NBA | National Basketball Association 5 | Sports | MLS | Major League Soccer 6 | Shows | Musicals | Musical theatre 7 | Shows | Plays | All non-musical theatre 8 | Shows | Opera | All opera and light opera 9 | Concerts | Pop | All rock and pop music concerts 10 | Concerts | Jazz | All jazz singers and bands 11 | Concerts | Classical | All symphony, concerto, and choir concerts (11 rows) ``` **Updating a table based on a range of values** Update the CATGROUP column based on a range of values in the CATID column\. ``` update category set catgroup='Theatre' where catid between 6 and 8; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_UPDATE_statements.md
abce4dfc94cc-1
update category set catgroup='Theatre' where catid between 6 and 8; ``` ``` select * from category where catid between 6 and 8; catid | catgroup | catname | catdesc -------+----------+-----------+-------------------------------------------- 6 | Theatre | Musicals | Musical theatre 7 | Theatre | Plays | All non-musical theatre 8 | Theatre | Opera | All opera and light opera (3 rows) ``` **Updating a table based on a current value** Update the CATNAME and CATDESC columns based on their current CATGROUP value: ``` update category set catdesc=default, catname='Shows' where catgroup='Theatre'; ``` ``` select * from category where catname='Shows'; catid | catgroup | catname | catdesc
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_UPDATE_statements.md
abce4dfc94cc-2
select * from category where catname='Shows'; catid | catgroup | catname | catdesc -------+----------+-----------+-------------------------------------------- 6 | Theatre | Shows | 7 | Theatre | Shows | 8 | Theatre | Shows | (3 rows) ``` In this case, the CATDESC column was set to null because no default value was defined when the table was created\. Run the following commands to set the CATEGORY table data back to the original values: ``` truncate category; copy category from 's3://mybucket/data/category_pipe.txt' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' delimiter '|'; ``` **Updating a table based on the result of a WHERE clause subquery** Update the CATEGORY table based on the result of a subquery in the WHERE clause: ``` update category set catdesc='Broadway Musical' where category.catid in
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_UPDATE_statements.md
abce4dfc94cc-3
``` update category set catdesc='Broadway Musical' where category.catid in (select category.catid from category join event on category.catid = event.catid join venue on venue.venueid = event.venueid join sales on sales.eventid = event.eventid where venuecity='New York City' and catname='Musicals'); ``` View the updated table: ``` select * from category order by 1; catid | catgroup | catname | catdesc -------+----------+-----------+-------------------------------------------- 1 | Sports | MLB | Major League Baseball 2 | Sports | NHL | National Hockey League 3 | Sports | NFL | National Football League 4 | Sports | NBA | National Basketball Association 5 | Sports | MLS | Major League Soccer 6 | Shows | Musicals | Broadway Musical 7 | Shows | Plays | All non-musical theatre 8 | Shows | Opera | All opera and light opera 9 | Concerts | Pop | All rock and pop music concerts 10 | Concerts | Jazz | All jazz singers and bands
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_UPDATE_statements.md
abce4dfc94cc-4
9 | Concerts | Pop | All rock and pop music concerts 10 | Concerts | Jazz | All jazz singers and bands 11 | Concerts | Classical | All symphony, concerto, and choir concerts (11 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_UPDATE_statements.md
8171a5741e16-0
Update the original 11 rows in the CATEGORY table based on matching CATID rows in the EVENT table: ``` update category set catid=100 from event where event.catid=category.catid; select * from category order by 1; catid | catgroup | catname | catdesc -------+----------+-----------+-------------------------------------------- 1 | Sports | MLB | Major League Baseball 2 | Sports | NHL | National Hockey League 3 | Sports | NFL | National Football League 4 | Sports | NBA | National Basketball Association 5 | Sports | MLS | Major League Soccer 10 | Concerts | Jazz | All jazz singers and bands 11 | Concerts | Classical | All symphony, concerto, and choir concerts 100 | Shows | Opera | All opera and light opera 100 | Shows | Musicals | Musical theatre 100 | Concerts | Pop | All rock and pop music concerts 100 | Shows | Plays | All non-musical theatre (11 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_UPDATE_statements.md
8171a5741e16-1
100 | Shows | Plays | All non-musical theatre (11 rows) ``` Note that the EVENT table is listed in the FROM clause and the join condition to the target table is defined in the WHERE clause\. Only four rows qualified for the update\. These four rows are the rows whose CATID values were originally 6, 7, 8, and 9; only those four categories are represented in the EVENT table: ``` select distinct catid from event; catid ------- 9 8 6 7 (4 rows) ``` Update the original 11 rows in the CATEGORY table by extending the previous example and adding another condition to the WHERE clause\. Because of the restriction on the CATGROUP column, only one row qualifies for the update \(although four rows qualify for the join\)\. ``` update category set catid=100 from event where event.catid=category.catid and catgroup='Concerts'; select * from category where catid=100; catid | catgroup | catname | catdesc -------+----------+---------+--------------------------------- 100 | Concerts | Pop | All rock and pop music concerts (1 row)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_UPDATE_statements.md
8171a5741e16-2
100 | Concerts | Pop | All rock and pop music concerts (1 row) ``` An alternative way to write this example is as follows: ``` update category set catid=100 from event join category cat on event.catid=cat.catid where cat.catgroup='Concerts'; ``` The advantage to this approach is that the join criteria are clearly separated from any other criteria that qualify rows for the update\. Note the use of the alias CAT for the CATEGORY table in the FROM clause\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_UPDATE_statements.md
ec11467416a7-0
The previous example showed an inner join specified in the FROM clause of an UPDATE statement\. The following example returns an error because the FROM clause does not support outer joins to the target table: ``` update category set catid=100 from event left join category cat on event.catid=cat.catid where cat.catgroup='Concerts'; ERROR: Target table must be part of an equijoin predicate ``` If the outer join is required for the UPDATE statement, you can move the outer join syntax into a subquery: ``` update category set catid=100 from (select event.catid from event left join category cat on event.catid=cat.catid) eventcat where category.catid=eventcat.catid and catgroup='Concerts'; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_UPDATE_statements.md
b02cfe87d970-0
Searches a string for a regular expression pattern and returns an integer that indicates the beginning position or ending position of the matched substring\. If no match is found, then the function returns 0\. REGEXP\_INSTR is similar to the [POSITION](r_POSITION.md) function, but lets you search a string for a regular expression pattern\. For more information about regular expressions, see [POSIX operators](pattern-matching-conditions-posix.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_INSTR.md
5d5c632ea9ab-0
``` REGEXP_INSTR ( source_string, pattern [, position [, occurrence] [, option [, parameters ] ] ] ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_INSTR.md
7a1a30d36049-0
*source\_string* A string expression, such as a column name, to be searched\. *pattern* A string literal that represents a SQL standard regular expression pattern\. *position* A positive integer that indicates the position within *source\_string* to begin searching\. The position is based on the number of characters, not bytes, so that multibyte characters are counted as single characters\. The default is 1\. If *position* is less than 1, the search begins at the first character of *source\_string*\. If *position* is greater than the number of characters in *source\_string*, the result is 0\. *occurrence* A positive integer that indicates which occurrence of the pattern to use\. REGEXP\_INSTR skips the first *occurrence* \-1 matches\. The default is 1\. If *occurrence* is less than 1 or greater than the number of characters in *source\_string*, the search is ignored and the result is 0\. *option* A value that indicates whether to return the position of the first character of the match \(`0`\) or the position of the first character following the end of the match \(`1`\)\. A nonzero value is the same as 1\. The default value is 0\. *parameters* One or more string literals that indicate how the function matches the pattern\. The possible values are the following: + c – Perform case\-sensitive matching\. The default is to use case\-sensitive matching\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_INSTR.md
7a1a30d36049-1
+ c – Perform case\-sensitive matching\. The default is to use case\-sensitive matching\. + i – Perform case\-insensitive matching\. + e – Extract a substring using a subexpression\. If *pattern* includes a subexpression, REGEXP\_INSTR matches a substring using the first subexpression in *pattern*\. REGEXP\_INSTR considers only the first subexpression; additional subexpressions are ignored\. If the pattern doesn't have a subexpression, REGEXP\_INSTR ignores the 'e' parameter\.
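To make the argument semantics concrete, here is a rough Python emulation of the function's behavior using the standard `re` module\. The helper is an illustration only: Python's regular\-expression dialect differs from the POSIX dialect Amazon Redshift uses, and only the `position`, `occurrence`, `option`, and `i` parameters are sketched\.

```python
# Rough emulation of REGEXP_INSTR semantics (illustrative sketch only).
# Positions are 1-based, as in the SQL function; a result of 0 means no match.
import re

def regexp_instr(source, pattern, position=1, occurrence=1, option=0, params=""):
    flags = re.IGNORECASE if "i" in params else 0
    start = max(position - 1, 0)                 # convert 1-based to 0-based
    matches = list(re.finditer(pattern, source[start:], flags))
    if occurrence < 1 or occurrence > len(matches):
        return 0                                 # occurrence out of range
    m = matches[occurrence - 1]
    # option 0: first character of the match; nonzero: first character
    # following the end of the match (both reported 1-based).
    return start + (m.end() if option else m.start()) + 1

print(regexp_instr("the fox", "fox"))           # -> 5
print(regexp_instr("the fox", "fox", 1, 1, 1))  # -> 8
print(regexp_instr("the fox", "cat"))           # -> 0
```

Note how `option` shifts the answer from the match's first character \(position 5\) to the character just past its end \(position 8\), and how a failed search returns 0 rather than NULL\.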
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_INSTR.md
a66b0eda6f4e-0
Integer
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_INSTR.md