*schema_name*
The name of the database schema to be altered.

RENAME TO
A clause that renames the schema.

*new_name*
The new name of the schema. For more information about valid names, see [Names and identifiers](r_names.md).

OWNER TO
A clause that changes the owner of the schema.

*new_owner*
The new owner of the schema.

QUOTA
The maximum amount of disk space that the specified schema can use. This space is the collective size of all tables under the specified schema. Amazon Redshift converts the selected value to megabytes. Gigabytes is the default unit of measurement when you don't specify a value.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_SCHEMA.md
The following example renames the SALES schema to US_SALES.

```
alter schema sales rename to us_sales;
```

The following example gives ownership of the US_SALES schema to the user DWUSER.

```
alter schema us_sales owner to dwuser;
```

The following examples change the quota to 300 GB and then remove the quota.

```
alter schema us_sales QUOTA 300 GB;
alter schema us_sales QUOTA UNLIMITED;
```
Values (default in bold): **false**, true
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_json_serialization_enable.md
A session configuration that modifies the JSON serialization behavior of ORC, JSON, Ion, and Parquet formatted data. If `json_serialization_enable` is `true`, all top-level collections are automatically serialized to JSON and returned as VARCHAR(65535). Noncomplex columns are not affected or serialized. Because collection columns are serialized as VARCHAR(65535), their nested subfields can no longer be accessed directly as part of the query syntax (that is, in the filter clause).
If `json_serialization_enable` is `false`, top-level collections are not serialized to JSON. For more information about nested JSON serialization, see [Serializing complex nested JSON](serializing-complex-JSON.md).
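As a sketch of how this setting is typically used. The external table `spectrum.customers` and its nested `orders` column are hypothetical names invented for this example, not names from this guide:

```
-- Illustrative sketch: serialize nested Parquet/ORC collections as JSON text.
-- spectrum.customers and its nested "orders" column are hypothetical names.
SET json_serialization_enable TO true;

-- Each top-level collection column now comes back as a JSON string in a
-- VARCHAR(65535) column rather than as a navigable nested structure.
SELECT orders FROM spectrum.customers LIMIT 1;
```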
Returns the characters extracted from a string by searching for a regular expression pattern. REGEXP_SUBSTR is similar to the [SUBSTRING function](r_SUBSTRING.md), but lets you search a string for a regular expression pattern. For more information about regular expressions, see [POSIX operators](pattern-matching-conditions-posix.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/REGEXP_SUBSTR.md
```
REGEXP_SUBSTR ( source_string, pattern [, position [, occurrence [, parameters ] ] ] )
```
*source_string*
A string expression, such as a column name, to be searched.

*pattern*
A string literal that represents a SQL standard regular expression pattern.

*position*
A positive integer that indicates the position within *source_string* to begin searching. The position is based on the number of characters, not bytes, so that multi-byte characters are counted as single characters. The default is 1. If *position* is less than 1, the search begins at the first character of *source_string*. If *position* is greater than the number of characters in *source_string*, the result is an empty string ("").

*occurrence*
A positive integer that indicates which occurrence of the pattern to use. REGEXP_SUBSTR skips the first *occurrence* - 1 matches. The default is 1. If *occurrence* is less than 1 or greater than the number of characters in *source_string*, the search is ignored and the result is NULL.

*parameters*
One or more string literals that indicate how the function matches the pattern. The possible values are the following:
+ c – Perform case-sensitive matching. The default is to use case-sensitive matching.
+ i – Perform case-insensitive matching.
+ e – Extract a substring using a subexpression.
If *pattern* includes a subexpression, REGEXP_SUBSTR matches a substring using the first subexpression in *pattern*. REGEXP_SUBSTR considers only the first subexpression; additional subexpressions are ignored. If the pattern doesn't have a subexpression, REGEXP_SUBSTR ignores the 'e' parameter.
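The *position*, *occurrence*, and *parameters* arguments can be combined. The following sketch uses invented sample strings to illustrate the expected behavior of *occurrence* and the 'e' parameter:

```
-- Skip the first match and return the second occurrence of a number pattern.
SELECT regexp_substr('id 10, qty 25, price 7', '[0-9]+', 1, 2);
-- expected result: 25

-- With the 'e' parameter, return only the first parenthesized subexpression.
SELECT regexp_substr('version 8.2.1 (stable)', 'version ([0-9.]+)', 1, 1, 'e');
-- expected result: 8.2.1
```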
VARCHAR
The following example returns the portion of an email address between the @ character and the domain extension.

```
select email, regexp_substr(email,'@[^.]*')
from users limit 5;

                   email                     |   regexp_substr
---------------------------------------------+---------------------
 Suspendisse.tristique@nonnisiAenean.edu     | @nonnisiAenean
 sed@lacusUtnec.ca                           | @lacusUtnec
 elementum@semperpretiumneque.ca             | @semperpretiumneque
 Integer.mollis.Integer@tristiquealiquet.org | @tristiquealiquet
 Donec.fringilla@sodalesat.org               | @sodalesat
```
The SUM function returns the sum of the input column or expression values. The SUM function works with numeric values and ignores NULL values.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SUM.md
```
SUM ( [ DISTINCT | ALL ] expression )
```
*expression*
The target column or expression that the function operates on.

DISTINCT | ALL
With the argument DISTINCT, the function eliminates all duplicate values from the specified expression before calculating the sum. With the argument ALL, the function retains all duplicate values from the expression for calculating the sum. ALL is the default.
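The difference between ALL and DISTINCT can be seen by summing a column that contains duplicate values. This sketch assumes the PRICEPAID column of the sample SALES table; no result is shown because it depends on the data loaded:

```
-- ALL (the default) sums every row; DISTINCT sums each unique value once.
select sum(pricepaid) as sum_all,
       sum(distinct pricepaid) as sum_distinct
from sales;
```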
The argument types supported by the SUM function are SMALLINT, INTEGER, BIGINT, NUMERIC, DECIMAL, REAL, and DOUBLE PRECISION. The return types supported by the SUM function are:
+ BIGINT for BIGINT, SMALLINT, and INTEGER arguments
+ NUMERIC for NUMERIC arguments
+ DOUBLE PRECISION for floating point arguments

The default precision for a SUM function result with a 64-bit NUMERIC or DECIMAL argument is 19. The default precision for a result with a 128-bit NUMERIC or DECIMAL argument is 38. The scale of the result is the same as the scale of the argument. For example, a SUM of a DEC(5,2) column returns a DEC(19,2) data type.
Find the sum of all commissions paid from the SALES table:

```
select sum(commission) from sales;

     sum
-------------
 16614814.65
(1 row)
```

Find the number of seats in all venues in the state of Florida:

```
select sum(venueseats) from venue
where venuestate = 'FL';

  sum
--------
 250411
(1 row)
```

Find the number of seats sold in May:

```
select sum(qtysold) from sales, date
where sales.dateid = date.dateid and date.month = 'MAY';

  sum
-------
 32291
(1 row)
```
The following examples demonstrate various column and table attributes in Amazon Redshift CREATE TABLE statements.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_examples.md
The following example creates a SALES table in the TICKIT database with compression defined for several columns. LISTID is declared as the distribution key, and LISTID and SELLERID are declared as a multicolumn compound sort key. Primary key and foreign key constraints are also defined for the table.

```
create table sales(
  salesid integer not null,
  listid integer not null,
  sellerid integer not null,
  buyerid integer not null,
  eventid integer not null encode mostly16,
  dateid smallint not null,
  qtysold smallint not null encode mostly8,
  pricepaid decimal(8,2) encode delta32k,
  commission decimal(8,2) encode delta32k,
  saletime timestamp,
  primary key(salesid),
  foreign key(listid) references listing(listid),
  foreign key(sellerid) references users(userid),
  foreign key(buyerid) references users(userid),
  foreign key(dateid) references date(dateid))
distkey(listid)
compound sortkey(listid,sellerid);
```

The result is as follows:

```
schemaname | tablename | column     | type                        | encoding | distkey | sortkey | notnull
-----------+-----------+------------+-----------------------------+----------+---------+---------+--------
public     | sales     | salesid    | integer                     | lzo      | false   | 0       | true
public     | sales     | listid     | integer                     | none     | true    | 1       | true
public     | sales     | sellerid   | integer                     | none     | false   | 2       | true
public     | sales     | buyerid    | integer                     | lzo      | false   | 0       | true
public     | sales     | eventid    | integer                     | mostly16 | false   | 0       | true
public     | sales     | dateid     | smallint                    | lzo      | false   | 0       | true
public     | sales     | qtysold    | smallint                    | mostly8  | false   | 0       | true
public     | sales     | pricepaid  | numeric(8,2)                | delta32k | false   | 0       | false
public     | sales     | commission | numeric(8,2)                | delta32k | false   | 0       | false
public     | sales     | saletime   | timestamp without time zone | lzo      | false   | 0       | false
```
The following example creates the CUSTOMER table with an interleaved sort key.

```
create table customer_interleaved (
  c_custkey integer not null,
  c_name varchar(25) not null,
  c_address varchar(25) not null,
  c_city varchar(10) not null,
  c_nation varchar(15) not null,
  c_region varchar(12) not null,
  c_phone varchar(15) not null,
  c_mktsegment varchar(10) not null)
diststyle all
interleaved sortkey (c_custkey, c_city, c_mktsegment);
```
The following example either creates the CITIES table, or does nothing and returns a message if it already exists:

```
create table if not exists cities(
  cityid integer not null,
  city varchar(100) not null,
  state char(2) not null);
```
The following example creates the VENUE table with ALL distribution.

```
create table venue(
  venueid smallint not null,
  venuename varchar(100),
  venuecity varchar(30),
  venuestate char(2),
  venueseats integer,
  primary key(venueid))
diststyle all;
```
The following example creates a table called MYEVENT with three columns.

```
create table myevent(
  eventid int,
  eventname varchar(200),
  eventcity varchar(30))
diststyle even;
```

The table is distributed evenly and isn't sorted. The table has no declared DISTKEY or SORTKEY columns.

```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 'myevent';

  column   |          type          | encoding | distkey | sortkey
-----------+------------------------+----------+---------+---------
 eventid   | integer                | lzo      | f       | 0
 eventname | character varying(200) | lzo      | f       | 0
 eventcity | character varying(30)  | lzo      | f       | 0
(3 rows)
```
The following example creates a temporary table called TEMPEVENT, which inherits its columns from the EVENT table.

```
create temp table tempevent(like event);
```

This table also inherits the DISTKEY and SORTKEY attributes of its parent table:

```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 'tempevent';

  column   |            type             | encoding | distkey | sortkey
-----------+-----------------------------+----------+---------+---------
 eventid   | integer                     | none     | t       | 1
 venueid   | smallint                    | none     | f       | 0
 catid     | smallint                    | none     | f       | 0
 dateid    | smallint                    | none     | f       | 0
 eventname | character varying(200)      | lzo      | f       | 0
 starttime | timestamp without time zone | bytedict | f       | 0
(6 rows)
```
The following example creates a table named VENUE_IDENT, which has an IDENTITY column named VENUEID. This column starts with 0 and increments by 1 for each record. VENUEID is also declared as the primary key of the table.

```
create table venue_ident(
  venueid bigint identity(0, 1),
  venuename varchar(100),
  venuecity varchar(30),
  venuestate char(2),
  venueseats integer,
  primary key(venueid));
```
The following example creates a table named `t1`. This table has an IDENTITY column named `hist_id` and a default IDENTITY column named `base_id`.

```
CREATE TABLE t1(
  hist_id BIGINT IDENTITY NOT NULL, /* Cannot be overridden */
  base_id BIGINT GENERATED BY DEFAULT AS IDENTITY NOT NULL, /* Can be overridden */
  business_key varchar(10),
  some_field varchar(10)
);
```

Inserting a row into the table shows that both `hist_id` and `base_id` values are generated.

```
INSERT INTO T1 (business_key, some_field) values ('A','MM');

SELECT * FROM t1;

 hist_id | base_id | business_key | some_field
---------+---------+--------------+------------
       1 |       1 | A            | MM
```

Inserting a second row shows that the default value for `base_id` is generated.

```
INSERT INTO T1 (base_id, business_key, some_field) values (DEFAULT, 'B','MNOP');

SELECT * FROM t1;

 hist_id | base_id | business_key | some_field
---------+---------+--------------+------------
       1 |       1 | A            | MM
       2 |       2 | B            | MNOP
```

Inserting a third row shows that the value for `base_id` doesn't need to be unique.

```
INSERT INTO T1 (base_id, business_key, some_field) values (2,'B','MNNN');

SELECT * FROM t1;

 hist_id | base_id | business_key | some_field
---------+---------+--------------+------------
       1 |       1 | A            | MM
       2 |       2 | B            | MNOP
       3 |       2 | B            | MNNN
```
The following example creates a CATEGORYDEF table that declares default values for each column:

```
create table categorydef(
  catid smallint not null default 0,
  catgroup varchar(10) default 'Special',
  catname varchar(10) default 'Other',
  catdesc varchar(50) default 'Special events',
  primary key(catid));

insert into categorydef values(default,default,default,default);

select * from categorydef;

 catid | catgroup | catname |    catdesc
-------+----------+---------+----------------
     0 | Special  | Other   | Special events
(1 row)
```
The following example shows how the DISTKEY, SORTKEY, and DISTSTYLE options work. In this example, COL1 is the distribution key; therefore, the distribution style must be either set to KEY or not set. By default, the table has no sort key and so isn't sorted:

```
create table t1(col1 int distkey, col2 int) diststyle key;
```

The result is as follows:

```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 't1';

column | type    | encoding | distkey | sortkey
-------+---------+----------+---------+---------
col1   | integer | lzo      | t       | 0
col2   | integer | lzo      | f       | 0
```

In the following example, the same column is defined as the distribution key and the sort key. Again, the distribution style must be either set to KEY or not set.

```
create table t2(col1 int distkey sortkey, col2 int);
```

The result is as follows:

```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 't2';

column | type    | encoding | distkey | sortkey
-------+---------+----------+---------+---------
col1   | integer | none     | t       | 1
col2   | integer | lzo      | f       | 0
```

In the following example, no column is set as the distribution key, COL2 is set as the sort key, and the distribution style is set to ALL:

```
create table t3(col1 int, col2 int sortkey) diststyle all;
```

The result is as follows:

```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 't3';

Column | Type    | Encoding | DistKey | SortKey
-------+---------+----------+---------+--------
col1   | integer | lzo      | f       | 0
col2   | integer | none     | f       | 1
```

In the following example, the distribution style is set to EVEN and no sort key is defined explicitly; therefore the table is distributed evenly but isn't sorted.

```
create table t4(col1 int, col2 int) diststyle even;
```

The result is as follows:

```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 't4';

column | type    | encoding | distkey | sortkey
-------+---------+----------+---------+---------
col1   | integer | lzo      | f       | 0
col2   | integer | lzo      | f       | 0
```
Use the STV_SLICES table to view the current mapping of a slice to a node. The information in STV_SLICES is used mainly for investigation purposes. STV_SLICES is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_SLICES.md
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_SLICES.html)
To view which cluster nodes are managing which slices, type the following query:

```
select node, slice from stv_slices;
```

This query returns the following sample output:

```
 node | slice
------+-------
    0 |     2
    0 |     3
    0 |     1
    0 |     0
(4 rows)
```
The following input dates are all valid examples of literal date values that you can load into Amazon Redshift tables. The default MDY DateStyle mode is assumed to be in effect, which means that the month value precedes the day value in strings such as `1999-01-08` and `01/02/00`.

**Note**
A date or time stamp literal must be enclosed in quotes when you load it into a table.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_Date_and_time_literals.html)
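As a minimal sketch of the quoting rule above (the `caldates` table name is hypothetical, invented for this example):

```
-- Date literals must be enclosed in quotes when loaded into a table.
-- Under the default MDY DateStyle, '01/02/00' is January 2, 2000.
create table caldates (caldate date);
insert into caldates values ('1999-01-08');
insert into caldates values ('01/02/00');
```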
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Date_and_time_literals.md
The following input timestamps are all valid examples of literal time values that you can load into Amazon Redshift tables. All of the valid date literals can be combined with the following time literals.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_Date_and_time_literals.html)
The following special values can be used as datetime literals and as arguments to date functions. They require single quotes and are converted to regular timestamp values during query processing.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_Date_and_time_literals.html)

The following examples show how `now` and `today` work in conjunction with the DATEADD function:

```
select dateadd(day,1,'today');

      date_add
---------------------
 2009-11-17 00:00:00
(1 row)

select dateadd(day,1,'now');

          date_add
----------------------------
 2009-11-17 10:45:32.021394
(1 row)
```
Suppose that your cluster WLM is configured with two queues, using the following dynamic properties.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-dynamic-example.html)

Now suppose that your cluster has 200 GB of memory available for query processing. (This number is arbitrary and used for illustration only.) As the following equation shows, each slot is allocated 25 GB.

```
(200 GB * 50% ) / 4 slots = 25 GB
```

Next, you change your WLM to use the following dynamic properties.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-dynamic-example.html)

As the following equation shows, the new memory allocation for each slot in queue 1 is 50 GB.

```
(200 GB * 75% ) / 3 slots = 50 GB
```

Suppose that queries A1, A2, A3, and A4 are running when the new configuration is applied, and queries B1, B2, B3, and B4 are queued. WLM dynamically reconfigures the query slots as follows.
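The two per-slot calculations above can be checked with a plain arithmetic query (this is only a sanity check of the math, not part of the WLM configuration):

```
-- 200 GB cluster: queue 1 at 50% memory / 4 slots, then 75% / 3 slots
select (200 * 0.50) / 4 as gb_per_slot_before,  -- 25 GB
       (200 * 0.75) / 3 as gb_per_slot_after;   -- 50 GB
```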
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-dynamic-example.md
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-dynamic-example.html)

1. WLM recalculates the memory allocation for each query slot. Originally, queue 1 was allocated 100 GB. The new queue has a total allocation of 150 GB, so the new queue immediately has 50 GB available. Queue 1 is now using four slots, and the new concurrency level is three slots, so no new slots are added.

2. When one query finishes, the slot is removed and 25 GB is freed. Queue 1 now has three slots and 75 GB of available memory. The new configuration needs 50 GB for each new slot, but the new concurrency level is three slots, so no new slots are added.

3. When a second query finishes, the slot is removed, and 25 GB is freed. Queue 1 now has two slots and 100 GB of free memory.

4. A new slot is added using 50 GB of the free memory. Queue 1 now has three slots, and 50 GB of free memory. Queued queries can now be routed to the new slot.

5. When a third query finishes, the slot is removed, and 25 GB is freed. Queue 1 now has two slots, and 75 GB of free memory.

6. A new slot is added using 50 GB of the free memory. Queue 1 now has three slots, and 25 GB of free memory. Queued queries can now be routed to the new slot.
7. When the fourth query finishes, the slot is removed, and 25 GB is freed. Queue 1 now has two slots and 50 GB of free memory.

8. A new slot is added using the 50 GB of free memory. Queue 1 now has three slots with 50 GB each, and all available memory has been allocated. The transition is complete, and all query slots are available to queued queries.
ST_SRID returns the spatial reference system identifier (SRID) of an input geometry.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_SRID-function.md
```
ST_SRID(geom)
```
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type.
`INTEGER` representing the SRID value of *geom*. If *geom* is null, then null is returned.
The following SQL returns an SRID value of a linestring.

```
SELECT ST_SRID(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'));

 st_srid
---------
       0
```
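For contrast, a geometry created with an explicit SRID should report that value. This sketch assumes the two-argument form of ST_GeomFromText that accepts an SRID:

```
SELECT ST_SRID(ST_GeomFromText('POINT(1 2)', 4326));

 st_srid
---------
    4326
```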
A deep copy recreates and repopulates a table by using a bulk insert, which automatically sorts the table. If a table has a large unsorted region, a deep copy is much faster than a vacuum. The tradeoff is that you should not make concurrent updates during a deep copy operation unless you can track them and move the delta updates into the new table after the process has completed. A VACUUM operation supports concurrent updates automatically.

You can choose one of the following methods to create a copy of the original table:

+ Use the original table DDL. If the CREATE TABLE DDL is available, this is the fastest and preferred method. If you create a new table, you can specify all table and column attributes, including primary key and foreign keys.

  **Note**
  If the original DDL is not available, you might be able to recreate the DDL by running a script called `v_generate_tbl_ddl`. You can download the script from [amazon-redshift-utils](https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_generate_tbl_ddl.sql), which is part of the [Amazon Web Services - Labs](https://github.com/awslabs) GitHub repository.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/performing-a-deep-copy.md
+ Use CREATE TABLE LIKE. If the original DDL is not available, you can use CREATE TABLE LIKE to recreate the original table. The new table inherits the encoding, distkey, sortkey, and notnull attributes of the parent table. The new table doesn't inherit the primary key and foreign key attributes of the parent table, but you can add them using [ALTER TABLE](r_ALTER_TABLE.md).

+ Create a temporary table and truncate the original table. If you need to retain the primary key and foreign key attributes of the parent table, or if the parent table has dependencies, you can use CREATE TABLE ... AS (CTAS) to create a temporary table, then truncate the original table and populate it from the temporary table. Using a temporary table improves performance significantly compared to using a permanent table, but there is a risk of losing data. A temporary table is automatically dropped at the end of the session in which it is created. TRUNCATE commits immediately, even if it is inside a transaction block. If the TRUNCATE succeeds but the session terminates before the subsequent INSERT completes, the data is lost. If data loss is unacceptable, use a permanent table.

**To perform a deep copy using the original table DDL**

1. (Optional) Recreate the table DDL by running a script called `v_generate_tbl_ddl`.

2. Create a copy of the table using the original CREATE TABLE DDL.

3. Use an INSERT INTO … SELECT statement to populate the copy with data from the original table.
4. Drop the original table.

5. Use an ALTER TABLE statement to rename the copy to the original table name.

The following example performs a deep copy on the SALES table using a duplicate of SALES named SALESCOPY.

```
create table salescopy ( … );
insert into salescopy (select * from sales);
drop table sales;
alter table salescopy rename to sales;
```

**To perform a deep copy using CREATE TABLE LIKE**

1. Create a new table using CREATE TABLE LIKE.

2. Use an INSERT INTO … SELECT statement to copy the rows from the current table to the new table.

3. Drop the current table.

4. Use an ALTER TABLE statement to rename the new table to the original table name.

The following example performs a deep copy on the SALES table using CREATE TABLE LIKE.

```
create table likesales (like sales);
insert into likesales (select * from sales);
drop table sales;
alter table likesales rename to sales;
```

**To perform a deep copy by creating a temporary table and truncating the original table**

1. Use CREATE TABLE AS to create a temporary table with the rows from the original table.

2. Truncate the current table.
3. Use an INSERT INTO … SELECT statement to copy the rows from the temporary table to the original table.

4. Drop the temporary table.

The following example performs a deep copy on the SALES table by creating a temporary table and truncating the original table:

```
create temp table salestemp as select * from sales;
truncate sales;
insert into sales (select * from salestemp);
drop table salestemp;
```
**Topics**
+ [Data warehouse system architecture](c_high_level_system_architecture.md)
+ [Performance](c_challenges_achieving_high_performance_queries.md)
+ [Columnar storage](c_columnar_storage_disk_mem_mgmnt.md)
+ [Workload management](c_workload_mngmt_classification.md)
+ [Using Amazon Redshift with other services](using-redshift-with-other-services.md)

An Amazon Redshift data warehouse is an enterprise-class relational database query and management system. Amazon Redshift supports client connections with many types of applications, including business intelligence (BI), reporting, data, and analytics tools. When you execute analytic queries, you are retrieving, comparing, and evaluating large amounts of data in multiple-stage operations to produce a final result.

Amazon Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing, columnar data storage, and very efficient, targeted data compression encoding schemes. This section presents an introduction to the Amazon Redshift system architecture.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_redshift_system_overview.md
Analyzes aggregate execution steps for queries\. These steps occur during execution of aggregate functions and GROUP BY clauses\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_AGGR.md
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_AGGR.html)
Returns information about aggregate execution steps for SLICE 1 and TBL 239\.

```
select query, segment, bytes, slots, occupied, maxlength,
is_diskbased, workmem, type
from stl_aggr
where slice=1 and tbl=239
order by rows
limit 10;
```

```
query | segment | bytes | slots   | occupied | maxlength | is_diskbased | workmem   | type
-------+---------+-------+---------+----------+-----------+--------------+-----------+--------
562   | 1       | 0     | 4194304 | 0        | 0         | f            | 383385600 | HASHED
616   | 1       | 0     | 4194304 | 0        | 0         | f            | 383385600 | HASHED
546   | 1       | 0     | 4194304 | 0        | 0         | f            | 383385600 | HASHED
547   | 0       | 8     | 0       | 0        | 0         | f            | 0         | PLAIN
685   | 1       | 32    | 4194304 | 1        | 0         | f            | 383385600 | HASHED
652   | 0       | 8     | 0       | 0        | 0         | f            | 0         | PLAIN
680   | 0       | 8     | 0       | 0        | 0         | f            | 0         | PLAIN
658   | 0       | 8     | 0       | 0        | 0         | f            | 0         | PLAIN
686   | 0       | 8     | 0       | 0        | 0         | f            | 0         | PLAIN
695   | 1       | 32    | 4194304 | 1        | 0         | f            | 383385600 | HASHED
(10 rows)
```
Captures the query text for SQL commands\. Query the STL\_QUERYTEXT view to capture the SQL that was logged for the following statements:
+ SELECT, SELECT INTO
+ INSERT, UPDATE, DELETE
+ COPY
+ UNLOAD
+ VACUUM, ANALYZE
+ CREATE TABLE AS \(CTAS\)

To query activity for these statements over a given time period, join the STL\_QUERYTEXT and STL\_QUERY views\.

**Note**
The STL\_QUERY and STL\_QUERYTEXT views only contain information about queries, not other utility and DDL commands\. For a listing and information on all statements executed by Amazon Redshift, you can also query the STL\_DDLTEXT and STL\_UTILITYTEXT views\. For a complete listing of all statements executed by Amazon Redshift, you can query the SVL\_STATEMENTTEXT view\.

See also [STL\_DDLTEXT](r_STL_DDLTEXT.md), [STL\_UTILITYTEXT](r_STL_UTILITYTEXT.md), and [SVL\_STATEMENTTEXT](r_SVL_STATEMENTTEXT.md)\.

This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERYTEXT.md
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_QUERYTEXT.html)
You can use the PG\_BACKEND\_PID\(\) function to retrieve information for the current session\. For example, the following query returns the query ID and a portion of the query text for queries executed in the current session\.

```
select query, substring(text,1,60)
from stl_querytext
where pid = pg_backend_pid()
order by query desc;

query  | substring
-------+--------------------------------------------------------------
 28262 | select query, substring(text,1,80) from stl_querytext where
 28252 | select query, substring(path,0,80) as path from stl_unload_l
 28248 | copy category from 's3://dw-tickit/manifest/category/1030_ma
 28247 | Count rows in target table
 28245 | unload ('select * from category') to 's3://dw-tickit/manifes
 28240 | select query, substring(text,1,40) from stl_querytext where
(6 rows)
```
To reconstruct the SQL stored in the `text` column of STL\_QUERYTEXT, run a SELECT statement to create SQL from 1 or more parts in the `text` column\. Before running the reconstructed SQL, replace any \(`\n`\) special characters with a new line\. The result of the following SELECT statement is rows of reconstructed SQL in the `query_statement` field\.

```
SELECT query, LISTAGG(CASE WHEN LEN(RTRIM(text)) = 0 THEN text ELSE RTRIM(text) END)
WITHIN GROUP (ORDER BY sequence) as query_statement, COUNT(*) as row_count
FROM stl_querytext
GROUP BY query ORDER BY query desc;
```

For example, the following query selects 3 columns\. The query itself is longer than 200 characters and is stored in parts in STL\_QUERYTEXT\.

```
select
1 AS a0123456789012345678901234567890123456789012345678901234567890,
2 AS b0123456789012345678901234567890123456789012345678901234567890,
3 AS b012345678901234567890123456789012345678901234
FROM stl_querytext;
```

In this example, the query is stored in 2 parts \(rows\) in the `text` column of STL\_QUERYTEXT\.

```
select query, sequence, text
from stl_querytext
where query=pg_last_query_id()
order by query desc, sequence
limit 10;
```

```
query | sequence | text
-------+----------+--------------------------------------------------------------
   45 |        0 | select\n1 AS a0123456789012345678901234567890123456789012345678901234567890,\n2 AS b0123456789012345678901234567890123456789012345678901234567890,\n3 AS b012345678901234567890123456789012345678901234
   45 |        1 | \nFROM stl_querytext;
```

To reconstruct the SQL stored in STL\_QUERYTEXT, run the following SQL\.

```
select LISTAGG(CASE WHEN LEN(RTRIM(text)) = 0 THEN text
ELSE RTRIM(text)
END, '') within group (order by sequence) AS text
from stl_querytext where query=pg_last_query_id();
```

To use the resulting reconstructed SQL in your client, replace any \(`\n`\) special characters with a new line\.

```
text
--------------------------------------------------------------
select\n1 AS a0123456789012345678901234567890123456789012345678901234567890,\n2 AS b0123456789012345678901234567890123456789012345678901234567890,\n3 AS b012345678901234567890123456789012345678901234\nFROM stl_querytext;
```
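The same reconstruction can also be done client\-side\. The following Python sketch mirrors the LISTAGG logic above, using hypothetical shortened column aliases \(`a`, `b`, `c`\) in place of the long ones: keep all\-blank parts as\-is, right\-trim the rest, concatenate in sequence order, then turn the literal `\n` markers into real newlines\.

```python
# Hypothetical rows fetched from STL_QUERYTEXT for one query: (sequence, text).
parts = [
    (0, "select\\n1 AS a,\\n2 AS b,\\n3 AS c"),
    (1, "\\nFROM stl_querytext;"),
]

# Mirror CASE WHEN LEN(RTRIM(text)) = 0 THEN text ELSE RTRIM(text) END,
# concatenated WITHIN GROUP (ORDER BY sequence).
joined = "".join(
    text if len(text.rstrip()) == 0 else text.rstrip()
    for _, text in sorted(parts)
)

# Replace the stored literal "\n" markers with actual newlines.
sql = joined.replace("\\n", "\n")
print(sql)
```

Running this prints the reassembled multi\-line statement, ready to execute\.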
Inserts the results of a SELECT query into existing external tables on an external catalog, such as AWS Glue, AWS Lake Formation, or an Apache Hive metastore\. Use the same AWS Identity and Access Management \(IAM\) role used for the CREATE EXTERNAL SCHEMA command to interact with external catalogs and Amazon S3\.

For nonpartitioned tables, the INSERT \(external table\) command writes data to the Amazon S3 location defined in the table, based on the specified table properties and file format\. For partitioned tables, INSERT \(external table\) writes data to the Amazon S3 location according to the partition key specified in the table\. It also automatically registers new partitions in the external catalog after the INSERT operation completes\.

You can't run INSERT \(external table\) within a transaction block \(BEGIN \.\.\. END\)\. For more information about transactions, see [Serializable isolation](c_serial_isolation.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_INSERT_external_table.md
``` INSERT INTO external_schema.table_name { select_statement } ```
*external\_schema\.table\_name*
The name of an existing external schema and a target external table to insert into\.

*select\_statement*
A statement that inserts one or more rows into the external table by defining any query\. All of the rows that the query produces are written to Amazon S3 in either text or Parquet format based on the table definition\. The query must return a column list that is compatible with the column data types in the external table\. However, the column names don't have to match\.
The number of columns in the SELECT query must be the same as the sum of data columns and partition columns\. The location and the data type of each data column must match that of the external table\. The location of partition columns must be at the end of the SELECT query, in the same order they were defined in the CREATE EXTERNAL TABLE command\. The column names don't have to match\.

In some cases, you might want to run the INSERT \(external table\) command on an AWS Glue Data Catalog or a Hive metastore\. In the case of AWS Glue, the IAM role used to create the external schema must have both read and write permissions on Amazon S3 and AWS Glue\. If you use an AWS Lake Formation catalog, this IAM role becomes the owner of the new Lake Formation table\. This IAM role must at least have the following permissions:
+ SELECT, INSERT, UPDATE permission on the external table
+ Data location permission on the Amazon S3 path of the external table

To ensure that file names are unique, Amazon Redshift uses the following format for the name of each file uploaded to Amazon S3 by default\.

`<date>_<time>_<microseconds>_<query_id>_<slice-number>_part_<part-number>.<format>`

An example is `20200303_004509_810669_1007_0001_part_00.parquet`\.

Consider the following when running the INSERT \(external table\) command:
+ External tables that have a format other than PARQUET or TEXTFILE aren't supported\.
+ This command supports existing table properties such as 'write\.parallel', 'write\.maxfilesize\.mb', 'compression\_type', and 'serialization\.null\.format'\. To update those values, run the ALTER TABLE SET TABLE PROPERTIES command\.
+ The 'numRows' table property is automatically updated toward the end of the INSERT operation\. If the table wasn't created by a CREATE EXTERNAL TABLE AS operation, the table property must already be defined or added to the table\.
+ The LIMIT clause isn't supported in the outer SELECT query\. Instead, use a nested LIMIT clause\.
+ You can use the [STL\_UNLOAD\_LOG](r_STL_UNLOAD_LOG.md) table to track the files that got written to Amazon S3 by each INSERT \(external table\) operation\.
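As an illustration of the default file\-name format, the following sketch rebuilds the documented example name from its components\. The zero\-padding widths are inferred from the single documented example, so treat them as assumptions rather than a guaranteed contract\.

```python
from datetime import datetime

def s3_part_name(ts, query_id, slice_no, part_no, fmt):
    """Illustrative reconstruction of the default pattern:
    <date>_<time>_<microseconds>_<query_id>_<slice-number>_part_<part-number>.<format>
    Padding widths are assumptions inferred from the documented example."""
    return (f"{ts:%Y%m%d}_{ts:%H%M%S}_{ts.microsecond:06d}"
            f"_{query_id:04d}_{slice_no:04d}_part_{part_no:02d}.{fmt}")

name = s3_part_name(datetime(2020, 3, 3, 0, 45, 9, 810669), 1007, 1, 0, "parquet")
print(name)  # 20200303_004509_810669_1007_0001_part_00.parquet
```

Because the timestamp, query ID, slice, and part number are all in the name, two concurrent writes can never collide on the same key\.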
The following example inserts the results of the SELECT statement into the external table\.

```
INSERT INTO spectrum.lineitem
SELECT * FROM local_lineitem;
```

The following example inserts the results of the SELECT statement into a partitioned external table using static partitioning\. The partition columns are hard\-coded in the SELECT statement\. The partition columns must be at the end of the query\.

```
INSERT INTO spectrum.customer
SELECT name, age, gender, 'May', 28 FROM local_customer;
```

The following example inserts the results of the SELECT statement into a partitioned external table using dynamic partitioning\. The partition columns aren't hard\-coded\. Data is automatically added to the existing partition folders, or to new folders if a new partition is added\.

```
INSERT INTO spectrum.customer
SELECT name, age, gender, month, day FROM local_customer;
```
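The difference between static and dynamic partitioning above comes down to where the partition values originate\. The following hedged sketch shows the Hive\-style `key=value` folder layout such an INSERT conceptually targets; the bucket name and helper function are hypothetical, and the layout is the common partition convention rather than a Redshift API\.

```python
def partition_prefix(base, partitions):
    """Build a Hive-style partition path from ordered (column, value) pairs,
    matching the order the partition columns were defined in the table."""
    return base.rstrip("/") + "/" + "/".join(f"{k}={v}" for k, v in partitions)

# Static partitioning: the values 'May' and 28 are fixed in the SELECT list,
# so every inserted row lands under the same folder.
static = partition_prefix("s3://my-bucket/customer", [("month", "May"), ("day", 28)])
print(static)  # s3://my-bucket/customer/month=May/day=28

# Dynamic partitioning: each row's own month/day columns pick (or create) its folder.
row = {"name": "Ann", "age": 34, "gender": "F", "month": "Jun", "day": 3}
dynamic = partition_prefix("s3://my-bucket/customer",
                           [("month", row["month"]), ("day", row["day"])])
print(dynamic)  # s3://my-bucket/customer/month=Jun/day=3
```

This is why the partition columns must come last and in definition order: their values map positionally onto the folder hierarchy\.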
Logs information about network activity during execution of query steps that broadcast data\. Network traffic is captured by numbers of rows, bytes, and packets that are sent over the network during a given step on a given slice\. The duration of the step is the difference between the logged start and end times\. To identify broadcast steps in a query, look for bcast labels in the SVL\_QUERY\_SUMMARY view or run the EXPLAIN command and then look for step attributes that include bcast\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_BCAST.md
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_BCAST.html)
The following example returns broadcast information for the queries where there are one or more packets, and the difference between the start and end of the query was one second or more\.

```
select query, slice, step, rows, bytes, packets,
datediff(seconds, starttime, endtime)
from stl_bcast
where packets>0 and datediff(seconds, starttime, endtime)>0;
```

```
query | slice | step | rows | bytes | packets | date_diff
-------+-------+------+------+-------+---------+-----------
  453 |     2 |    5 |    1 |   264 |       1 |         1
  798 |     2 |    5 |    1 |   264 |       1 |         1
 1408 |     2 |    5 |    1 |   264 |       1 |         1
 2993 |     0 |    5 |    1 |   264 |       1 |         1
 5045 |     3 |    5 |    1 |   264 |       1 |         1
 8073 |     3 |    5 |    1 |   264 |       1 |         1
 8163 |     3 |    5 |    1 |   264 |       1 |         1
 9212 |     1 |    5 |    1 |   264 |       1 |         1
 9873 |     1 |    5 |    1 |   264 |       1 |         1
(9 rows)
```
Removes access privileges, such as privileges to create or update tables, from a user or user group\. You can only GRANT or REVOKE USAGE permissions on an external schema to database users and user groups using the ON SCHEMA syntax\. When using ON EXTERNAL SCHEMA with AWS Lake Formation, you can only GRANT and REVOKE privileges to an AWS Identity and Access Management \(IAM\) role\. For the list of privileges, see the syntax\. For stored procedures, USAGE ON LANGUAGE `plpgsql` permissions are granted to PUBLIC by default\. EXECUTE ON PROCEDURE permission is granted only to the owner and superusers by default\. Specify in the REVOKE command the privileges that you want to remove\. To give privileges, use the [GRANT](r_GRANT.md) command\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REVOKE.md
```
REVOKE [ GRANT OPTION FOR ]
{ { SELECT | INSERT | UPDATE | DELETE | REFERENCES } [,...] | ALL [ PRIVILEGES ] }
ON { [ TABLE ] table_name [, ...] | ALL TABLES IN SCHEMA schema_name [, ...] }
FROM { username | GROUP group_name | PUBLIC } [, ...]
[ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
{ { CREATE | TEMPORARY | TEMP } [,...] | ALL [ PRIVILEGES ] }
ON DATABASE db_name [, ...]
FROM { username | GROUP group_name | PUBLIC } [, ...]
[ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
{ { CREATE | USAGE } [,...] | ALL [ PRIVILEGES ] }
ON SCHEMA schema_name [, ...]
FROM { username | GROUP group_name | PUBLIC } [, ...]
[ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
EXECUTE
ON FUNCTION function_name ( [ [ argname ] argtype [, ...] ] ) [, ...]
FROM { username | GROUP group_name | PUBLIC } [, ...]
[ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
{ { EXECUTE } [,...] | ALL [ PRIVILEGES ] }
ON PROCEDURE procedure_name ( [ [ argname ] argtype [, ...] ] ) [, ...]
FROM { username | GROUP group_name | PUBLIC } [, ...]
[ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
USAGE
ON LANGUAGE language_name [, ...]
FROM { username | GROUP group_name | PUBLIC } [, ...]
[ CASCADE | RESTRICT ]
```

The following is the syntax for column\-level privileges on Amazon Redshift tables and views\.

```
REVOKE { { SELECT | UPDATE } ( column_name [, ...] ) [, ...] | ALL [ PRIVILEGES ] ( column_name [,...] ) }
ON { [ TABLE ] table_name [, ...] }
FROM { username | GROUP group_name | PUBLIC } [, ...]
[ CASCADE | RESTRICT ]
```

The following is the syntax for Redshift Spectrum integration with Lake Formation\.

```
REVOKE [ GRANT OPTION FOR ]
{ SELECT | ALL [ PRIVILEGES ] } ( column_list )
ON EXTERNAL TABLE schema_name.table_name
FROM { IAM_ROLE iam_role } [, ...]

REVOKE [ GRANT OPTION FOR ]
{ { SELECT | ALTER | DROP | DELETE | INSERT } [, ...] | ALL [ PRIVILEGES ] }
ON EXTERNAL TABLE schema_name.table_name [, ...]
FROM { { IAM_ROLE iam_role } [, ...] | PUBLIC }

REVOKE [ GRANT OPTION FOR ]
{ { CREATE | ALTER | DROP } [, ...] | ALL [ PRIVILEGES ] }
ON EXTERNAL SCHEMA schema_name [, ...]
FROM { IAM_ROLE iam_role } [, ...]
```
GRANT OPTION FOR
Revokes only the option to grant a specified privilege to other users and doesn't revoke the privilege itself\. You can't revoke GRANT OPTION from a group or from PUBLIC\.

SELECT
Revokes the privilege to select data from a table or a view using a SELECT statement\.

INSERT
Revokes the privilege to load data into a table using an INSERT statement or a COPY statement\.

UPDATE
Revokes the privilege to update a table column using an UPDATE statement\.

DELETE
Revokes the privilege to delete a data row from a table\.

REFERENCES
Revokes the privilege to create a foreign key constraint\. You should revoke this privilege on both the referenced table and the referencing table\.

ALL \[ PRIVILEGES \]
Revokes all available privileges at once from the specified user or group\. The PRIVILEGES keyword is optional\.

ALTER
Revokes the privilege to alter a table in an AWS Glue Data Catalog that is enabled for Lake Formation\. This privilege only applies when using Lake Formation\.

DROP
Revokes the privilege to drop a table in an AWS Glue Data Catalog that is enabled for Lake Formation\. This privilege only applies when using Lake Formation\.

ON \[ TABLE \] *table\_name*
Revokes the specified privileges on a table or a view\. The TABLE keyword is optional\.

ON ALL TABLES IN SCHEMA *schema\_name*
Revokes the specified privileges on all tables in the referenced schema\.

\( *column\_name* \[,\.\.\.\] \) ON TABLE *table\_name* <a name="revoke-column-level-privileges"></a>
Revokes the specified privileges from users, groups, or PUBLIC on the specified columns of the Amazon Redshift table or view\.

\( *column\_list* \) ON EXTERNAL TABLE *schema\_name\.table\_name* <a name="revoke-external-table-column"></a>
Revokes the specified privileges from an IAM role on the specified columns of the Lake Formation table in the referenced schema\.

ON EXTERNAL TABLE *schema\_name\.table\_name* <a name="revoke-external-table"></a>
Revokes the specified privileges from an IAM role on the specified Lake Formation tables in the referenced schema\.

ON EXTERNAL SCHEMA *schema\_name* <a name="revoke-external-schema"></a>
Revokes the specified privileges from an IAM role on the referenced schema\.

FROM IAM\_ROLE *iam\_role* <a name="revoke-from-iam-role"></a>
Indicates the IAM role losing the privileges\.

GROUP *group\_name*
Revokes the privileges from the specified user group\.

PUBLIC
Revokes the specified privileges from all users\. PUBLIC represents a group that always includes all users\. An individual user's privileges consist of the sum of privileges granted to PUBLIC, privileges granted to any groups that the user belongs to, and any privileges granted to the user individually\. Revoking PUBLIC from a Lake Formation external table results in revoking the privilege from the Lake Formation *everyone* group\.

CREATE
Depending on the database object, revokes the following privileges from the user or group:
+ For databases, using the CREATE clause for REVOKE prevents users from creating schemas within the database\.
+ For schemas, using the CREATE clause for REVOKE prevents users from creating objects within a schema\. To rename an object, the user must have the CREATE privilege and own the object to be renamed\.

By default, all users have CREATE and USAGE privileges on the PUBLIC schema\.

TEMPORARY \| TEMP
Revokes the privilege to create temporary tables in the specified database\. By default, users are granted permission to create temporary tables by their automatic membership in the PUBLIC group\. To remove the privilege for any users to create temporary tables, revoke the TEMP permission from the PUBLIC group and then explicitly grant the permission to create temporary tables to specific users or groups of users\.

ON DATABASE *db\_name*
Revokes the privileges on the specified database\.

USAGE
Revokes USAGE privileges on objects within a specific schema, which makes these objects inaccessible to users\. Specific actions on these objects must be revoked separately \(such as the EXECUTE privilege on functions\)\. By default, all users have CREATE and USAGE privileges on the PUBLIC schema\.

ON SCHEMA *schema\_name*
Revokes the privileges on the specified schema\. You can use schema privileges to control the creation of tables; the CREATE privilege for a database only controls the creation of schemas\.

CASCADE
If a user holds a privilege with grant option and has granted the privilege to other users, the privileges held by those other users are *dependent privileges*\. If the privilege or the grant option held by the first user is being revoked and dependent privileges exist, those dependent privileges are also revoked if CASCADE is specified; otherwise, the revoke action fails\. For example, if user A has granted a privilege with grant option to user B, and user B has granted the privilege to user C, user A can revoke the grant option from user B and use the CASCADE option to in turn revoke the privilege from user C\.

RESTRICT
Revokes only those privileges that the user directly granted\. This behavior is the default\.

EXECUTE ON FUNCTION *function\_name*
Revokes the EXECUTE privilege on a specific function\. Because function names can be overloaded, you must include the argument list for the function\. For more information, see [Naming UDFs](udf-naming-udfs.md)\.

EXECUTE ON PROCEDURE *procedure\_name*
Revokes the EXECUTE privilege on a specific stored procedure\. Because stored procedure names can be overloaded, you must include the argument list for the procedure\. For more information, see [Naming stored procedures](stored-procedure-naming.md)\.

EXECUTE ON ALL PROCEDURES IN SCHEMA *schema\_name*
Revokes the specified privileges on all procedures in the referenced schema\.

USAGE ON LANGUAGE *language\_name*
Revokes the USAGE privilege on a language\. For Python user\-defined functions \(UDFs\), use `plpythonu`\. For SQL UDFs, use `sql`\. For stored procedures, use `plpgsql`\.

To create a UDF, you must have permission for usage on language for SQL or `plpythonu` \(Python\)\. By default, USAGE ON LANGUAGE SQL is granted to PUBLIC\. However, you must explicitly grant USAGE ON LANGUAGE PLPYTHONU to specific users or groups\.

To revoke usage for SQL, first revoke usage from PUBLIC\. Then grant usage on SQL only to the specific users or groups permitted to create SQL UDFs\. The following example revokes usage on SQL from PUBLIC, then grants usage to the user group `udf_devs`\.

```
revoke usage on language sql from PUBLIC;
grant usage on language sql to group udf_devs;
```

For more information, see [UDF security and privileges](udf-security-and-privileges.md)\.

To revoke usage for stored procedures, first revoke usage from PUBLIC\. Then grant usage on `plpgsql` only to the specific users or groups permitted to create stored procedures\. For more information, see [Security and privileges for stored procedures ](stored-procedure-security-and-privileges.md)\.
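The CASCADE and RESTRICT behavior can be sketched as a small dependency model\. This is a toy illustration of the semantics only, with hypothetical users and a single modeled privilege; it is not how Amazon Redshift stores grants internally\.

```python
def revoke(grants, grantee, cascade=False):
    """grants maps grantee -> grantor for one privilege.
    RESTRICT (the default) fails if dependent privileges exist;
    CASCADE revokes the whole dependent chain."""
    dependents = [user for user, grantor in grants.items() if grantor == grantee]
    if dependents and not cascade:
        raise RuntimeError("dependent privileges exist; revoke fails without CASCADE")
    for dep in dependents:
        revoke(grants, dep, cascade=True)
    del grants[grantee]

# A granted to B with grant option; B granted the privilege on to C.
grants = {"B": "A", "C": "B"}

try:
    revoke(dict(grants), "B")        # RESTRICT is the default: this fails
except RuntimeError as e:
    print("RESTRICT:", e)

state = dict(grants)
revoke(state, "B", cascade=True)     # CASCADE also revokes C's dependent privilege
print("after CASCADE:", state)       # after CASCADE: {}
```

The recursive walk mirrors the A → B → C example in the CASCADE description: revoking B's privilege with CASCADE in turn revokes C's\.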
To revoke privileges from an object, you must meet one of the following criteria:
+ Be the object owner\.
+ Be a superuser\.
+ Have a grant privilege for that object and privilege\.

For example, the following command enables the user HR both to perform SELECT commands on the employees table and to grant and revoke the same privilege for other users\.

```
grant select on table employees to HR with grant option;
```

HR can't revoke privileges for any operation other than SELECT, or on any other table than employees\.

Superusers can access all objects regardless of GRANT and REVOKE commands that set object privileges\.

PUBLIC represents a group that always includes all users\. By default all members of PUBLIC have CREATE and USAGE privileges on the PUBLIC schema\. To restrict any user's permissions on the PUBLIC schema, you must first revoke all permissions from PUBLIC on the PUBLIC schema, then grant privileges to specific users or groups\. The following example controls table creation privileges in the PUBLIC schema\.

```
revoke create on schema public from public;
```

To revoke privileges from a Lake Formation table, the IAM role associated with the table's external schema must have permission to revoke privileges to the external table\. The following example creates an external schema with an associated IAM role `myGrantor`\. IAM role `myGrantor` has the permission to revoke permissions from others\. The REVOKE command uses the permission of the IAM role `myGrantor` that is associated with the external schema to revoke permission from the IAM role `myGrantee`\.

```
create external schema mySchema
from data catalog
database 'spectrum_db'
iam_role 'arn:aws:iam::123456789012:role/myGrantor'
create external database if not exists;
```

```
revoke select on external table mySchema.mytable from iam_role 'arn:aws:iam::123456789012:role/myGrantee';
```

**Note**
If the IAM role also has the `ALL` permission in an AWS Glue Data Catalog that is enabled for Lake Formation, the `ALL` permission isn't revoked\. Only the `SELECT` permission is revoked\. You can view the Lake Formation permissions in the Lake Formation console\.
The following example revokes INSERT privileges on the SALES table from the GUESTS user group\. This command prevents members of GUESTS from being able to load data into the SALES table by using the INSERT command\.

```
revoke insert on table sales from group guests;
```

The following example revokes the SELECT privilege on all tables in the QA\_TICKIT schema from the user `fred`\.

```
revoke select on all tables in schema qa_tickit from fred;
```

The following example revokes the privilege to select from a view for user `bobr`\.

```
revoke select on table eventview from bobr;
```

The following example revokes the privilege to create temporary tables in the TICKIT database from all users\.

```
revoke temporary on database tickit from public;
```

The following example revokes SELECT privilege on the `cust_name` and `cust_phone` columns of the `cust_profile` table from the user `user1`\.

```
revoke select(cust_name, cust_phone) on cust_profile from user1;
```

The following example revokes SELECT privilege on the `cust_name` and `cust_phone` columns and UPDATE privilege on the `cust_contact_preference` column of the `cust_profile` table from the `sales_group` group\.

```
revoke select(cust_name, cust_phone), update(cust_contact_preference) on cust_profile from group sales_group;
```

The following example shows the usage of the ALL keyword to revoke both SELECT and UPDATE privileges on three columns of the table `cust_profile` from the `sales_admin` group\.

```
revoke ALL(cust_name, cust_phone, cust_contact_preference) on cust_profile from group sales_admin;
```

The following example revokes the SELECT privilege on the `cust_name` column of the `cust_profile_vw` view from the `user2` user\.

```
revoke select(cust_name) on cust_profile_vw from user2;
```
Use the SVCS\_PLAN\_INFO table to look at the EXPLAIN output for a query in terms of a set of rows\. This is an alternative way to look at query plans\. **Note** System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters\. The views are similar to the tables with the prefix STL except that the STL tables provide information only for queries run on the main cluster\. This table is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_PLAN_INFO.md
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_PLAN_INFO.html)
The following examples compare the query plans for a simple SELECT query returned by using the EXPLAIN command and by querying the SVCS\_PLAN\_INFO table\.

```
explain select * from category;
                         QUERY PLAN
-------------------------------------------------------------
XN Seq Scan on category  (cost=0.00..0.11 rows=11 width=49)
(1 row)

select * from category;
catid | catgroup | catname  |                  catdesc
-------+----------+-----------+--------------------------------------------
    1 | Sports   | MLB       | Major League Baseball
    3 | Sports   | NFL       | National Football League
    5 | Sports   | MLS       | Major League Soccer
...

select * from svcs_plan_info where query=256;

 query | nodeid | segment | step | locus | plannode | startupcost | totalcost | rows | bytes
-------+--------+---------+------+-------+----------+-------------+-----------+------+-------
   256 |      1 |       0 |    1 |     0 |      104 |           0 |      0.11 |   11 |   539
   256 |      1 |       0 |    0 |     0 |      104 |           0 |      0.11 |   11 |   539
(2 rows)
```

In this example, PLANNODE 104 refers to the sequential scan of the CATEGORY table\.

```
select distinct eventname from event order by 1;

                        eventname
------------------------------------------------------------------------
.38 Special
3 Doors Down
70s Soul Jam
A Bronx Tale
...

explain select distinct eventname from event order by 1;

                                   QUERY PLAN
-------------------------------------------------------------------------------------
XN Merge  (cost=1000000000136.38..1000000000137.82 rows=576 width=17)
  Merge Key: eventname
  ->  XN Network  (cost=1000000000136.38..1000000000137.82 rows=576 width=17)
        Send to leader
        ->  XN Sort  (cost=1000000000136.38..1000000000137.82 rows=576 width=17)
              Sort Key: eventname
              ->  XN Unique  (cost=0.00..109.98 rows=576 width=17)
                    ->  XN Seq Scan on event  (cost=0.00..87.98 rows=8798 width=17)
(8 rows)

select * from svcs_plan_info where query=240 order by nodeid desc;

 query | nodeid | segment | step | locus | plannode |   startupcost    |    totalcost     | rows | bytes
-------+--------+---------+------+-------+----------+------------------+------------------+------+--------
   240 |      5 |       0 |    0 |     0 |      104 |                0 |            87.98 | 8798 | 149566
   240 |      5 |       0 |    1 |     0 |      104 |                0 |            87.98 | 8798 | 149566
   240 |      4 |       0 |    2 |     0 |      117 |                0 |          109.975 |  576 |   9792
   240 |      4 |       0 |    3 |     0 |      117 |                0 |          109.975 |  576 |   9792
   240 |      4 |       1 |    0 |     0 |      117 |                0 |          109.975 |  576 |   9792
   240 |      4 |       1 |    1 |     0 |      117 |                0 |          109.975 |  576 |   9792
   240 |      3 |       1 |    2 |     0 |      114 | 1000000000136.38 | 1000000000137.82 |  576 |   9792
   240 |      3 |       2 |    0 |     0 |      114 | 1000000000136.38 | 1000000000137.82 |  576 |   9792
   240 |      2 |       2 |    1 |     0 |      123 | 1000000000136.38 | 1000000000137.82 |  576 |   9792
   240 |      1 |       3 |    0 |     0 |      122 | 1000000000136.38 | 1000000000137.82 |  576 |   9792
(10 rows)
```
5d960f715ae6-0
**false**, true
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_error_on_nondeterministic_update.md
Specifies whether UPDATE queries with multiple matches per row throw an error\.
```
SET error_on_nondeterministic_update TO true;

CREATE TABLE t1(x1 int, y1 int);
CREATE TABLE t2(x2 int, y2 int);

INSERT INTO t1 VALUES (1,10), (2,20), (3,30);
INSERT INTO t2 VALUES (2,40), (2,50);

UPDATE t1 SET y1=y2 FROM t2 WHERE x1=x2;
ERROR:  Found multiple matches to update the same tuple.
```
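For contrast, a sketch of the default behavior with the same tables: when the setting is left at its default of `false`, the nondeterministic UPDATE succeeds, and which of the matching values lands in `y1` is arbitrary\.

```
-- With the default setting, no error is raised; y1 for the row with
-- x1 = 2 becomes either 40 or 50, chosen arbitrarily.
SET error_on_nondeterministic_update TO false;

UPDATE t1 SET y1=y2 FROM t2 WHERE x1=x2;
```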
**Topics**
+ [Syntax](#r_SELECT_list-synopsis)
+ [Parameters](#r_SELECT_list-parameters)
+ [Usage notes](#r_SELECT_list_usage_notes)
+ [Examples with TOP](r_Examples_with_TOP.md)
+ [SELECT DISTINCT examples](r_DISTINCT_examples.md)

The SELECT list names the columns, functions, and expressions that you want the query to return\. The list represents the output of the query\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SELECT_list.md
```
SELECT
[ TOP number ] [ ALL | DISTINCT ]
* | expression [ AS column_alias ] [, ...]
```
TOP *number*
TOP takes a positive integer as its argument, which defines the number of rows that are returned to the client\. The behavior with the TOP clause is the same as the behavior with the LIMIT clause\. The number of rows that is returned is fixed, but the set of rows isn't\. To return a consistent set of rows, use TOP or LIMIT in conjunction with an ORDER BY clause\.

ALL
A redundant keyword that defines the default behavior if you don't specify DISTINCT\. `SELECT ALL *` means the same as `SELECT *` \(select all rows for all columns and retain duplicates\)\.

DISTINCT
Option that eliminates duplicate rows from the result set, based on matching values in one or more columns\.

\* \(asterisk\)
Returns the entire contents of the table \(all columns and all rows\)\.

*expression*
An expression formed from one or more columns that exist in the tables referenced by the query\. An expression can contain SQL functions\. For example:

```
avg(datediff(day, listtime, saletime))
```

AS *column\_alias*
A temporary name for the column that is used in the final result set\. The AS keyword is optional\. For example:

```
avg(datediff(day, listtime, saletime)) as avgwait
```

If you don't specify an alias for an expression that isn't a simple column name, the result set applies a default name to that column\.

The alias is recognized right after it is defined in the target list\. You can use an alias in other expressions defined after it in the same target list\. The following example illustrates this\.

```
select clicks / impressions as probability, round(100 * probability, 1) as percentage from raw_data;
```

The benefit of the lateral alias reference is that you don't need to repeat the aliased expression when building more complex expressions in the same target list\. When Amazon Redshift parses this type of reference, it just inlines the previously defined aliases\. If a column with the same name as a previously aliased expression is defined in the `FROM` clause, the column in the `FROM` clause takes priority\. For example, in the preceding query, if there is a column named `probability` in the table `raw_data`, the `probability` in the second expression in the target list refers to that column instead of the alias name `probability`\.
TOP is a SQL extension; it provides an alternative to the LIMIT behavior\. You can't use TOP and LIMIT in the same query\.
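Because TOP and LIMIT are interchangeable but can't be combined, a query written with one can always be rewritten with the other\. A sketch, assuming the TICKIT sample SALES table:

```
-- These two queries return the same ten rows; the ORDER BY clause
-- keeps the returned set consistent across runs.
select top 10 qtysold, sellerid from sales order by qtysold desc;

select qtysold, sellerid from sales order by qtysold desc limit 10;
```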
Columns with a CHAR data type only accept single\-byte UTF\-8 characters, up to byte value 127, or 7F hex, which is also the ASCII character set\. VARCHAR columns accept multibyte UTF\-8 characters, to a maximum of four bytes\. For more information, see [Character types](r_Character_types.md)\.

If a line in your load data contains a character that is invalid for the column data type, COPY returns an error and logs a row in the STL\_LOAD\_ERRORS system log table with error number 1220\. The ERR\_REASON field includes the byte sequence, in hex, for the invalid character\.

An alternative to fixing invalid characters in your load data is to replace the invalid characters during the load process\. To replace invalid UTF\-8 characters, specify the ACCEPTINVCHARS option with the COPY command\. For more information, see [ACCEPTINVCHARS](copy-parameters-data-conversion.md#acceptinvchars)\.

The following example shows the error reason when COPY attempts to load UTF\-8 character e0 a1 c7a4 into a CHAR column:

```
Multibyte character not supported for CHAR
(Hint: Try using VARCHAR). Invalid char: e0 a1 c7a4
```

If the error is related to a VARCHAR data type, the error reason includes an error code as well as the invalid UTF\-8 hex sequence\. The following example shows the error reason when COPY attempts to load UTF\-8 a4 into a VARCHAR field:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/multi-byte-character-load-errors.md
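As noted above, ACCEPTINVCHARS replaces invalid UTF\-8 characters during the load instead of rejecting the row\. The following is a sketch only; the S3 path and IAM role ARN are placeholders:

```
-- Replace any invalid UTF-8 byte sequence with '^' during the load,
-- rather than logging error 1220 in STL_LOAD_ERRORS and failing the row.
copy sales from 's3://mybucket/data/sales.txt'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
acceptinvchars as '^';
```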