63fc15571cce-11
Specifies the time format. If no TIMEFORMAT is specified, the default format is `YYYY-MM-DD HH:MI:SS` for TIMESTAMP columns or `YYYY-MM-DD HH:MI:SSOF` for TIMESTAMPTZ columns, where `OF` is the offset from Coordinated Universal Time (UTC). You can't include a time zone specifier in the *timeformat_string*. To load TIMESTAMPTZ data that is in a format different from the default format, specify 'auto'; for more information, see [Using automatic recognition with DATEFORMAT and TIMEFORMAT](automatic-recognition.md). For more information about *timeformat_string*, see [DATEFORMAT and TIMEFORMAT strings](r_DATEFORMAT_and_TIMEFORMAT_strings.md).

The `'auto'` argument recognizes several formats that aren't supported when using a DATEFORMAT and TIMEFORMAT string. If the COPY command doesn't recognize the format of your date or time values, or if your date and time values use formats different from each other, use the `'auto'` argument with the DATEFORMAT or TIMEFORMAT parameter. For more information, see [Using automatic recognition with DATEFORMAT and TIMEFORMAT](automatic-recognition.md).

If your source data is represented as epoch time, that is, the number of seconds or milliseconds since January 1, 1970, 00:00:00 UTC, specify `'epochsecs'` or `'epochmillisecs'`.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-conversion.md
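The epoch formats described above can be sanity-checked outside the database. The following is a minimal Python sketch (not part of the AWS documentation; the function name is mine) that converts `'epochsecs'`/`'epochmillisecs'` values into the default COPY timestamp format:

```python
from datetime import datetime, timezone

def epoch_to_timestamp(value, millis=False):
    """Convert an epoch value (seconds or milliseconds since
    1970-01-01 00:00:00 UTC) to the default COPY timestamp format."""
    seconds = value / 1000.0 if millis else float(value)
    dt = datetime.fromtimestamp(seconds, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M:%S")

# 0 seconds after the epoch is midnight, January 1, 1970 UTC
print(epoch_to_timestamp(0))                      # 1970-01-01 00:00:00
print(epoch_to_timestamp(86400000, millis=True))  # 1970-01-02 00:00:00
```

Values that round-trip cleanly through a conversion like this should load with `'epochsecs'` or `'epochmillisecs'`.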
63fc15571cce-12
The `'auto'`, `'epochsecs'`, and `'epochmillisecs'` keywords are case-sensitive. The AS keyword is optional.

TRIMBLANKS <a name="copy-trimblanks"></a>
Removes the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type.

TRUNCATECOLUMNS <a name="copy-truncatecolumns"></a>
Truncates data in columns to the appropriate number of characters so that it fits the column specification. Applies only to columns with a VARCHAR or CHAR data type, and rows 4 MB or less in size.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-conversion.md
7e93d044b437-0
Defines a new schema for the current database.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_SCHEMA.md
97b5d7c58de7-0
```
CREATE SCHEMA [ IF NOT EXISTS ] schema_name [ AUTHORIZATION username ]
           [ QUOTA {quota [MB | GB | TB] | UNLIMITED} ] [ schema_element [ ... ] ]

CREATE SCHEMA AUTHORIZATION username
           [ QUOTA {quota [MB | GB | TB] | UNLIMITED} ] [ schema_element [ ... ] ]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_SCHEMA.md
b48f4511c4ed-0
IF NOT EXISTS
Clause that indicates that if the specified schema already exists, the command should make no changes and return a message that the schema exists, rather than terminating with an error. This clause is useful when scripting, so the script doesn't fail if CREATE SCHEMA tries to create a schema that already exists.

*schema_name*
Name of the new schema. The schema name can't be `PUBLIC`. For more information about valid names, see [Names and identifiers](r_names.md). The list of schemas in the [search_path](r_search_path.md) configuration parameter determines the precedence of identically named objects when they are referenced without schema names.

AUTHORIZATION
Clause that gives ownership to a specified user.

*username*
Name of the schema owner.

*schema_element*
Definition for one or more objects to be created within the schema.

QUOTA
The maximum amount of disk space that the specified schema can use. This space is the collective disk usage. It includes all permanent tables and materialized views under the specified schema, and duplicate copies of all tables with ALL distribution on each compute node. The schema quota doesn't take into account temporary tables created as part of a temporary namespace or schema. Amazon Redshift converts the selected value to megabytes. Gigabytes is the default unit of measurement when you don't specify a value.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_SCHEMA.md
b48f4511c4ed-1
You must be a database superuser to set and change a schema quota. A user that is not a superuser but that has CREATE SCHEMA permission can create a schema with a defined quota. When you create a schema without defining a quota, the schema has an unlimited quota. When you set the quota below the current value used by the schema, Amazon Redshift doesn't allow further ingestion until you free disk space. A DELETE statement deletes data from a table, and disk space is freed up only when VACUUM runs.

Amazon Redshift checks each transaction for quota violations before committing the transaction. Amazon Redshift checks the size (the disk space used by all tables in a schema) of each modified schema against the set quota. Because the quota violation check occurs at the end of a transaction, the size can exceed the quota temporarily within a transaction before it's committed. When a transaction exceeds the quota, Amazon Redshift aborts the transaction, prohibits subsequent ingestion, and reverts all the changes until you free disk space. Due to background VACUUM and internal cleanup, it is possible that a schema isn't full by the time that you check the schema after an aborted transaction.

As an exception, Amazon Redshift disregards the quota violation and commits transactions in certain cases. Amazon Redshift does this for transactions that consist solely of one or more of the following statements, where there isn't an INSERT or COPY ingestion statement in the same schema:
+ DELETE
+ TRUNCATE
+ VACUUM
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_SCHEMA.md
b48f4511c4ed-2
+ DROP TABLE
+ ALTER TABLE APPEND only when moving data from the full schema to another non-full schema

UNLIMITED
Amazon Redshift imposes no limit to the growth of the total size of the schema.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_SCHEMA.md
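The commit-time quota check described above can be summarized with a small sketch. This is a hypothetical Python illustration of the documented behavior (the function and names are mine, not a Redshift API): the check runs only at the end of a transaction, an UNLIMITED quota never blocks, and over-quota transactions still commit when they consist solely of the exempt statements.

```python
# Statements that commit even when the schema is over quota,
# per the exception list above (assuming no INSERT/COPY in the
# same schema within the transaction).
EXEMPT = {"DELETE", "TRUNCATE", "VACUUM", "DROP TABLE", "ALTER TABLE APPEND"}

def can_commit(schema_usage_mb, quota_mb, statements):
    """Return True if a transaction may commit under the schema quota."""
    if quota_mb is None:                 # UNLIMITED quota
        return True
    if schema_usage_mb <= quota_mb:      # size check happens at commit time
        return True
    # Over quota: commit only if no ingestion statement took place.
    return all(s in EXEMPT for s in statements)

print(can_commit(40_000, 51_200, ["INSERT"]))            # True
print(can_commit(60_000, 51_200, ["INSERT"]))            # False (aborted)
print(can_commit(60_000, 51_200, ["DELETE", "VACUUM"]))  # True (exempt)
```

Because the check is deferred to commit, usage can temporarily exceed the quota inside a transaction, exactly as the text notes.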
0699f8ec6102-0
Amazon Redshift enforces the following limits for schemas.
+ There is a maximum of 9900 schemas per database.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_SCHEMA.md
0d538f7b1753-0
The following example creates a schema named US_SALES and gives ownership to the user DWUSER.

```
create schema us_sales authorization dwuser;
```

The following example creates a schema named US_SALES, gives ownership to the user DWUSER, and sets the quota to 50 GB.

```
create schema us_sales authorization dwuser QUOTA 50 GB;
```

To view the new schema, query the PG_NAMESPACE catalog table as shown following.

```
select nspname as schema, usename as owner
from pg_namespace, pg_user
where pg_namespace.nspowner = pg_user.usesysid
and pg_user.usename ='dwuser';

 schema   | owner
----------+----------
 us_sales | dwuser
(1 row)
```

The following example either creates the US_SALES schema, or does nothing and returns a message if it already exists.

```
create schema if not exists us_sales;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_SCHEMA.md
79fc15867290-0
DATE_PART extracts datepart values from an expression. DATE_PART is a synonym of the PGDATE_PART function.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_function.md
e7080f2fb1d7-0
```
DATE_PART ( datepart, {date|timestamp} )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_function.md
8c544f94596c-0
*datepart*
The specific part of the date value (year, month, or day, for example) that the function operates on. For more information, see [Dateparts for Date or Time Stamp functions](r_Dateparts_for_datetime_functions.md).

{*date*|*timestamp*}
A date or timestamp column or an expression that implicitly converts to a date or time stamp. The expression must be a date or time stamp expression that contains the specified datepart.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_function.md
c938d4d6b4e4-0
DOUBLE
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_function.md
883bb9e63a34-0
Apply the DATE_PART function to a column in a table:

```
select date_part(w, listtime) as weeks, listtime
from listing where listid=10;

 weeks |      listtime
-------+---------------------
    25 | 2008-06-17 09:44:54
(1 row)
```

You can name dateparts in full or abbreviate them; in this case, *w* stands for weeks.

The day of week datepart returns an integer from 0-6, starting with Sunday. Use DATE_PART with dow (DAYOFWEEK) to view events on a Saturday.

```
select date_part(dow, starttime) as dow, starttime
from event
where date_part(dow, starttime)=6
order by 2,1;

 dow |      starttime
-----+---------------------
   6 | 2008-01-05 14:00:00
   6 | 2008-01-05 14:00:00
   6 | 2008-01-05 14:00:00
   6 | 2008-01-05 14:00:00
...
(1147 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_function.md
883bb9e63a34-1
Apply the DATE_PART function to a literal date value:

```
select date_part(minute, '2009-01-01 02:08:01');

 pgdate_part
-------------
           8
(1 row)
```

The default column name for the DATE_PART function is PGDATE_PART.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_PART_function.md
ace604d65a2c-0
ST_IsCollection returns true if the input geometry is one of the following subtypes: `GEOMETRYCOLLECTION`, `MULTIPOINT`, `MULTILINESTRING`, or `MULTIPOLYGON`.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsCollection-function.md
5e6a8a98c135-0
```
ST_IsCollection(geom)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsCollection-function.md
96d576d943cf-0
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsCollection-function.md
99e6546fedbf-0
`BOOLEAN`

If *geom* is null, then null is returned.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsCollection-function.md
39bdd5a763b4-0
The following SQL checks if the polygon is a collection.

```
SELECT ST_IsCollection(ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))'));
```

```
 st_iscollection
-----------------
 false
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_IsCollection-function.md
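The subtype test that ST_IsCollection performs can be sketched client-side. This is a hypothetical Python illustration (not Redshift code) that applies the same check to WKT text instead of a `GEOMETRY` value:

```python
# The four collection subtypes named in the description above.
COLLECTION_TYPES = {"GEOMETRYCOLLECTION", "MULTIPOINT",
                    "MULTILINESTRING", "MULTIPOLYGON"}

def is_collection(wkt):
    """Return True if the WKT string's geometry type is a collection
    subtype, None for null input (null in, null out)."""
    if wkt is None:
        return None
    geom_type = wkt.strip().upper().split("(")[0].split()[0]
    return geom_type in COLLECTION_TYPES

print(is_collection("POLYGON((0 2,1 1,0 -1,0 2))"))  # False
print(is_collection("MULTIPOINT((0 0),(1 1))"))      # True
```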
250cf55582c6-0
Converts a string to uppercase. UPPER supports UTF-8 multibyte characters, up to a maximum of four bytes per character.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UPPER.md
daa2ffd73bcf-0
```
UPPER(string)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UPPER.md
f7a4978ea56f-0
*string*
The input parameter is a CHAR or VARCHAR string.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UPPER.md
5675a7be3a65-0
The UPPER function returns a character string that is the same data type as the input string (CHAR or VARCHAR).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UPPER.md
08ca8aaee5b8-0
The following example converts the CATNAME field to uppercase:

```
select catname, upper(catname) from category order by 1,2;

  catname  |   upper
-----------+-----------
 Classical | CLASSICAL
 Jazz      | JAZZ
 MLB       | MLB
 MLS       | MLS
 Musicals  | MUSICALS
 NBA       | NBA
 NFL       | NFL
 NHL       | NHL
 Opera     | OPERA
 Plays     | PLAYS
 Pop       | POP
(11 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UPPER.md
ccb04e8f7ec5-0
ST_EndPoint returns the last point of an input linestring. The spatial reference system identifier (SRID) value of the result is the same as that of the input geometry.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_EndPoint-function.md
d5d9935699f3-0
```
ST_EndPoint(geom)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_EndPoint-function.md
d037825f91bd-0
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type. The subtype must be `LINESTRING`.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_EndPoint-function.md
c319a25d12d7-0
`GEOMETRY`

If *geom* is null, then null is returned.

If *geom* is empty, then null is returned.

If *geom* isn't a `LINESTRING`, then null is returned.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_EndPoint-function.md
ab265bf77355-0
The following SQL constructs a `GEOMETRY` object from a five-point `LINESTRING` and returns an extended well-known text (EWKT) representation of the linestring's end point.

```
SELECT ST_AsEWKT(ST_EndPoint(ST_GeomFromText('LINESTRING(0 0,10 0,10 10,5 5,0 5)',4326)));
```

```
      st_asewkt
----------------------
 SRID=4326;POINT(0 5)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_EndPoint-function.md
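The null/empty/non-linestring rules above can be sketched in a few lines. This is a hypothetical Python illustration operating on WKT text (not Redshift code; the function name is mine):

```python
def end_point(wkt):
    """Return the last point of a WKT LINESTRING as a tuple, or None
    for null, empty, or non-linestring input (mirroring the rules above)."""
    if wkt is None:
        return None
    text = wkt.strip().upper()
    if not text.startswith("LINESTRING") or "(" not in text:
        return None  # non-linestring input, or LINESTRING EMPTY
    body = text[text.index("(") + 1 : text.rindex(")")]
    last = body.split(",")[-1].split()
    return tuple(float(c) for c in last)

print(end_point("LINESTRING(0 0,10 0,10 10,5 5,0 5)"))  # (0.0, 5.0)
print(end_point("POINT(1 1)"))                           # None
```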
9019438c5c59-0
Calculates the ratio of a value to the sum of the values in a window or partition. The ratio to report value is determined using the formula:

`value of ratio_expression argument for the current row / sum of ratio_expression argument for the window or partition`

The following dataset illustrates use of this formula:

```
Row# Value Calculation    RATIO_TO_REPORT
1    2500  (2500)/(13900) 0.1798
2    2600  (2600)/(13900) 0.1870
3    2800  (2800)/(13900) 0.2014
4    2900  (2900)/(13900) 0.2086
5    3100  (3100)/(13900) 0.2230
```

The return value range is 0 to 1, inclusive. If *ratio_expression* is NULL, then the return value is NULL.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RATIO_TO_REPORT.md
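The formula can be sketched client-side. This is a hypothetical Python illustration of the computation over a partition (not Redshift code; names are mine); the printed ratios match the table above up to rounding:

```python
from collections import defaultdict

def ratio_to_report(rows):
    """rows: list of (partition_key, value) pairs.
    Return each value divided by the sum of values in its partition;
    a NULL (None) value yields None, matching the NULL rule above."""
    totals = defaultdict(float)
    for key, value in rows:
        if value is not None:
            totals[key] += value
    return [None if value is None else value / totals[key]
            for key, value in rows]

rows = [(1, 2500), (1, 2600), (1, 2800), (1, 2900), (1, 3100)]
print([round(r, 4) for r in ratio_to_report(rows)])
# [0.1799, 0.1871, 0.2014, 0.2086, 0.223]
```

Note that the ratios in any one partition sum to 1, which is why the return range is 0 to 1 inclusive.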
0b929f054291-0
```
RATIO_TO_REPORT ( ratio_expression )
OVER ( [ PARTITION BY partition_expression ] )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RATIO_TO_REPORT.md
5a0a750f1918-0
*ratio_expression*
An expression, such as a column name, that provides the value for which to determine the ratio. The expression must have either a numeric data type or be implicitly convertible to one. You cannot use any other analytic function in *ratio_expression*.

OVER
A clause that specifies the window partitioning. The OVER clause cannot contain a window ordering or window frame specification.

PARTITION BY *partition_expression*
Optional. An expression that sets the range of records for each group in the OVER clause.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RATIO_TO_REPORT.md
715afd0ac940-0
FLOAT8
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RATIO_TO_REPORT.md
ecc0cf98dd44-0
The following example calculates the ratios of the sales quantities for each seller:

```
select sellerid, qty, ratio_to_report(qty)
over (partition by sellerid)
from winsales;

sellerid   qty          ratio_to_report
-------------------------------------------
2          20.12312341  0.5
2          20.08630000  0.5
4          10.12414400  0.2
4          40.23000000  0.8
1          30.37262000  0.6
1          10.64000000  0.21
1          10.00000000  0.2
3          10.03500000  0.13
3          15.14660000  0.2
3          30.54790000  0.4
3          20.74630000  0.27
```

For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RATIO_TO_REPORT.md
375cdd2c2cf3-0
Summarizes details for queries that spent time in a workload management (WLM) query queue or a commit queue. The SVL_QUERY_QUEUE_INFO view filters queries executed by the system and shows only queries executed by a user.

The SVL_QUERY_QUEUE_INFO view summarizes information from the [STL_QUERY](r_STL_QUERY.md), [STL_WLM_QUERY](r_STL_WLM_QUERY.md), and [STL_COMMIT_STATS](r_STL_COMMIT_STATS.md) system tables.

This view is visible only to superusers. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_QUEUE_INFO.md
297c5aca304c-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_QUERY_QUEUE_INFO.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_QUEUE_INFO.md
f1c6934af4cd-0
The following example shows the time that queries spent in WLM queues.

```
select query, service_class, queue_elapsed, exec_elapsed, wlm_total_elapsed
from svl_query_queue_info
where wlm_total_elapsed > 0;

  query  | service_class | queue_elapsed | exec_elapsed | wlm_total_elapsed
---------+---------------+---------------+--------------+-------------------
 2742669 |             6 |             2 |          916 |               918
 2742668 |             6 |             4 |          197 |               201
(2 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_QUERY_QUEUE_INFO.md
bda6d2cd3660-0
DATE_CMP_TIMESTAMPTZ compares a date to a time stamp with time zone. If the date and time stamp values are identical, the function returns 0. If the date is chronologically later, the function returns 1. If the time stamp is later, the function returns -1.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP_TIMESTAMPTZ.md
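The three-way comparison can be sketched as follows. This is a hypothetical Python illustration (not Redshift code) assuming the date is treated as midnight UTC when compared against the time stamp:

```python
from datetime import date, datetime, timezone

def date_cmp_timestamptz(d, ts):
    """Return 0 if equal, 1 if the date (as midnight UTC) is later,
    -1 if the time stamp is later."""
    as_ts = datetime(d.year, d.month, d.day, tzinfo=timezone.utc)
    if as_ts == ts:
        return 0
    return 1 if as_ts > ts else -1

ts = datetime(2008, 1, 5, 14, 0, tzinfo=timezone.utc)
print(date_cmp_timestamptz(date(2008, 1, 6), ts))  # 1
print(date_cmp_timestamptz(date(2008, 1, 5), ts))  # -1
```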
dec96815f995-0
```
DATE_CMP_TIMESTAMPTZ(date, timestamptz)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP_TIMESTAMPTZ.md
9798b9989903-0
*date*
A DATE column or an expression that implicitly converts to a date.

*timestamptz*
A TIMESTAMPTZ column or an expression that implicitly converts to a time stamp with a time zone.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP_TIMESTAMPTZ.md
15d4740d4acd-0
INTEGER
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP_TIMESTAMPTZ.md
5cf059150719-0
STV_BLOCKLIST contains the number of 1 MB disk blocks that are used by each slice, table, or column in a database.

Use aggregate queries with STV_BLOCKLIST, as the following examples show, to determine the number of 1 MB disk blocks allocated per database, table, slice, or column. You can also use [STV_PARTITIONS](r_STV_PARTITIONS.md) to view summary information about disk utilization.

STV_BLOCKLIST is visible only to superusers. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_BLOCKLIST.md
f444c8e08b8a-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_BLOCKLIST.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_BLOCKLIST.md
1f3850052fd2-0
STV_BLOCKLIST contains one row per allocated disk block, so a query that selects all the rows potentially returns a very large number of rows. We recommend using only aggregate queries with STV_BLOCKLIST.

The [SVV_DISKUSAGE](r_SVV_DISKUSAGE.md) view provides similar information in a more user-friendly format; however, the following example demonstrates one use of the STV_BLOCKLIST table.

To determine the number of 1 MB blocks used by each column in the VENUE table, type the following query:

```
select col, count(*)
from stv_blocklist, stv_tbl_perm
where stv_blocklist.tbl = stv_tbl_perm.id
and stv_blocklist.slice = stv_tbl_perm.slice
and stv_tbl_perm.name = 'venue'
group by col
order by col;
```

This query returns the number of 1 MB blocks allocated to each column in the VENUE table, shown by the following sample data:

```
 col | count
-----+-------
   0 |     4
   1 |     4
   2 |     4
   3 |     4
   4 |     4
   5 |     4
   7 |     4
   8 |     4
(8 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_BLOCKLIST.md
1f3850052fd2-1
The following query shows whether or not table data is actually distributed over all slices:

```
select trim(name) as table, stv_blocklist.slice, stv_tbl_perm.rows
from stv_blocklist,stv_tbl_perm
where stv_blocklist.tbl=stv_tbl_perm.id
and stv_tbl_perm.slice=stv_blocklist.slice
and stv_blocklist.id > 10000
and name not like '%#m%'
and name not like 'systable%'
group by name, stv_blocklist.slice, stv_tbl_perm.rows
order by 3 desc;
```

This query produces the following sample output, showing the even data distribution for the table with the most rows:

```
  table   | slice | rows
----------+-------+-------
 listing  |    13 | 10527
 listing  |    14 | 10526
 listing  |     8 | 10526
 listing  |     9 | 10526
 listing  |     7 | 10525
 listing  |     4 | 10525
 listing  |    17 | 10525
 listing  |    11 | 10525
 listing  |     5 | 10525
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_BLOCKLIST.md
1f3850052fd2-2
```
 listing  |    18 | 10525
 listing  |    12 | 10525
 listing  |     3 | 10525
 listing  |    10 | 10525
 listing  |     2 | 10524
 listing  |    15 | 10524
 listing  |    16 | 10524
 listing  |     6 | 10524
 listing  |    19 | 10524
 listing  |     1 | 10523
 listing  |     0 | 10521
...
(180 rows)
```

The following query determines whether any tombstoned blocks were committed to disk:

```
select slice, col, tbl, blocknum, newblock
from stv_blocklist
where tombstone > 0;

 slice | col |  tbl   | blocknum | newblock
-------+-----+--------+----------+----------
     4 |   0 | 101285 |        0 |        1
     4 |   2 | 101285 |        0 |        1
     4 |   4 | 101285 |        1 |        1
     5 |   2 | 101285 |        0 |        1
     5 |   0 | 101285 |        0 |        1
     5 |   1 | 101285 |        0 |        1
     5 |   4 | 101285 |        1 |        1
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_BLOCKLIST.md
1f3850052fd2-3
```
...
(24 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_BLOCKLIST.md
b79847f3203e-0
Zstandard (ZSTD) encoding provides a high compression ratio with very good performance across diverse datasets. ZSTD works especially well with CHAR and VARCHAR columns that store a wide range of long and short strings, such as product descriptions, user comments, logs, and JSON strings. Where some algorithms, such as [Delta](c_Delta_encoding.md) encoding or [Mostly](c_MostlyN_encoding.md) encoding, can potentially use more storage space than no compression, ZSTD is very unlikely to increase disk usage.

ZSTD supports SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, DOUBLE PRECISION, BOOLEAN, CHAR, VARCHAR, DATE, TIMESTAMP, and TIMESTAMPTZ data types.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/zstd-encoding.md
6ea938f5c62c-0
Use the SVL_FEDERATED_QUERY view to view information about a federated query call.

SVL_FEDERATED_QUERY is visible to all users. Superusers can see all rows. Regular users can see only their own data.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_FEDERATED_QUERY.md
f8b73d930583-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_FEDERATED_QUERY.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_FEDERATED_QUERY.md
8ec78f5b5fbe-0
To show information about federated query calls, run the following query.

```
select query, trim(sourcetype) as type, recordtime, trim(querytext) as "PG Subquery"
from svl_federated_query
where query = 4292;

 query | type |         recordtime         |                          pg subquery
-------+------+----------------------------+---------------------------------------------------------------
  4292 | PG   | 2020-03-27 04:29:58.485126 | SELECT "level" FROM functional.employees WHERE ("level" >= 6)
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_FEDERATED_QUERY.md
91d4601e67d6-0
Uses the MD5 cryptographic hash function to convert a variable-length string into a 32-character string that is a text representation of the hexadecimal value of a 128-bit checksum.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MD5.md
419f0a2a84e5-0
```
MD5(string)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MD5.md
05da43efcc07-0
*string*
A variable-length string.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MD5.md
4b4e96b066d4-0
The MD5 function returns a 32-character string that is a text representation of the hexadecimal value of a 128-bit checksum.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MD5.md
c52fca5af5c8-0
The following example shows the 128-bit value for the string 'Amazon Redshift':

```
select md5('Amazon Redshift');

               md5
----------------------------------
 f7415e33f972c03abd4f3fed36748f7a
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_MD5.md
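The same 128-bit checksum can be reproduced client-side with a standard MD5 implementation; a minimal Python sketch using the standard library (the helper name is mine):

```python
import hashlib

def md5_hex(s):
    """Return the lowercase 32-character hex digest of the MD5 checksum,
    matching what Redshift's MD5 function returns for the same string."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

print(md5_hex("Amazon Redshift"))
# f7415e33f972c03abd4f3fed36748f7a
```

Comparing digests like this is a quick way to verify data integrity between a client and values computed with MD5 in the database.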
36812cf4dfc3-0
Not all queries are of equal importance, and often the performance of one workload or set of users might be more important. If you have enabled [automatic WLM](automatic-wlm.md), you can define the relative importance of queries in a workload by setting a priority value. The priority is specified for a queue and inherited by all queries associated with the queue. You associate queries with a queue by mapping user groups and query groups to the queue. You can set the following priorities (listed from highest to lowest priority):

1. `HIGHEST`
1. `HIGH`
1. `NORMAL`
1. `LOW`
1. `LOWEST`

Administrators use these priorities to show the relative importance of their workloads when there are queries with different priorities contending for the same resources. Amazon Redshift uses the priority when letting queries into the system, and to determine the amount of resources allocated to a query. By default, queries run with their priority set to `NORMAL`.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-priority.md
36812cf4dfc3-1
An additional priority, `CRITICAL`, which is a higher priority than `HIGHEST`, is available to superusers. To set this priority, you can use the functions [CHANGE_QUERY_PRIORITY](r_CHANGE_QUERY_PRIORITY.md), [CHANGE_SESSION_PRIORITY](r_CHANGE_SESSION_PRIORITY.md), and [CHANGE_USER_PRIORITY](r_CHANGE_USER_PRIORITY.md). To grant a database user permission to use these functions, you can create a stored procedure and grant permission to a user. For an example, see [CHANGE_SESSION_PRIORITY](r_CHANGE_SESSION_PRIORITY.md).

**Note**
Only one `CRITICAL` query can run at a time.

Let's take an example where the priority of an extract, transform, load (ETL) workload is higher than the priority of the analytics workload. The ETL workload runs every six hours, and the analytics workload runs throughout the day. When only the analytics workload is running on the cluster, it gets the entire system to itself, yielding high throughput with optimal system utilization. However, when the ETL workload starts, it gets the right of way because it has a higher priority. Queries running as part of the ETL workload get the right of way during admission and also preferential resource allocation after they are admitted. As a consequence, the ETL workload performs predictably regardless of what else might be running on the system. Thus, it provides predictable performance and the ability for administrators to provide service level agreements (SLAs) for their business users.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-priority.md
36812cf4dfc3-2
Within a given cluster, the predictable performance of a high-priority workload comes at the cost of other, lower-priority workloads. Lower-priority workloads might run longer, either because their queries are waiting behind more important queries to complete, or because they're getting a smaller fraction of resources when they run concurrently with higher-priority queries. Lower-priority queries don't suffer from starvation, but rather keep making progress at a slower pace.

In the preceding example, the administrator can enable [concurrency scaling](concurrency-scaling.md) for the analytics workload. Doing this enables that workload to maintain its throughput, even though the ETL workload is running at high priority.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-priority.md
30f4853bb01c-0
If you have enabled automatic WLM, each queue has a priority value. Queries are routed to queues based on user groups and query groups. The default queue priority is `NORMAL`. Set the priority higher or lower based on the workload associated with the queue's user groups and query groups.

You can change the priority of a queue on the Amazon Redshift console. On the Amazon Redshift console, the **Workload Management** page displays the queues and enables editing of queue properties such as **Priority**. To set the priority using the CLI or API operations, use the `wlm_json_configuration` parameter. For more information, see [Configuring Workload Management](https://docs.aws.amazon.com/redshift/latest/mgmt/workload-mgmt-config.html) in the *Amazon Redshift Cluster Management Guide*.

The following `wlm_json_configuration` example defines three user groups (`ingest`, `reporting`, and `analytics`). Queries submitted by users from one of these groups run with priority `highest`, `normal`, and `low`, respectively.

```
[
  {
    "user_group": [ "ingest" ],
    "priority": "highest",
    "queue_type": "auto"
  },
  {
    "user_group": [ "reporting" ],
    "priority": "normal",
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/query-priority.md
30f4853bb01c-1
{ "user_group": [ "reporting" ], "priority": "normal", "queue_type": "auto" }, { "user_group": [ "analytics" ], "priority": "low", "queue_type": "auto", "auto_wlm": true } ] ```
Query monitoring rules \(QMR\) enable you to change the priority of a query based on its behavior while it is running\. You do this by specifying the priority attribute in a QMR predicate, in addition to an action\. For more information, see [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md)\.

For example, you can define a rule to abort any query classified as `high` priority that uses more than 10 minutes of CPU time \(the `query_cpu_time` metric is measured in seconds\)\.

```
"rules": [
    {
        "rule_name": "rule_abort",
        "predicate": [
            {
                "metric_name": "query_cpu_time",
                "operator": ">",
                "value": 600
            },
            {
                "metric_name": "query_priority",
                "operator": "=",
                "value": "high"
            }
        ],
        "action": "abort"
    }
]
```

Another example is to define a rule to change the query priority to `lowest` for any query with current priority `normal` that spills more than 1 TB to disk\.

```
"rules": [
    {
        "rule_name": "rule_change_priority",
        "predicate": [
            {
                "metric_name": "query_temp_blocks_to_disk",
                "operator": ">",
                "value": 1000000
            },
            {
                "metric_name": "query_priority",
                "operator": "=",
                "value": "normal"
            }
        ],
        "action": "change_query_priority",
        "value": "lowest"
    }
]
```
To display priority for waiting and running queries, view the `query_priority` column in the stv\_wlm\_query\_state system table\.

```
query   | service_cl | wlm_start_time             | state            | queue_time | query_priority
--------+------------+----------------------------+------------------+------------+----------------
2673299 | 102        | 2019-06-24 17:35:38.866356 | QueuedWaiting    | 265116     | Highest
2673236 | 101        | 2019-06-24 17:35:33.313854 | Running          | 0          | Highest
2673265 | 102        | 2019-06-24 17:35:33.523332 | Running          | 0          | High
2673284 | 102        | 2019-06-24 17:35:38.477366 | Running          | 0          | Highest
2673288 | 102        | 2019-06-24 17:35:38.621819 | Running          | 0          | Highest
2673310 | 103        | 2019-06-24 17:35:39.068513 | QueuedWaiting    | 62970      | High
2673303 | 102        | 2019-06-24 17:35:38.968921 | QueuedWaiting    | 162560     | Normal
2673306 | 104        | 2019-06-24 17:35:39.002733 | QueuedWaiting    | 128691     | Lowest
```

To list query priority for completed queries, see the `query_priority` column in the stl\_wlm\_query system table\.

```
select query, service_class as svclass, service_class_start_time as starttime, query_priority
from stl_wlm_query order by 3 desc limit 10;
```

```
query   | svclass | starttime                  | query_priority
--------+---------+----------------------------+----------------
2723254 | 100     | 2019-06-24 18:14:50.780094 | Normal
2723251 | 102     | 2019-06-24 18:14:50.749961 | Highest
2723246 | 102     | 2019-06-24 18:14:50.725275 | Highest
2723244 | 103     | 2019-06-24 18:14:50.719241 | High
2723243 | 101     | 2019-06-24 18:14:50.699325 | Low
2723242 | 102     | 2019-06-24 18:14:50.692573 | Highest
2723239 | 101     | 2019-06-24 18:14:50.668535 | Low
2723237 | 102     | 2019-06-24 18:14:50.661918 | Highest
2723236 | 102     | 2019-06-24 18:14:50.643636 | Highest
```
Given an ordered set of rows, FIRST\_VALUE returns the value of the specified expression with respect to the first row in the window frame\. The LAST\_VALUE function returns the value of the expression with respect to the last row in the frame\.
``` FIRST_VALUE | LAST_VALUE ( expression [ IGNORE NULLS | RESPECT NULLS ] ) OVER ( [ PARTITION BY expr_list ] [ ORDER BY order_list frame_clause ] ) ```
*expression*
The target column or expression that the function operates on\.

IGNORE NULLS
When this option is used with FIRST\_VALUE, the function returns the first value in the frame that is not NULL \(or NULL if all values are NULL\)\. When this option is used with LAST\_VALUE, the function returns the last value in the frame that is not NULL \(or NULL if all values are NULL\)\.

RESPECT NULLS
Indicates that Amazon Redshift should include null values in the determination of which row to use\. RESPECT NULLS is the default behavior if you do not specify IGNORE NULLS\.

OVER
Introduces the window clauses for the function\.

PARTITION BY *expr\_list*
Defines the window for the function in terms of one or more expressions\.

ORDER BY *order\_list*
Sorts the rows within each partition\. If no PARTITION BY clause is specified, ORDER BY sorts the entire table\. If you specify an ORDER BY clause, you must also specify a *frame\_clause*\.
The results of the FIRST\_VALUE and LAST\_VALUE functions depend on the ordering of the data\. The results are nondeterministic in the following cases:
+ When no ORDER BY clause is specified and a partition contains two different values for an expression
+ When the expression evaluates to different values that correspond to the same value in the ORDER BY list

*frame\_clause*
If an ORDER BY clause is used for an aggregate function, an explicit frame clause is required\. The frame clause refines the set of rows in a function's window, including or excluding sets of rows in the ordered result\. The frame clause consists of the ROWS keyword and associated specifiers\. See [Window function syntax summary](r_Window_function_synopsis.md)\.
These functions support expressions that use any of the Amazon Redshift data types\. The return type is the same as the type of the *expression*\.
The following example returns the seating capacity for each venue in the VENUE table, with the results ordered by capacity \(high to low\)\. The FIRST\_VALUE function is used to select the name of the venue that corresponds to the first row in the frame: in this case, the row with the highest number of seats\. The results are partitioned by state, so when the VENUESTATE value changes, a new first value is selected\. The window frame is unbounded so the same first value is selected for each row in each partition\.

For California, `Qualcomm Stadium` has the highest number of seats \(`70561`\), so this name is the first value for all of the rows in the `CA` partition\.

```
select venuestate, venueseats, venuename,
first_value(venuename)
over(partition by venuestate
order by venueseats desc
rows between unbounded preceding and unbounded following)
from (select * from venue where venueseats >0)
order by venuestate;

venuestate | venueseats | venuename                      | first_value
-----------+------------+--------------------------------+------------------------------
CA         |      70561 | Qualcomm Stadium               | Qualcomm Stadium
CA         |      69843 | Monster Park                   | Qualcomm Stadium
CA         |      63026 | McAfee Coliseum                | Qualcomm Stadium
CA         |      56000 | Dodger Stadium                 | Qualcomm Stadium
CA         |      45050 | Angel Stadium of Anaheim       | Qualcomm Stadium
CA         |      42445 | PETCO Park                     | Qualcomm Stadium
CA         |      41503 | AT&T Park                      | Qualcomm Stadium
CA         |      22000 | Shoreline Amphitheatre         | Qualcomm Stadium
CO         |      76125 | INVESCO Field                  | INVESCO Field
CO         |      50445 | Coors Field                    | INVESCO Field
DC         |      41888 | Nationals Park                 | Nationals Park
FL         |      74916 | Dolphin Stadium                | Dolphin Stadium
FL         |      73800 | Jacksonville Municipal Stadium | Dolphin Stadium
FL         |      65647 | Raymond James Stadium          | Dolphin Stadium
FL         |      36048 | Tropicana Field                | Dolphin Stadium
...
```

The next example uses the LAST\_VALUE function instead of FIRST\_VALUE; otherwise, the query is the same as the previous example\. For California, `Shoreline Amphitheatre` is returned for every row in the partition because it has the lowest number of seats \(`22000`\)\.

```
select venuestate, venueseats, venuename,
last_value(venuename)
over(partition by venuestate
order by venueseats desc
rows between unbounded preceding and unbounded following)
from (select * from venue where venueseats >0)
order by venuestate;

venuestate | venueseats | venuename                      | last_value
-----------+------------+--------------------------------+------------------------------
CA         |      70561 | Qualcomm Stadium               | Shoreline Amphitheatre
CA         |      69843 | Monster Park                   | Shoreline Amphitheatre
CA         |      63026 | McAfee Coliseum                | Shoreline Amphitheatre
CA         |      56000 | Dodger Stadium                 | Shoreline Amphitheatre
CA         |      45050 | Angel Stadium of Anaheim       | Shoreline Amphitheatre
CA         |      42445 | PETCO Park                     | Shoreline Amphitheatre
CA         |      41503 | AT&T Park                      | Shoreline Amphitheatre
CA         |      22000 | Shoreline Amphitheatre         | Shoreline Amphitheatre
CO         |      76125 | INVESCO Field                  | Coors Field
CO         |      50445 | Coors Field                    | Coors Field
DC         |      41888 | Nationals Park                 | Nationals Park
FL         |      74916 | Dolphin Stadium                | Tropicana Field
FL         |      73800 | Jacksonville Municipal Stadium | Tropicana Field
FL         |      65647 | Raymond James Stadium          | Tropicana Field
FL         |      36048 | Tropicana Field                | Tropicana Field
...
```

The following example shows the use of the IGNORE NULLS option and relies on the addition of a new row to the VENUE table:

```
insert into venue values(2000,null,'Stanford','CA',90000);
```

This new row contains a NULL value for the VENUENAME column\. Now repeat the FIRST\_VALUE query that was shown earlier in this section:

```
select venuestate, venueseats, venuename,
first_value(venuename)
over(partition by venuestate
order by venueseats desc
rows between unbounded preceding and unbounded following)
from (select * from venue where venueseats >0)
order by venuestate;

venuestate | venueseats | venuename                  | first_value
-----------+------------+----------------------------+-------------
CA         |      90000 |                            |
CA         |      70561 | Qualcomm Stadium           |
CA         |      69843 | Monster Park               |
...
```

Because the new row contains the highest VENUESEATS value \(`90000`\) and its VENUENAME is NULL, the FIRST\_VALUE function returns NULL for the `CA` partition\. To ignore rows like this in the function evaluation, add the IGNORE NULLS option to the function argument:

```
select venuestate, venueseats, venuename,
first_value(venuename ignore nulls)
over(partition by venuestate
order by venueseats desc
rows between unbounded preceding and unbounded following)
from (select * from venue where venuestate='CA')
order by venuestate;

venuestate | venueseats | venuename                  | first_value
-----------+------------+----------------------------+------------------
CA         |      90000 |                            | Qualcomm Stadium
CA         |      70561 | Qualcomm Stadium           | Qualcomm Stadium
CA         |      69843 | Monster Park               | Qualcomm Stadium
...
```
ST\_AsText returns the well\-known text \(WKT\) representation of an input geometry\.
``` ST_AsText(geom) ``` ``` ST_AsText(geom, precision) ```
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.

*precision*
A value of data type `INTEGER`\. The coordinates of *geom* are displayed using the specified precision, a value from 1 through 20\. If *precision* is not specified, the default is 15\.
`VARCHAR`

If *geom* is null, then null is returned\.

If *precision* is null, then null is returned\.

If the result is larger than a 64\-KB `VARCHAR`, then an error is returned\.
The following SQL returns the WKT representation of a linestring\.

```
SELECT ST_AsText(ST_GeomFromText('LINESTRING(3.141592653589793 -6.283185307179586,2.718281828459045 -1.414213562373095)', 4326));
```

```
st_astext
----------------------------------------------------------------------------------
 LINESTRING(3.14159265358979 -6.28318530717959,2.71828182845905 -1.41421356237309)
```

The following SQL returns the WKT representation of a linestring\. The coordinates of the geometries are displayed with six digits of precision\.

```
SELECT ST_AsText(ST_GeomFromText('LINESTRING(3.141592653589793 -6.283185307179586,2.718281828459045 -1.414213562373095)', 4326), 6);
```

```
st_astext
----------------------------------------------
 LINESTRING(3.14159 -6.28319,2.71828 -1.41421)
```
Creates a view in a database\. The view isn't physically materialized; the query that defines the view is run every time the view is referenced in a query\. To create a view with an external table, include the WITH NO SCHEMA BINDING clause\.

To create a standard view, you need access to the underlying tables\. To query a standard view, you need select privileges for the view itself, but you don't need select privileges for the underlying tables\.

To query a late\-binding view, you need select privileges for the late\-binding view itself\. You should also make sure the owner of the late\-binding view has select privileges to the referenced objects \(tables, views, or user\-defined functions\)\. For more information about late\-binding views, see [Usage notes](#r_CREATE_VIEW_usage_notes)\.
``` CREATE [ OR REPLACE ] VIEW name [ ( column_name [, ...] ) ] AS query [ WITH NO SCHEMA BINDING ] ```
OR REPLACE
If a view of the same name already exists, the view is replaced\. You can only replace a view with a new query that generates the identical set of columns, using the same column names and data types\. CREATE OR REPLACE VIEW locks the view for reads and writes until the operation completes\.

*name*
The name of the view\. If a schema name is given \(such as `myschema.myview`\) the view is created using the specified schema\. Otherwise, the view is created in the current schema\. The view name must be different from the name of any other view or table in the same schema\.
If you specify a view name that begins with '\#', the view is created as a temporary view that is visible only in the current session\.
For more information about valid names, see [Names and identifiers](r_names.md)\. You can't create tables or views in the system databases template0, template1, and padb\_harvest\.

*column\_name*
Optional list of names to be used for the columns in the view\. If no column names are given, the column names are derived from the query\. The maximum number of columns you can define in a single view is 1,600\.

*query*
A query \(in the form of a SELECT statement\) that evaluates to a table\. This table defines the columns and rows in the view\.

WITH NO SCHEMA BINDING
Clause that specifies that the view isn't bound to the underlying database objects, such as tables and user\-defined functions\. As a result, there is no dependency between the view and the objects it references\. You can create a view even if the referenced objects don't exist\. Because there is no dependency, you can drop or alter a referenced object without affecting the view\. Amazon Redshift doesn't check for dependencies until the view is queried\. To view details about late binding views, run the [PG\_GET\_LATE\_BINDING\_VIEW\_COLS](PG_GET_LATE_BINDING_VIEW_COLS.md) function\.

When you include the WITH NO SCHEMA BINDING clause, tables and views referenced in the SELECT statement must be qualified with a schema name\. The schema must exist when the view is created, even if the referenced table doesn't exist\. For example, the following statement returns an error\.

```
create view myevent as select eventname from event
with no schema binding;
```

The following statement executes successfully\.

```
create view myevent as select eventname from public.event
with no schema binding;
```

**Note**
You can't update, insert into, or delete from a view\.
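To illustrate the temporary\-view naming rule above, the following sketch creates a view whose name begins with '\#', so it is visible only in the current session\. The view name is hypothetical, and the example assumes the `event` table from the TICKIT sample database\.

```sql
-- the leading '#' makes this a temporary view that lasts
-- only for the current session
create view #tmp_event_names as
select eventname from public.event;

-- visible in this session only
select * from #tmp_event_names limit 5;
```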
A late\-binding view doesn't check the underlying database objects, such as tables and other views, until the view is queried\. As a result, you can alter or drop the underlying objects without dropping and recreating the view\. If you drop underlying objects, queries to the late\-binding view will fail\. If the query to the late\-binding view references columns in the underlying object that aren't present, the query will fail\.

If you drop and then re\-create a late\-binding view's underlying table or view, the new object is created with default access permissions\. You might need to grant permissions to the underlying objects for users who will query the view\.

To create a late\-binding view, include the WITH NO SCHEMA BINDING clause\. The following example creates a view with no schema binding\.

```
create view event_vw as select * from public.event
with no schema binding;

select * from event_vw limit 1;

eventid | venueid | catid | dateid |   eventname   | starttime
--------+---------+-------+--------+---------------+--------------------
      2 |     306 |     8 |   2114 | Boris Godunov | 2008-10-15 20:00:00
```

The following example shows that you can alter an underlying table without recreating the view\.

```
alter table event rename column eventname to title;

select * from event_vw limit 1;

eventid | venueid | catid | dateid |     title     | starttime
--------+---------+-------+--------+---------------+--------------------
      2 |     306 |     8 |   2114 | Boris Godunov | 2008-10-15 20:00:00
```

You can reference Amazon Redshift Spectrum external tables only in a late\-binding view\. One application of late\-binding views is to query both Amazon Redshift and Redshift Spectrum tables\. For example, you can use the [UNLOAD](r_UNLOAD.md) command to archive older data to Amazon S3\. Then, create a Redshift Spectrum external table that references the data on Amazon S3 and create a view that queries both tables\. The following example uses a UNION ALL clause to join the Amazon Redshift `SALES` table and the Redshift Spectrum `SPECTRUM.SALES` table\.

```
create view sales_vw as
select * from public.sales
union all
select * from spectrum.sales
with no schema binding;
```

For more information about creating Redshift Spectrum external tables, including the `SPECTRUM.SALES` table, see [Getting started with Amazon Redshift Spectrum](c-getting-started-using-spectrum.md)\.
The following command creates a view called *myevent* from a table called EVENT\.

```
create view myevent as select eventname from event
where eventname = 'LeAnn Rimes';
```

The following command creates a view called *myuser* from a table called USERS\.

```
create view myuser as select lastname from users;
```

The following command creates or replaces a view called *myuser* from a table called USERS\.

```
create or replace view myuser as select lastname from users;
```

The following example creates a view with no schema binding\.

```
create view myevent as select eventname from public.event
with no schema binding;
```
**Topics**
+ [Amazon Redshift security overview](c_security-overview.md)
+ [Default database user privileges](r_Privileges.md)
+ [Superusers](r_superusers.md)
+ [Users](r_Users.md)
+ [Groups](r_Groups.md)
+ [Schemas](r_Schemas_and_tables.md)
+ [Example for controlling user and group access](t_user_group_examples.md)

You manage database security by controlling which users have access to which database objects\. Access to database objects depends on the privileges that you grant to user accounts or groups\. The following guidelines summarize how database security works:
+ By default, privileges are granted only to the object owner\.
+ Amazon Redshift database users are named user accounts that can connect to a database\. A user account is granted privileges explicitly, by having those privileges assigned directly to the account, or implicitly, by being a member of a group that is granted privileges\.
+ Groups are collections of users that can be collectively assigned privileges for easier security maintenance\.
+ Schemas are collections of database tables and other database objects\. Schemas are similar to file system directories, except that schemas cannot be nested\. Users can be granted access to a single schema or to multiple schemas\.

For examples of security implementation, see [Example for controlling user and group access](t_user_group_examples.md)\.

For more information about protecting your data, see [Security in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/iam-redshift-user-mgmt.html) in the *Amazon Redshift Cluster Management Guide*\.
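The guidelines above can be sketched in SQL\. The following statements are an illustrative sequence only \(the group name, user name, schema name, and password are hypothetical\), showing how a group collects privileges that its members inherit:

```sql
-- create a group, then a user who is a member of it
create group webappusers;
create user webappuser1 password 'webAppuserOne1' in group webappusers;

-- create a schema and let the group use it
create schema webapp;
grant usage on schema webapp to group webappusers;

-- privileges granted to the group apply to all of its members
grant select on all tables in schema webapp to group webappusers;
```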
The LEAD window function returns the values for a row at a given offset below \(after\) the current row in the partition\.
``` LEAD (value_expr [, offset ]) [ IGNORE NULLS | RESPECT NULLS ] OVER ( [ PARTITION BY window_partition ] ORDER BY window_ordering ) ```
*value\_expr*
The target column or expression that the function operates on\.

*offset*
An optional parameter that specifies the number of rows below the current row to return values for\. The offset can be a constant integer or an expression that evaluates to an integer\. If you do not specify an offset, Amazon Redshift uses `1` as the default value\. An offset of `0` indicates the current row\.

IGNORE NULLS
An optional specification that indicates that Amazon Redshift should skip null values in the determination of which row to use\. Null values are included if IGNORE NULLS is not listed\. You can use an NVL or COALESCE expression to replace the null values with another value\. For more information, see [NVL expression](r_NVL_function.md)\.

RESPECT NULLS
Indicates that Amazon Redshift should include null values in the determination of which row to use\. RESPECT NULLS is the default behavior if you do not specify IGNORE NULLS\.

OVER
Specifies the window partitioning and ordering\. The OVER clause cannot contain a window frame specification\.

PARTITION BY *window\_partition*
An optional argument that sets the range of records for each group in the OVER clause\.

ORDER BY *window\_ordering*
Sorts the rows within each partition\.

The LEAD window function supports expressions that use any of the Amazon Redshift data types\. The return type is the same as the type of the *value\_expr*\.
The following example provides the commission for events in the SALES table for which tickets were sold on January 1, 2008 and January 2, 2008, along with the commission paid for the subsequent ticket sale\.

```
select eventid, commission, saletime,
lead(commission, 1) over (order by saletime) as next_comm
from sales
where saletime between '2008-01-01 00:00:00' and '2008-01-02 12:59:59'
order by saletime;

eventid | commission |      saletime       | next_comm
--------+------------+---------------------+-----------
   6213 |      52.05 | 2008-01-01 01:00:19 |    106.20
   7003 |     106.20 | 2008-01-01 02:30:52 |    103.20
   8762 |     103.20 | 2008-01-01 03:50:02 |     70.80
   1150 |      70.80 | 2008-01-01 06:06:57 |     50.55
   1749 |      50.55 | 2008-01-01 07:05:02 |    125.40
   8649 |     125.40 | 2008-01-01 07:26:20 |     35.10
   2903 |      35.10 | 2008-01-01 09:41:06 |    259.50
   6605 |     259.50 | 2008-01-01 12:50:55 |    628.80
   6870 |     628.80 | 2008-01-01 12:59:34 |     74.10
   6977 |      74.10 | 2008-01-02 01:11:16 |     13.50
   4650 |      13.50 | 2008-01-02 01:40:59 |     26.55
   4515 |      26.55 | 2008-01-02 01:52:35 |     22.80
   5465 |      22.80 | 2008-01-02 02:28:01 |     45.60
   5465 |      45.60 | 2008-01-02 02:28:02 |     53.10
   7003 |      53.10 | 2008-01-02 02:31:12 |     70.35
   4124 |      70.35 | 2008-01-02 03:12:50 |     36.15
   1673 |      36.15 | 2008-01-02 03:15:00 |   1300.80
...
(39 rows)
```
A `BETWEEN` condition tests expressions for inclusion in a range of values, using the keywords `BETWEEN` and `AND`\.
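For example, the following query \(assuming the `sales` table from the TICKIT sample database\) counts transactions in which between 2 and 4 tickets were sold\. Because the range is inclusive, sales of exactly 2 or exactly 4 tickets are counted:

```sql
-- BETWEEN is inclusive of both endpoints
select count(*) from sales
where qtysold between 2 and 4;
```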