211666d8e8c9-1
You can create an Amazon S3 bucket in a specific Region either by selecting the Region when you create the bucket by using the Amazon S3 console, or by specifying an endpoint when you create the bucket using the Amazon S3 API or CLI\. Following the data load, verify that the correct files are present on Amazon S3\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_uploading-data-to-S3.md
295b4a9fcfd4-0
**Topics**
+ [CAST and CONVERT functions](r_CAST_function.md)
+ [TO\_CHAR](r_TO_CHAR.md)
+ [TO\_DATE](r_TO_DATE_function.md)
+ [TO\_NUMBER](r_TO_NUMBER.md)
+ [Datetime format strings](r_FORMAT_strings.md)
+ [Numeric format strings](r_Numeric_formating.md)

Data type formatting functions provide an easy way to convert values from one data type to another\. For each of these functions, the first argument is always the value to be formatted and the second argument contains the template for the new format\. Amazon Redshift supports several data type formatting functions\.
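As a minimal sketch of this calling convention \(the literal values and format templates here are illustrative only\): the value to format comes first, and the format template comes second\.

```
select to_char(timestamp '2009-09-09 12:00:00', 'DD Mon YYYY HH12:MI');
select to_date('09 Sep 2009', 'DD Mon YYYY');
```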
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Data_type_formatting.md
191a749be7d3-0
**Topics**
+ [Amazon Redshift and PostgreSQL JDBC and ODBC](c_redshift-postgres-jdbc.md)
+ [Features that are implemented differently](c_redshift-sql-implementated-differently.md)
+ [Unsupported PostgreSQL features](c_unsupported-postgresql-features.md)
+ [Unsupported PostgreSQL data types](c_unsupported-postgresql-datatypes.md)
+ [Unsupported PostgreSQL functions](c_unsupported-postgresql-functions.md)

Amazon Redshift is based on PostgreSQL 8\.0\.2\. Amazon Redshift and PostgreSQL have a number of important differences that you must be aware of as you design and develop your data warehouse applications\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_redshift-and-postgres-sql.md
191a749be7d3-1
Amazon Redshift is specifically designed for online analytic processing \(OLAP\) and business intelligence \(BI\) applications, which require complex queries against large datasets\. Because it addresses very different requirements, the specialized data storage schema and query execution engine that Amazon Redshift uses are completely different from the PostgreSQL implementation\. For example, where online transaction processing \(OLTP\) applications typically store data in rows, Amazon Redshift stores data in columns, using specialized data compression encodings for optimum memory usage and disk I/O\. Some PostgreSQL features that are suited to smaller\-scale OLTP processing, such as secondary indexes and efficient single\-row data manipulation operations, have been omitted to improve performance\.

See [System and architecture overview](c_redshift_system_overview.md) for a detailed explanation of the Amazon Redshift data warehouse system architecture\.

PostgreSQL 9\.x includes some features that are not supported in Amazon Redshift\. In addition, there are important differences between Amazon Redshift SQL and PostgreSQL 8\.0\.2 that you must be aware of\. This section highlights the differences between Amazon Redshift and PostgreSQL 8\.0\.2 and provides guidance for developing a data warehouse that takes full advantage of the Amazon Redshift SQL implementation\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_redshift-and-postgres-sql.md
ef62aceb3a7b-0
The ANALYZE operation updates the statistical metadata that the query planner uses to choose optimal plans\. In most cases, you don't need to explicitly run the ANALYZE command\. Amazon Redshift monitors changes to your workload and automatically updates statistics in the background\. In addition, the COPY command performs an analysis automatically when it loads data into an empty table\. To explicitly analyze a table or the entire database, run the [ANALYZE](r_ANALYZE.md) command\.

**Topics**
+ [Automatic analyze](#t_Analyzing_tables-auto-analyze)
+ [Analysis of new table data](#t_Analyzing_tables-new-tables)
+ [ANALYZE command history](c_check_last_analyze.md)
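To run the explicit analysis described above, either of the following forms works \(a minimal sketch, assuming the VENUE table from the TICKIT sample database\):

```
analyze;        -- analyze every table in the current database
analyze venue;  -- analyze a single table
```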
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
24d9752f4f58-0
Amazon Redshift continuously monitors your database and automatically performs analyze operations in the background\. To minimize the impact on your system performance, automatic analyze runs during periods when workloads are light\. Automatic analyze is enabled by default\. To disable automatic analyze, set the `auto_analyze` parameter to **false** by modifying your cluster's parameter group\. To reduce processing time and improve overall system performance, Amazon Redshift skips automatic analyze for any table where the extent of modifications is small\. An analyze operation skips tables that have up\-to\-date statistics\. If you run ANALYZE as part of your extract, transform, and load \(ETL\) workflow, automatic analyze skips tables that have current statistics\. Similarly, an explicit ANALYZE skips tables when automatic analyze has updated the table's statistics\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
881a283eabbe-0
By default, the COPY command performs an ANALYZE after it loads data into an empty table\. You can force an ANALYZE regardless of whether a table is empty by setting STATUPDATE ON\. If you specify STATUPDATE OFF, an ANALYZE is not performed\. Only the table owner or a superuser can run the ANALYZE command or run the COPY command with STATUPDATE set to ON\.

Amazon Redshift also analyzes new tables that you create with the following commands:
+ CREATE TABLE AS \(CTAS\)
+ CREATE TEMP TABLE AS
+ SELECT INTO

Amazon Redshift returns a warning message when you run a query against a new table that was not analyzed after its data was initially loaded\. No warning occurs when you query a table after a subsequent update or load\. The same warning message is returned when you run the EXPLAIN command on a query that references tables that have not been analyzed\.

Whenever adding data to a nonempty table significantly changes the size of the table, you can explicitly update statistics\. You do so either by running an ANALYZE command or by using the STATUPDATE ON option with the COPY command\. To view details about the number of rows that have been inserted or deleted since the last ANALYZE, query the [PG\_STATISTIC\_INDICATOR](r_PG_STATISTIC_INDICATOR.md) system catalog table\.

You can specify the scope of the [ANALYZE](r_ANALYZE.md) command to one of the following \(a sketch of each form follows this list\):
+ The entire current database
+ A single table
+ One or more specific columns in a single table
+ Columns that are likely to be used as predicates in queries
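As a minimal sketch of these scopes, using the LISTING table that appears in the examples that follow \(the column choices are illustrative\):

```
analyze;                              -- the entire current database
analyze listing;                      -- a single table
analyze listing(listid, totalprice);  -- specific columns in a table
analyze listing predicate columns;    -- only likely predicate columns
```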
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
881a283eabbe-1
The ANALYZE command gets a sample of rows from the table, does some calculations, and saves resulting column statistics\. By default, Amazon Redshift runs a sample pass for the DISTKEY column and another sample pass for all of the other columns in the table\. If you want to generate statistics for a subset of columns, you can specify a comma\-separated column list\. You can run ANALYZE with the PREDICATE COLUMNS clause to skip columns that aren’t used as predicates\.

ANALYZE operations are resource intensive, so run them only on tables and columns that actually require statistics updates\. You don't need to analyze all columns in all tables regularly or on the same schedule\. If the data changes substantially, analyze the columns that are frequently used in the following:
+ Sorting and grouping operations
+ Joins
+ Query predicates

To reduce processing time and improve overall system performance, Amazon Redshift skips ANALYZE for any table that has a low percentage of changed rows, as determined by the [analyze\_threshold\_percent](r_analyze_threshold_percent.md) parameter\. By default, the analyze threshold is set to 10 percent\. You can change the analyze threshold for the current session by running a [SET](r_SET.md) command\.

Columns that are less likely to require frequent analysis are those that represent facts and measures and any related attributes that are never actually queried, such as large VARCHAR columns\. For example, consider the LISTING table in the TICKIT database\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
881a283eabbe-2
```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 'listing';

column         |        type        | encoding | distkey | sortkey
---------------+--------------------+----------+---------+---------
listid         | integer            | none     | t       |       1
sellerid       | integer            | none     | f       |       0
eventid        | integer            | mostly16 | f       |       0
dateid         | smallint           | none     | f       |       0
numtickets     | smallint           | mostly8  | f       |       0
priceperticket | numeric(8,2)       | bytedict | f       |       0
totalprice     | numeric(8,2)       | mostly32 | f       |       0
listtime       | timestamp with...  | none     | f       |       0
```

If this table is loaded every day with a large number of new records, the LISTID column, which is frequently used in queries as a join key, needs to be analyzed regularly\. If TOTALPRICE and LISTTIME are the frequently used constraints in queries, you can analyze those columns and the distribution key on every weekday\.

```
analyze listing(listid, totalprice, listtime);
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
881a283eabbe-3
Suppose that the sellers and events in the application are much more static, and the date IDs refer to a fixed set of days covering only two or three years\. In this case, the unique values for these columns don't change significantly\. However, the number of instances of each unique value will increase steadily\.

In addition, consider the case where the NUMTICKETS and PRICEPERTICKET measures are queried infrequently compared to the TOTALPRICE column\. In this case, you can run the ANALYZE command on the whole table once every weekend to update statistics for the five columns that are not analyzed daily\.

<a name="t_Analyzing_tables-predicate-columns"></a>
**Predicate columns**

As a convenient alternative to specifying a column list, you can choose to analyze only the columns that are likely to be used as predicates\. When you run a query, any columns that are used in a join, filter condition, or group by clause are marked as predicate columns in the system catalog\. When you run ANALYZE with the PREDICATE COLUMNS clause, the analyze operation includes only columns that meet the following criteria:
+ The column is marked as a predicate column\.
+ The column is a distribution key\.
+ The column is part of a sort key\.

If none of a table's columns are marked as predicates, ANALYZE includes all of the columns, even when PREDICATE COLUMNS is specified\. If no columns are marked as predicate columns, it might be because the table has not yet been queried\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
881a283eabbe-4
You might choose to use PREDICATE COLUMNS when your workload's query pattern is relatively stable\. When the query pattern is variable, with different columns frequently being used as predicates, using PREDICATE COLUMNS might temporarily result in stale statistics\. Stale statistics can lead to suboptimal query execution plans and long execution times\. However, the next time you run ANALYZE using PREDICATE COLUMNS, the new predicate columns are included\. To view details for predicate columns, use the following SQL to create a view named PREDICATE\_COLUMNS\.

```
CREATE VIEW predicate_columns AS
WITH predicate_column_info as (
SELECT ns.nspname AS schema_name, c.relname AS table_name, a.attnum as col_num, a.attname as col_name,
       CASE
         WHEN 10002 = s.stakind1 THEN array_to_string(stavalues1, '||')
         WHEN 10002 = s.stakind2 THEN array_to_string(stavalues2, '||')
         WHEN 10002 = s.stakind3 THEN array_to_string(stavalues3, '||')
         WHEN 10002 = s.stakind4 THEN array_to_string(stavalues4, '||')
         ELSE NULL::varchar
       END AS pred_ts
  FROM pg_statistic s
  JOIN pg_class c ON c.oid = s.starelid
  JOIN pg_namespace ns ON c.relnamespace = ns.oid
  JOIN pg_attribute a ON c.oid = a.attrelid AND a.attnum = s.staattnum)
SELECT schema_name, table_name, col_num, col_name,
       pred_ts NOT LIKE '2000-01-01%' AS is_predicate,
       CASE WHEN pred_ts NOT LIKE '2000-01-01%' THEN (split_part(pred_ts, '||',1))::timestamp ELSE NULL::timestamp END as first_predicate_use,
       CASE WHEN pred_ts NOT LIKE '%||2000-01-01%' THEN (split_part(pred_ts, '||',2))::timestamp ELSE NULL::timestamp END as last_analyze
  FROM predicate_column_info;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
881a283eabbe-5
Suppose you run the following query against the LISTING table\. Note that LISTID, LISTTIME, and EVENTID are used in the join, filter, and group by clauses\.

```
select s.buyerid, l.eventid, sum(l.totalprice)
from listing l
join sales s on l.listid = s.listid
where l.listtime > '2008-12-01'
group by l.eventid, s.buyerid;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
881a283eabbe-6
When you query the PREDICATE\_COLUMNS view, as shown in the following example, you see that LISTID, EVENTID, and LISTTIME are marked as predicate columns\.

```
select * from predicate_columns
where table_name = 'listing';
```

```
schema_name | table_name | col_num | col_name       | is_predicate | first_predicate_use | last_analyze
------------+------------+---------+----------------+--------------+---------------------+--------------------
public      | listing    |       1 | listid         | true         | 2017-05-05 19:27:59 | 2017-05-03 18:27:41
public      | listing    |       2 | sellerid       | false        |                     | 2017-05-03 18:27:41
public      | listing    |       3 | eventid        | true         | 2017-05-16 20:54:32 | 2017-05-03 18:27:41
public      | listing    |       4 | dateid         | false        |                     | 2017-05-03 18:27:41
public      | listing    |       5 | numtickets     | false        |                     | 2017-05-03 18:27:41
public      | listing    |       6 | priceperticket | false        |                     | 2017-05-03 18:27:41
public      | listing    |       7 | totalprice     | false        |                     | 2017-05-03 18:27:41
public      | listing    |       8 | listtime       | true         | 2017-05-16 20:54:32 | 2017-05-03 18:27:41
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
881a283eabbe-7
Keeping statistics current improves query performance by enabling the query planner to choose optimal plans\. Amazon Redshift refreshes statistics automatically in the background, and you can also explicitly run the ANALYZE command\. If you choose to explicitly run ANALYZE, do the following:
+ Run the ANALYZE command before running queries\.
+ Run the ANALYZE command on the database routinely at the end of every regular load or update cycle\.
+ Run the ANALYZE command on any new tables that you create and any existing tables or columns that undergo significant change\.
+ Consider running ANALYZE operations on different schedules for different types of tables and columns, depending on their use in queries and their propensity to change\.
+ To save time and cluster resources, use the PREDICATE COLUMNS clause when you run ANALYZE\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
881a283eabbe-8
An analyze operation skips tables that have up\-to\-date statistics\. If you run ANALYZE as part of your extract, transform, and load \(ETL\) workflow, automatic analyze skips tables that have current statistics\. Similarly, an explicit ANALYZE skips tables when automatic analyze has updated the table's statistics\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Analyzing_tables.md
1792a2d87fc4-0
TIMEOFDAY is a special alias used to return the weekday, date, and time as a string value\. It returns the time of day string for the current statement, even when it is within a transaction block\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMEOFDAY_function.md
a3a4ec4ad938-0
```
TIMEOFDAY()
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMEOFDAY_function.md
ed8a0a49fa03-0
VARCHAR
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMEOFDAY_function.md
aec7bf424628-0
Return the current date and time by using the TIMEOFDAY function:

```
select timeofday();

timeofday
------------
Thu Sep 19 22:53:50.333525 2013 UTC
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMEOFDAY_function.md
e5e9c8c1eb98-0
You can use the COPY command to load data in parallel from an Amazon EMR cluster configured to write text files to the cluster's Hadoop Distributed File System \(HDFS\) in the form of fixed\-width files, character\-delimited files, CSV files, JSON\-formatted files, or Avro files\.

**Topics**
+ [Syntax](#copy-parameters-data-source-emr-syntax)
+ [Example](#copy-parameters-data-source-emr-example)
+ [Parameters](#copy-parameters-data-source-emr-parameters)
+ [Supported parameters](#copy-parameters-data-source-emr-optional-parms)
+ [Unsupported parameters](#copy-parameters-data-source-emr-unsupported-parms)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-emr.md
32ff98499f34-0
```
FROM 'emr://emr_cluster_id/hdfs_filepath'
authorization
[ optional_parameters ]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-emr.md
56922c59020e-0
The following example loads data from an Amazon EMR cluster\.

```
copy sales
from 'emr://j-SAMPLE2B500FC/myoutput/part-*'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-emr.md
cc1c67374d23-0
FROM
The source of the data to be loaded\.

'emr://*emr\_cluster\_id*/*hdfs\_file\_path*' <a name="copy-emr"></a>
The unique identifier for the Amazon EMR cluster and the HDFS file path that references the data files for the COPY command\. The HDFS data file names must not contain the wildcard characters asterisk \(\*\) and question mark \(?\)\.

The Amazon EMR cluster must continue running until the COPY operation completes\. If any of the HDFS data files are changed or deleted before the COPY operation completes, you might have unexpected results, or the COPY operation might fail\.

You can use the wildcard characters asterisk \(\*\) and question mark \(?\) as part of the *hdfs\_file\_path* argument to specify multiple files to be loaded\. For example, `'emr://j-SAMPLE2B500FC/myoutput/part*'` identifies the files `part-0000`, `part-0001`, and so on\. If the file path doesn't contain wildcard characters, it is treated as a string literal\. If you specify only a folder name, COPY attempts to load all files in the folder\.

If you use wildcard characters or use only the folder name, verify that no unwanted files will be loaded\. For example, some processes might write a log file to the output folder\. For more information, see [Loading data from Amazon EMR](loading-data-from-emr.md)\.

*authorization*
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-emr.md
cc1c67374d23-1
The COPY command needs authorization to access data in another AWS resource, including in Amazon S3, Amazon EMR, Amazon DynamoDB, and Amazon EC2\. You can provide that authorization by referencing an AWS Identity and Access Management \(IAM\) role that is attached to your cluster \(role\-based access control\) or by providing the access credentials for an IAM user \(key\-based access control\)\. For increased security and flexibility, we recommend using IAM role\-based access control\. For more information, see [Authorization parameters](copy-parameters-authorization.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-emr.md
67a1cd859bc0-0
You can optionally specify the following parameters with COPY from Amazon EMR:
+ [Column mapping options](copy-parameters-column-mapping.md)
+ [Data format parameters](copy-parameters-data-format.md#copy-data-format-parameters)
+ [Data conversion parameters](copy-parameters-data-conversion.md)
+ [Data load operations](copy-parameters-data-load.md)
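As a minimal sketch of combining COPY from Amazon EMR with two commonly used optional parameters \(the cluster ID and role ARN are reused from the earlier example; the tab delimiter and single header row are illustrative assumptions about the input files\):

```
copy sales
from 'emr://j-SAMPLE2B500FC/myoutput/part-*'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '\t'
ignoreheader 1;
```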
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-emr.md
c4e7da94a612-0
You cannot use the following parameters with COPY from Amazon EMR:
+ ENCRYPTED
+ MANIFEST
+ REGION
+ READRATIO
+ SSH
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source-emr.md
59a46510ff5b-0
ST\_Perimeter2D is an alias for ST\_Perimeter\. For more information, see [ST\_Perimeter](ST_Perimeter-function.md)\.
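A minimal sketch \(the polygon literal is illustrative; a unit square has a perimeter of 4\):

```
-- returns 4, the perimeter of the unit square
select st_perimeter2d(st_geomfromtext('POLYGON((0 0,0 1,1 1,1 0,0 0))'));
```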
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Perimeter2D-function.md
64c3375e5e1e-0
**Topics**
+ [How automatic compression works](#c_Loading_tables_auto_compress-how-automatic-compression-works)
+ [Automatic compression example](#r_COPY_COMPRESS_examples)

You can apply compression encodings to columns in tables manually, based on your own evaluation of the data\. Or you can use the COPY command with COMPUPDATE set to ON to analyze and apply compression automatically based on sample data\.

You can use automatic compression when you create and load a brand new table\. The COPY command performs a compression analysis\. You can also perform a compression analysis without loading data or changing the compression on a table by running the [ANALYZE COMPRESSION](r_ANALYZE_COMPRESSION.md) command on an already populated table\. For example, you can run ANALYZE COMPRESSION when you want to analyze compression on a table for future use, while preserving the existing data definition language \(DDL\) statements\.

Automatic compression balances overall performance when choosing compression encodings\. Range\-restricted scans might perform poorly if sort key columns are compressed much more highly than other columns in the same query\. As a result, automatic compression skips the data analyzing phase on the sort key columns and keeps the user\-defined encoding types\. Automatic compression chooses RAW encoding if you haven't explicitly defined a type of encoding\. ANALYZE COMPRESSION behaves the same\. For optimal query performance, consider using RAW for sort keys\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Loading_tables_auto_compress.md
319a66d79cf7-0
When the COMPUPDATE parameter is ON, the COPY command applies automatic compression whenever you run the COPY command with an empty target table and all of the table columns either have RAW encoding or no encoding\. To apply automatic compression to an empty table, regardless of its current compression encodings, run the COPY command with the COMPUPDATE option set to ON\. To disable automatic compression, run the COPY command with the COMPUPDATE option set to OFF\. You cannot apply automatic compression to a table that already contains data\.

**Note**
Automatic compression analysis requires enough rows in the load data \(at least 100,000 rows per slice\) to generate a meaningful sample\.

Automatic compression performs these operations in the background as part of the load transaction:
1. An initial sample of rows is loaded from the input file\. Sample size is based on the value of the COMPROWS parameter\. The default is 100,000\.
1. Compression options are chosen for each column\.
1. The sample rows are removed from the table\.
1. The table is recreated with the chosen compression encodings\.
1. The entire input file is loaded and compressed using the new encodings\.

After you run the COPY command, the table is fully loaded, compressed, and ready for use\. If you load more data later, appended rows are compressed according to the existing encoding\.

If you only want to perform a compression analysis, run ANALYZE COMPRESSION, which is more efficient than running a full COPY\. Then you can evaluate the results to decide whether to use automatic compression or recreate the table manually\.
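For example, a minimal sketch of a standalone compression analysis on an already populated table \(the table name is illustrative\):

```
analyze compression listing;
```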
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Loading_tables_auto_compress.md
319a66d79cf7-1
Automatic compression is supported only for the COPY command\. Alternatively, you can manually apply compression encoding when you create the table\. For information about manual compression encoding, see [Choosing a column compression type](t_Compressing_data_on_disk.md)\.
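As a brief, hedged sketch of manual encoding at table creation time \(the table name, columns, and encoding choices here are illustrative only\):

```
create table encoding_example (
  id    integer      encode az64,
  name  varchar(100) encode lzo,
  price numeric(8,2) encode bytedict
);
```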
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Loading_tables_auto_compress.md
f466e7c1b2dd-0
In this example, assume that the TICKIT database contains a copy of the LISTING table called BIGLIST, and you want to apply automatic compression to this table when it is loaded with approximately 3 million rows\.

**To load and automatically compress the table**

1. Ensure that the table is empty\. You can apply automatic compression only to an empty table:

   ```
   truncate biglist;
   ```

1. Load the table with a single COPY command\. Although the table is empty, some earlier encoding might have been specified\. To ensure that Amazon Redshift performs a compression analysis, set the COMPUPDATE parameter to ON\.

   ```
   copy biglist from 's3://mybucket/biglist.txt'
   iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
   delimiter '|' COMPUPDATE ON;
   ```

   Because no COMPROWS option is specified, the default and recommended sample size of 100,000 rows per slice is used\.

1. Look at the new schema for the BIGLIST table in order to review the automatically chosen encoding schemes\.

   ```
   select "column", type, encoding
   from pg_table_def where tablename = 'biglist';

   Column         | Type                        | Encoding
   ---------------+-----------------------------+----------
   listid         | integer                     | delta
   sellerid       | integer                     | delta32k
   eventid        | integer                     | delta32k
   dateid         | smallint                    | delta
   numtickets     | smallint                    | delta
   priceperticket | numeric(8,2)                | delta32k
   totalprice     | numeric(8,2)                | mostly32
   listtime       | timestamp without time zone | none
   ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Loading_tables_auto_compress.md
f466e7c1b2dd-1
select "column", type, encoding from pg_table_def where tablename = 'biglist'; Column | Type | Encoding ---------------+-----------------------------+---------- listid | integer | delta sellerid | integer | delta32k eventid | integer | delta32k dateid | smallint | delta +numtickets | smallint | delta priceperticket | numeric(8,2) | delta32k totalprice | numeric(8,2) | mostly32 listtime | timestamp without time zone | none ``` 1. Verify that the expected number of rows were loaded: ``` select count(*) from biglist; count --------- 3079952 (1 row) ``` When rows are later appended to this table using COPY or INSERT statements, the same compression encodings are applied\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Loading_tables_auto_compress.md
78f8d4573730-0
The CASE expression is a conditional expression, similar to if/then/else statements found in other languages\. CASE is used to specify a result when there are multiple conditions\. There are two types of CASE expressions: simple and searched\.

In simple CASE expressions, an expression is compared with a value\. When a match is found, the specified action in the THEN clause is applied\. If no match is found, the action in the ELSE clause is applied\.

In searched CASE expressions, each CASE is evaluated based on a Boolean expression, and the CASE statement returns the first matching CASE\. If no match is found among the WHEN clauses, the action in the ELSE clause is returned\.

Simple CASE statement used to match conditions:

```
CASE expression
  WHEN value THEN result
  [WHEN...]
  [ELSE result]
END
```

Searched CASE statement used to evaluate each condition:

```
CASE
  WHEN boolean condition THEN result
  [WHEN ...]
  [ELSE result]
END
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CASE_function.md
92503fdedc0d-0
*expression*
A column name or any valid expression\.

*value*
Value that the expression is compared with, such as a numeric constant or a character string\.

*result*
The target value or expression that is returned when an expression or Boolean condition is evaluated\.

*Boolean condition*
A Boolean condition evaluates to true when the value is equal to the constant\. When true, the result specified following the THEN clause is returned\. If a condition is false, the result following the ELSE clause is returned\. If the ELSE clause is omitted and no condition matches, the result is null\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CASE_function.md
befb92dac046-0
Use a simple CASE expression to replace `New York City` with `Big Apple` in a query against the VENUE table\. Replace all other city names with `other`\.

```
select venuecity,
  case venuecity
    when 'New York City'
    then 'Big Apple' else 'other'
  end
from venue
order by venueid desc;

venuecity        |   case
-----------------+-----------
Los Angeles      | other
New York City    | Big Apple
San Francisco    | other
Baltimore        | other
...
(202 rows)
```

Use a searched CASE expression to assign group numbers based on the PRICEPAID value for individual ticket sales:

```
select pricepaid,
  case when pricepaid < 10000 then 'group 1'
    when pricepaid > 10000 then 'group 2'
    else 'group 3'
  end
from sales
order by 1 desc;

pricepaid | case
----------+---------
12624.00  | group 2
10000.00  | group 3
10000.00  | group 3
9996.00   | group 1
9988.00   | group 1
...
(172456 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CASE_function.md
61a77a2b4afc-0
When a user runs a query, WLM assigns the query to the first matching queue, based on the WLM queue assignment rules:

1. If a user is logged in as a superuser and runs a query in the query group labeled superuser, the query is assigned to the superuser queue\.
1. If a user belongs to a listed user group or runs a query within a listed query group, the query is assigned to the first matching queue\.
1. If a query doesn't meet any criteria, the query is assigned to the default queue, which is the last queue defined in the WLM configuration\.

The following diagram illustrates how these rules work\.

![Diagram of the WLM queue assignment rules](http://docs.aws.amazon.com/redshift/latest/dg/images/queue-assignment-rules-2.png)
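As a minimal sketch of assigning a query to a queue at runtime by setting the session's query group \(rule 1 above; the label must match a query group listed in the WLM configuration\):

```
set query_group to 'superuser';
analyze;  -- runs in the superuser queue
reset query_group;
```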
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-queue-assignment-rules.md
d2df2d8be722-0
The following table shows a WLM configuration with the superuser queue and four user\-defined queues\.

![Table of a WLM configuration with the superuser queue and four user-defined queues](http://docs.aws.amazon.com/redshift/latest/dg/images/workflow-queues.png)

The following illustration shows how queries are assigned to the queues in the previous table according to user groups and query groups\. For information about how to assign queries to user groups and query groups at runtime, see [Assigning queries to queues](cm-c-executing-queries.md) later in this section\.

![Illustration of queries assigned to queues by user group and query group](http://docs.aws.amazon.com/redshift/latest/dg/images/queues-assignment.png)

In this example, WLM makes the following assignments:

1. The first set of statements shows three ways to assign users to user groups\. The statements are executed by the user `masteruser`, which is not a member of a user group listed in any WLM queue\. No query group is set, so the statements are routed to the default queue\.
1. The user `masteruser` is a superuser and the query group is set to `'superuser'`, so the query is assigned to the superuser queue\.
1. The user `admin1` is a member of the user group listed in queue 1, so the query is assigned to queue 1\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-queue-assignment-rules.md
d2df2d8be722-1
1. The user `vp1` is not a member of any listed user group\. The query group is set to `'QG_B'`, so the query is assigned to queue 2\.
1. The user `analyst1` is a member of the user group listed in queue 3, but `'QG_B'` matches queue 2, so the query is assigned to queue 2\.
1. The user `ralph` is not a member of any listed user group and the query group was reset, so there is no matching queue\. The query is assigned to the default queue\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-wlm-queue-assignment-rules.md
94183df11546-0
Calculates the percent rank of a given row\. The percent rank is determined using this formula:

`(x - 1) / (the number of rows in the window or partition - 1)`

where *x* is the rank of the current row\. The following dataset illustrates use of this formula:

```
Row#  Value  Rank  Calculation  PERCENT_RANK
1     15     1     (1-1)/(7-1)  0.0000
2     20     2     (2-1)/(7-1)  0.1666
3     20     2     (2-1)/(7-1)  0.1666
4     20     2     (2-1)/(7-1)  0.1666
5     30     5     (5-1)/(7-1)  0.6666
6     30     5     (5-1)/(7-1)  0.6666
7     40     7     (7-1)/(7-1)  1.0000
```

The return value range is 0 to 1, inclusive\. The first row in any set has a PERCENT\_RANK of 0\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENT_RANK.md
cee523a0c940-0
```
PERCENT_RANK ()
OVER (
[ PARTITION BY partition_expression ]
[ ORDER BY order_list ]
)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENT_RANK.md
fbeb6942bd10-0
\( \)
The function takes no arguments, but the empty parentheses are required\.

OVER
A clause that specifies the window partitioning\. The OVER clause cannot contain a window frame specification\.

PARTITION BY *partition\_expression*
Optional\. An expression that sets the range of records for each group in the OVER clause\.

ORDER BY *order\_list*
Optional\. The expression on which to calculate percent rank\. The expression must have either a numeric data type or be implicitly convertible to one\. If ORDER BY is omitted, the return value is 0 for all rows\. If ORDER BY does not produce a unique ordering, the order of the rows is nondeterministic\. For more information, see [Unique ordering of data for window functions](r_Examples_order_by_WF.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENT_RANK.md
b48b5c162685-0
FLOAT8
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENT_RANK.md
c92c4d96cf96-0
The following example calculates the percent rank of the sales quantities for each seller:

```
select sellerid, qty, percent_rank()
over (partition by sellerid order by qty)
from winsales;

sellerid  qty    percent_rank
----------------------------------------
1         10.00  0.0
1         10.64  0.5
1         30.37  1.0
3         10.04  0.0
3         15.15  0.33
3         20.75  0.67
3         30.55  1.0
2         20.09  0.0
2         20.12  1.0
4         10.12  0.0
4         40.23  1.0
```

For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENT_RANK.md
a1553b0bc314-0
Creates a new external schema in the current database\. You can use this external schema to connect to Amazon RDS for PostgreSQL or Amazon Aurora with PostgreSQL compatibility databases\. You can also create an external schema that references a database in an external data catalog, such as AWS Glue or Athena, or a database in an Apache Hive metastore, such as Amazon EMR\.

The owner of this schema is the issuer of the CREATE EXTERNAL SCHEMA command\. To transfer ownership of an external schema, use [ALTER SCHEMA](r_ALTER_SCHEMA.md) to change the owner\. To grant access to the schema to other users or user groups, use the [GRANT](r_GRANT.md) command\.

You can't use the GRANT or REVOKE commands for permissions on an external table\. Instead, grant or revoke the permissions on the external schema\.

**Note**
If you currently have Redshift Spectrum external tables in the Amazon Athena data catalog, you can migrate your Athena data catalog to an AWS Glue Data Catalog\. To use the AWS Glue Data Catalog with Redshift Spectrum, you might need to change your AWS Identity and Access Management \(IAM\) policies\. For more information, see [Upgrading to the AWS Glue Data Catalog](https://docs.aws.amazon.com/athena/latest/ug/glue-athena.html#glue-upgrade) in the *Athena User Guide*\.

To view details for external schemas, query the [SVV\_EXTERNAL\_SCHEMAS](r_SVV_EXTERNAL_SCHEMAS.md) system view\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
56d9c8515cb4-0
The following syntax describes the CREATE EXTERNAL SCHEMA command used to reference data using an external data catalog\. For more information, see [Querying external data using Amazon Redshift Spectrum](c-using-spectrum.md)\.

```
CREATE EXTERNAL SCHEMA [IF NOT EXISTS] local_schema_name
FROM { [ DATA CATALOG ] | HIVE METASTORE | POSTGRES }
DATABASE 'database_name'
[ REGION 'aws-region' ]
[ URI 'hive_metastore_uri' [ PORT port_number ] ]
IAM_ROLE 'iam-role-arn-string'
SECRET_ARN 'ssm-secret-arn'
[ CATALOG_ROLE 'catalog-role-arn-string' ]
[ CREATE EXTERNAL DATABASE IF NOT EXISTS ]
```

The following syntax describes the CREATE EXTERNAL SCHEMA command used to reference data using a federated query\. For more information, see [Querying data with federated queries in Amazon Redshift](federated-overview.md)\.

```
CREATE EXTERNAL SCHEMA [IF NOT EXISTS] local_schema_name
FROM POSTGRES
DATABASE 'database_name' [SCHEMA 'schema_name']
URI 'hostname' [ PORT port_number ]
IAM_ROLE 'iam-role-arn-string'
SECRET_ARN 'ssm-secret-arn'
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
010b08fd6785-0
IF NOT EXISTS
A clause that indicates that if the specified schema already exists, the command should make no changes and return a message that the schema exists, rather than terminating with an error\. This clause is useful when scripting, so the script doesn’t fail if CREATE EXTERNAL SCHEMA tries to create a schema that already exists\.

local\_schema\_name
The name of the new external schema\. For more information about valid names, see [Names and identifiers](r_names.md)\.

FROM \[ DATA CATALOG \] \| HIVE METASTORE \| POSTGRES
A keyword that indicates where the external database is located\.

DATA CATALOG indicates that the external database is defined in the Athena data catalog or the AWS Glue Data Catalog\. If the external database is defined in an external Data Catalog in a different AWS Region, the REGION parameter is required\. DATA CATALOG is the default\.

HIVE METASTORE indicates that the external database is defined in an Apache Hive metastore\. If HIVE METASTORE is specified, URI is required\.

POSTGRES indicates that the external database is defined in RDS PostgreSQL or Aurora PostgreSQL\.

DATABASE '*database\_name*' \[SCHEMA '*schema\_name*'\]
A keyword that indicates the name of the external database in RDS PostgreSQL or Aurora PostgreSQL\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
010b08fd6785-1
The *schema\_name* indicates the schema in RDS PostgreSQL or Aurora PostgreSQL\. The default *schema\_name* is `public`\.

REGION '*aws\-region*'
If the external database is defined in an Athena data catalog or the AWS Glue Data Catalog, the AWS Region in which the database is located\. This parameter is required if the database is defined in an external Data Catalog\.

URI '*hive\_metastore\_uri*' \[ PORT port\_number \]
The hostname URI and port\_number of an RDS PostgreSQL or Aurora PostgreSQL database\. The *hostname* is the head node of the replica set\. The endpoint must be reachable \(routable\) from the Amazon Redshift cluster\. The default port\_number is 5432\.

If the database is in a Hive metastore, specify the URI and optionally the port number for the metastore\. The default port number is 9083\.

A URI doesn't contain a protocol specification \("http://"\)\. An example of a valid URI: `uri '172.10.10.10'`\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
010b08fd6785-2
The RDS PostgreSQL or Aurora PostgreSQL database must be in the same VPC as your Amazon Redshift cluster\. Create a security group linking Amazon Redshift and RDS PostgreSQL or Aurora PostgreSQL\.

IAM\_ROLE '*iam\-role\-arn\-string*'
The Amazon Resource Name \(ARN\) for an IAM role that your cluster uses for authentication and authorization\. At a minimum, the IAM role must have permission to perform a LIST operation on the Amazon S3 bucket to be accessed and a GET operation on the Amazon S3 objects the bucket contains\. If the external database is defined in an Amazon Athena data catalog or the AWS Glue Data Catalog, the IAM role must have permission to access Athena unless CATALOG\_ROLE is specified\. For more information, see [IAM policies for Amazon Redshift Spectrum](c-spectrum-iam-policies.md)\.

The following shows the syntax for the IAM\_ROLE parameter string for a single ARN\.

```
IAM_ROLE 'arn:aws:iam::<aws-account-id>:role/<role-name>'
```

You can chain roles so that your cluster can assume another IAM role, possibly belonging to another account\. You can chain up to 10 roles\. For more information, see [Chaining IAM roles in Amazon Redshift Spectrum](c-spectrum-iam-policies.md#c-spectrum-chaining-roles)\.

To this IAM role, attach an IAM permissions policy similar to the following\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
010b08fd6785-3
```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccessSecret",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds"
            ],
            "Resource": "arn:aws:secretsmanager:us-west-2:123456789012:secret:my-rds-secret-VNenFy"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetRandomPassword",
                "secretsmanager:ListSecrets"
            ],
            "Resource": "*"
        }
    ]
}
```

For the steps to create an IAM role to use with federated query, see [Creating a secret and an IAM role to use federated queries](federated-create-secret-iam-role.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
010b08fd6785-4
Don't include spaces in the list of chained roles\. The following shows the syntax for chaining three roles\.

```
IAM_ROLE 'arn:aws:iam::<aws-account-id>:role/<role-1-name>,arn:aws:iam::<aws-account-id>:role/<role-2-name>,arn:aws:iam::<aws-account-id>:role/<role-3-name>'
```

SECRET\_ARN '*ssm\-secret\-arn*'
The Amazon Resource Name \(ARN\) of an RDS PostgreSQL or Aurora PostgreSQL secret created using AWS Secrets Manager\. For information about how to create and retrieve an ARN for a secret, see [Creating a Basic Secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) and [Retrieving the Secret Value](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_retrieve-secret.html) in the *AWS Secrets Manager User Guide*\.

CATALOG\_ROLE '*catalog\-role\-arn\-string*'
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
010b08fd6785-5
The ARN for an IAM role that your cluster uses for authentication and authorization for the data catalog\. If CATALOG\_ROLE isn't specified, Amazon Redshift uses the specified IAM\_ROLE\. The catalog role must have permission to access the Data Catalog in AWS Glue or Athena\. For more information, see [IAM policies for Amazon Redshift Spectrum](c-spectrum-iam-policies.md)\.

The following shows the syntax for the CATALOG\_ROLE parameter string for a single ARN\.

```
CATALOG_ROLE 'arn:aws:iam::<aws-account-id>:role/<catalog-role>'
```

You can chain roles so that your cluster can assume another IAM role, possibly belonging to another account\. You can chain up to 10 roles\. For more information, see [Chaining IAM roles in Amazon Redshift Spectrum](c-spectrum-iam-policies.md#c-spectrum-chaining-roles)\.

The list of chained roles must not include spaces\. The following shows the syntax for chaining three roles\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
010b08fd6785-6
```
CATALOG_ROLE 'arn:aws:iam::<aws-account-id>:role/<catalog-role-1-name>,arn:aws:iam::<aws-account-id>:role/<catalog-role-2-name>,arn:aws:iam::<aws-account-id>:role/<catalog-role-3-name>'
```

CREATE EXTERNAL DATABASE IF NOT EXISTS
A clause that creates an external database with the name specified by the DATABASE argument, if the specified external database doesn't exist\. If the specified external database exists, the command makes no changes\. In this case, the command returns a message that the external database exists, rather than terminating with an error\. You can't use CREATE EXTERNAL DATABASE IF NOT EXISTS with HIVE METASTORE\.

To use CREATE EXTERNAL DATABASE IF NOT EXISTS with a Data Catalog enabled for AWS Lake Formation, you need `CREATE_DATABASE` permission on the Data Catalog\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
f87112d74378-0
For limits when using the Athena data catalog, see [Athena Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#amazon-athena-limits) in the *AWS General Reference*\.

For limits when using the AWS Glue Data Catalog, see [AWS Glue Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_glue) in the *AWS General Reference*\. These limits don’t apply to a Hive metastore\.

To unregister the schema, use the [DROP SCHEMA](r_DROP_SCHEMA.md) command\.

To view details for external schemas, query the following system views:
+ [SVV\_EXTERNAL\_SCHEMAS](r_SVV_EXTERNAL_SCHEMAS.md)
+ [SVV\_EXTERNAL\_TABLES](r_SVV_EXTERNAL_TABLES.md)
+ [SVV\_EXTERNAL\_COLUMNS](r_SVV_EXTERNAL_COLUMNS.md)
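As a minimal sketch of checking for an external schema and then unregistering it \(the schema name is illustrative\):

```
select schemaname, databasename from svv_external_schemas;
drop schema spectrum_schema;
```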
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
83c7e75566c7-0
The following example creates an external schema using a database in an Athena data catalog named `sampledb` in the US West \(Oregon\) Region\.

```
create external schema spectrum_schema
from data catalog
database 'sampledb'
region 'us-west-2'
iam_role 'arn:aws:iam::123456789012:role/MySpectrumRole';
```

The following example creates an external schema and creates a new external database named `spectrum_db`\.

```
create external schema spectrum_schema
from data catalog
database 'spectrum_db'
iam_role 'arn:aws:iam::123456789012:role/MySpectrumRole'
create external database if not exists;
```

The following example creates an external schema using a Hive metastore database named `hive_db`\.

```
create external schema hive_schema
from hive metastore
database 'hive_db'
uri '172.10.10.10' port 99
iam_role 'arn:aws:iam::123456789012:role/MySpectrumRole';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
83c7e75566c7-1
The following example chains roles to use the role `myS3Role` for accessing Amazon S3 and uses `myAthenaRole` for data catalog access\. For more information, see [Chaining IAM roles in Amazon Redshift Spectrum](c-spectrum-iam-policies.md#c-spectrum-chaining-roles)\.

```
create external schema spectrum_schema
from data catalog
database 'spectrum_db'
iam_role 'arn:aws:iam::123456789012:role/myRedshiftRole,arn:aws:iam::123456789012:role/myS3Role'
catalog_role 'arn:aws:iam::123456789012:role/myAthenaRole'
create external database if not exists;
```

The following example creates an external schema that references an Aurora PostgreSQL database\.

```
CREATE EXTERNAL SCHEMA [IF NOT EXISTS] myRedshiftSchema
FROM POSTGRES
DATABASE 'my_aurora_db' SCHEMA 'my_aurora_schema'
URI 'endpoint to aurora hostname' PORT 5432
IAM_ROLE 'arn:aws:iam::123456789012:role/MyAuroraRole'
SECRET_ARN 'arn:aws:secretsmanager:us-east-2:123456789012:secret:development/MyTestDatabase-AbCdEf'
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_EXTERNAL_SCHEMA.md
5e2ae682fe14-0
Displays a log of data parse errors that occurred while using a COPY command to load tables\. To conserve disk space, a maximum of 20 errors per node slice are logged for each load operation\.

A parse error occurs when Amazon Redshift cannot parse a field in a data row while loading it into a table\. For example, if a table column is expecting an integer data type and the data file contains a string of letters in that field, it causes a parse error\.

Query STL\_LOADERROR\_DETAIL for additional details, such as the exact data row and column where a parse error occurred, after you query [STL\_LOAD\_ERRORS](r_STL_LOAD_ERRORS.md) to find out general information about the error\. The STL\_LOADERROR\_DETAIL view contains all data columns including and prior to the column where the parse error occurred\. Use the VALUE field to see the data value that was actually parsed in this column, including the columns that parsed correctly up to the error\.

This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOADERROR_DETAIL.md
4e2e0507dc5d-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_LOADERROR_DETAIL.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOADERROR_DETAIL.md
f324fb2db05e-0
The following query joins STL\_LOAD\_ERRORS to STL\_LOADERROR\_DETAIL to view the details of a parse error that occurred while loading the EVENT table, which has a table ID of 100133:

```
select d.query, d.line_number, d.value,
le.raw_line, le.err_reason
from stl_loaderror_detail d, stl_load_errors le
where d.query = le.query
and tbl = 100133;
```

The following sample output shows the columns that loaded successfully, including the column with the error\. In this example, two columns successfully loaded before the parse error occurred in the third column, where a character string was incorrectly parsed for a field expecting an integer\. Because the field expected an integer, it parsed the string "aaa", which is uninitialized data, as a null and generated a parse error\. The output shows the raw value, parsed value, and error reason:

```
query | line_number | value | raw_line | err_reason
------+-------------+-------+----------+----------------
4     | 3           | 1201  | 1201     | Invalid digit
4     | 3           | 126   | 126      | Invalid digit
4     | 3           |       | aaa      | Invalid digit
(3 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOADERROR_DETAIL.md
f324fb2db05e-1
When a query joins STL\_LOAD\_ERRORS and STL\_LOADERROR\_DETAIL, it displays an error reason for each column in the data row, which simply means that an error occurred in that row\. The last row in the results is the actual column where the parse error occurred\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LOADERROR_DETAIL.md
f692fe99183d-0
The DLOG1 function returns the natural logarithm of the input parameter\. It is a synonym of the [LN function](r_LN.md)\.
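A minimal sketch \(the input is approximately *e*, so the result is approximately 1\):

```
select dlog1(2.718281828);
```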
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DLOG1.md
a561008477b4-0
Fixed\-width data files have uniform lengths for each column of data\. Each field in a fixed\-width data file has exactly the same length and position\. For character data \(CHAR and VARCHAR\) in a fixed\-width data file, you must include leading or trailing spaces as placeholders in order to keep the width uniform\. For integers, you must use leading zeros as placeholders\. A fixed\-width data file has no delimiter to separate columns\.

To load a fixed\-width data file into an existing table, use the FIXEDWIDTH parameter in the COPY command\. Your table specifications must match the value of *fixedwidth\_spec* in order for the data to load correctly\.

To load fixed\-width data from a file to a table, issue the following command:

```
copy table_name from 's3://mybucket/prefix'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
fixedwidth 'fixedwidth_spec';
```

The *fixedwidth\_spec* parameter is a string that contains an identifier for each column and the width of each column, separated by a colon\. The **column:width** pairs are delimited by commas\. The identifier can be anything that you choose: numbers, letters, or a combination of the two\. The identifier has no relation to the table itself, so the specification must contain the columns in the same order as the table\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_loading_fixed_width_data.md
a561008477b4-1
The following two examples show the same specification, with the first using numeric identifiers and the second using string identifiers:

```
'0:3,1:25,2:12,3:2,4:6'
```

```
'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6'
```

The following example shows fixed\-width sample data that could be loaded into the VENUE table using the above specifications:

```
1  Toyota Park              Bridgeview  IL0
2  Columbus Crew Stadium    Columbus    OH0
3  RFK Stadium              Washington  DC0
4  CommunityAmerica BallparkKansas City KS0
5  Gillette Stadium         Foxborough  MA68756
```

The following COPY command loads this data set into the VENUE table:

```
copy venue
from 's3://mybucket/data/venue_fw.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_loading_fixed_width_data.md
6d637e264f39-0
POSIX regular expressions provide a more powerful means for pattern matching than the [LIKE](r_patternmatching_condition_like.md) and [SIMILAR TO](pattern-matching-conditions-similar-to.md) operators\. POSIX regular expression patterns can match any portion of a string, unlike the SIMILAR TO operator, which returns true only if its pattern matches the entire string\. **Note** Regular expression matching using POSIX operators is computationally expensive\. We recommend using LIKE whenever possible, especially when processing a very large number of rows\. For example, the following queries are functionally identical, but the query that uses LIKE executes several times faster than the query that uses a regular expression: ``` select count(*) from event where eventname ~ '.*(Ring|Die).*'; select count(*) from event where eventname LIKE '%Ring%' OR eventname LIKE '%Die%'; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions-posix.md
30abbc0b0358-0
``` expression [ ! ] ~ pattern ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions-posix.md
e99540ef2f4f-0
*expression*
A valid UTF\-8 character expression, such as a column name\.

\!
Negation operator\.

\~
Perform a case\-sensitive match for any substring of *expression*\.

*pattern*
A string literal that represents a SQL standard regular expression pattern\. If *pattern* doesn't contain wildcard characters, the pattern represents only the string itself\. To search for strings that include metacharacters, such as `. * | ?`, escape the character using two backslashes \(`\\`\)\. Unlike `SIMILAR TO` and `LIKE`, POSIX regular expression syntax doesn't support a user\-defined escape character\.

The character expressions can be CHAR or VARCHAR data types\. If their data types differ, Amazon Redshift converts *pattern* to the data type of *expression*\.

POSIX pattern matching supports the following metacharacters:

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/pattern-matching-conditions-posix.html)
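For example, the following queries \(a sketch using the TICKIT sample EVENT table, not taken from the reference itself\) show the operator and its negated form:

```
-- rows whose event name contains the string 'Ring'
select count(*) from event where eventname ~ 'Ring';

-- rows whose event name does not contain 'Ring'
select count(*) from event where eventname !~ 'Ring';
```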
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions-posix.md
e99540ef2f4f-1
Amazon Redshift supports the following POSIX character classes\.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/pattern-matching-conditions-posix.html)

Amazon Redshift supports the following Perl\-influenced operators in regular expressions\. Escape the operator using two backslashes \(`\\`\)\.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/pattern-matching-conditions-posix.html)
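As a further sketch \(again assuming the TICKIT sample EVENT table\), the first query below uses a POSIX character class and the second uses the Perl\-influenced `\d` shorthand, escaped with two backslashes as described above:

```
-- event names that contain at least one digit, using a POSIX character class
select count(*) from event where eventname ~ '[[:digit:]]';

-- the same match using the Perl-influenced \d operator, escaped as \\d
select count(*) from event where eventname ~ '\\d';
```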
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions-posix.md
faeb1b615a9b-0
The following table shows examples of pattern matching using POSIX operators:

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/pattern-matching-conditions-posix.html)

The following example finds all cities whose names contain `E` or `H`:

```
select distinct city from users
where city ~ '.*E.*|.*H.*' order by city;

city
-----------------------
Agoura Hills
Auburn Hills
Benton Harbor
Beverly Hills
Chicago Heights
Chino Hills
Citrus Heights
East Hartford
```

The following example uses the escape string \(`\\`\) to search for strings that include a period\.

```
select venuename from venue
where venuename ~ '.*\\..*';

venuename
-----------------------------
Bernard B. Jacobs Theatre
E.J. Nutter Center
Hubert H. Humphrey Metrodome
Jobing.com Arena
St. James Theatre
St. Pete Times Forum
Superpages.com Center
U.S. Cellular Field
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions-posix.md
cbe6ded77d68-0
**Topics** + [Amazon Redshift SQL](c_redshift-sql.md) + [Using SQL](c_SQL_reference.md) + [SQL commands](c_SQL_commands.md) + [SQL functions reference](c_SQL_functions.md) + [Reserved words](r_pg_keywords.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm_chap_SQLCommandRef.md
0b5a39658304-0
CONVERT\_TIMEZONE converts a time stamp from one time zone to another\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
4239c952f7f6-0
``` CONVERT_TIMEZONE ( ['source_timezone',] 'target_timezone', 'timestamp') ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
febecb4d6696-0
*source\_timezone* \(Optional\) The time zone of the current time stamp\. The default is UTC\. For more information, see [Time zone usage notes](#CONVERT_TIMEZONE-usage-notes)\. *target\_timezone* The time zone for the new time stamp\. For more information, see [Time zone usage notes](#CONVERT_TIMEZONE-usage-notes)\. *timestamp* A timestamp column or an expression that implicitly converts to a time stamp\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
faf0fb957af5-0
TIMESTAMP
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
5836f15063c4-0
Either *source\_timezone* or *target\_timezone* can be specified as a time zone name \(such as 'Africa/Kampala' or 'Singapore'\) or as a time zone abbreviation \(such as 'UTC' or 'PDT'\)\.

**Note**
The results of using a time zone name or a time zone abbreviation can be different due to local seasonal time, such as Daylight Saving Time\.

To view a list of supported time zone names, execute the following command\.

```
select pg_timezone_names();
```

To view a list of supported time zone abbreviations, execute the following command\.

```
select pg_timezone_abbrevs();
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
3aa2cf260e33-0
If you specify a time zone using a time zone name, CONVERT\_TIMEZONE automatically adjusts for Daylight Saving Time \(DST\), or any other local seasonal protocol, such as Summer Time, Standard Time, or Winter Time, that is in force for that time zone during the date and time specified by '*timestamp*'\. For example, 'Europe/London' represents UTC in the winter and UTC\+1 in the summer\.
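For example, the following query \(illustrative only; the output shown is what the rule above implies, not output copied from the documentation\) converts the same UTC clock time on a winter date and a summer date using the time zone name Europe/London:

```
select convert_timezone('Europe/London', '2014-01-17 12:00:00') as winter,
       convert_timezone('Europe/London', '2014-06-17 12:00:00') as summer;

       winter        |       summer
---------------------+---------------------
 2014-01-17 12:00:00 | 2014-06-17 13:00:00
```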
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
d8a64921f2cd-0
Time zone abbreviations represent a fixed offset from UTC\. If you specify a time zone using a time zone abbreviation, CONVERT\_TIMEZONE uses the fixed offset from UTC and does not adjust for any local seasonal protocol\. For example, ADT \(Atlantic Daylight Time\) always represents UTC\-03, even in winter\.
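For example, the following sketch \(output inferred from the fixed\-offset rule, not taken from the documentation\) applies ADT to a winter date; the result is still UTC\-03:

```
select convert_timezone('ADT', '2014-01-17 12:00:00');

 convert_timezone
---------------------
 2014-01-17 09:00:00
```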
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
a43eed6afe76-0
A POSIX\-style time zone specification is in the form *STDoffset* or *STDoffsetDST*, where *STD* is a time zone abbreviation, *offset* is the numeric offset in hours west from UTC, and *DST* is an optional daylight\-saving zone abbreviation\. Daylight saving time is assumed to be one hour ahead of the given offset\.

POSIX\-style time zone formats use positive offsets west of Greenwich, in contrast to the ISO\-8601 convention, which uses positive offsets east of Greenwich\.

The following are examples of POSIX\-style time zones:
+ PST8
+ PST8PDT
+ EST5
+ EST5EDT

**Note**
Amazon Redshift doesn't validate POSIX\-style time zone specifications, so it is possible to set the time zone to an invalid value\. For example, the following command doesn't return an error, even though it sets the time zone to an invalid value\.

```
set timezone to 'xxx36';
```
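Because the optional DST abbreviation is part of the specification, a POSIX\-style time zone such as PST8PDT does adjust for daylight saving time\. The following sketch \(output inferred, not verified against a cluster\) converts a summer timestamp, yielding UTC\-7 rather than UTC\-8:

```
select convert_timezone('PST8PDT', '2014-06-17 12:00:00');

 convert_timezone
---------------------
 2014-06-17 05:00:00
```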
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
9504313b8636-0
The following example converts the time stamp value in the LISTTIME column from the default UTC time zone to PST\. Even though the time stamp is within the daylight time period, it is converted to standard time because the target time zone is specified as an abbreviation \(PST\)\.

```
select listtime, convert_timezone('PST', listtime) from listing
where listid = 16;

      listtime       |  convert_timezone
---------------------+---------------------
 2008-08-24 09:36:12 | 2008-08-24 01:36:12
```

The following example converts a timestamp LISTTIME column from the default UTC time zone to US/Pacific time zone\. The target time zone uses a time zone name, and the time stamp is within the daylight time period, so the function returns the daylight time\.

```
select listtime, convert_timezone('US/Pacific', listtime) from listing
where listid = 16;

      listtime       |  convert_timezone
---------------------+---------------------
 2008-08-24 09:36:12 | 2008-08-24 02:36:12
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
9504313b8636-1
The following example converts a time stamp string from EST to PST:

```
select convert_timezone('EST', 'PST', '20080305 12:25:29');

 convert_timezone
---------------------
 2008-03-05 09:25:29
```

The following example converts a time stamp to US Eastern Standard Time because the target time zone uses a time zone name \(America/New\_York\) and the time stamp is within the standard time period\.

```
select convert_timezone('America/New_York', '2013-02-01 08:00:00');

 convert_timezone
---------------------
 2013-02-01 03:00:00
(1 row)
```

The following example converts the time stamp to US Eastern Daylight Time because the target time zone uses a time zone name \(America/New\_York\) and the time stamp is within the daylight time period\.

```
select convert_timezone('America/New_York', '2013-06-01 08:00:00');

 convert_timezone
---------------------
 2013-06-01 04:00:00
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
9504313b8636-2
The following example demonstrates the use of offsets\.

```
SELECT CONVERT_TIMEZONE('GMT','NEWZONE +2','2014-05-17 12:00:00') as newzone_plus_2,
CONVERT_TIMEZONE('GMT','NEWZONE-2:15','2014-05-17 12:00:00') as newzone_minus_2_15,
CONVERT_TIMEZONE('GMT','America/Los_Angeles+2','2014-05-17 12:00:00') as la_plus_2,
CONVERT_TIMEZONE('GMT','GMT+2','2014-05-17 12:00:00') as gmt_plus_2;

   newzone_plus_2    | newzone_minus_2_15  |      la_plus_2      |     gmt_plus_2
---------------------+---------------------+---------------------+---------------------
 2014-05-17 10:00:00 | 2014-05-17 14:15:00 | 2014-05-17 10:00:00 | 2014-05-17 10:00:00
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/CONVERT_TIMEZONE.md
324f3c733a89-0
Amazon Redshift workload management \(WLM\) enables users to flexibly manage priorities within workloads so that short, fast\-running queries won't get stuck in queues behind long\-running queries\.

Amazon Redshift WLM creates query queues at runtime according to *service classes*, which define the configuration parameters for various types of queues, including internal system queues and user\-accessible queues\. From a user perspective, a user\-accessible service class and a queue are functionally equivalent\. For consistency, this documentation uses the term *queue* to mean a user\-accessible service class as well as a runtime queue\.

When you run a query, WLM assigns the query to a queue according to the user's user group or by matching a query group that is listed in the queue configuration with a query group label that the user sets at runtime\.

**Note**
Currently, the default for clusters using the default parameter group is to use automatic WLM\. Automatic WLM manages query concurrency and memory allocation\. For more information, see [Implementing automatic WLM](automatic-wlm.md)\.

With manual WLM, Amazon Redshift configures one queue with a *concurrency level* of five, which enables up to five queries to run concurrently, plus one predefined Superuser queue, with a concurrency level of one\. You can define up to eight queues\. Each queue can be configured with a maximum concurrency level of 50\. The maximum total concurrency level for all user\-defined queues \(not including the Superuser queue\) is 50\.
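For example, to route a query to a queue by query group, set a query group label before running the query\. The following is a minimal sketch; 'priority' is a hypothetical label that must match a query group listed in your WLM queue configuration:

```
set query_group to 'priority';   -- label must match a query group in the queue configuration

select count(*) from sales;      -- runs in the matching queue

reset query_group;               -- subsequent queries use default routing
```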
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_workload_mngmt_classification.md
324f3c733a89-1
The easiest way to modify the WLM configuration is by using the Amazon Redshift Management Console\. You can also use the Amazon Redshift command line interface \(CLI\) or the Amazon Redshift API\. For more information about implementing and using workload management, see [Implementing workload management](cm-c-implementing-workload-management.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_workload_mngmt_classification.md
48f651cc4051-0
**Topics** + [Storage and ranges](#r_Datetime_types-storage-and-ranges) + [DATE](#r_Datetime_types-date) + [TIMESTAMP](#r_Datetime_types-timestamp) + [TIMESTAMPTZ](#r_Datetime_types-timestamptz) + [Examples with datetime types](r_Examples_with_datetime_types.md) + [Date and timestamp literals](r_Date_and_time_literals.md) + [Interval literals](r_interval_literals.md) Datetime data types include DATE, TIMESTAMP, and TIMESTAMPTZ\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Datetime_types.md
77f94b85e278-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_Datetime_types.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Datetime_types.md
c6ab71b5e61b-0
Use the DATE data type to store simple calendar dates without time stamps\.
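For example \(a trivial sketch\), a literal in a supported format can be cast directly to DATE:

```
select '2008-06-01'::date as caldate;

  caldate
------------
 2008-06-01
```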
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Datetime_types.md
c3fd364bf2f9-0
TIMESTAMP is an alias of TIMESTAMP WITHOUT TIME ZONE\. Use the TIMESTAMP data type to store complete time stamp values that include the date and the time of day\. TIMESTAMP columns store values with up to a maximum of 6 digits of precision for fractional seconds\. If you insert a date into a TIMESTAMP column, or a date with a partial time stamp value, the value is implicitly converted into a full time stamp value with default values \(00\) for missing hours, minutes, and seconds\. Time zone values in input strings are ignored\. By default, TIMESTAMP values are Coordinated Universal Time \(UTC\) in both user tables and Amazon Redshift system tables\.
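The following sketch illustrates the implicit conversion described above: casting a date\-only literal to TIMESTAMP fills in zeros for the missing hours, minutes, and seconds \(output inferred\):

```
select '2008-06-01'::timestamp as ts;

         ts
---------------------
 2008-06-01 00:00:00
```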
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Datetime_types.md
0e926cb59ab5-0
TIMESTAMPTZ is an alias of TIMESTAMP WITH TIME ZONE\.

Use the TIMESTAMPTZ data type to input complete time stamp values that include the date, the time of day, and a time zone\. When an input value includes a time zone, Amazon Redshift uses the time zone to convert the value to Coordinated Universal Time \(UTC\) and stores the UTC value\.

To view a list of supported time zone names, execute the following command\.

```
select pg_timezone_names();
```

To view a list of supported time zone abbreviations, execute the following command\.

```
select pg_timezone_abbrevs();
```

You can also find current information about time zones in the [IANA Time Zone Database](https://www.iana.org/time-zones)\.

The following table has examples of time zone formats\.

| Format | Example |
| --- | --- |
| day mon hh:mi:ss yyyy tz | 17 Dec 07:37:16 1997 PST |
| mm/dd/yyyy hh:mi:ss\.ss tz | 12/17/1997 07:37:16\.00 PST |
| mm/dd/yyyy hh:mi:ss\.ss tz | 12/17/1997 07:37:16\.00 US/Pacific |
| yyyy\-mm\-dd hh:mi:ss\+/\-tz | 1997\-12\-17 07:37:16\-08 |
| dd\.mm\.yyyy hh:mi:ss tz | 17\.12\.1997 07:37:16\.00 PST |

TIMESTAMPTZ columns store values with up to a maximum of 6 digits of precision for fractional seconds\. If you insert a date into a TIMESTAMPTZ column, or a date with a partial time stamp, the value is implicitly converted into a full time stamp value with default values \(00\) for missing hours, minutes, and seconds\. TIMESTAMPTZ values are UTC in user tables\.

https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Datetime_types.md
5005ed5fb73e-0
ST\_Y returns the second coordinate of an input point\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Y-function.md
9c0c3d5ab503-0
``` ST_Y(point) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Y-function.md
daf354b318ef-0
*point* A `POINT` value of data type `GEOMETRY`\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Y-function.md
5a02607a94aa-0
The `DOUBLE PRECISION` value of the second coordinate\. If *point* is null, then null is returned\. If *point* is not a `POINT`, then an error is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Y-function.md
a78b2408678e-0
The following SQL returns the second coordinate of a point\. ``` SELECT ST_Y(ST_Point(1,2)); ``` ``` st_y ----------- 2.0 ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Y-function.md
f1cdc84956f0-0
Converts an angle in radians to its equivalent in degrees\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEGREES.md
0a8c83b5a310-0
``` DEGREES(number) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEGREES.md
79c559c3f6c3-0
*number* The input parameter is a double precision number\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEGREES.md
191afbbe32f3-0
The DEGREES function returns a double precision number\.
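For example \(a sketch; the exact formatting of the result depends on your client\), converting π radians returns 180 degrees:

```
select degrees(pi());

 degrees
---------
     180
```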
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEGREES.md