9c4e626a44e9-0
The following query identifies queries that have had alert events logged for nested loops. For information on how to fix the nested loop condition, see [Nested loop](query-performance-improvement-opportunities.md#nested-loop).

```
select query, trim(querytxt) as SQL, starttime
from stl_query
where query in (
select distinct query
from stl_alert_event_log
where event like 'Nested Loop Join in the query plan%')
order by starttime desc;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
239e645456a0-0
The following query shows how long recent queries waited for an open slot in a query queue before being executed. If you see a trend of high wait times, you might want to modify your query queue configuration for better throughput. For more information, see [Implementing manual WLM](cm-c-defining-query-queues.md).

```
select trim(database) as DB, w.query,
substring(q.querytxt, 1, 100) as querytxt, w.queue_start_time,
w.service_class as class, w.slot_count as slots,
w.total_queue_time/1000000 as queue_seconds,
w.total_exec_time/1000000 exec_seconds,
(w.total_queue_time+w.total_exec_time)/1000000 as total_seconds
from stl_wlm_query w
left join stl_query q on q.query = w.query and q.userid = w.userid
where w.queue_start_time >= dateadd(day, -7, current_date)
and w.total_queue_time > 0
and w.userid > 1
and q.starttime >= dateadd(day, -7, current_date)
order by w.total_queue_time desc, w.queue_start_time desc
limit 35;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
3b78de0755a9-0
The following query identifies tables that have had alert events logged for them, and also identifies what type of alerts are most frequently raised.

If the `minutes` value for a row with an identified table is high, check that table to see if it needs routine maintenance such as having [ANALYZE](r_ANALYZE.md) or [VACUUM](r_VACUUM_command.md) run against it. If the `count` value is high for a row but the `table` value is null, run a query against STL_ALERT_EVENT_LOG for the associated `event` value to investigate why that alert is getting raised so often.

```
select trim(s.perm_table_name) as table,
(sum(abs(datediff(seconds, s.starttime, s.endtime)))/60)::numeric(24,0) as minutes,
trim(split_part(l.event,':',1)) as event, trim(l.solution) as solution,
max(l.query) as sample_query, count(*)
from stl_alert_event_log as l
left join stl_scan as s on s.query = l.query and s.slice = l.slice
and s.segment = l.segment and s.step = l.step
where l.event_time >= dateadd(day, -7, current_date)
group by 1,3,4
order by 2 desc,6 desc;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
619a241f1239-0
The following query provides a count of the queries that you are running against tables that are missing statistics. If this query returns any rows, look at the `plannode` value to determine the affected table, and then run [ANALYZE](r_ANALYZE.md) on it.

```
select substring(trim(plannode),1,100) as plannode, count(*)
from stl_explain
where plannode like '%missing statistics%'
group by plannode
order by 2 desc;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/diagnostic-queries-for-query-tuning.md
91e757508862-0
Displays the current value of a server configuration parameter. This value may be specific to the current session if a SET command is in effect. For a list of configuration parameters, see [Configuration reference](cm_chap_ConfigurationRef.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SHOW.md
da74f38268c1-0
```
SHOW { parameter_name | ALL }
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SHOW.md
96e2caef228f-0
*parameter_name*
Displays the current value of the specified parameter.

ALL
Displays the current values of all of the parameters.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SHOW.md
0e816fa770a7-0
The following example displays the value for the query_group parameter:

```
show query_group;

query_group
unset

(1 row)
```

The following example displays a list of all parameters and their values:

```
show all;

        name        |   setting
--------------------+--------------
 datestyle          | ISO, MDY
 extra_float_digits | 0
 query_group        | unset
 search_path        | $user,public
 statement_timeout  | 0
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SHOW.md
a1681b3da582-0
In this step, you download a set of sample data files to your computer. In the next step, you upload the files to an Amazon S3 bucket.

**To download the data files**

1. Download the zipped file: [LoadingDataSampleFiles.zip](samples/LoadingDataSampleFiles.zip).

1. Extract the files to a folder on your computer.

1. Verify that your folder contains the following files.

```
customer-fw-manifest
customer-fw.tbl-000
customer-fw.tbl-000.bak
customer-fw.tbl-001
customer-fw.tbl-002
customer-fw.tbl-003
customer-fw.tbl-004
customer-fw.tbl-005
customer-fw.tbl-006
customer-fw.tbl-007
customer-fw.tbl.log
dwdate-tab.tbl-000
dwdate-tab.tbl-001
dwdate-tab.tbl-002
dwdate-tab.tbl-003
dwdate-tab.tbl-004
dwdate-tab.tbl-005
dwdate-tab.tbl-006
dwdate-tab.tbl-007
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-download-files.md
a1681b3da582-1
part-csv.tbl-000
part-csv.tbl-001
part-csv.tbl-002
part-csv.tbl-003
part-csv.tbl-004
part-csv.tbl-005
part-csv.tbl-006
part-csv.tbl-007
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-download-files.md
fece4e5db116-0
[Step 3: Upload the files to an Amazon S3 bucket](tutorial-loading-data-upload-files.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-download-files.md
6164db87c8fe-0
You can specify compression encodings when you create a table, but in most cases, automatic compression produces the best results. The COPY command analyzes your data and applies compression encodings to an empty table automatically as part of the load operation.

Automatic compression balances overall performance when choosing compression encodings. Range-restricted scans might perform poorly if sort key columns are compressed much more highly than other columns in the same query. As a result, automatic compression chooses a less efficient compression encoding to keep the sort key columns balanced with other columns.

Suppose that your table's sort key is a date or timestamp and the table uses many large varchar columns. In this case, you might get better performance by not compressing the sort key column at all. Run the [ANALYZE COMPRESSION](r_ANALYZE_COMPRESSION.md) command on the table, then use the encodings to create a new table, but leave out the compression encoding for the sort key.

There is a performance cost for automatic compression encoding, but only if the table is empty and does not already have compression encoding. For short-lived tables and tables that you create frequently, such as staging tables, load the table once with automatic compression or run the ANALYZE COMPRESSION command. Then use those encodings to create new tables. You can add the encodings to the CREATE TABLE statement, or use CREATE TABLE LIKE to create a new table with the same encoding.

For more information, see [Tutorial: Tuning table design](tutorial-tuning-tables.md) and [Loading tables with automatic compression](c_Loading_tables_auto_compress.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-use-auto-compression.md
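The workflow described above can be sketched as follows. The table and column names here are hypothetical, and the encodings shown are only illustrative of what ANALYZE COMPRESSION might suggest:

```
-- Hypothetical staging table; run ANALYZE COMPRESSION after an initial load
analyze compression sales_staging;

-- Create a new table using the suggested encodings, but leave the
-- sort key column (saletime) uncompressed with ENCODE RAW
create table sales_staging_new (
    salesid   integer      encode az64,
    qtysold   smallint     encode az64,
    comments  varchar(256) encode lzo,
    saletime  timestamp    encode raw)
sortkey (saletime);
```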
9c6e2307c647-0
The DATE_TRUNC function truncates a time stamp expression or literal based on the date part that you specify, such as hour, week, or month. DATE_TRUNC returns the first day of the specified year, the first day of the specified month, or the Monday of the specified week.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_TRUNC.md
e2f271b56b6c-0
```
DATE_TRUNC('datepart', timestamp)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_TRUNC.md
94c77d00277f-0
*datepart*
The date part to which to truncate the time stamp value. See [Dateparts for Date or Time Stamp functions](r_Dateparts_for_datetime_functions.md) for valid formats.

*timestamp*
A timestamp column or an expression that implicitly converts to a time stamp.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_TRUNC.md
044e9022c70b-0
TIMESTAMP
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_TRUNC.md
a40038b29e03-0
In the following example, the DATE_TRUNC function uses the 'week' datepart to return the date for the Monday of each week.

```
select date_trunc('week', saletime), sum(pricepaid)
from sales
where saletime like '2008-09%'
group by date_trunc('week', saletime)
order by 1;

date_trunc |    sum
-----------+------------
2008-09-01 | 2474899.00
2008-09-08 | 2412354.00
2008-09-15 | 2364707.00
2008-09-22 | 2359351.00
2008-09-29 |  705249.00
(5 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_TRUNC.md
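For comparison, the same query can be truncated to the 'month' datepart, which returns the first day of each month. This is a sketch against the same TICKIT sales table; output is omitted:

```
select date_trunc('month', saletime), sum(pricepaid)
from sales
where saletime like '2008-09%'
group by date_trunc('month', saletime)
order by 1;
```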
a7f94f8e3d23-0
ASIN is a trigonometric function that returns the arc sine of a number. The return value is in radians and is between -PI/2 and PI/2.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ASIN.md
b7b16c5917b2-0
```
ASIN(number)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ASIN.md
3aff5971d0f6-0
*number*
The input parameter is a double precision number.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ASIN.md
4547de23a9be-0
The ASIN function returns a double precision number.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ASIN.md
5619c6b9b9e5-0
The following example returns the arc sine of 1 and multiplies it by 2:

```
select asin(1)*2 as pi;

pi
------------------
3.14159265358979
(1 row)
```

The following example converts the arc sine of .5 to the equivalent number of degrees:

```
select (asin(.5) * 180/(select pi())) as degrees;

degrees
---------
30
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ASIN.md
8531513b2985-0
**Topics**
+ [APPROXIMATE PERCENTILE_DISC function](r_APPROXIMATE_PERCENTILE_DISC.md)
+ [AVG function](r_AVG.md)
+ [COUNT function](r_COUNT.md)
+ [LISTAGG function](r_LISTAGG.md)
+ [MAX function](r_MAX.md)
+ [MEDIAN function](r_MEDIAN.md)
+ [MIN function](r_MIN.md)
+ [PERCENTILE_CONT function](r_PERCENTILE_CONT.md)
+ [STDDEV_SAMP and STDDEV_POP functions](r_STDDEV_functions.md)
+ [SUM function](r_SUM.md)
+ [VAR_SAMP and VAR_POP functions](r_VARIANCE_functions.md)

Aggregate functions compute a single result value from a set of input values. SELECT statements using aggregate functions can include two optional clauses: GROUP BY and HAVING. The syntax for these clauses is as follows (using the COUNT function as an example):

```
SELECT count (*) expression
FROM table_reference
WHERE condition
[GROUP BY expression ]
[ HAVING condition]
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Aggregate_Functions.md
8531513b2985-1
The GROUP BY clause aggregates and groups results by the unique values in a specified column or columns. The HAVING clause restricts the results returned to rows where a particular aggregate condition is true, such as count (*) > 1. The HAVING clause is used in the same way as WHERE to restrict rows based on the value of a column. For an example of these additional clauses, see the [COUNT](r_COUNT.md) function.

Aggregate functions don't accept nested aggregate functions or window functions as arguments.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Aggregate_Functions.md
153ffe5fe50e-0
The ROUND function rounds numbers to the nearest integer or decimal.

The ROUND function can optionally include a second argument: an integer to indicate the number of decimal places for rounding, in either direction. If the second argument is not provided, the function rounds to the nearest whole number; if the second argument *n* is specified, the function rounds to the nearest number with *n* decimal places of precision.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ROUND.md
047938d688a4-0
```
ROUND (number [ , integer ] )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ROUND.md
de9fcc256af0-0
*number*
INTEGER, DECIMAL, and FLOAT data types are supported. If the first argument is an integer, the parser converts the integer into a decimal data type prior to processing. If the first argument is a decimal number, the parser processes the function without conversion, resulting in better performance.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ROUND.md
0e3e8769ea18-0
ROUND returns the same numeric data type as the input argument(s).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ROUND.md
b0072c61e3a3-0
Round the commission paid for a given transaction to the nearest whole number.

```
select commission, round(commission)
from sales where salesid=10000;

commission | round
-----------+-------
     28.05 |    28
(1 row)
```

Round the commission paid for a given transaction to the first decimal place.

```
select commission, round(commission, 1)
from sales where salesid=10000;

commission | round
-----------+-------
     28.05 |  28.1
(1 row)
```

For the same query, extend the precision in the opposite direction.

```
select commission, round(commission, -1)
from sales where salesid=10000;

commission | round
-----------+-------
     28.05 |    30
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ROUND.md
ec179a4d4ce0-0
To load or unload data using another AWS resource, such as Amazon S3, Amazon DynamoDB, Amazon EMR, or Amazon EC2, your cluster must have permission to access the resource and perform the necessary actions to access the data. For example, to load data from Amazon S3, COPY must have LIST access to the bucket and GET access for the bucket objects.

To obtain authorization to access a resource, your cluster must be authenticated. You can choose either role-based access control or key-based access control. This section presents an overview of the two methods. For complete details and examples, see [Permissions to access other AWS Resources](copy-usage_notes-access-permissions.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-access-permissions.md
1b0aec60ed32-0
With role-based access control, your cluster temporarily assumes an AWS Identity and Access Management (IAM) role on your behalf. Then, based on the authorizations granted to the role, your cluster can access the required AWS resources.

We recommend using role-based access control because it provides more secure, fine-grained control of access to AWS resources and sensitive user data, in addition to safeguarding your AWS credentials.

To use role-based access control, you must first create an IAM role using the Amazon Redshift service role type, and then attach the role to your cluster. The role must have, at a minimum, the permissions listed in [IAM permissions for COPY, UNLOAD, and CREATE LIBRARY](copy-usage_notes-access-permissions.md#copy-usage_notes-iam-permissions). For steps to create an IAM role and attach it to your cluster, see [Creating an IAM Role to Allow Your Amazon Redshift Cluster to Access AWS Services](https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html#authorizing-redshift-service-creating-an-iam-role) in the *Amazon Redshift Cluster Management Guide*.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-access-permissions.md
1b0aec60ed32-1
You can add a role to a cluster or view the roles associated with a cluster by using the Amazon Redshift Management Console, CLI, or API. For more information, see [Authorizing COPY and UNLOAD Operations Using IAM Roles](https://docs.aws.amazon.com/redshift/latest/mgmt/copy-unload-iam-role.html) in the *Amazon Redshift Cluster Management Guide*.

When you create an IAM role, IAM returns an Amazon Resource Name (ARN) for the role. To execute a COPY command using an IAM role, provide the role ARN using the IAM_ROLE parameter or the CREDENTIALS parameter. The following COPY command example uses the IAM_ROLE parameter with the role `MyRedshiftRole` for authentication.

```
copy customer from 's3://mybucket/mydata'
iam_role 'arn:aws:iam::12345678901:role/MyRedshiftRole';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-access-permissions.md
b48c4aed8edf-0
With key-based access control, you provide the access key ID and secret access key for an IAM user that is authorized to access the AWS resources that contain the data.

**Note**
We strongly recommend using an IAM role for authentication instead of supplying a plain-text access key ID and secret access key. If you choose key-based access control, never use your AWS account (root) credentials. Always create an IAM user and provide that user's access key ID and secret access key. For steps to create an IAM user, see [Creating an IAM User in Your AWS Account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html).

To authenticate using IAM user credentials, replace *<access-key-id>* and *<secret-access-key>* with an authorized user's access key ID and full secret access key for the ACCESS_KEY_ID and SECRET_ACCESS_KEY parameters as shown following.

```
ACCESS_KEY_ID '<access-key-id>'
SECRET_ACCESS_KEY '<secret-access-key>';
```

The AWS IAM user must have, at a minimum, the permissions listed in [IAM permissions for COPY, UNLOAD, and CREATE LIBRARY](copy-usage_notes-access-permissions.md#copy-usage_notes-iam-permissions).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-access-permissions.md
766e1b1401dc-0
Changes the attributes of a database.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_DATABASE.md
b31a08cb8103-0
```
ALTER DATABASE database_name
| RENAME TO new_name
| OWNER TO new_owner
| CONNECTION LIMIT { limit | UNLIMITED }
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_DATABASE.md
b12471471555-0
*database_name*
Name of the database to alter. Typically, you alter a database that you are not currently connected to; in any case, the changes take effect only in subsequent sessions. You can change the owner of the current database, but you can't rename it:

```
alter database tickit rename to newtickit;
ERROR:  current database may not be renamed
```

RENAME TO
Renames the specified database. For more information about valid names, see [Names and identifiers](r_names.md). You can't rename the dev, padb_harvest, template0, or template1 databases, and you can't rename the current database. Only the database owner or a [superuser](r_superusers.md#def_superusers) can rename a database; non-superuser owners must also have the CREATEDB privilege.

*new_name*
New database name.

OWNER TO
Changes the owner of the specified database. You can change the owner of the current database or some other database. Only a superuser can change the owner.

*new_owner*
New database owner. The new owner must be an existing database user with write privileges. See [GRANT](r_GRANT.md) for more information about user privileges.

CONNECTION LIMIT { *limit* | UNLIMITED }
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_DATABASE.md
b12471471555-1
The maximum number of database connections users are permitted to have open concurrently. The limit is not enforced for superusers. Use the UNLIMITED keyword to permit the maximum number of concurrent connections. A limit on the number of connections for each user might also apply. For more information, see [CREATE USER](r_CREATE_USER.md). The default is UNLIMITED. To view current connections, query the [STV_SESSIONS](r_STV_SESSIONS.md) system view.

If both user and database connection limits apply, an unused connection slot must be available that is within both limits when a user attempts to connect.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_DATABASE.md
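The CONNECTION LIMIT clause described above can be sketched as follows, using the TICKIT database from this guide's examples:

```
-- Limit the tickit database to 25 concurrent connections
alter database tickit connection limit 25;

-- Remove the limit again
alter database tickit connection limit unlimited;
```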
cd204cd7beca-0
ALTER DATABASE commands apply to subsequent sessions, not current sessions. You need to reconnect to the altered database to see the effect of the change.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_DATABASE.md
93a9e83f02c2-0
The following example renames a database named TICKIT_SANDBOX to TICKIT_TEST:

```
alter database tickit_sandbox rename to tickit_test;
```

The following example changes the owner of the TICKIT database (the current database) to DWUSER:

```
alter database tickit owner to dwuser;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_DATABASE.md
b6c84a66ff8b-0
The CATEGORY table in the TICKIT database contains the following rows:

```
 catid | catgroup |  catname  |                  catdesc
-------+----------+-----------+--------------------------------------------
     1 | Sports   | MLB       | Major League Baseball
     2 | Sports   | NHL       | National Hockey League
     3 | Sports   | NFL       | National Football League
     4 | Sports   | NBA       | National Basketball Association
     5 | Sports   | MLS       | Major League Soccer
     6 | Shows    | Musicals  | Musical theatre
     7 | Shows    | Plays     | All non-musical theatre
     8 | Shows    | Opera     | All opera and light opera
     9 | Concerts | Pop       | All rock and pop music concerts
    10 | Concerts | Jazz      | All jazz singers and bands
    11 | Concerts | Classical | All symphony, concerto, and choir concerts
(11 rows)
```

Create a CATEGORY_STAGE table with a similar schema to the CATEGORY table but define default values for the columns:

```
create table category_stage
(catid smallint default 0,
catgroup varchar(10) default 'General',
catname varchar(10) default 'General',
catdesc varchar(50) default 'General');
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_INSERT_30.md
b6c84a66ff8b-1
```

The following INSERT statement selects all of the rows from the CATEGORY table and inserts them into the CATEGORY_STAGE table.

```
insert into category_stage
(select * from category);
```

The parentheses around the query are optional.

This command inserts a new row into the CATEGORY_STAGE table with a value specified for each column in order:

```
insert into category_stage values
(12, 'Concerts', 'Comedy', 'All stand-up comedy performances');
```

You can also insert a new row that combines specific values and default values:

```
insert into category_stage values
(13, 'Concerts', 'Other', default);
```

Run the following query to return the inserted rows:

```
select * from category_stage
where catid in(12,13) order by 1;

catid | catgroup | catname |             catdesc
------+----------+---------+----------------------------------
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_INSERT_30.md
b6c84a66ff8b-2
   12 | Concerts | Comedy  | All stand-up comedy performances
   13 | Concerts | Other   | General
(2 rows)
```

The following examples show some multiple-row INSERT VALUES statements. The first example inserts specific CATID values for two rows and default values for the other columns in both rows.

```
insert into category_stage values
(14, default, default, default),
(15, default, default, default);

select * from category_stage where catid in(14,15) order by 1;

catid | catgroup | catname | catdesc
------+----------+---------+---------
   14 | General  | General | General
   15 | General  | General | General
(2 rows)
```

The next example inserts three rows with various combinations of specific and default values:

```
insert into category_stage values
(default, default, default, default),
(20, default, 'Country', default),
(21, 'Concerts', 'Rock', default);

select * from category_stage where catid in(0,20,21) order by 1;

catid | catgroup | catname | catdesc
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_INSERT_30.md
b6c84a66ff8b-3
------+----------+---------+---------
    0 | General  | General | General
   20 | General  | Country | General
   21 | Concerts | Rock    | General
(3 rows)
```

The first set of VALUES in this example produce the same results as specifying DEFAULT VALUES for a single-row INSERT statement.

The following examples show INSERT behavior when a table has an IDENTITY column. First, create a new version of the CATEGORY table, then insert rows into it from CATEGORY:

```
create table category_ident
(catid int identity not null,
catgroup varchar(10) default 'General',
catname varchar(10) default 'General',
catdesc varchar(50) default 'General');

insert into category_ident(catgroup,catname,catdesc)
select catgroup,catname,catdesc from category;
```

Note that you can't insert specific integer values into the CATID IDENTITY column. IDENTITY column values are automatically generated.

The following example demonstrates that subqueries can't be used as expressions in multiple-row INSERT VALUES statements:

```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_INSERT_30.md
b6c84a66ff8b-4
insert into category(catid)
values
((select max(catid)+1 from category)),
((select max(catid)+2 from category));

ERROR: can't use subqueries in multi-row VALUES
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Examples_of_INSERT_30.md
6d992d8391b0-0
The easiest way to modify the WLM configuration is by using the Amazon Redshift console. You can also use the AWS CLI or the Amazon Redshift API.

When you switch your cluster between automatic and manual WLM, your cluster is put into `pending reboot` state. The change doesn't take effect until the next cluster reboot.

For detailed information about modifying WLM configurations, see [Configuring Workload Management](https://docs.aws.amazon.com/redshift/latest/mgmt/workload-mgmt-config.html) in the *Amazon Redshift Cluster Management Guide*.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-modifying-wlm-configuration.md
3995e7047691-0
To maximize system throughput and use resources most effectively, we recommend that you set up automatic WLM for your queues. Consider taking the following approach to set up a smooth transition from manual WLM to automatic WLM.

To migrate from manual WLM to automatic WLM and use query priorities, we recommend that you create a new parameter group and then attach that parameter group to your cluster. For more information, see [Amazon Redshift Parameter Groups](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html) in the *Amazon Redshift Cluster Management Guide*.

**Important**
Changing the parameter group or switching from manual to automatic WLM requires a cluster reboot. For more information, see [WLM dynamic and static configuration properties](cm-c-wlm-dynamic-properties.md).

Let's take an example where there are three manual WLM queues: one each for an ETL workload, an analytics workload, and a data science workload. The ETL workload runs every 6 hours, the analytics workload runs throughout the day, and the data science workload can spike at any time. With manual WLM, you specify the memory and concurrency that each workload queue gets based on your understanding of the importance of each workload to the business. Specifying the memory and concurrency is not just hard to figure out; it also results in cluster resources being statically partitioned and thereby wasted when only a subset of the workloads is running.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-modifying-wlm-configuration.md
3995e7047691-1
You can use automatic WLM with query priorities to indicate the relative priorities of the workloads, avoiding the preceding issues. For this example, follow these steps:
+ Create a new parameter group and switch to **Auto WLM** mode.
+ Add queues for each of the three workloads: ETL workload, analytics workload, and data science workload. Use the same user groups for each workload that were used with **Manual WLM** mode.
+ Set the priority for the ETL workload to `High`, the analytics workload to `Normal`, and the data science workload to `Low`. These priorities reflect your business priorities for the different workloads or user groups.
+ Optionally, enable concurrency scaling for the analytics or data science queue so that queries in these queues get consistent performance even when the ETL workload is running every 6 hours.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-modifying-wlm-configuration.md
3995e7047691-2
With query priorities, when only the analytics workload is running on the cluster, it gets the entire system to itself, yielding high throughput with optimal system utilization. However, when the ETL workload starts, it gets the right of way because it has a higher priority. Queries running as part of the ETL workload get priority during admission, in addition to preferential resource allocation after they are admitted. As a consequence, the ETL workload performs predictably regardless of what else might be running on the system.

The predictable performance for a high priority workload comes at the cost of other, lower priority workloads, which run longer either because their queries wait behind more important queries or because they get a smaller fraction of resources when running concurrently with higher priority queries. The scheduling algorithms used by Amazon Redshift ensure that the lower priority queries do not suffer from starvation, but rather continue to make progress, albeit at a slower pace.

**Note**
+ The timeout field is not available in automatic WLM. Instead, use the QMR rule `query_execution_time`. For more information, see [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md).
+ The QMR action HOP is not applicable to automatic WLM. Instead, use the `change priority` action. For more information, see [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md).
+ Within a parameter group, avoid mixing automatic WLM queues and manual WLM queues. Instead, create a new parameter group when migrating to automatic WLM.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-c-modifying-wlm-configuration.md
11256a64dd8a-0
TO\_NUMBER converts a string to a numeric \(decimal\) value\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_NUMBER.md
06f9d0b3cd30-0
``` to_number(string, format) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_NUMBER.md
45700cff69ec-0
*string* String to be converted\. The format must be a literal value\. *format* The second argument is a format string that indicates how the character string should be parsed to create the numeric value\. For example, the format `'99D999'` specifies that the string to be converted consists of five digits with the decimal point in the third position\. For example, `to_number('12.345','99D999')` returns `12.345` as a numeric value\. For a list of valid formats, see [Numeric Format Strings](r_Numeric_formating.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_NUMBER.md
e5c43daa76cb-0
TO\_NUMBER returns a DECIMAL number\. If the conversion to *format* fails, then an error is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_NUMBER.md
8958a8baa640-0
The following example converts the string `12,454.8-` to a number: ``` select to_number('12,454.8-', '99G999D9S'); to_number ----------- -12454.8 (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_NUMBER.md
c3d9347775d3-0
ST\_NumGeometries returns the number of geometries in an input geometry\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumGeometries-function.md
1f92b542e591-0
``` ST_NumGeometries(geom) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumGeometries-function.md
a7f830958e22-0
*geom* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumGeometries-function.md
89d4ad7528c0-0
`INTEGER` representing the number of geometries in *geom*\. If *geom* is a `GEOMETRYCOLLECTION` or a `MULTI` subtype, then the number of geometries is returned\. If *geom* is a single geometry, then `1` is returned\. If *geom* is null, then null is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumGeometries-function.md
2eba96250c79-0
The following SQL returns the number of geometries in the input multilinestring\. ``` SELECT ST_NumGeometries(ST_GeomFromText('MULTILINESTRING((0 0,1 0,0 5),(3 4,13 26))')); ``` ``` st_numgeometries ------------- 2 ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NumGeometries-function.md
c194d7bbd1cc-0
The SIGN function returns the sign \(positive or negative\) of a number\. The result of the SIGN function is `1`, `-1`, or `0` indicating the sign of the argument\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SIGN.md
67307b11f864-0
``` SIGN (number) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SIGN.md
c522d4154eb2-0
*number* Number to be evaluated\. The data type can be `numeric` or `double precision`\. Other data types can be converted by Amazon Redshift per the implicit conversion rules\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SIGN.md
ed3be65622b5-0
The output type is `numeric(1, 0)` for a numeric input, and `double precision` for a double precision input\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SIGN.md
bb007ddeb0bd-0
Determine the sign of the commission paid for a given transaction: ``` select commission, sign (commission) from sales where salesid=10000; commission | sign -----------+------ 28.05 | 1 (1 row) ``` The following example shows that “t2\.d” has double precision as its type since the input is double precision and that “t2\.n” has numeric\(1,0\) as output since the input is numeric\. ``` CREATE TABLE t1(d double precision, n numeric(12, 2)); INSERT INTO t1 VALUES (4.25, 4.25), (-4.25, -4.25); CREATE TABLE t2 AS SELECT SIGN(d) AS d, SIGN(n) AS n FROM t1; \d t1 \d t2 The output of the “\d” commands is:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SIGN.md
bb007ddeb0bd-1
Table "public.t1" Column | Type | Encoding | DistKey | SortKey | Preload | Encryption | Modifiers --------+------------------+----------+---------+---------+---------+------------+----------- d | double precision | none | f | 0 | f | none | n | numeric(12,2) | lzo | f | 0 | f | none | Table "public.t2" Column | Type | Encoding | DistKey | SortKey | Preload | Encryption | Modifiers --------+------------------+----------+---------+---------+---------+------------+----------- d | double precision | none | f | 0 | f | none | n | numeric(1,0) | lzo | f | 0 | f | none | ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SIGN.md
1cd538ec5d54-0
Returns the column metadata for all late\-binding views in the database\. For more information, see [Late\-binding views](r_CREATE_VIEW.md#r_CREATE_VIEW_late-binding-views)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_LATE_BINDING_VIEW_COLS.md
e646fba1549b-0
``` pg_get_late_binding_view_cols() ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_LATE_BINDING_VIEW_COLS.md
8ec8f9dc5d7c-0
VARCHAR
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_LATE_BINDING_VIEW_COLS.md
503c3bbda64d-0
The `PG_GET_LATE_BINDING_VIEW_COLS` function returns one row for each column in late\-binding views\. The row contains a comma\-separated list with the schema name, relation name, column name, data type, and column number\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_LATE_BINDING_VIEW_COLS.md
cf935b896212-0
The following example returns the column metadata for all late\-binding views\. ``` select pg_get_late_binding_view_cols(); pg_get_late_binding_view_cols ------------------------------------------------------------ (public,myevent,eventname,"character varying(200)",1) (public,sales_lbv,salesid,integer,1) (public,sales_lbv,listid,integer,2) (public,sales_lbv,sellerid,integer,3) (public,sales_lbv,buyerid,integer,4) (public,sales_lbv,eventid,integer,5) (public,sales_lbv,dateid,smallint,6) (public,sales_lbv,qtysold,smallint,7) (public,sales_lbv,pricepaid,"numeric(8,2)",8) (public,sales_lbv,commission,"numeric(8,2)",9) (public,sales_lbv,saletime,"timestamp without time zone",10) (public,event_lbv,eventid,integer,1)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_LATE_BINDING_VIEW_COLS.md
cf935b896212-1
(public,event_lbv,venueid,smallint,2) (public,event_lbv,catid,smallint,3) (public,event_lbv,dateid,smallint,4) (public,event_lbv,eventname,"character varying(200)",5) (public,event_lbv,starttime,"timestamp without time zone",6) ``` The following example returns the column metadata for all late\-binding views in table format\. ``` select * from pg_get_late_binding_view_cols() cols(view_schema name, view_name name, col_name name, col_type varchar, col_num int); view_schema | view_name | col_name | col_type | col_num ------------+-----------+------------+-----------------------------+-------- public | sales_lbv | salesid | integer | 1 public | sales_lbv | listid | integer | 2 public | sales_lbv | sellerid | integer | 3 public | sales_lbv | buyerid | integer | 4
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_LATE_BINDING_VIEW_COLS.md
cf935b896212-2
public | sales_lbv | eventid | integer | 5 public | sales_lbv | dateid | smallint | 6 public | sales_lbv | qtysold | smallint | 7 public | sales_lbv | pricepaid | numeric(8,2) | 8 public | sales_lbv | commission | numeric(8,2) | 9 public | sales_lbv | saletime | timestamp without time zone | 10 public | event_lbv | eventid | integer | 1 public | event_lbv | venueid | smallint | 2 public | event_lbv | catid | smallint | 3 public | event_lbv | dateid | smallint | 4 public | event_lbv | eventname | character varying(200) | 5 public | event_lbv | starttime | timestamp without time zone | 6 ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_LATE_BINDING_VIEW_COLS.md
b149d251ffcf-0
The DEXP function returns the exponential value in scientific notation for a double precision number\. The only difference between the DEXP and EXP functions is that the parameter for DEXP must be a double precision number\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEXP.md
9dfaf3c55293-0
``` DEXP(number) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEXP.md
9fa824b7aee1-0
*number* The input parameter is a double precision number\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEXP.md
9b066e46a394-0
The DEXP function returns a double precision number\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEXP.md
9cf4f48c8b40-0
Use the DEXP function to forecast ticket sales based on a continuous growth pattern\. In this example, the subquery returns the number of tickets sold in 2008\. That result is multiplied by the result of the DEXP function, which specifies a continuous growth rate of 7% over 10 years\. ``` select (select sum(qtysold) from sales, date where sales.dateid=date.dateid and year=2008) * dexp((7::float/100)*10) qty2010; qty2010 ------------------ 695447.483772222 (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEXP.md
299b1210e305-0
TIMEZONE returns a time stamp for the specified time zone and time stamp value\. For information and examples about how to set time zone, see [timezone](r_timezone_config.md)\. For information and examples about how to convert time zone, see [CONVERT\_TIMEZONE](CONVERT_TIMEZONE.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMEZONE.md
1d25c939072c-0
``` TIMEZONE ('timezone', { timestamp | timestamptz }) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMEZONE.md
d3d951b984b3-0
*timezone* The time zone for the return value\. The time zone can be specified as a time zone name \(such as **'Africa/Kampala'** or **'Singapore'**\) or as a time zone abbreviation \(such as **'UTC'** or **'PDT'**\)\. To view a list of supported time zone names, execute the following command\. ``` select pg_timezone_names(); ``` To view a list of supported time zone abbreviations, execute the following command\. ``` select pg_timezone_abbrevs(); ``` For more information and examples, see [Time zone usage notes](CONVERT_TIMEZONE.md#CONVERT_TIMEZONE-usage-notes)\. *timestamp* An expression that results in a TIMESTAMP type, or a value that can implicitly be coerced to a time stamp\. *timestamptz* An expression that results in a TIMESTAMPTZ type, or a value that can implicitly be coerced to a time stamp with time zone\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMEZONE.md
cc9be7ab57e1-0
TIMESTAMPTZ when used with a TIMESTAMP expression\. TIMESTAMP when used with a TIMESTAMPTZ expression\.
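As a brief illustration \(a sketch, not an example from this guide; the displayed result depends on the session's time zone setting\), the following interprets a TIMESTAMP value as wall\-clock time in the US/Pacific time zone and returns the corresponding TIMESTAMPTZ value\.

```
select timezone('US/Pacific', '2008-06-17 09:44:54'::timestamp);
```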
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMEZONE.md
5169768fcec3-0
**Topics** + [\|\| \(Concatenation\) operator](r_concat_op.md) + [BPCHARCMP function](r_BPCHARCMP.md) + [BTRIM function](r_BTRIM.md) + [BTTEXT\_PATTERN\_CMP function](r_BTTEXT_PATTERN_CMP.md) + [CHAR\_LENGTH function](r_CHAR_LENGTH.md) + [CHARACTER\_LENGTH function](r_CHARACTER_LENGTH.md) + [CHARINDEX Function](r_CHARINDEX.md) + [CHR function](r_CHR.md) + [CONCAT](r_CONCAT.md) + [CRC32 function](crc32-function.md) + [INITCAP function](r_INITCAP.md) + [LEFT and RIGHT functions](r_LEFT.md) + [LEN function](r_LEN.md) + [LENGTH function](r_LENGTH.md) + [LOWER function](r_LOWER.md) + [LPAD and RPAD functions](r_LPAD.md) + [LTRIM function](r_LTRIM.md) + [OCTET\_LENGTH function](r_OCTET_LENGTH.md) + [POSITION function](r_POSITION.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/String_functions_header.md
5169768fcec3-1
+ [QUOTE\_IDENT function](r_QUOTE_IDENT.md) + [QUOTE\_LITERAL function](r_QUOTE_LITERAL.md) + [REGEXP\_COUNT function](REGEXP_COUNT.md) + [REGEXP\_INSTR function](REGEXP_INSTR.md) + [REGEXP\_REPLACE function](REGEXP_REPLACE.md) + [REGEXP\_SUBSTR function](REGEXP_SUBSTR.md) + [REPEAT function](r_REPEAT.md) + [REPLACE function](r_REPLACE.md) + [REPLICATE function](r_REPLICATE.md) + [REVERSE function](r_REVERSE.md) + [RTRIM function](r_RTRIM.md) + [SPLIT\_PART function](SPLIT_PART.md) + [STRPOS function](r_STRPOS.md) + [STRTOL function](r_STRTOL.md) + [SUBSTRING function](r_SUBSTRING.md) + [TEXTLEN function](r_TEXTLEN.md) + [TRANSLATE function](r_TRANSLATE.md) + [TRIM function](r_TRIM.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/String_functions_header.md
5169768fcec3-2
+ [UPPER function](r_UPPER.md) String functions process and manipulate character strings or expressions that evaluate to character strings\. When the *string* argument in these functions is a literal value, it must be enclosed in single quotes\. Supported data types include CHAR and VARCHAR\. The following section provides the function names, syntax, and descriptions for supported functions\. All offsets into strings are one\-based\. <a name="string-functions-deprecated"></a> **Deprecated leader node\-only functions** The following string functions are deprecated because they execute only on the leader node\. For more information, see [Leader node–only functions](c_SQL_functions_leader_node_only.md) + ASCII + GET\_BIT + GET\_BYTE + SET\_BIT + SET\_BYTE + TO\_ASCII
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/String_functions_header.md
5c0695971583-0
After you create a table and populate it with data, use a SELECT statement to display the data contained in the table\. The SELECT \* statement returns all the column names and row values for all of the data in a table and is a good way to verify that recently added data was correctly inserted into the table\. To view the data that you entered in the **testtable** table, issue the following command: ``` select * from testtable; ``` The result will look like this: ``` testcol --------- 100 (1 row) ``` For more information about using the SELECT statement to query tables, see [SELECT](r_SELECT_synopsis.md) in the SQL Command Reference\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_selecting_data.md
0519858d8261-0
Amazon Redshift stores your data on disk in sorted order according to the sort key\. The Amazon Redshift query optimizer uses sort order when it determines optimal query plans\. Some suggestions for best approach follow: + **If recent data is queried most frequently, specify the timestamp column as the leading column for the sort key\. ** Queries are more efficient because they can skip entire blocks that fall outside the time range\. + **If you do frequent range filtering or equality filtering on one column, specify that column as the sort key\.** Amazon Redshift can skip reading entire blocks of data for that column\. It can do so because it tracks the minimum and maximum column values stored on each block and can skip blocks that don't apply to the predicate range\. + **If you frequently join a table, specify the join column as both the sort key and the distribution key\. ** Doing this enables the query optimizer to choose a sort merge join instead of a slower hash join\. Because the data is already sorted on the join key, the query optimizer can bypass the sort phase of the sort merge join\. For more information about choosing and specifying sort keys, see [Tutorial: Tuning table design](tutorial-tuning-tables.md) and [Choosing sort keys](t_Sorting_data.md)\.
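To sketch the first suggestion, a table that is most often filtered by recent activity might declare its timestamp column as the leading \(here, only\) sort key column\. The table and column names below are hypothetical, not from this guide\.

```
create table web_events (
  event_time timestamp sortkey,  -- leading sort key column: recent-data queries skip old blocks
  user_id    integer,
  url        varchar(2048)
);
```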
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-sort-key.md
61287aabdc37-0
To validate the data in the Amazon S3 input files or Amazon DynamoDB table before you actually load the data, use the NOLOAD option with the [COPY](r_COPY.md) command\. Use NOLOAD with the same COPY commands and options you would use to actually load the data\. NOLOAD checks the integrity of all of the data without loading it into the database\. The NOLOAD option displays any errors that would occur if you had attempted to load the data\. For example, if you specified the incorrect Amazon S3 path for the input file, Amazon Redshift would display the following error: ``` ERROR: No such file or directory DETAIL: ----------------------------------------------- Amazon Redshift error: The specified key does not exist code: 2 context: S3 key being read : location: step_scan.cpp:1883 process: xenmaster [pid=22199] ----------------------------------------------- ``` To troubleshoot error messages, see the [Load error reference](r_Load_Error_Reference.md)\.
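A COPY command using NOLOAD might look like the following sketch; the table name, bucket path, and IAM role ARN are placeholders, not values from this guide\.

```
copy testtable
from 's3://mybucket/data/listing/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
noload;  -- validates the input files without loading any rows
```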
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Validating_input_files.md
5b8df2736373-0
Records all errors seen by the SSH client\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
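For example, to review recent SSH client errors, you might run a query like the following \(a sketch; it assumes the view's `recordtime` column, which is standard for STL tables\)\.

```
select *
from stl_sshclient_error
order by recordtime desc
limit 10;
```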
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SSHCLIENT_ERROR.md
73ec72a84380-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_SSHCLIENT_ERROR.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SSHCLIENT_ERROR.md
f0d16abaa648-0
Lists the relationship between streams and concurrent segments\. **Note** System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters\. The views are similar to the tables with the prefix STL except that the STL tables provide information only for queries run on the main cluster\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_STREAM_SEGS.md
a4f29de42813-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_STREAM_SEGS.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_STREAM_SEGS.md
c85be5b72fe5-0
To view the relationship between streams and concurrent segments for the most recent query, type the following query: ``` select * from svcs_stream_segs where query = pg_last_query_id(); query | stream | segment -------+--------+--------- 10 | 1 | 2 10 | 0 | 0 10 | 2 | 4 10 | 1 | 3 10 | 0 | 1 (5 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_STREAM_SEGS.md
acfb1f64ab1c-0
Consider the following limits when you create a table\. + There is a limit for the maximum number of tables in a cluster by node type\. For more information, see [Limits](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Cluster Management Guide*\. + The maximum number of characters for a table name is 127\. + The maximum number of columns you can define in a single table is 1,600\. + The maximum number of SORTKEY columns you can define in a single table is 400\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_usage.md
514c70dbcc92-0
Several attributes and settings can be set at the column level or at the table level\. In some cases, setting an attribute or constraint at the column level or at the table level has the same effect\. In other cases, they produce different results\. The following list summarizes column\-level and table\-level settings: DISTKEY There is no difference in effect whether set at the column level or at the table level\. If DISTKEY is set, either at the column level or at the table level, DISTSTYLE must be set to KEY or not set at all\. DISTSTYLE can be set only at the table level\. SORTKEY If set at the column level, SORTKEY must be a single column\. If SORTKEY is set at the table level, one or more columns can make up a compound or interleaved composite sort key\. UNIQUE At the column level, one or more keys can be set to UNIQUE; the UNIQUE constraint applies to each column individually\. If UNIQUE is set at the table level, one or more columns can make up a composite UNIQUE constraint\. PRIMARY KEY If set at the column level, PRIMARY KEY must be a single column\. If PRIMARY KEY is set at the table level, one or more columns can make up a composite primary key\. FOREIGN KEY There is no difference in effect whether FOREIGN KEY is set at the column level or at the table level\. At the column level, the syntax is simply `REFERENCES` *reftable* \[ \( *refcolumn* \)\]\.
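The following hypothetical table definitions sketch the difference\. The first sets SORTKEY and DISTKEY at the column level \(single columns only\); the second declares a compound sort key and a composite primary key at the table level\.

```
-- Column level: single-column sort key and distribution key
create table orders_v1 (
  order_id    integer primary key,
  order_date  date    sortkey,
  customer_id integer distkey
);

-- Table level: compound sort key and composite primary key
create table orders_v2 (
  order_id    integer,
  order_date  date,
  customer_id integer,
  primary key (order_id, customer_id)
)
distkey (customer_id)
compound sortkey (order_date, customer_id);
```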
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_usage.md
eaa0dd6c40cc-0
When the hash distribution scheme of the incoming data matches that of the target table, no physical distribution of the data is actually necessary when the data is loaded\. For example, if a distribution key is set for the new table and the data is being inserted from another table that is distributed on the same key column, the data is loaded in place, using the same nodes and slices\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_usage.md
eaa0dd6c40cc-1
However, if the source and target tables are both set to EVEN distribution, data is redistributed into the target table\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_usage.md
020e0d2ba717-0
You might be able to create a very wide table but be unable to perform query processing, such as INSERT or SELECT statements, on the table\. The maximum width of a table with fixed width columns, such as CHAR, is 64KB \- 1 \(or 65535 bytes\)\. If a table includes VARCHAR columns, the table can have a larger declared width without returning an error because VARCHAR columns don't contribute their full declared width to the calculated query\-processing limit\. The effective query\-processing limit with VARCHAR columns varies based on a number of factors\. If a table is too wide for inserting or selecting, you receive the following error\. ``` ERROR: 8001 DETAIL: The combined length of columns processed in the SQL statement exceeded the query-processing limit of 65535 characters (pid:7627) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_TABLE_usage.md
5bb1ae5ce154-0
If you define a procedure with the same name and different input argument data types, or signature, you create a new procedure\. In other words, the procedure name is overloaded\. For more information, see [Overloading procedure names](#stored-procedure-overloading-name)\. Amazon Redshift doesn't enable procedure overloading based on output arguments\. In other words, you can't have two procedures with the same name and input argument data types but different output argument types\. The owner or a superuser can replace the body of a stored procedure with a new one with the same signature\. To change the signature or return types of a stored procedure, drop the stored procedure and recreate it\. For more information, see [DROP PROCEDURE](r_DROP_PROCEDURE.md) and [CREATE PROCEDURE](r_CREATE_PROCEDURE.md)\. You can avoid potential conflicts and unexpected results by considering your naming conventions for stored procedures before implementing them\. Because you can overload procedure names, they can collide with existing and future Amazon Redshift procedure names\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-naming.md
4cd476f257d9-0
A procedure is identified by its name and signature, which is the number of input arguments and the data types of the arguments\. Two procedures in the same schema can have the same name if they have different signatures\. In other words, you can overload procedure names\. When you run a procedure, the query engine determines which procedure to call based on the number of arguments that you provide and the data types of the arguments\. You can use overloading to simulate procedures with a variable number of arguments, up to the limit allowed by the CREATE PROCEDURE command\. For more information, see [CREATE PROCEDURE](r_CREATE_PROCEDURE.md)\.
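To sketch overloading, the two hypothetical procedures below share a name but differ in signature \(one versus two input arguments\), so both can exist in the same schema; the names and bodies are illustrative assumptions, not from this guide\.

```
create or replace procedure sp_log_event(msg varchar(256))
as $$
begin
  raise info 'event: %', msg;
end;
$$ language plpgsql;

create or replace procedure sp_log_event(msg varchar(256), severity int)
as $$
begin
  raise info 'event (severity %): %', severity, msg;
end;
$$ language plpgsql;
```

Calling `call sp_log_event('started');` resolves to the first procedure, while `call sp_log_event('started', 2);` resolves to the second, based on the argument count and types\.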
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-naming.md
3ac73e4354c4-0
We recommend that you name all procedures using the prefix `sp_`\. Amazon Redshift reserves the `sp_` prefix exclusively for stored procedures\. By prefixing your procedure names with `sp_`, you ensure that your procedure name won't conflict with any existing or future Amazon Redshift procedure names\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-naming.md
08f07b4a7089-0
Use the SVCS\_S3LOG view to get troubleshooting details about Redshift Spectrum queries at the segment level\. One segment can perform one external table scan\. This view is derived from the SVL\_S3LOG system view but doesn't show slice\-level detail for queries run on a concurrency scaling cluster\. **Note** System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters\. The views are similar to the views with the prefix SVL except that the SVL views provide information only for queries run on the main cluster\. SVCS\_S3LOG is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\. For information about SVL\_S3LOG, see [SVL\_S3LOG](r_SVL_S3LOG.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3LOG.md