0a3a3111d6bd-0
``` TRUNCATE [ TABLE ] table_name ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNCATE.md
5d9fb65798a7-0
TABLE Optional keyword. *table_name* A temporary or persistent table. Only the owner of the table or a superuser may truncate it. You can truncate any table, including tables that are referenced in foreign-key constraints. You don't need to vacuum a table after truncating it.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNCATE.md
f90329f62ecd-0
The TRUNCATE command commits the transaction in which it is run; therefore, you can't roll back a TRUNCATE operation, and a TRUNCATE command may commit other operations when it commits itself.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNCATE.md
72482f9ed7f8-0
Use the TRUNCATE command to delete all of the rows from the CATEGORY table: ``` truncate category; ``` Attempt to roll back a TRUNCATE operation: ``` begin; truncate date; rollback; select count(*) from date; count ------- 0 (1 row) ``` The DATE table remains empty after the ROLLBACK command because the TRUNCATE command committed automatically.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TRUNCATE.md
d49e27c8d40e-0
**Topics** + [SQL reference conventions](c_SQL_reference_conventions.md) + [Basic elements](c_Basic_elements.md) + [Expressions](r_expressions.md) + [Conditions](r_conditions.md) The SQL language consists of commands and functions that you use to work with databases and database objects. The language also enforces rules regarding the use of data types, expressions, and literals.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_SQL_reference.md
5bbc38b8baee-0
ST_XMax returns the maximum first coordinate of an input geometry.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_XMax-function.md
646c1ee6c1bd-0
``` ST_XMax(geom) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_XMax-function.md
36cefbdd38d7-0
*geom* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_XMax-function.md
88b37f3b4ae5-0
`DOUBLE PRECISION` value of the maximum first coordinate. If *geom* is empty, then null is returned. If *geom* is null, then null is returned.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_XMax-function.md
fcfbdfcb4846-0
The following SQL returns the largest first coordinate of a linestring. ``` SELECT ST_XMax(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)')); ``` ``` st_xmax ----------- 77.42 ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_XMax-function.md
fef68f9a267f-0
If your data includes non-ASCII multibyte characters (such as Chinese or Cyrillic characters), you must load the data to VARCHAR columns. The VARCHAR data type supports four-byte UTF-8 characters, but the CHAR data type accepts only single-byte ASCII characters. You can't load five-byte or longer characters into Amazon Redshift tables. For more information about CHAR and VARCHAR, see [Data types](c_Supported_data_types.md). To check which encoding an input file uses, use the Linux `file` command: ``` $ file ordersdata.txt ordersdata.txt: ASCII English text $ file uni_ordersdata.dat uni_ordersdata.dat: UTF-8 Unicode text ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_loading_unicode_data.md
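As a sketch of the approach described above (the table, column sizes, and S3 path are hypothetical, not from the source), a UTF-8 load might look like the following. Note that VARCHAR length is measured in bytes, so a column meant to hold *n* four-byte UTF-8 characters needs up to 4*n* bytes.

```sql
-- Hypothetical table and S3 path, for illustration only.
-- varchar(80) leaves room for 20 four-byte UTF-8 characters.
create table uni_orders (
    orderid      integer,
    customername varchar(80)
);

copy uni_orders
from 's3://mybucket/uni_ordersdata.dat'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';
```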
6f621dd7feb5-0
EXISTS conditions test for the existence of rows in a subquery, and return true if a subquery returns at least one row. If NOT is specified, the condition returns true if a subquery returns no rows.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_exists_condition.md
12fd57bf549a-0
``` [ NOT ] EXISTS (table_subquery) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_exists_condition.md
d7a88010762f-0
EXISTS Is true when the *table_subquery* returns at least one row. NOT EXISTS Is true when the *table_subquery* returns no rows. *table_subquery* A subquery that evaluates to a table with one or more columns and one or more rows.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_exists_condition.md
9cedf25c1426-0
This example returns all date identifiers, one time each, for each date that had a sale of any kind: ``` select dateid from date where exists ( select 1 from sales where date.dateid = sales.dateid ) order by dateid; dateid -------- 1827 1828 1829 ... ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_exists_condition.md
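The reference above documents NOT EXISTS but shows only an EXISTS example. A complementary sketch against the same sample tables, returning the date identifiers that had no sales at all, might look like this:

```sql
-- Dates with no matching rows in SALES.
select dateid
from date
where not exists (
    select 1
    from sales
    where date.dateid = sales.dateid )
order by dateid;
```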
862a866c867b-0
Synonym of the LEN function. See [LEN function](r_LEN.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHARACTER_LENGTH.md
db83cc4d5346-0
Delta encodings are very useful for datetime columns. Delta encoding compresses data by recording the difference between values that follow each other in the column. This difference is recorded in a separate dictionary for each block of column values on disk. (An Amazon Redshift disk block occupies 1 MB.) For example, if the column contains 10 integers in sequence from 1 to 10, the first will be stored as a 4-byte integer (plus a 1-byte flag), and the next 9 will each be stored as a byte with the value 1, indicating that it is one greater than the previous value. Delta encoding comes in two variations: + DELTA records the differences as 1-byte values (8-bit integers) + DELTA32K records differences as 2-byte values (16-bit integers) If most of the values in the column could be compressed by using a single byte, the 1-byte variation is very effective; however, if the deltas are larger, this encoding, in the worst case, is somewhat less effective than storing the uncompressed data. Similar logic applies to the 16-bit version. If the difference between two values exceeds the 1-byte range (DELTA) or 2-byte range (DELTA32K), the full original value is stored, with a leading 1-byte flag. The 1-byte range is from -127 to 127, and the 2-byte range is from -32K to 32K.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Delta_encoding.md
db83cc4d5346-1
The following table shows how a delta encoding works for a numeric column: [see the AWS documentation website for the full table](http://docs.aws.amazon.com/redshift/latest/dg/c_Delta_encoding.html).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Delta_encoding.md
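Delta encodings are applied per column with the ENCODE keyword. A minimal sketch, assuming a hypothetical table whose timestamp and sequence columns increase in small steps between adjacent rows:

```sql
-- Hypothetical table; DELTA32K suits datetime values whose row-to-row
-- differences fit in 2 bytes, DELTA suits small integer steps.
create table sensor_readings (
    reading_id   bigint,
    reading_time timestamp encode delta32k,
    reading_seq  integer   encode delta
);
```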
a09e17bb4554-0
Cancels a query. PG_CANCEL_BACKEND is functionally equivalent to the [CANCEL](r_CANCEL.md) command. You can cancel queries currently being run by your user. Superusers can cancel any query.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_CANCEL_BACKEND.md
bc6c4fc146f3-0
``` pg_cancel_backend( pid ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_CANCEL_BACKEND.md
4af9a6ed03c7-0
*pid* The process ID (PID) of the query to be canceled. You cannot cancel a query by specifying a query ID; you must specify the query's process ID. Requires an integer value.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_CANCEL_BACKEND.md
c7a07bd3cf1e-0
None
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_CANCEL_BACKEND.md
a27191b73909-0
If queries in multiple sessions hold locks on the same table, you can use the [PG_TERMINATE_BACKEND](PG_TERMINATE_BACKEND.md) function to terminate one of the sessions, which forces any currently running transactions in the terminated session to release all locks and roll back the transaction. Query the PG_LOCKS catalog table to view currently held locks. If you cannot cancel a query because it is in a transaction block (BEGIN … END), you can terminate the session in which the query is running by using the PG_TERMINATE_BACKEND function.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_CANCEL_BACKEND.md
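A sketch of that workflow, assuming the blocking session's process ID turns out to be 802 (a placeholder value): first inspect the locks, then terminate the offending session.

```sql
-- Inspect currently held locks (PG_LOCKS is a PostgreSQL-style catalog table).
select * from pg_locks;

-- Terminate the session holding the lock; 802 is a placeholder PID.
select pg_terminate_backend(802);
```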
30f392faa935-0
To cancel a currently running query, first retrieve the process ID for the query that you want to cancel. To determine the process IDs for all currently running queries, execute the following command: ``` select pid, trim(starttime) as start, duration, trim(user_name) as user, substring (query,1,40) as querytxt from stv_recents where status = 'Running'; pid | starttime | duration | user | querytxt -----+------------------------+----------+----------+-------------------------- 802 | 2013-10-14 09:19:03.55 | 132 | dwuser | select venuename from venue 834 | 2013-10-14 08:33:49.47 | 1250414 | dwuser | select * from listing; 964 | 2013-10-14 08:30:43.29 | 326179 | dwuser | select sellerid from sales ``` The following statement cancels the query with process ID 802: ``` select pg_cancel_backend(802); ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_CANCEL_BACKEND.md
3dd4bcc6e084-0
COPY can load data from Amazon S3 in the following columnar formats: + ORC + Parquet COPY supports columnar formatted data with the following restrictions: + The cluster must be in one of the following AWS Regions: + US East (N. Virginia) Region (us-east-1) + US East (Ohio) Region (us-east-2) + US West (N. California) Region (us-west-1) + US West (Oregon) Region (us-west-2) + Asia Pacific (Hong Kong) Region (ap-east-1) + Asia Pacific (Mumbai) Region (ap-south-1) + Asia Pacific (Seoul) Region (ap-northeast-2) + Asia Pacific (Singapore) Region (ap-southeast-1) + Asia Pacific (Sydney) Region (ap-southeast-2) + Asia Pacific (Tokyo) Region (ap-northeast-1) + Canada (Central) Region (ca-central-1) + China (Beijing) Region (cn-north-1) + China (Ningxia) Region (cn-northwest-1)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-columnar.md
3dd4bcc6e084-1
+ Europe (Frankfurt) Region (eu-central-1) + Europe (Ireland) Region (eu-west-1) + Europe (London) Region (eu-west-2) + Europe (Paris) Region (eu-west-3) + Europe (Stockholm) Region (eu-north-1) + Middle East (Bahrain) Region (me-south-1) + South America (São Paulo) Region (sa-east-1) + AWS GovCloud (US-West) (us-gov-west-1) + The Amazon S3 bucket must be in the same AWS Region as the Amazon Redshift cluster. + To access your Amazon S3 data through a VPC endpoint, set up access using IAM policies and IAM roles as described in [Using Amazon Redshift Spectrum with Enhanced VPC Routing](https://docs.aws.amazon.com/redshift/latest/mgmt/spectrum-enhanced-vpc.html) in the *Amazon Redshift Cluster Management Guide*. + COPY doesn't automatically apply compression encodings.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-columnar.md
3dd4bcc6e084-2
+ Only the following COPY parameters are supported: + [FROM](copy-parameters-data-source-s3.md#copy-parameters-from) + [IAM_ROLE](copy-parameters-authorization.md#copy-iam-role) + [CREDENTIALS](copy-parameters-authorization.md#copy-credentials) + [STATUPDATE](copy-parameters-data-load.md#copy-statupdate) + [MANIFEST](copy-parameters-data-source-s3.md#copy-manifest) + [ACCESS_KEY_ID, SECRET_ACCESS_KEY, and SESSION_TOKEN](copy-parameters-authorization.md#copy-access-key-id) + If COPY encounters an error while loading, the command fails. ACCEPTANYDATE, ACCEPTINVCHARS, and MAXERROR aren't supported for columnar data types. + Error messages are sent only to the SQL client. Errors aren't logged in STL_LOAD_ERRORS. + COPY inserts values into the target table's columns in the same order as the columns occur in the columnar data files. The number of columns in the target table and the number of columns in the data file must match. + If the file you specify for the COPY operation includes one of the following extensions, we decompress the data without the need for adding any parameters: + `.gz` + `.snappy` + `.bz2`
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-copy-from-columnar.md
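Putting those restrictions together, a minimal COPY from Parquet might look like the following sketch (the table name, bucket path, and IAM role ARN are placeholders). Only FROM, authorization, and the format clause appear, since most other COPY parameters aren't supported for columnar data:

```sql
-- Placeholders: table name, bucket path, and IAM role ARN.
copy listing
from 's3://mybucket/data/listings/parquet/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
format as parquet;
```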
d69ba6897c19-0
**Topics** + [Syntax](#r_UNION-synopsis) + [Parameters](#r_UNION-parameters) + [Order of evaluation for set operators](#r_UNION-order-of-evaluation-for-set-operators) + [Usage notes](#r_UNION-usage-notes) + [Example UNION queries](c_example_union_query.md) + [Example UNION ALL query](c_example_unionall_query.md) + [Example INTERSECT queries](c_example_intersect_query.md) + [Example EXCEPT query](c_Example_MINUS_query.md) The UNION, INTERSECT, and EXCEPT *set operators* are used to compare and merge the results of two separate query expressions. For example, if you want to know which users of a website are both buyers and sellers but their user names are stored in separate columns or tables, you can find the *intersection* of these two types of users. If you want to know which website users are buyers but not sellers, you can use the EXCEPT operator to find the *difference* between the two lists of users. If you want to build a list of all users, regardless of role, you can use the UNION operator.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNION.md
6b49e77c0233-0
``` query { UNION [ ALL ] | INTERSECT | EXCEPT | MINUS } query ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNION.md
a7e6dc457b15-0
*query* A query expression that corresponds, in the form of its select list, to a second query expression that follows the UNION, INTERSECT, or EXCEPT operator. The two expressions must contain the same number of output columns with compatible data types; otherwise, the two result sets can't be compared and merged. Set operations don't allow implicit conversion between different categories of data types; for more information, see [Type compatibility and conversion](r_Type_conversion.md). You can build queries that contain an unlimited number of query expressions and link them with UNION, INTERSECT, and EXCEPT operators in any combination. For example, the following query structure is valid, assuming that the tables T1, T2, and T3 contain compatible sets of columns: ``` select * from t1 union select * from t2 except select * from t3 order by c1; ``` UNION Set operation that returns rows from two query expressions, regardless of whether the rows derive from one or both expressions. INTERSECT Set operation that returns rows that derive from two query expressions. Rows that aren't returned by both expressions are discarded. EXCEPT | MINUS Set operation that returns rows that derive from one of two query expressions. To qualify for the result, rows must exist in the first result table but not the second. MINUS and EXCEPT are exact synonyms.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNION.md
a7e6dc457b15-1
ALL The ALL keyword retains any duplicate rows that are produced by UNION. The default behavior when the ALL keyword isn't used is to discard these duplicates. INTERSECT ALL, EXCEPT ALL, and MINUS ALL aren't supported.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNION.md
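For instance, a sketch against the TICKIT sample tables (assuming both EVENT and LISTING carry an `eventid` column): UNION ALL keeps every occurrence of an event ID that appears in both tables, whereas plain UNION would return each ID once.

```sql
-- Duplicates retained; drop ALL to deduplicate.
select eventid from event
union all
select eventid from listing
order by eventid;
```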
1fd35c2d3630-0
The UNION and EXCEPT set operators are left-associative. If parentheses aren't specified to influence the order of precedence, a combination of these set operators is evaluated from left to right. For example, in the following query, the UNION of T1 and T2 is evaluated first, then the EXCEPT operation is performed on the UNION result: ``` select * from t1 union select * from t2 except select * from t3 order by c1; ``` The INTERSECT operator takes precedence over the UNION and EXCEPT operators when a combination of operators is used in the same query. For example, the following query evaluates the intersection of T2 and T3, then unions the result with T1: ``` select * from t1 union select * from t2 intersect select * from t3 order by c1; ``` By adding parentheses, you can enforce a different order of evaluation. In the following case, the result of the union of T1 and T2 is intersected with T3, and the query is likely to produce a different result. ``` (select * from t1 union select * from t2) intersect (select * from t3) order by c1; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNION.md
d8ff3f529f57-0
+ The column names returned in the result of a set operation query are the column names (or aliases) from the tables in the first query expression. Because these column names are potentially misleading, in that the values in the column derive from tables on either side of the set operator, you might want to provide meaningful aliases for the result set. + A query expression that precedes a set operator should not contain an ORDER BY clause. An ORDER BY clause produces meaningful sorted results only when it is used at the end of a query that contains set operators. In this case, the ORDER BY clause applies to the final results of all of the set operations. The outermost query can also contain standard LIMIT and OFFSET clauses. + The LIMIT and OFFSET clauses aren't supported as a means of restricting the number of rows returned by an intermediate result of a set operation. For example, the following query returns an error: ``` (select listid from listing limit 10) intersect select listid from sales; ERROR: LIMIT may not be used within input to set operations. ``` + When set operator queries return decimal results, the corresponding result columns are promoted to return the same precision and scale. For example, in the following query, where T1.REVENUE is a DECIMAL(10,2) column and T2.REVENUE is a DECIMAL(8,4) column, the decimal result is promoted to DECIMAL(12,4): ``` select t1.revenue union select t2.revenue; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNION.md
d8ff3f529f57-1
The scale is `4` because that is the maximum scale of the two columns. The precision is `12` because T1.REVENUE requires 8 digits to the left of the decimal point (12 - 4 = 8). This type promotion ensures that all values from both sides of the UNION fit in the result. For 64-bit values, the maximum result precision is 19 and the maximum result scale is 18. For 128-bit values, the maximum result precision is 38 and the maximum result scale is 37. If the resulting data type exceeds Amazon Redshift precision and scale limits, the query returns an error. + For set operations, two rows are treated as identical if, for each corresponding pair of columns, the two data values are either *equal* or *both NULL*. For example, if tables T1 and T2 both contain one column and one row, and that row is NULL in both tables, an INTERSECT operation over those tables returns that row.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNION.md
dc746f944cf3-0
Some applications require not only concurrent querying and loading, but also the ability to write to multiple tables or the same table concurrently. In this context, *concurrently* means overlapping, not scheduled to run at precisely the same time. Two transactions are considered to be concurrent if the second one starts before the first commits. Concurrent operations can originate from different sessions that are controlled either by the same user or by different users. **Note** Amazon Redshift supports a default *automatic commit* behavior in which each separately executed SQL command commits individually. If you enclose a set of commands in a transaction block (defined by [BEGIN](r_BEGIN.md) and [END](r_END.md) statements), the block commits as one transaction, so you can roll it back if necessary. Exceptions to this behavior are the TRUNCATE and VACUUM commands, which automatically commit all outstanding changes made in the current transaction. Some SQL clients issue BEGIN and COMMIT commands automatically, so the client controls whether a group of statements is run as a transaction or each individual statement is run as its own transaction. Check the documentation for the interface you are using. For example, when using the Amazon Redshift JDBC driver, a JDBC `PreparedStatement` with a query string that contains multiple (semicolon-separated) SQL commands runs all the statements as a single transaction. In contrast, if you use SQL Workbench/J and set AUTO COMMIT ON, then if you run multiple statements, each statement runs as its own transaction.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_serial_isolation.md
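As a small illustration of the note above (table names are hypothetical), the following sketch groups two statements so they commit or roll back together rather than individually:

```sql
begin;
-- Both statements below succeed or fail as a unit.
delete from sales_staging where saletime < '2008-01-01';
insert into sales select * from sales_staging;
end;  -- the block commits here as one transaction
```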
dc746f944cf3-1
Concurrent write operations are supported in Amazon Redshift in a protective way, using write locks on tables and the principle of *serializable isolation*. Serializable isolation preserves the illusion that a transaction running against a table is the only transaction that is running against that table. For example, two concurrently running transactions, T1 and T2, must produce the same results as at least one of the following: + T1 and T2 run serially in that order. + T2 and T1 run serially in that order. Concurrent transactions are invisible to each other; they cannot detect each other's changes. Each concurrent transaction will create a snapshot of the database at the beginning of the transaction. A database snapshot is created within a transaction on the first occurrence of most SELECT statements, DML commands such as COPY, DELETE, INSERT, UPDATE, and TRUNCATE, and the following DDL commands: + ALTER TABLE (to add or drop columns) + CREATE TABLE + DROP TABLE + TRUNCATE TABLE If *any* serial execution of the concurrent transactions would produce the same results as their concurrent execution, those transactions are deemed "serializable" and can be run safely. If no serial execution of those transactions would produce the same results, the transaction that executes a statement that would break serializability is aborted and rolled back. System catalog tables (PG) and other Amazon Redshift system tables (STL and STV) are not locked in a transaction; therefore, changes to database objects that arise from DDL and TRUNCATE operations are visible on commit to any concurrent transactions.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_serial_isolation.md
dc746f944cf3-2
For example, suppose that table A exists in the database when two concurrent transactions, T1 and T2, start. If T2 returns a list of tables by selecting from the PG_TABLES catalog table, and then T1 drops table A and commits, and then T2 lists the tables again, table A is no longer listed. If T2 tries to query the dropped table, Amazon Redshift returns a "relation does not exist" error. The catalog query that returns the list of tables to T2 or checks that table A exists is not subject to the same isolation rules as operations against user tables. Transactions for updates to these tables run in a *read committed* isolation mode. PG-prefix catalog tables do not support snapshot isolation.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_serial_isolation.md
69c833aae7b9-0
A database snapshot is also created in a transaction for any SELECT query that references a user-created table or Amazon Redshift system table (STL or STV). SELECT queries that do not reference any table will not create a new transaction database snapshot, nor will any INSERT, DELETE, or UPDATE statements that operate solely on system catalog tables (PG).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_serial_isolation.md
b3ebf803f304-0
When Amazon Redshift detects a serializable isolation error, you see an error message such as the following. ``` ERROR:1023 DETAIL: Serializable isolation violation on table in Redshift ``` To address a serializable isolation error, you can try the following methods: + Move any operations that don't have to be in the same atomic transaction outside of the transaction. This method applies when individual operations inside two transactions cross-reference each other in a way that can affect the outcome of the other transaction. For example, the following two sessions each start a transaction. ``` Session1_Redshift=# begin; ``` ``` Session2_Redshift=# begin; ``` The result of a SELECT statement in each transaction might be affected by an INSERT statement in the other. In other words, suppose that you run the following statements serially, in any order. In every case, the result is one of the SELECT statements returning one more row than if the transactions were run concurrently. There is no order in which the operations can run serially that produces the same result as when run concurrently. Thus, the last operation that is run results in a serializable isolation error. ``` Session1_Redshift=# select * from tab1; Session1_Redshift=# insert into tab2 values (1); ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_serial_isolation.md
b3ebf803f304-1
``` Session2_Redshift=# insert into tab1 values (1); Session2_Redshift=# select * from tab2; ``` In many cases, the result of the SELECT statements isn't important. In other words, the atomicity of the operations in the transactions isn't important. In these cases, move the SELECT statements outside of their transactions, as shown in the following examples. ``` Session1_Redshift=# begin; Session1_Redshift=# insert into tab1 values (1); Session1_Redshift=# end; Session1_Redshift=# select * from tab2; ``` ``` Session2_Redshift=# select * from tab1; Session2_Redshift=# begin; Session2_Redshift=# insert into tab2 values (1); Session2_Redshift=# end; ``` In these examples, there are no cross-references in the transactions. The two INSERT statements don't affect each other. In these examples, there is at least one order in which the transactions can run serially and produce the same result as if run concurrently. This means that the transactions are serializable.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_serial_isolation.md
b3ebf803f304-2
+ Force serialization by locking all tables in each session. The [LOCK](r_LOCK.md) command blocks operations that can result in serializable isolation errors. When you use the LOCK command, be sure to do the following: + Lock all tables affected by the transaction, including those affected by read-only SELECT statements inside the transaction. + Lock tables in the same order, regardless of the order that operations are performed in. + Lock all tables at the beginning of the transaction, before performing any operations. + Retry the aborted transaction. A transaction can encounter a serializable isolation error if it conflicts with operations performed by another concurrent transaction. If the conflicting transactions don't need to run at the same time, simply retrying the aborted transaction might succeed. If the issue persists, try one of the other methods.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_serial_isolation.md
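Following those guidelines, a sketch of the locking approach for the earlier two-session example (reusing the hypothetical tab1 and tab2 tables) might look like this:

```sql
begin;
-- Lock every table the transaction touches, in the same order in every session,
-- before doing any other work.
lock tab1, tab2;
insert into tab2 values (1);
select * from tab1;
end;
```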
124e42211f17-0
The RANDOM function generates a random value between 0.0 (inclusive) and 1.0 (exclusive).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RANDOM.md
155933d85ec9-0
``` RANDOM() ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RANDOM.md
54f4d4a2a0e9-0
RANDOM returns a DOUBLE PRECISION number.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RANDOM.md
755f39e299a6-0
Call RANDOM after setting a seed value with the [SET](r_SET.md) command to cause RANDOM to generate numbers in a predictable sequence.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RANDOM.md
c59579eb32ce-0
1. Compute a random value between 0 and 99. Because RANDOM returns a value from 0 to 1, multiplying by 100 and casting to an integer produces a number from 0 to 100: ``` select cast (random() * 100 as int); int4 ------ 24 (1 row) ``` 2. Retrieve a uniform random sample of 10 items: ``` select * from sales order by random() limit 10; ``` Now retrieve a random sample of 10 items, but choose the items in proportion to their prices. For example, an item that is twice the price of another would be twice as likely to appear in the query results: ``` select * from sales order by log(1 - random()) / pricepaid limit 10; ``` 3. This example uses the [SET](r_SET.md) command to set a SEED value so that RANDOM generates a predictable sequence of numbers. First, return three RANDOM integers without setting the SEED value first: ``` select cast (random() * 100 as int); int4 ------ 6 (1 row) select cast (random() * 100 as int); int4 ------ 68 (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RANDOM.md
c59579eb32ce-1
``` select cast (random() * 100 as int); int4 ------ 56 (1 row) ``` Now, set the SEED value to `.25`, and return three more RANDOM numbers: ``` set seed to .25; select cast (random() * 100 as int); int4 ------ 21 (1 row) select cast (random() * 100 as int); int4 ------ 79 (1 row) select cast (random() * 100 as int); int4 ------ 12 (1 row) ``` Finally, reset the SEED value to `.25`, and verify that RANDOM returns the same results as the previous three calls: ``` set seed to .25; select cast (random() * 100 as int); int4 ------ 21 (1 row) select cast (random() * 100 as int); int4 ------ 79 (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RANDOM.md
c59579eb32ce-2
``` select cast (random() * 100 as int); int4 ------ 12 (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RANDOM.md
efec26d0aad2-0
Use the SVCS_S3LIST view to get details about Amazon Redshift Spectrum queries at the segment level. One segment can perform one external table scan. This view is derived from the SVL_S3LIST system view but doesn't show slice-level details for queries run on a concurrency scaling cluster. **Note** System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters. The views are similar to the views with the prefix SVL except that the SVL views provide information only for queries run on the main cluster. SVCS_S3LIST is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md). For information about SVL_S3LIST, see [SVL_S3LIST](r_SVL_S3LIST.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3LIST.md
fb64ef80241f-0
[See the AWS documentation website for more details.](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_S3LIST.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3LIST.md
6a66737184b7-0
The following example queries SVCS_S3LIST for the last query executed. ``` select * from svcs_s3list where query = pg_last_query_id() order by query,segment; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3LIST.md
ff17757d4dde-0
The TO_HEX function converts a number to its equivalent hexadecimal value.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_HEX.md
00932a71f4c6-0
``` TO_HEX(number) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_HEX.md
3de8227a558e-0
*number* A number to convert to its hexadecimal value.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_HEX.md
058bdb85449f-0
The TO_HEX function returns a hexadecimal value.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_HEX.md
14928e7072e9-0
The following example shows the conversion of a number to its hexadecimal value: ``` select to_hex(2147676847); to_hex ---------- 8002f2af (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_HEX.md
b9f43ea858c7-0
Unloads the result of a query to one or more text or Apache Parquet files on Amazon S3, using Amazon S3 server-side encryption (SSE-S3). You can also specify server-side encryption with an AWS Key Management Service key (SSE-KMS) or client-side encryption with a customer-managed key (CSE-CMK). You can manage the size of files on Amazon S3, and by extension the number of files, by setting the MAXFILESIZE parameter. You can unload the result of an Amazon Redshift query to your Amazon S3 data lake in Apache Parquet, an efficient open columnar storage format for analytics. Parquet format is up to 2x faster to unload and consumes up to 6x less storage in Amazon S3, compared with text formats. This enables you to save data transformation and enrichment you have done in Amazon Redshift into your Amazon S3 data lake in an open format. You can then analyze your data with Redshift Spectrum and other AWS services such as Amazon Athena, Amazon EMR, and Amazon SageMaker.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
89cb545bbcc6-0
``` UNLOAD ('select-statement') TO 's3://object-path/name-prefix' authorization [ option [ ... ] ] where option is { [ FORMAT [ AS ] ] CSV | PARQUET | PARTITION BY ( column_name [, ... ] ) [ INCLUDE ] | MANIFEST [ VERBOSE ] | HEADER | DELIMITER [ AS ] 'delimiter-char' | FIXEDWIDTH [ AS ] 'fixedwidth-spec' | ENCRYPTED [ AUTO ] | BZIP2 | GZIP | ZSTD | ADDQUOTES | NULL [ AS ] 'null-string' | ESCAPE | ALLOWOVERWRITE | PARALLEL [ { ON | TRUE } | { OFF | FALSE } ] | MAXFILESIZE [AS] max-size [ MB | GB ] | REGION [AS] 'aws-region' } ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-0
('*select-statement*') A SELECT query. The results of the query are unloaded. In most cases, it is worthwhile to unload data in sorted order by specifying an ORDER BY clause in the query. This approach saves the time required to sort the data when it is reloaded. The query must be enclosed in single quotation marks as shown following: ``` ('select * from venue order by venueid') ``` If your query contains quotation marks (for example to enclose literal values), put the literal between two sets of single quotation marks; you must also enclose the query between single quotation marks: ``` ('select * from venue where venuestate=''NV''') ``` TO 's3://*object-path/name-prefix*' The full path, including bucket name, to the location on Amazon S3 where Amazon Redshift writes the output file objects, including the manifest file if MANIFEST is specified. The object names are prefixed with *name-prefix*. If you use `PARTITION BY`, a forward slash (/) is automatically added to the end of the *name-prefix* value if needed. For added security, UNLOAD connects to Amazon S3 using an HTTPS connection. By default, UNLOAD writes one or more files per slice. UNLOAD appends a slice number and part number to the specified name prefix as follows:
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-1
`<object-path>/<name-prefix><slice-number>_part_<part-number>`. If MANIFEST is specified, the manifest file is written as follows: `<object_path>/<name_prefix>manifest`. UNLOAD automatically creates encrypted files using Amazon S3 server-side encryption (SSE), including the manifest file if MANIFEST is used. The COPY command automatically reads server-side encrypted files during the load operation. You can transparently download server-side encrypted files from your bucket using either the Amazon S3 Management Console or API. For more information, go to [Protecting Data Using Server-Side Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html). To use Amazon S3 client-side encryption, specify the ENCRYPTED option. REGION is required when the Amazon S3 bucket isn't in the same AWS Region as the Amazon Redshift cluster. authorization The UNLOAD command needs authorization to write data to Amazon S3. The UNLOAD command uses the same parameters the COPY command uses for authorization. For more information, see [Authorization parameters](copy-parameters-authorization.md) in the COPY command syntax reference.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-2
[ FORMAT [AS] ] CSV | PARQUET <a name="unload-csv"></a> When CSV, unloads to a text file in CSV format using a comma ( , ) character as the default delimiter. If a field contains delimiters, double quotation marks, newline characters, or carriage returns, then the field in the unloaded file is enclosed in double quotation marks. A double quotation mark within a data field is escaped by an additional double quotation mark. When PARQUET, unloads to a file in Apache Parquet version 1.0 format. By default, each row group is compressed using SNAPPY compression. For more information about Apache Parquet format, see [Parquet](https://parquet.apache.org/). The FORMAT and AS keywords are optional. You can't use CSV with FIXEDWIDTH. You can't use PARQUET with DELIMITER, FIXEDWIDTH, ADDQUOTES, ESCAPE, NULL AS, HEADER, GZIP, BZIP2, or ZSTD. PARQUET with ENCRYPTED is only supported with server-side encryption with an AWS Key Management Service key (SSE-KMS). PARTITION BY ( *column_name* [, ... ] ) [INCLUDE] <a name="unload-partitionby"></a>
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-3
Specifies the partition keys for the unload operation. UNLOAD automatically partitions output files into partition folders based on the partition key values, following the Apache Hive convention. For example, a Parquet file that belongs to the partition year 2019 and the month September has the following prefix: `s3://my_bucket_name/my_prefix/year=2019/month=September/000.parquet`. The value for *column_name* must be a column in the query results being unloaded. If you specify PARTITION BY with the INCLUDE option, partition columns aren't removed from the unloaded files. MANIFEST [ VERBOSE ] Creates a manifest file that explicitly lists details for the data files that are created by the UNLOAD process. The manifest is a text file in JSON format that lists the URL of each file that was written to Amazon S3. If MANIFEST is specified with the VERBOSE option, the manifest includes the following details: + The column names and data types, and for CHAR, VARCHAR, or NUMERIC data types, dimensions for each column. For CHAR and VARCHAR data types, the dimension is the length. For a DECIMAL or NUMERIC data type, the dimensions are precision and scale. + The row count unloaded to each file. If the HEADER option is specified, the row count includes the header line. + The total file size of all files unloaded and the total row count unloaded to all files. If the HEADER option is specified, the row count includes the header lines. + The author. Author is always "Amazon Redshift".
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-4
You can specify VERBOSE only following MANIFEST. The manifest file is written to the same Amazon S3 path prefix as the unload files in the format `<object_path_prefix>manifest`. For example, if UNLOAD specifies the Amazon S3 path prefix '`s3://mybucket/venue_`', the manifest file location is '`s3://mybucket/venue_manifest`'. HEADER Adds a header line containing column names at the top of each output file. Text transformation options, such as CSV, DELIMITER, ADDQUOTES, and ESCAPE, also apply to the header line. You can't use HEADER with FIXEDWIDTH. DELIMITER AS '*delimiter_character*' Specifies a single ASCII character that is used to separate fields in the output file, such as a pipe character ( | ), a comma ( , ), or a tab ( \t ). The default delimiter for text files is a pipe character. The default delimiter for CSV files is a comma character. The AS keyword is optional. You can't use DELIMITER with FIXEDWIDTH. If the data contains the delimiter character, you need to specify the ESCAPE option to escape the delimiter, or use ADDQUOTES to enclose the data in double quotation marks. Alternatively, specify a delimiter that isn't contained in the data.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-5
FIXEDWIDTH '*fixedwidth_spec*' Unloads the data to a file where each column width is a fixed length, rather than separated by a delimiter. The *fixedwidth_spec* is a string that specifies the number of columns and the width of the columns. The AS keyword is optional. Because FIXEDWIDTH doesn't truncate data, the specification for each column in the UNLOAD statement needs to be at least as long as the length of the longest entry for that column. The format for *fixedwidth_spec* is shown below: ``` 'colID1:colWidth1,colID2:colWidth2, ...' ``` You can't use FIXEDWIDTH with DELIMITER or HEADER. ENCRYPTED [AUTO] <a name="unload-parameters-encrypted"></a> Specifies that the output files on Amazon S3 are encrypted using Amazon S3 server-side encryption or client-side encryption. If MANIFEST is specified, the manifest file is also encrypted. For more information, see [Unloading encrypted data files](t_unloading_encrypted_files.md). If you don't specify the ENCRYPTED parameter, UNLOAD automatically creates encrypted files using Amazon S3 server-side encryption with AWS-managed encryption keys (SSE-S3).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-6
For ENCRYPTED, you might want to unload to Amazon S3 using server-side encryption with an AWS KMS key (SSE-KMS). If so, use the [KMS_KEY_ID](#unload-parameters-kms-key-id) parameter to provide the key ID. You can't use the [CREDENTIALS](copy-parameters-authorization.md#copy-credentials) parameter with the KMS_KEY_ID parameter. If you run an UNLOAD command for data using KMS_KEY_ID, you can then do a COPY operation for the same data without specifying a key. To unload to Amazon S3 using client-side encryption with a customer-supplied symmetric key (CSE-CMK), provide the key in one of two ways. To provide the key, use the [MASTER_SYMMETRIC_KEY](#unload-parameters-master-symmetric-key) parameter or the `master_symmetric_key` portion of a [CREDENTIALS](copy-parameters-authorization.md#copy-credentials) credential string. If you unload data using a master symmetric key, make sure that you supply the same key when you perform a COPY operation for the encrypted data.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-7
UNLOAD doesn't support Amazon S3 server-side encryption with a customer-supplied key (SSE-C). If ENCRYPTED AUTO is used, the UNLOAD command fetches the default KMS encryption key of the target Amazon S3 bucket and encrypts the files written to Amazon S3 with the KMS key. If the bucket doesn't have a default KMS encryption key, UNLOAD automatically creates encrypted files using Amazon S3 server-side encryption with AWS-managed encryption keys (SSE-S3). You can't use this option with KMS_KEY_ID, MASTER_SYMMETRIC_KEY, or CREDENTIALS that contains master_symmetric_key.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-8
KMS_KEY_ID '*key-id*' <a name="unload-parameters-kms-key-id"></a> Specifies the key ID for an AWS Key Management Service (AWS KMS) key to be used to encrypt data files on Amazon S3. For more information, see [What is AWS Key Management Service?](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) If you specify KMS_KEY_ID, you must specify the [ENCRYPTED](#unload-parameters-encrypted) parameter also. If you specify KMS_KEY_ID, you can't authenticate using the CREDENTIALS parameter. Instead, use either [IAM_ROLE](copy-parameters-authorization.md#copy-iam-role) or [ACCESS_KEY_ID and SECRET_ACCESS_KEY](copy-parameters-authorization.md#copy-access-key-id). MASTER_SYMMETRIC_KEY '*master_key*' <a name="unload-parameters-master-symmetric-key"></a> Specifies the master symmetric key to be used to encrypt data files on Amazon S3. If you specify MASTER_SYMMETRIC_KEY, you must specify the [ENCRYPTED](#unload-parameters-encrypted) parameter also. You can't use MASTER_SYMMETRIC_KEY with the CREDENTIALS parameter. For more information, see [Loading encrypted data files from Amazon S3](c_loading-encrypted-files.md).
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-9
BZIP2 Unloads data to one or more bzip2-compressed files per slice. Each resulting file is appended with a `.bz2` extension. GZIP Unloads data to one or more gzip-compressed files per slice. Each resulting file is appended with a `.gz` extension. ZSTD Unloads data to one or more Zstandard-compressed files per slice. Each resulting file is appended with a `.zst` extension. ADDQUOTES Places quotation marks around each unloaded data field, so that Amazon Redshift can unload data values that contain the delimiter itself. For example, if the delimiter is a comma, you could unload and reload the following data successfully: ``` "1","Hello, World" ``` Without the added quotation marks, the string `Hello, World` would be parsed as two separate fields. If you use ADDQUOTES, you must specify REMOVEQUOTES in the COPY if you reload the data. NULL AS '*null-string*' Specifies a string that represents a null value in unload files. If this option is used, all output files contain the specified string in place of any null values found in the selected data. If this option isn't specified, null values are unloaded as: + Zero-length strings for delimited output + Whitespace strings for fixed-width output
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-10
If a null string is specified for a fixed-width unload and the width of an output column is less than the width of the null string, the following behavior occurs: + An empty field is output for non-character columns + An error is reported for character columns ESCAPE For CHAR and VARCHAR columns in delimited unload files, an escape character (`\`) is placed before every occurrence of the following characters: + Linefeed: `\n` + Carriage return: `\r` + The delimiter character specified for the unloaded data. + The escape character: `\` + A quotation mark character: `"` or `'` (if both ESCAPE and ADDQUOTES are specified in the UNLOAD command). If you loaded your data using a COPY with the ESCAPE option, you must also specify the ESCAPE option with your UNLOAD command to generate the reciprocal output file. Similarly, if you UNLOAD using the ESCAPE option, you need to use ESCAPE when you COPY the same data. ALLOWOVERWRITE <a name="allowoverwrite"></a> By default, UNLOAD fails if it finds files that it would possibly overwrite. If ALLOWOVERWRITE is specified, UNLOAD overwrites existing files, including the manifest file.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-11
PARALLEL <a name="unload-parallel"></a> By default, UNLOAD writes data in parallel to multiple files, according to the number of slices in the cluster. The default option is ON or TRUE. If PARALLEL is OFF or FALSE, UNLOAD writes to one or more data files serially, sorted absolutely according to the ORDER BY clause, if one is used. The maximum size for a data file is 6.2 GB. So, for example, if you unload 13.4 GB of data, UNLOAD creates the following three files. ``` s3://mybucket/key000 6.2 GB s3://mybucket/key001 6.2 GB s3://mybucket/key002 1.0 GB ``` The UNLOAD command is designed to use parallel processing. We recommend leaving PARALLEL enabled for most cases, especially if the files are used to load tables using a COPY command. MAXFILESIZE [AS] max-size [ MB | GB ] <a name="unload-maxfilesize"></a> Specifies the maximum size of files that UNLOAD creates in Amazon S3. Specify a decimal value between 5 MB and 6.2 GB. The AS keyword is optional. The default unit is MB. If MAXFILESIZE isn't specified, the default maximum file size is 6.2 GB. The size of the manifest file, if one is used, isn't affected by MAXFILESIZE.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
a2e7a2e9d085-12
REGION [AS] '*aws-region*' <a name="unload-region"></a> Specifies the AWS Region where the target Amazon S3 bucket is located. REGION is required for UNLOAD to an Amazon S3 bucket that isn't in the same AWS Region as the Amazon Redshift cluster. The value for *aws_region* must match an AWS Region listed in the [Amazon Redshift regions and endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#redshift_region) table in the *AWS General Reference*. By default, UNLOAD assumes that the target Amazon S3 bucket is located in the same AWS Region as the Amazon Redshift cluster.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
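Combining several of the parameters above, a hedged end-to-end sketch (the bucket, prefix, and IAM role ARN are placeholders) that unloads partitioned Parquet with a verbose manifest:

```sql
unload ('select * from venue order by venueid')
to 's3://mybucket/unload/venue_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
format as parquet
partition by (venuestate) include  -- INCLUDE keeps venuestate in the files
maxfilesize 256 mb                 -- rounded to 32-MB row-group multiples
manifest verbose
allowoverwrite;
```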
5bc6ea7f348b-0
When you UNLOAD using a delimiter, your data can include that delimiter or any of the characters listed in the ESCAPE option description. In this case, you must use the ESCAPE option with the UNLOAD statement. If you don't use the ESCAPE option with the UNLOAD, subsequent COPY operations using the unloaded data might fail. **Important** We strongly recommend that you always use ESCAPE with both UNLOAD and COPY statements. The exception is if you are certain that your data doesn't contain any delimiters or other characters that might need to be escaped.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
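A sketch of the recommended round trip, using the ESCAPE option on both sides (the S3 path and role ARN are placeholders):

```sql
-- Unload with ESCAPE so delimiters inside the data are escaped.
unload ('select * from venue')
to 's3://mybucket/venue_pipe_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
escape;

-- Reload the same files with the reciprocal ESCAPE option.
copy venue
from 's3://mybucket/venue_pipe_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
escape;
```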
6afeb296d311-0
You might encounter loss of precision for floating-point data that is successively unloaded and reloaded.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
accb3e7a1342-0
The SELECT query can't use a LIMIT clause in the outer SELECT. For example, the following UNLOAD statement fails. ``` unload ('select * from venue limit 10') to 's3://mybucket/venue_pipe_' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'; ``` Instead, use a nested LIMIT clause, as in the following example. ``` unload ('select * from venue where venueid in (select venueid from venue order by venueid desc limit 10)') to 's3://mybucket/venue_pipe_' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'; ``` You can also populate a table using SELECT…INTO or CREATE TABLE AS using a LIMIT clause, then unload from that table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
140be0513a9b-0
You can only unload GEOMETRY columns to text or CSV format. You can't unload GEOMETRY data with the `FIXEDWIDTH` option. The data is unloaded in the hexadecimal form of the extended well-known binary (EWKB) format. If the size of the EWKB data is more than 4 MB, then a warning occurs because the data can't later be loaded into a table.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
5af569615004-0
Be aware of these considerations when using FORMAT AS PARQUET: + Unload to Parquet doesn't use file-level compression. Each row group is compressed with SNAPPY. + If MAXFILESIZE isn't specified, the default maximum file size is 6.2 GB. You can use MAXFILESIZE to specify a file size of 5 MB–6.2 GB. The actual file size is approximated when the file is being written, so it might not be exactly equal to the number you specify. To maximize scan performance, Amazon Redshift tries to create Parquet files that contain equally sized 32-MB row groups. The MAXFILESIZE value that you specify is automatically rounded down to the nearest multiple of 32 MB. For example, if you specify MAXFILESIZE 200 MB, then each Parquet file unloaded is approximately 192 MB (32 MB row group x 6 = 192 MB). + If a column uses TIMESTAMPTZ data format, only the timestamp values are unloaded. The time zone information isn't unloaded. + Don't specify file name prefixes that begin with underscore (_) or period (.) characters. Redshift Spectrum treats files that begin with these characters as hidden files and ignores them.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
d68fe2db8d38-0
Be aware of these considerations when using PARTITION BY: + Partition columns aren't included in the output file. + Make sure to include partition columns in the SELECT query used in the UNLOAD statement. You can specify any number of partition columns in the UNLOAD command. However, at least one nonpartition column must be part of the unloaded file. + If the partition key value is null, Amazon Redshift automatically unloads that data into a default partition called `partition_column=__HIVE_DEFAULT_PARTITION__`. + The UNLOAD command doesn't make any calls to an external catalog. To register your new partitions to be part of your existing external table, use a separate ALTER TABLE ... ADD PARTITION ... command, as in the sketch following this list. Or you can run a CREATE EXTERNAL TABLE command to register the unloaded data as a new external table. You can also use an AWS Glue crawler to populate your Data Catalog. For more information, see [Defining Crawlers](https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html) in the *AWS Glue Developer Guide*. + If you use the MANIFEST option, Amazon Redshift generates only one manifest file in the root Amazon S3 folder. + The column data types that you can use as the partition key are SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, BOOLEAN, CHAR, VARCHAR, DATE, and TIMESTAMP.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD.md
44e74c8cd8cc-0
After you upload your files to your Amazon S3 bucket, we recommend listing the contents of the bucket to verify that all of the correct files are present and that no unwanted files are present\. For example, if the bucket `mybucket` holds a file named `venue.txt.back`, that file will be loaded, perhaps unintentionally, by the following command:

```
copy venue from 's3://mybucket/venue' … ;
```

If you want to control specifically which files are loaded, you can use a manifest file to explicitly list the data files\. For more information about using a manifest file, see the [copy_from_s3_manifest_file](copy-parameters-data-source-s3.md#copy-manifest-file) option for the COPY command and [Using a manifest to specify data files](r_COPY_command_examples.md#copy-command-examples-manifest) in the COPY examples\.

For more information about listing the contents of the bucket, see [Listing Object Keys](https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysUsingAPIs.html) in the *Amazon S3 Developer Guide*\.
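As a minimal sketch, a manifest that pins the load to exactly two hypothetical data files might look like the following\.

```
{
  "entries": [
    {"url": "s3://mybucket/venue.txt", "mandatory": true},
    {"url": "s3://mybucket/venue2.txt", "mandatory": true}
  ]
}
```

Setting `mandatory` to true causes the COPY to fail if a listed file isn't found\. A COPY command that references the manifest might then look like this\.

```
copy venue
from 's3://mybucket/venue.manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest;
```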
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/verifying-that-correct-files-are-present.md
8ea5e54d260c-0
This topic provides a reference for numeric format strings\. The following format strings apply to functions such as TO\_NUMBER and TO\_CHAR: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_Numeric_formating.html)
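For example, the following minimal sketch uses TO\_CHAR with an illustrative format string\.

```
-- '9' marks a digit position; the result is a string
-- such as ' 125.80' (a leading space is reserved for the sign).
select to_char(125.8, '999.99');
```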
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Numeric_formating.md
75f18f01d167-0
**false**, true
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_enable_vacuum_boost.md
06e5394ebf94-0
Specifies whether to enable the vacuum boost option for all VACUUM commands run in a session\. If `enable_vacuum_boost` is `true`, Amazon Redshift runs all VACUUM commands in the session with the BOOST option\. If `enable_vacuum_boost` is `false`, Amazon Redshift doesn't run with the BOOST option by default\. For more information about the BOOST option, see [VACUUM](r_VACUUM_command.md)\.
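For example, the following minimal sketch enables the boost option for the session and then vacuums a hypothetical `sales` table\.

```
set enable_vacuum_boost to true;

-- This VACUUM now runs with the BOOST option.
vacuum sales;
```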
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_enable_vacuum_boost.md
b0726464233f-0
COT is a trigonometric function that returns the cotangent of a number\. The input parameter must be nonzero\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COT.md
772c7bb16279-0
``` COT(number) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COT.md
05e43203b77b-0
*number* The input parameter is a double precision number\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COT.md
69e6ef9576e4-0
The COT function returns a double precision number\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COT.md
f4706e2da5bb-0
The following example returns the cotangent of 1:

```
select cot(1);
cot
-------------------
0.642092615934331
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COT.md
22b40964f70d-0
Use the STV\_INFLIGHT table to determine what queries are currently running on the cluster\. STV\_INFLIGHT does not show leader node–only queries\. For more information, see [Leader node–only functions](c_SQL_functions_leader_node_only.md)\. STV\_INFLIGHT is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_INFLIGHT.md
dcf4acf48d12-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_INFLIGHT.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_INFLIGHT.md
fbbced5df9f4-0
To view all active queries currently running on the database, type the following query:

```
select * from stv_inflight;
```

The sample output below shows two queries currently running, including the STV\_INFLIGHT query itself and a query that was run from a script called `avgwait.sql`:

```
select slice, query, trim(label) querylabel, pid,
starttime, substring(text,1,20) querytext
from stv_inflight;

slice|query|querylabel | pid |         starttime        |     querytext
-----+-----+-----------+-----+--------------------------+--------------------
1011 | 21  |           | 646 |2012-01-26 13:23:15.645503|select slice, query,
1011 | 20  |avgwait.sql| 499 |2012-01-26 13:23:14.159912|select avg(datediff(
(2 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_INFLIGHT.md
5483bf21dfc5-0
Analyzes the execution steps that occur when a LIMIT clause is used in a SELECT query\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LIMIT.md
f935f22f1c3e-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_LIMIT.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LIMIT.md
ded163185748-0
In order to generate a row in STL\_LIMIT, this example first runs the following query against the VENUE table using the LIMIT clause\.

```
select * from venue order by 1 limit 10;
```

```
 venueid |         venuename          |    venuecity    | venuestate | venueseats
---------+----------------------------+-----------------+------------+------------
       1 | Toyota Park                | Bridgeview      | IL         |          0
       2 | Columbus Crew Stadium      | Columbus        | OH         |          0
       3 | RFK Stadium                | Washington      | DC         |          0
       4 | CommunityAmerica Ballpark  | Kansas City     | KS         |          0
       5 | Gillette Stadium           | Foxborough      | MA         |      68756
       6 | New York Giants Stadium    | East Rutherford | NJ         |      80242
       7 | BMO Field                  | Toronto         | ON         |          0
       8 | The Home Depot Center      | Carson          | CA         |          0
       9 | Dick's Sporting Goods Park | Commerce City   | CO         |          0
      10 | Pizza Hut Park             | Frisco          | TX         |          0
(10 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LIMIT.md
ded163185748-1
Next, run the following query to find the query ID of the last query you ran against the VENUE table\.

```
select max(query) from stl_query;
```

```
max
--------
127128
(1 row)
```

Optionally, you can run the following query to verify that the query ID corresponds to the LIMIT query you previously ran\.

```
select query, trim(querytxt) from stl_query where query=127128;
```

```
 query  |                   btrim
--------+------------------------------------------
 127128 | select * from venue order by 1 limit 10;
(1 row)
```

Finally, run the following query to return information about the LIMIT query from the STL\_LIMIT table\.

```
select slice, segment, step, starttime, endtime, tasknum
from stl_limit
where query=127128
order by starttime, endtime;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LIMIT.md
ded163185748-2
```
 slice | segment | step |         starttime          |          endtime           | tasknum
-------+---------+------+----------------------------+----------------------------+---------
     1 |       1 |    3 | 2013-09-06 22:56:43.608114 | 2013-09-06 22:56:43.609383 |      15
     0 |       1 |    3 | 2013-09-06 22:56:43.608708 | 2013-09-06 22:56:43.609521 |      15
 10000 |       2 |    2 | 2013-09-06 22:56:43.612506 | 2013-09-06 22:56:43.612668 |       0
(3 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_LIMIT.md
44b94d8cb55a-0
Returns the unique identifier for the Amazon Redshift user logged in to the current session\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_USER_ID.md
f5836c7f128f-0
``` CURRENT_USER_ID ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_USER_ID.md
0ccd7d51995d-0
The CURRENT\_USER\_ID function returns an integer\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_USER_ID.md
8311a59e1805-0
The following example returns the user name and current user ID for this session:

```
select user, current_user_id;

current_user | current_user_id
--------------+-----------------
dwuser        |               1
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_USER_ID.md
92e85c2b05bc-0
Use the STL\_PLAN\_INFO view to look at the EXPLAIN output for a query in terms of a set of rows\. This is an alternative way to look at query plans\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
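For example, the following minimal sketch returns the plan rows for a single query; the query ID `10` is a placeholder for one of your own query IDs\.

```
-- Replace 10 with the query ID of interest (for example, from STL_QUERY).
select * from stl_plan_info where query = 10;
```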
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_PLAN_INFO.md