"three": "Musicals", "four": "All symphony, concerto, and choir concerts" } ``` The following JSONPaths file, named `category_jsonpath.json`, maps the source data to the table columns\. ``` { "jsonpaths": [ "$['one']", "$['two']", "$['three']", "$['four']" ] } ``` To load from the JSON data file in the previous example, execute the following COPY command\. ``` copy category from 's3://mybucket/category_object_paths.json' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' json 's3://mybucket/category_jsonpath.json'; ```
To load from JSON data that consists of a set of arrays, you must use a JSONPaths file to map the array elements to columns. Suppose that you have the following data file, named `category_array_data.json`.

```
[1,"Sports","MLB","Major League Baseball"]
[2,"Sports","NHL","National Hockey League"]
[3,"Sports","NFL","National Football League"]
[4,"Sports","NBA","National Basketball Association"]
[5,"Concerts","Classical","All symphony, concerto, and choir concerts"]
```

The following JSONPaths file, named `category_array_jsonpath.json`, maps the source data to the table columns.

```
{
    "jsonpaths": [
        "$[0]",
        "$[1]",
        "$[2]",
        "$[3]"
    ]
}
```

To load from the JSON data file in the previous example, execute the following COPY command.

```
copy category
from 's3://mybucket/category_array_data.json'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
json 's3://mybucket/category_array_jsonpath.json';
```
In the following examples, you load the CATEGORY table with the following data. [See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html)

**Topics**
+ [Load from Avro data using the 'auto' option](#copy-from-avro-examples-using-auto)
+ [Load from Avro data using a JSONPaths file](#copy-from-avro-examples-using-avropaths)
To load from Avro data using the `'auto'` argument, field names in the Avro schema must match the column names. However, when using the `'auto'` argument, order doesn't matter. The following shows the schema for a file named `category_auto.avro`.

```
{
    "name": "category",
    "type": "record",
    "fields": [
        {"name": "catid", "type": "int"},
        {"name": "catdesc", "type": "string"},
        {"name": "catname", "type": "string"},
        {"name": "catgroup", "type": "string"}
    ]
}
```

The data in an Avro file is in binary format, so it isn't human-readable. The following shows a JSON representation of the data in the `category_auto.avro` file.

```
{
    "catid": 1,
    "catdesc": "Major League Baseball",
    "catname": "MLB",
    "catgroup": "Sports"
}
{
    "catid": 2,
    "catdesc": "National Hockey League",
    "catname": "NHL",
    "catgroup": "Sports"
}
"catdesc": "National Hockey League", "catname": "NHL", "catgroup": "Sports" } { "catid": 3, "catdesc": "National Basketball Association", "catname": "NBA", "catgroup": "Sports" } { "catid": 4, "catdesc": "All symphony, concerto, and choir concerts", "catname": "Classical", "catgroup": "Concerts" } ``` To load from the Avro data file in the previous example, execute the following COPY command\. ``` copy category from 's3://mybucket/category_auto.avro' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' format as avro 'auto'; ```
If the field names in the Avro schema don't correspond directly to column names, you can use a JSONPaths file to map the schema elements to columns. The order of the JSONPaths file expressions must match the column order. Suppose that you have a data file named `category_paths.avro` that contains the same data as in the previous example, but with the following schema.

```
{
    "name": "category",
    "type": "record",
    "fields": [
        {"name": "id", "type": "int"},
        {"name": "desc", "type": "string"},
        {"name": "name", "type": "string"},
        {"name": "group", "type": "string"},
        {"name": "region", "type": "string"}
    ]
}
```

The following JSONPaths file, named `category_path.avropath`, maps the source data to the table columns.

```
{
    "jsonpaths": [
        "$['id']",
        "$['group']",
        "$['name']",
        "$['desc']"
    ]
}
```
"$['desc']" ] } ``` To load from the Avro data file in the previous example, execute the following COPY command\. ``` copy category from 's3://mybucket/category_object_paths.avro' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' format avro 's3://mybucket/category_path.avropath '; ```
The following example describes how you might prepare data to "escape" newline characters before importing the data into an Amazon Redshift table using the COPY command with the ESCAPE parameter. Without preparing the data to delimit the newline characters, Amazon Redshift returns load errors when you run the COPY command, because the newline character is normally used as a record separator.

For example, consider a file or a column in an external table that you want to copy into an Amazon Redshift table. If the file or column contains XML-formatted content or similar data, you need to make sure that all of the newline characters (\n) that are part of the content are escaped with the backslash character (\).

A good thing about a file or table containing embedded newline characters is that it provides a relatively easy pattern to match. Each embedded newline character most likely always follows a `>` character with potentially some white space characters (`' '` or tab) in between, as you can see in the following example of a text file named `nlTest1.txt`.

```
$ cat nlTest1.txt
<xml start>
<newline characters provide>
<line breaks at the end of each>
<line in content>
</xml>|1000
<xml>
</xml>|2000
```
With the following example, you can run a text-processing utility to pre-process the source file and insert escape characters where needed. (The `|` character is intended to be used as a delimiter to separate column data when copied into an Amazon Redshift table.)

```
$ sed -e ':a;N;$!ba;s/>[[:space:]]*\n/>\\\n/g' nlTest1.txt > nlTest2.txt
```

Similarly, you can use Perl to perform a similar operation:

```
cat nlTest1.txt | perl -p -e 's/>\s*\n/>\\\n/g' > nlTest2.txt
```

To accommodate loading the data from the `nlTest2.txt` file into Amazon Redshift, we created a two-column table in Amazon Redshift. The first column, c1, is a character column that holds XML-formatted content from the `nlTest2.txt` file. The second column, c2, holds integer values loaded from the same file. After running the `sed` command, you can correctly load data from the `nlTest2.txt` file into an Amazon Redshift table using the ESCAPE parameter.
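A minimal sketch of such a two-column table; the VARCHAR width is an assumption, sized generously for the sample XML content:

```
create table t2 (
    c1 varchar(200),  -- holds the XML-formatted content; width is illustrative
    c2 int            -- holds the integer values from the second field
);
```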
**Note**
When you include the ESCAPE parameter with the COPY command, it escapes a number of special characters that include the backslash character (including newline).

```
copy t2 from 's3://mybucket/data/nlTest2.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
escape
delimiter as '|';

select * from t2 order by 2;

c1          | c2
-------------+------
<xml start>
<newline characters provide>
<line breaks at the end of each>
<line in content>
</xml> | 1000
<xml>
</xml> | 2000
(2 rows)
```

You can prepare data files exported from external databases in a similar way. For example, with an Oracle database, you can use the REPLACE function on each affected column in a table that you want to copy into Amazon Redshift.

```
SELECT c1, REPLACE(c2, '\n', '\\n') as c2 from my_table_with_xml
```

In addition, many database export and extract, transform, load (ETL) tools that routinely process large amounts of data provide options to specify escape and delimiter characters.
In this tutorial, you learn how to use Amazon Redshift Spectrum to query data directly from files on Amazon S3. If you already have a cluster and a SQL client, you can complete this tutorial in ten minutes or less.

**Note**
Redshift Spectrum queries incur additional charges. The cost of running the sample queries in this tutorial is nominal. For more information about pricing, see [Redshift Spectrum Pricing](https://aws.amazon.com/redshift/pricing/#redshift-spectrum-pricing).
To use Redshift Spectrum, you need an Amazon Redshift cluster and a SQL client that's connected to your cluster so that you can execute SQL commands. The cluster and the data files in Amazon S3 must be in the same AWS Region. For this example, the sample data is in the US West (Oregon) Region (us-west-2), so you need a cluster that is also in us-west-2.
If you don't have an Amazon Redshift cluster, you can create a new cluster in us-west-2 and install a SQL client by following the steps in [Getting Started with Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html).
To get started using Amazon Redshift Spectrum, follow these steps:
+ [Step 1: Create an IAM role for Amazon Redshift](c-getting-started-using-spectrum-create-role.md)
+ [Step 2: Associate the IAM role with your cluster](c-getting-started-using-spectrum-add-role.md)
+ [Step 3: Create an external schema and an external table](c-getting-started-using-spectrum-create-external-table.md)
+ [Step 4: Query your data in Amazon S3](c-getting-started-using-spectrum-query-s3-data.md)
We recommend configuring automatic workload management (WLM) in Amazon Redshift. For more information about automatic WLM, see [Implementing workload management](cm-c-implementing-workload-management.md). However, if you need multiple WLM queues, this tutorial walks you through the process of configuring manual WLM in Amazon Redshift. By configuring manual WLM, you can improve query performance and resource allocation in your cluster.

Amazon Redshift routes user queries to queues for processing. WLM defines how those queries are routed to the queues. By default, Amazon Redshift has two queues available for queries: one for superusers, and one for users. The superuser queue cannot be configured and can only process one query at a time. You should reserve this queue for troubleshooting purposes only. The user queue can process up to five queries at a time, but you can configure this by changing the concurrency level of the queue if needed.

When you have several users running queries against the database, you might find another configuration to be more efficient. For example, if some users run resource-intensive operations, such as VACUUM, these might have a negative impact on less-intensive queries, such as reports. You might consider adding additional queues and configuring them for different workloads.

**Estimated time:** 75 minutes

**Estimated cost:** 50 cents
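Before changing anything, it can help to look at the queue configuration currently in effect. A minimal sketch using the STV_WLM_SERVICE_CLASS_CONFIG system table; the filter on service classes greater than 4 is intended to skip the system-reserved classes:

```
select service_class, num_query_tasks, query_working_mem, name
from stv_wlm_service_class_config
where service_class > 4
order by service_class;
```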
You need an Amazon Redshift cluster, the sample TICKIT database, and the psql client tool. If you do not already have these set up, go to [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html) and [Connect to Your Cluster by Using the psql Tool](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-from-psql.html).
+ [Section 1: Understanding the default queue processing behavior](tutorial-wlm-understanding-default-processing.md)
+ [Section 2: Modifying the WLM query queue configuration](tutorial-wlm-modifying-wlm-configuration.md)
+ [Section 3: Routing queries to queues based on user groups and query groups](tutorial-wlm-routing-queries-to-queues.md)
+ [Section 4: Using wlm_query_slot_count to temporarily override the concurrency level in a queue](tutorial-wlm-query-slot-count.md)
+ [Section 5: Cleaning up your resources](tutorial-wlm-cleaning-up-resources.md)
In a data warehouse environment, applications often need to perform complex queries on large tables; for example, SELECT statements that perform multiple-table joins and aggregations on tables that contain billions of rows. Processing these queries can be expensive, in terms of system resources and the time it takes to compute the results.

Materialized views in Amazon Redshift provide a way to address these issues. A *materialized view* contains a precomputed result set, based on an SQL query over one or more base tables. You can issue SELECT statements to query a materialized view, in the same way that you can query other tables or views in the database. Amazon Redshift returns the precomputed results from the materialized view, without having to access the base tables at all. From the user standpoint, the query results are returned much faster compared to when retrieving the same data from the base tables.

Materialized views are especially useful for speeding up queries that are predictable and repeated. Instead of performing resource-intensive queries against large database tables (such as aggregates or multiple-table joins), applications can query a materialized view and retrieve a precomputed result set. For example, consider the scenario where a set of queries is used to populate a collection of charts, such as in Amazon QuickSight. This use case is ideal for a materialized view, because the queries are predictable and repeated over and over again.

When you create a materialized view, Amazon Redshift runs the user-specified SQL statement to gather the data from the base table or tables and stores the result set.
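Such a view might be defined along the following lines. This is only a sketch: the aggregation and the column names (`eventname`, `eventid`, `qtysold`) are assumptions for illustration, not the exact definition behind the diagram.

```
create materialized view tickets_mv as
select e.eventname, sum(s.qtysold) as sold
from events e
join sales s on s.eventid = e.eventid
group by e.eventname;
```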
The following illustration provides an overview of the materialized view `tickets_mv` that an SQL query defines using two base tables, `events` and `sales`.

![Overview of the tickets_mv materialized view](http://docs.aws.amazon.com/redshift/latest/dg/images/materialized-view.png)

For information on how to create materialized views, see [CREATE MATERIALIZED VIEW](materialized-view-create-sql-command.md).

You can issue SELECT statements to query a materialized view. For information on how to query materialized views, see [Querying a materialized view](#materialized-view-query).

The result set eventually becomes stale when data is inserted, updated, and deleted in the base tables. You can refresh the materialized view at any time to update it with the latest changes from the base tables. For information on how to refresh materialized views, see [REFRESH MATERIALIZED VIEW](materialized-view-refresh-sql-command.md).

For details about SQL commands used to create and manage materialized views, see the following command topics:
+ [CREATE MATERIALIZED VIEW](materialized-view-create-sql-command.md)
+ [REFRESH MATERIALIZED VIEW](materialized-view-refresh-sql-command.md)
+ [DROP MATERIALIZED VIEW](materialized-view-drop-sql-command.md)

For information about system tables and views to monitor materialized views, see the following topics:
+ [STV_MV_INFO](r_STV_MV_INFO.md)
+ [STL_MV_STATE](r_STL_MV_STATE.md)
+ [SVL_MV_REFRESH_STATUS](r_SVL_MV_REFRESH_STATUS.md)
You can use a materialized view in any SQL query by referencing the materialized view name as the data source, like a table or standard view. When a query accesses a materialized view, it sees only the data that is stored in the materialized view as of its most recent refresh. Thus, the query might not see all the latest changes from corresponding base tables of the materialized view.

**Note**
If other users want to query the materialized view, the owner of the materialized view grants the SELECT privilege to those users. The other users don't need to have the SELECT privilege on the underlying base tables. The owner of the materialized view can also revoke the SELECT privilege from other users, to prevent them from querying the materialized view. If the owner of the materialized view no longer has the SELECT privilege on the underlying base tables, neither the owner nor other users who have the SELECT privilege on the materialized view can query it any longer.

The following example queries the `tickets_mv` materialized view. For more information on the SQL command used to create a materialized view, see [CREATE MATERIALIZED VIEW](materialized-view-create-sql-command.md).

```
SELECT sold
FROM tickets_mv
WHERE catgroup = 'Concerts';
```

Because the query results are precomputed, there's no need to access the underlying tables (`category`, `event`, and `sales`). Amazon Redshift can return the results directly from `tickets_mv`.
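As the note above describes, access is managed with the SELECT privilege on the view itself; for example, granting and then revoking access for another user (the user name `report_user` is hypothetical):

```
grant select on tickets_mv to report_user;
revoke select on tickets_mv from report_user;
```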
CRC32 is an error-detecting function that uses a CRC32 algorithm to detect changes between source and target data. The CRC32 function converts a variable-length string into an 8-character string that is a text representation of the hexadecimal value of a 32-bit binary sequence.
```
CRC32(string)
```
*string*
A variable-length string.
The CRC32 function returns an 8-character string that is a text representation of the hexadecimal value of a 32-bit binary sequence. The Amazon Redshift CRC32 function is based on the CRC-32C polynomial.
The following example shows the 32-bit value for the string 'Amazon Redshift':

```
select crc32('Amazon Redshift');

crc32
----------------------------------
f2726906
(1 row)
```
If a vacuum operation needs to merge new rows into a table's sorted region, the time required for a vacuum will increase as the table grows larger. You can improve vacuum performance by reducing the number of rows that must be merged.

Prior to a vacuum, a table consists of a sorted region at the head of the table, followed by an unsorted region, which grows whenever rows are added or updated. When a set of rows is added by a COPY operation, the new set of rows is sorted on the sort key as it is added to the unsorted region at the end of the table. The new rows are ordered within their own set, but not within the unsorted region.

The following diagram illustrates the unsorted region after two successive COPY operations, where the sort key is CUSTID. For simplicity, this example shows a compound sort key, but the same principles apply to interleaved sort keys, except that the impact of the unsorted region is greater for interleaved tables.

![Unsorted region after two successive COPY operations](http://docs.aws.amazon.com/redshift/latest/dg/images/vacuum-unsorted-region.png)
A vacuum restores the table's sort order in two stages:

1. Sort the unsorted region into a newly sorted region.

   The first stage is relatively cheap, because only the unsorted region is rewritten. If the range of sort key values of the newly sorted region is higher than the existing range, only the new rows need to be rewritten, and the vacuum is complete. For example, if the sorted region contains ID values 1 to 500 and subsequent copy operations add key values greater than 500, then only the unsorted region needs to be rewritten.

1. Merge the newly sorted region with the previously sorted region.

   If the keys in the newly sorted region overlap the keys in the sorted region, then VACUUM needs to merge the rows. Starting at the beginning of the newly sorted region (at the lowest sort key), the vacuum writes the merged rows from the previously sorted region and the newly sorted region into a new set of blocks.

The extent to which the new sort key range overlaps the existing sort keys determines the extent to which the previously sorted region will need to be rewritten. If the unsorted keys are scattered throughout the existing sort range, a vacuum might need to rewrite existing portions of the table.

The following diagram shows how a vacuum would sort and merge rows that are added to a table where CUSTID is the sort key. Because each copy operation adds a new set of rows with key values that overlap the existing keys, almost the entire table needs to be rewritten. The diagram shows a single sort and merge, but in practice, a large vacuum consists of a series of incremental sort and merge steps.
![Vacuum sort and merge with overlapping key ranges](http://docs.aws.amazon.com/redshift/latest/dg/images/vacuum-unsorted-region-sort-merge.png)

If the range of sort keys in a set of new rows overlaps the range of existing keys, the cost of the merge stage continues to grow in proportion to the table size as the table grows, while the cost of the sort stage remains proportional to the size of the unsorted region. In such a case, the cost of the merge stage overshadows the cost of the sort stage, as the following diagram shows.

![Merge stage cost grows with table size](http://docs.aws.amazon.com/redshift/latest/dg/images/vacuum-example-merge-region-grows.png)

To determine what proportion of a table was remerged, query SVV_VACUUM_SUMMARY after the vacuum operation completes. The following query shows the effect of six successive vacuums as CUSTSALES grew larger over time.

```
select * from svv_vacuum_summary
where table_name = 'custsales';

table_name | xid  | sort_      | merge_     | elapsed_  | row_  | sortedrow_ | block_ | max_merge_
           |      | partitions | increments | time      | delta | delta      | delta  | partitions
-----------+------+------------+------------+-----------+-------+------------+--------+------------
custsales  | 7072 |          3 |          2 | 143918314 |     0 |   88297472 |   1524 |         47
custsales  | 7122 |          3 |          3 | 164157882 |     0 |   88297472 |    772 |         47
custsales  | 7212 |          3 |          4 | 187433171 |     0 |   88297472 |    767 |         47
custsales  | 7289 |          3 |          4 | 255482945 |     0 |   88297472 |    770 |         47
custsales  | 7420 |          3 |          5 | 316583833 |     0 |   88297472 |    769 |         47
custsales  | 9007 |          3 |          6 | 306685472 |     0 |   88297472 |    772 |         47
(6 rows)
```
The merge_increments column gives an indication of the amount of data that was merged for each vacuum operation. If the number of merge increments over consecutive vacuums increases in proportion to the growth in table size, that is an indication that each vacuum operation is remerging an increasing number of rows in the table because the existing and newly sorted regions overlap.
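To gauge how large the unsorted region has grown before you vacuum, you can also check SVV_TABLE_INFO; in this sketch, `unsorted` reports the percentage of unsorted rows, and the table name is carried over from the example above:

```
select "table", unsorted, vacuum_sort_benefit
from svv_table_info
where "table" = 'custsales';
```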
**on (true)**, off (false)
Specifies whether to use query results caching. If `enable_result_cache_for_session` is `on`, Amazon Redshift checks for a valid, cached copy of the query results when a query is submitted. If a match is found in the result cache, Amazon Redshift uses the cached results and doesn't execute the query. If `enable_result_cache_for_session` is `off`, Amazon Redshift ignores the results cache and executes all queries when they are submitted.
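For example, to bypass the result cache for the current session (useful when benchmarking query runtimes), you can turn the parameter off with standard SET syntax:

```
set enable_result_cache_for_session to off;
```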
Relationships between geometry objects are based on the Dimensionally Extended nine-Intersection Model (DE-9IM). This model defines predicates such as equals, contains, and covers (a brief example follows the function list). For more information about the definition of spatial relationships, see [DE-9IM](https://en.wikipedia.org/wiki/DE-9IM) in Wikipedia.

Amazon Redshift supports the following spatial functions.

**Topics**
+ [GeometryType](GeometryType-function.md)
+ [ST_AddPoint](ST_AddPoint-function.md)
+ [ST_Area](ST_Area-function.md)
+ [ST_AsBinary](ST_AsBinary-function.md)
+ [ST_AsEWKB](ST_AsEWKB-function.md)
+ [ST_AsEWKT](ST_AsEWKT-function.md)
+ [ST_AsGeoJSON](ST_AsGeoJSON-function.md)
+ [ST_AsText](ST_AsText-function.md)
+ [ST_Azimuth](ST_Azimuth-function.md)
+ [ST_Contains](ST_Contains-function.md)
+ [ST_CoveredBy](ST_CoveredBy-function.md)
+ [ST_Covers](ST_Covers-function.md)
+ [ST_Dimension](ST_Dimension-function.md)
+ [ST_Disjoint](ST_Disjoint-function.md)
+ [ST_Distance](ST_Distance-function.md)
+ [ST_DistanceSphere](ST_DistanceSphere-function.md)
+ [ST_DWithin](ST_DWithin-function.md)
+ [ST_EndPoint](ST_EndPoint-function.md)
+ [ST_Envelope](ST_Envelope-function.md)
+ [ST_Equals](ST_Equals-function.md)
+ [ST_GeometryN](ST_GeometryN-function.md)
+ [ST_GeometryType](ST_GeometryType-function.md)
+ [ST_GeomFromEWKB](ST_GeomFromEWKB-function.md)
+ [ST_GeomFromText](ST_GeomFromText-function.md)
+ [ST_GeomFromWKB](ST_GeomFromWKB-function.md)
+ [ST_Intersects](ST_Intersects-function.md)
+ [ST_IsClosed](ST_IsClosed-function.md)
+ [ST_IsCollection](ST_IsCollection-function.md)
+ [ST_IsEmpty](ST_IsEmpty-function.md)
+ [ST_Length](ST_Length-function.md)
+ [ST_Length2D](ST_Length2D-function.md)
+ [ST_LineFromMultiPoint](ST_LineFromMultiPoint-function.md)
+ [ST_MakeLine](ST_MakeLine-function.md)
+ [ST_MakePoint](ST_MakePoint-function.md)
+ [ST_MakePolygon](ST_MakePolygon-function.md)
+ [ST_MemSize](ST_MemSize-function.md)
+ [ST_NPoints](ST_NPoints-function.md)
+ [ST_NRings](ST_NRings-function.md)
+ [ST_NumGeometries](ST_NumGeometries-function.md)
+ [ST_NumInteriorRings](ST_NumInteriorRings-function.md)
+ [ST_NumPoints](ST_NumPoints-function.md)
+ [ST_Perimeter](ST_Perimeter-function.md)
+ [ST_Perimeter2D](ST_Perimeter2D-function.md)
+ [ST_Point](ST_Point-function.md)
+ [ST_PointN](ST_PointN-function.md)
+ [ST_Polygon](ST_Polygon-function.md)
+ [ST_RemovePoint](ST_RemovePoint-function.md)
+ [ST_SetSRID](ST_SetSRID-function.md)
+ [ST_SRID](ST_SRID-function.md)
+ [ST_StartPoint](ST_StartPoint-function.md)
+ [ST_Touches](ST_Touches-function.md)
+ [ST_Within](ST_Within-function.md)
+ [ST_X](ST_X-function.md)
+ [ST_XMax](ST_XMax-function.md)
+ [ST_XMin](ST_XMin-function.md)
+ [ST_Y](ST_Y-function.md)
+ [ST_YMax](ST_YMax-function.md)
+ [ST_YMin](ST_YMin-function.md)
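As a quick illustration of one of these predicates, the following sketch tests whether a square polygon contains a point; the geometries are assumptions built inline from well-known text, and the query returns true:

```
select st_contains(
    st_geomfromtext('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))'),
    st_geomfromtext('POINT(5 5)'));
```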
Records details for [ANALYZE](r_ANALYZE.md) operations. This view is visible only to superusers. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_ANALYZE.html)
The following example joins STV_TBL_PERM to show the table name and execution details.

```
select distinct a.xid, trim(t.name) as name, a.status,
a.rows, a.modified_rows, a.starttime, a.endtime
from stl_analyze a
join stv_tbl_perm t on t.id=a.table_id
where name = 'users'
order by starttime;

xid    | name  | status  | rows  | modified_rows | starttime           | endtime
-------+-------+---------+-------+---------------+---------------------+--------------------
  1582 | users | Full    | 49990 |         49990 | 2016-09-22 22:02:23 | 2016-09-22 22:02:28
244287 | users | Full    | 24992 |         74988 | 2016-10-04 22:50:58 | 2016-10-04 22:51:01
244712 | users | Full    | 49984 |         24992 | 2016-10-04 22:56:07 | 2016-10-04 22:56:07
245071 | users | Skipped | 49984 |             0 | 2016-10-04 22:58:17 | 2016-10-04 22:58:17
245439 | users | Skipped | 49984 |          1982 | 2016-10-04 23:00:13 | 2016-10-04 23:00:13
(5 rows)
```
If your data includes non-ASCII multibyte characters (such as Chinese or Cyrillic characters), you must load the data to VARCHAR columns. The VARCHAR data type supports four-byte UTF-8 characters, but the CHAR data type only accepts single-byte ASCII characters. You cannot load five-byte or longer characters into Amazon Redshift tables. For more information, see [Multibyte characters](c_Supported_data_types.md#c_Supported_data_types-multi-byte-characters).
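Because a VARCHAR length in Amazon Redshift is specified in bytes, not characters, a column holding multibyte data needs up to four bytes per character. A minimal sketch with illustrative names and sizes:

```
create table city_names (
    city_id int,
    city_name varchar(80)  -- room for 20 characters of 4-byte UTF-8 data
);
```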
The following table lists the supported mathematical operators.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_OPERATOR_SYMBOLS.html)
Calculate the commission paid plus a $2.00 handling for a given transaction:

```
select commission, (commission + 2.00) as comm
from sales where salesid=10000;

commission | comm
-----------+-------
     28.05 | 30.05
(1 row)
```

Calculate 20 percent of the sales price for a given transaction:

```
select pricepaid, (pricepaid * .20) as twentypct
from sales where salesid=10000;

pricepaid | twentypct
----------+-----------
   187.00 |    37.400
(1 row)
```

Forecast ticket sales based on a continuous growth pattern. In this example, the subquery returns the number of tickets sold in 2008. That result is multiplied exponentially by a continuous growth rate of 5% over 10 years.

```
select (select sum(qtysold) from sales, date
where sales.dateid=date.dateid and year=2008)
^ ((5::float/100)*10) as qty10years;

qty10years
------------------
587.664019657491
(1 row)
```
Find the total price paid and commission for sales with a date ID that is greater than or equal to 2000. Then subtract the total commission from the total price paid.

```
select sum(pricepaid) as sum_price, dateid,
sum(commission) as sum_comm,
(sum(pricepaid) - sum(commission)) as value
from sales
where dateid >= 2000
group by dateid
order by dateid
limit 10;

sum_price | dateid | sum_comm | value
----------+--------+----------+-----------
364445.00 |   2044 | 54666.75 | 309778.25
349344.00 |   2112 | 52401.60 | 296942.40
343756.00 |   2124 | 51563.40 | 292192.60
378595.00 |   2116 | 56789.25 | 321805.75
328725.00 |   2080 | 49308.75 | 279416.25
349554.00 |   2028 | 52433.10 | 297120.90
249207.00 |   2164 | 37381.05 | 211825.95
285202.00 |   2064 | 42780.30 | 242421.70
320945.00 |   2012 | 48141.75 | 272803.25
321096.00 |   2016 | 48164.40 | 272931.60
(10 rows)
```
Starts a transaction. Synonymous with START TRANSACTION.

A transaction is a single, logical unit of work, whether it consists of one command or multiple commands. In general, all commands in a transaction execute on a snapshot of the database whose starting time is determined by the value set for the `transaction_snapshot_begin` system configuration parameter.

By default, individual Amazon Redshift operations (queries, DDL statements, loads) are automatically committed to the database. If you want to suspend the commit for an operation until subsequent work is completed, you need to open a transaction with the BEGIN statement, then run the required commands, then close the transaction with a [COMMIT](r_COMMIT.md) or [END](r_END.md) statement. If necessary, you can use a [ROLLBACK](r_ROLLBACK.md) statement to abort a transaction that is in progress. An exception to this behavior is the [TRUNCATE](r_TRUNCATE.md) command, which commits the transaction in which it is run and can't be rolled back.
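A minimal sketch of that open-run-close pattern; the table names and predicates are hypothetical:

```
begin;

update category set catdesc = 'Stage musicals' where catid = 6;
delete from event where catid = 6;

commit;  -- or: rollback;
```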
```
BEGIN [ WORK | TRANSACTION ] [ ISOLATION LEVEL option ] [ READ WRITE | READ ONLY ]

START TRANSACTION [ ISOLATION LEVEL option ] [ READ WRITE | READ ONLY ]

Where option is

SERIALIZABLE
| READ UNCOMMITTED
| READ COMMITTED
| REPEATABLE READ

Note: READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ have no
operational impact and map to SERIALIZABLE in Amazon Redshift.
```
WORK
Optional keyword.

TRANSACTION
Optional keyword; WORK and TRANSACTION are synonyms.

ISOLATION LEVEL SERIALIZABLE
Serializable isolation is supported by default, so the behavior of the transaction is the same whether or not this syntax is included in the statement. See [Managing concurrent write operations](c_Concurrent_writes.md). No other isolation levels are supported.

The SQL standard defines four levels of transaction isolation to prevent *dirty reads* (where a transaction reads data written by a concurrent uncommitted transaction), *nonrepeatable reads* (where a transaction re-reads data it read previously and finds that data was changed by another transaction that committed since the initial read), and *phantom reads* (where a transaction re-executes a query, returns a set of rows that satisfy a search condition, and then finds that the set of rows has changed because of another recently committed transaction):
+ Read uncommitted: Dirty reads, nonrepeatable reads, and phantom reads are possible.
+ Read committed: Nonrepeatable reads and phantom reads are possible.
+ Repeatable read: Phantom reads are possible.
+ Serializable: Prevents dirty reads, nonrepeatable reads, and phantom reads.

Though you can use any of the four transaction isolation levels, Amazon Redshift processes all isolation levels as serializable.

READ WRITE
Gives the transaction read and write permissions.

READ ONLY
Gives the transaction read-only permissions.
The following example starts a serializable transaction block:

```
begin;
```

The following example starts the transaction block with a serializable isolation level and read and write permissions:

```
begin read write;
```
A query can be hopped due to a [WLM timeout](cm-c-defining-query-queues.md#wlm-timeout) or a [query monitoring rule (QMR) hop action](cm-c-wlm-query-monitoring-rules.md#cm-c-wlm-defining-query-monitoring-rules). You can only hop queries in a manual WLM configuration.

When a query is hopped, WLM attempts to route the query to the next matching queue based on the [WLM queue assignment rules](cm-c-wlm-queue-assignment-rules.md). If the query doesn't match any other queue definition, the query is canceled. It's not assigned to the default queue.
The following table summarizes the behavior of different types of queries with a WLM timeout. [See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/wlm-queue-hopping.html)
WLM hops the following types of queries when they time out:
+ Read-only queries, such as SELECT statements, that are in a WLM state of `running`. To find the WLM state of a query, view the STATE column on the [STV_WLM_QUERY_STATE](r_STV_WLM_QUERY_STATE.md) system table.
+ CREATE TABLE AS (CTAS) statements. WLM queue hopping supports both user-defined and system-generated CTAS statements.
+ SELECT INTO statements.

Queries that aren't subject to WLM timeout continue running in the original queue until completion. The following types of queries aren't subject to WLM timeout:
+ COPY statements
+ Maintenance operations, such as ANALYZE and VACUUM
+ Read-only queries, such as SELECT statements, that have reached a WLM state of `returning`. To find the WLM state of a query, view the STATE column on the [STV_WLM_QUERY_STATE](r_STV_WLM_QUERY_STATE.md) system table.

Queries that aren't eligible for hopping by WLM timeout are canceled when they time out. The following types of queries are not eligible for hopping by a WLM timeout:
+ INSERT, UPDATE, and DELETE statements
+ UNLOAD statements
+ User-defined functions (UDFs)
When a query is hopped and no matching queue is found, the query is canceled. When a query is hopped and a matching queue is found, WLM attempts to reassign the query to the new queue. If a query can't be reassigned, it's restarted in the new queue, as described following.

A query is reassigned only if all of the following are true:
+ A matching queue is found.
+ The new queue has enough free slots to run the query. A query might require multiple slots if the [wlm_query_slot_count](r_wlm_query_slot_count.md) parameter was set to a value greater than 1.
+ The new queue has at least as much memory available as the query currently uses.

If the query is reassigned, the query continues executing in the new queue. Intermediate results are preserved, so there is minimal effect on total execution time.

If the query can't be reassigned, the query is canceled and restarted in the new queue. Intermediate results are deleted. The query waits in the queue, then begins running when enough slots are available.
The following table summarizes the behavior of different types of queries with a QMR hop action. [See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/wlm-queue-hopping.html)

To find whether a query that was hopped by QMR was reassigned, restarted, or canceled, query the [STL_WLM_RULE_ACTION](r_STL_WLM_RULE_ACTION.md) system log table.
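A sketch of such a check; the column list matches STL_WLM_RULE_ACTION, and the LIMIT is only there to keep the output short:

```
select query, service_class, rule, action, recordtime
from stl_wlm_rule_action
where action = 'hop'
order by recordtime desc
limit 10;
```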
When a query is hopped and no matching queue is found, the query is canceled. When a query is hopped and a matching queue is found, WLM attempts to reassign the query to the new queue. If a query can't be reassigned, it's restarted in the new queue or continues execution in the original queue, as described following.

A query is reassigned only if all of the following are true:
+ A matching queue is found.
+ The new queue has enough free slots to run the query. A query might require multiple slots if the [wlm_query_slot_count](r_wlm_query_slot_count.md) parameter was set to a value greater than 1.
+ The new queue has at least as much memory available as the query currently uses.

If the query is reassigned, the query continues executing in the new queue. Intermediate results are preserved, so there is minimal effect on total execution time.

If a query can't be reassigned, the query is either restarted or continues execution in the original queue. If the query is restarted, the query is canceled and restarted in the new queue. Intermediate results are deleted. The query waits in the queue, then begins execution when enough slots are available.
SYSDATE returns the current date and time in the current session time zone (UTC by default).

**Note**
SYSDATE returns the start date and time for the current transaction, not for the start of the current statement.
```
SYSDATE
```

This function requires no arguments.
TIMESTAMP
The following example uses the SYSDATE function to return the full time stamp for the current date:

```
select sysdate;

timestamp
----------------------------
2008-12-04 16:10:43.976353
(1 row)
```

The following example uses the SYSDATE function inside the TRUNC function to return the current date without the time:

```
select trunc(sysdate);

trunc
------------
2008-12-04
(1 row)
```

The following query returns sales information for dates that fall between the date when the query is issued and whatever date is 120 days earlier:

```
select salesid, pricepaid, trunc(saletime) as saletime, trunc(sysdate) as now
from sales
where saletime between trunc(sysdate)-120 and trunc(sysdate)
order by saletime asc;

salesid | pricepaid |  saletime  |    now
--------+-----------+------------+------------
  91535 |    670.00 | 2008-08-07 | 2008-12-05
  91635 |    365.00 | 2008-08-07 | 2008-12-05
  91901 |   1002.00 | 2008-08-07 | 2008-12-05
...
```
ATAN is a trigonometric function that returns the arc tangent of a number. The return value is in radians and is between -PI/2 and PI/2.
```
ATAN(number)
```
*number*
The input parameter is a double precision number.
The ATAN function returns a double precision number.
The following example returns the arc tangent of 1 and multiplies it by 4:

```
select atan(1) * 4 as pi;

pi
------------------
3.14159265358979
(1 row)
```

The following example converts the arc tangent of 1 to the equivalent number of degrees:

```
select (atan(1) * 180/(select pi())) as degrees;

degrees
---------
45
(1 row)
```
**Topics**
+ [CHECKSUM function](r_CHECKSUM.md)
+ [FUNC_SHA1 function](FUNC_SHA1.md)
+ [FNV_HASH function](r_FNV_HASH.md)
+ [MD5 function](r_MD5.md)
+ [SHA function](SHA.md)
+ [SHA1 function](SHA1.md)
+ [SHA2 function](SHA2.md)

A hash function is a mathematical function that converts a numerical input value into another value.
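For example, the MD5 function listed above hashes a string into a 32-character hexadecimal digest:

```
select md5('Amazon Redshift');
```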
Removes a custom Python library from the database. Only the library owner or a superuser can drop a library.

DROP LIBRARY can't be run inside a transaction block (BEGIN … END). For more information about transactions, see [Serializable isolation](c_serial_isolation.md).

This command isn't reversible. The DROP LIBRARY command commits immediately. If a UDF that depends on the library is running concurrently, the UDF might fail, even if the UDF is running within a transaction.

For more information, see [CREATE LIBRARY](r_CREATE_LIBRARY.md).
```
DROP LIBRARY library_name
```
*library_name*
The name of the library.
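For example, dropping a library whose name (here `f_urlparse`) is hypothetical:

```
drop library f_urlparse;
```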
If an ORDER BY clause for a window function doesn't produce a unique and total ordering of the data, the order of the rows is nondeterministic. If the ORDER BY expression produces duplicate values (a partial ordering), the return order of those rows can vary in multiple runs. In this case, window functions can also return unexpected or inconsistent results.

For example, the following query returns different results over multiple runs. These different results occur because `order by dateid` doesn't produce a unique ordering of the data for the SUM window function.

```
select dateid, pricepaid,
sum(pricepaid) over(order by dateid rows unbounded preceding) as sumpaid
from sales
group by dateid, pricepaid;

dateid | pricepaid |   sumpaid
-------+-----------+-------------
  1827 |   1730.00 |     1730.00
  1827 |    708.00 |     2438.00
  1827 |    234.00 |     2672.00
...

select dateid, pricepaid,
sum(pricepaid) over(order by dateid rows unbounded preceding) as sumpaid
from sales
group by dateid, pricepaid;

dateid | pricepaid |   sumpaid
-------+-----------+-------------
  1827 |    234.00 |      234.00
  1827 |    472.00 |      706.00
  1827 |    347.00 |     1053.00
...
```

In this case, adding a second ORDER BY column to the window function can solve the problem.

```
select dateid, pricepaid,
sum(pricepaid) over(order by dateid, pricepaid rows unbounded preceding) as sumpaid
from sales
group by dateid, pricepaid;

dateid | pricepaid | sumpaid
-------+-----------+---------
  1827 |    234.00 |  234.00
  1827 |    337.00 |  571.00
  1827 |    347.00 |  918.00
...
```
Amazon S3 provides eventual consistency for some operations, so it is possible that new data will not be available immediately after the upload, which could result in an incomplete data load or loading stale data. COPY operations where the cluster and the bucket are in different regions are eventually consistent. All regions provide read-after-write consistency for uploads of new objects with unique object keys. For more information about data consistency, see [Amazon S3 Data Consistency Model](https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel) in the *Amazon Simple Storage Service Developer Guide*.

To ensure that your application loads the correct data, we recommend the following practices:
+ Create new object keys. Amazon S3 provides eventual consistency in all regions for overwrite operations. Creating new file names, or object keys, in Amazon S3 for each data load operation provides strong consistency in all regions.
+ Use a manifest file with your COPY operation. The manifest explicitly names the files to be loaded. Using a manifest file enforces strong consistency.

The rest of this section explains these steps in detail.
Because of potential data consistency issues, we strongly recommend creating new files with unique Amazon S3 object keys for each data load operation. If you overwrite existing files with new data, and then issue a COPY command immediately following the upload, it is possible for the COPY operation to begin loading from the old files before all of the new data is available.
For more information about eventual consistency, see [Amazon S3 Data Consistency Model](https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel) in the *Amazon S3 Developer Guide*.
You can explicitly specify which files to load by using a manifest file. When you use a manifest file, COPY enforces strong consistency by searching secondary servers if it does not find a listed file on the primary server. The manifest file can be configured with an optional `mandatory` flag. If `mandatory` is `true` and the file is not found, COPY returns an error.

For more information about using a manifest file, see the [copy_from_s3_manifest_file](copy-parameters-data-source-s3.md#copy-manifest-file) option for the COPY command and [Using a manifest to specify data files](r_COPY_command_examples.md#copy-command-examples-manifest) in the COPY examples.

Because Amazon S3 provides eventual consistency for overwrites in all regions, it is possible to load stale data if you overwrite existing objects with new data. As a best practice, never overwrite existing files with new data.
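A sketch of a COPY that loads through a manifest; the bucket and key names are placeholders, and the MANIFEST keyword tells COPY to treat the FROM object as a manifest rather than as data:

```
copy category
from 's3://mybucket/category.manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest;
```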
The STV_TBL_PERM table contains information about the permanent tables in Amazon Redshift, including temporary tables created by a user for the current session. STV_TBL_PERM contains information for all tables in all databases.

This table differs from [STV_TBL_TRANS](r_STV_TBL_TRANS.md), which contains information about transient database tables that the system creates during query processing.

STV_TBL_PERM is visible only to superusers. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md).
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_TBL_PERM.html)
The following query returns a list of distinct table IDs and names:

```
select distinct id, name from stv_tbl_perm order by name;

id     | name
-------+-------------------------
100571 | category
100575 | date
100580 | event
100596 | listing
100003 | padb_config_harvest
100612 | sales
...
```

Other system tables use table IDs, so knowing which table ID corresponds to a certain table can be very useful. In this example, SELECT DISTINCT is used to remove the duplicates (tables are distributed across multiple slices).

To determine the number of blocks used by each column in the VENUE table, type the following query:

```
select col, count(*)
from stv_blocklist, stv_tbl_perm
where stv_blocklist.tbl = stv_tbl_perm.id
and stv_blocklist.slice = stv_tbl_perm.slice
and stv_tbl_perm.name = 'venue'
group by col
order by col;

col | count
----+-------
  0 |     8
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_TBL_PERM.md
f6741bb72e9b-1
order by col; col | count -----+------- 0 | 8 1 | 8 2 | 8 3 | 8 4 | 8 5 | 8 6 | 8 7 | 8 (8 rows) ```
The ROWS column includes counts of deleted rows that have not been vacuumed \(or have been vacuumed but with the SORT ONLY option\)\. Therefore, the SUM of the ROWS column in the STV\_TBL\_PERM table might not match the COUNT\(\*\) result when you query a given table directly\. For example, if 2 rows are deleted from VENUE, the COUNT\(\*\) result is 200 but the SUM\(ROWS\) result is still 202:

```
delete from venue
where venueid in (1,2);

select count(*) from venue;
 count
-------
   200
(1 row)

select trim(name) tablename, sum(rows)
from stv_tbl_perm
where name='venue'
group by name;

 tablename | sum
-----------+-----
 venue     | 202
(1 row)
```

To synchronize the data in STV\_TBL\_PERM, run a full vacuum on the VENUE table\.

```
vacuum venue;

select trim(name) tablename, sum(rows)
from stv_tbl_perm
where name='venue'
group by name;

 tablename | sum
-----------+-----
 venue     | 200
(1 row)
```
Short query acceleration \(SQA\) prioritizes selected short\-running queries ahead of longer\-running queries\. SQA runs short\-running queries in a dedicated space, so that SQA queries aren't forced to wait in queues behind longer queries\. SQA only prioritizes queries that are short\-running and are in a user\-defined queue\. With SQA, short\-running queries begin running more quickly and users see results sooner\. If you enable SQA, you can reduce or eliminate workload management \(WLM\) queues that are dedicated to running short queries\. In addition, long\-running queries don't need to contend with short queries for slots in a queue, so you can configure your WLM queues to use fewer query slots\. When you use lower concurrency, query throughput is increased and overall system performance is improved for most workloads\. [CREATE TABLE AS](r_CREATE_TABLE_AS.md) \(CTAS\) statements and read\-only queries, such as [SELECT](r_SELECT_synopsis.md) statements, are eligible for SQA\.
Amazon Redshift uses a machine learning algorithm to analyze each eligible query and predict the query's execution time\. By default, WLM dynamically assigns a value for the SQA maximum runtime based on analysis of your cluster's workload\. Alternatively, you can specify a fixed value of 1–20 seconds\. In some cases, the query's predicted runtime might be less than the defined SQA maximum runtime\. In such cases, the query doesn't need to wait in a queue behind longer queries\. Instead, SQA separates the query from the WLM queues and schedules it for priority execution\. If a query runs longer than the SQA maximum runtime, WLM moves the query to the first matching WLM queue based on the [WLM queue assignment rules](cm-c-wlm-queue-assignment-rules.md)\. Over time, predictions improve as SQA learns from your query patterns\.

SQA is enabled by default in the default parameter group and for all new parameter groups\. To disable SQA in the Amazon Redshift console, edit the WLM configuration for a parameter group and deselect **Enable short query acceleration**\. As a best practice, we recommend using a WLM query slot count of 15 or fewer to maintain optimum overall system performance\. For information about modifying WLM configurations, see [Configuring Workload Management](https://docs.aws.amazon.com/redshift/latest/mgmt/workload-mgmt-config.html) in the *Amazon Redshift Cluster Management Guide*\.
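If you manage WLM through the `wlm_json_configuration` cluster parameter rather than the console, SQA appears as a `short_query_queue` object in the queue array\. The following is a minimal sketch, assuming a single user\-defined queue with a concurrency of five; the queue definition shown is illustrative, so adapt it to your own configuration\.

```
[
  {
    "query_concurrency": 5
  },
  {
    "short_query_queue": true
  }
]
```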
When you enable SQA, WLM sets the maximum runtime for short queries to dynamic by default\. We recommend keeping the dynamic setting for SQA maximum runtime\. You can override the default setting by specifying a fixed value of 1–20 seconds\. In some cases, a different value for the SQA maximum runtime might improve your system performance\. In such cases, analyze your workload to find the maximum execution time for most of your short\-running queries\. The following query returns the maximum runtime for queries at about the 70th percentile\.

```
select least(greatest(percentile_cont(0.7) 
within group (order by total_exec_time / 1000000) + 2, 2), 20) 
from stl_wlm_query 
where userid >= 100
and final_state = 'Completed';
```

The expression adds a 2\-second buffer to the 70th percentile execution time \(converted from microseconds to seconds\) and uses LEAST and GREATEST to constrain the result to a value between 2 and 20 seconds\. After you identify a maximum runtime value that works well for your workload, you don't need to change it unless your workload changes significantly\.
To check whether SQA is enabled, run the following query\. If the query returns a row, then SQA is enabled\.

```
select * from stv_wlm_service_class_config 
where service_class = 14;
```

The following query shows the number of queries that went through each query queue \(service class\)\. It also shows the average execution time, the wait time at the 90th percentile, and the average wait time\. SQA queries use service class 14\.

```
select final_state, service_class, count(*), avg(total_exec_time), 
percentile_cont(0.9) within group (order by total_queue_time), 
avg(total_queue_time) 
from stl_wlm_query 
where userid >= 100 
group by 1,2 
order by 2,1;
```

To find which queries were picked up by SQA and completed successfully, run the following query\.

```
select a.queue_start_time, a.total_exec_time, label, trim(querytxt) 
from stl_wlm_query a, stl_query b 
where a.query = b.query 
and a.service_class = 14 
and a.final_state = 'Completed' 
order by b.query desc 
limit 5;
```

To find queries that SQA picked up but that timed out, run the following query\.

```
select a.queue_start_time, a.total_exec_time, label, trim(querytxt) 
from stl_wlm_query a, stl_query b 
where a.query = b.query 
and a.service_class = 14 
and a.final_state = 'Evicted' 
order by b.query desc 
limit 5;
```
Creates a new database user account\. You must be a database superuser to execute this command\.
``` CREATE USER name [ WITH ] PASSWORD { 'password' | 'md5hash' | DISABLE } [ option [ ... ] ] where option can be: CREATEDB | NOCREATEDB | CREATEUSER | NOCREATEUSER | SYSLOG ACCESS { RESTRICTED | UNRESTRICTED } | IN GROUP groupname [, ... ] | VALID UNTIL 'abstime' | CONNECTION LIMIT { limit | UNLIMITED } ```
*name* 
The name of the user account to create\. The user name can't be `PUBLIC`\. For more information about valid names, see [Names and identifiers](r_names.md)\. 

WITH 
Optional keyword\. WITH is ignored by Amazon Redshift\. 

PASSWORD \{ '*password*' \| '*md5hash*' \| DISABLE \} 
Sets the user's password\. 
By default, users can change their own passwords, unless the password is disabled\. To disable a user's password, specify DISABLE\. When a user's password is disabled, the password is deleted from the system and the user can log on only using temporary AWS Identity and Access Management \(IAM\) user credentials\. For more information, see [Using IAM Authentication to Generate Database User Credentials](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html)\. Only a superuser can enable or disable passwords\. You can't disable a superuser's password\. To enable a password, run [ALTER USER](r_ALTER_USER.md) and specify a password\. 
You can specify the password in clear text or as an MD5 hash string\. 
When you launch a new cluster using the AWS Management Console, AWS CLI, or Amazon Redshift API, you must supply a clear text password for the master database user\. You can change the password later by using [ALTER USER](r_ALTER_USER.md)\. 
For clear text, the password must meet the following constraints:
+ It must be 8 to 64 characters in length\.
+ It must contain at least one uppercase letter, one lowercase letter, and one number\.
+ It can use any ASCII characters with ASCII codes 33–126, except ' \(single quote\), " \(double quote\), \\, /, or @\.

As a more secure alternative to passing the CREATE USER password parameter as clear text, you can specify an MD5 hash of a string that includes the password and user name\. When you specify an MD5 hash string, the CREATE USER command checks for a valid MD5 hash string, but it doesn't validate the password portion of the string\. It is possible in this case to create a password, such as an empty string, that you can't use to log on to the database\.

To specify an MD5 password, follow these steps:

1. Concatenate the password and user name\. For example, for password `ez` and user `user1`, the concatenated string is `ezuser1`\.

1. Convert the concatenated string into a 32\-character MD5 hash string\. You can use any MD5 utility to create the hash string\. The following example uses the Amazon Redshift [MD5 function](r_MD5.md) and the concatenation operator \( \|\| \) to return a 32\-character MD5\-hash string\.

   ```
   select md5('ez' || 'user1');
   md5
   --------------------------------
   153c434b4b77c89e6b94f12c5393af5b
   ```

1. Concatenate '`md5`' in front of the MD5 hash string and provide the concatenated string as the *md5hash* argument\.

   ```
   create user user1 password 'md5153c434b4b77c89e6b94f12c5393af5b';
   ```

1. Log on to the database using the user name and password\. For this example, log on as `user1` with password `ez`\.

CREATEDB \| NOCREATEDB 
The CREATEDB option allows the new user account to create databases\. The default is NOCREATEDB\. 

CREATEUSER \| NOCREATEUSER 
The CREATEUSER option creates a superuser with all database privileges, including CREATE USER\. The default is NOCREATEUSER\. For more information, see [Superusers](r_superusers.md)\.
SYSLOG ACCESS \{ RESTRICTED \| UNRESTRICTED \} <a name="create-user-syslog-access"></a> A clause that specifies the level of access the user has to the Amazon Redshift system tables and views\. If RESTRICTED is specified, the user can see only the rows generated by that user in user\-visible system tables and views\. The default is RESTRICTED\. If UNRESTRICTED is specified, the user can see all rows in user\-visible system tables and views, including rows generated by another user\. UNRESTRICTED doesn't give a regular user access to superuser\-visible tables\. Only superusers can see superuser\-visible tables\. Giving a user unrestricted access to system tables gives the user visibility to data generated by other users\. For example, STL\_QUERY and STL\_QUERYTEXT contain the full text of INSERT, UPDATE, and DELETE statements, which might contain sensitive user\-generated data\. All rows in STV\_RECENTS and SVV\_TRANSACTIONS are visible to all users\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\. IN GROUP *groupname* Specifies the name of an existing group that the user belongs to\. Multiple group names may be listed\. VALID UNTIL *abstime* The VALID UNTIL option sets an absolute time after which the user account password is no longer valid\. By default the password has no time limit\. CONNECTION LIMIT \{ *limit* \| UNLIMITED \}
The maximum number of database connections the user is permitted to have open concurrently\. The limit isn't enforced for superusers\. Use the UNLIMITED keyword to permit the maximum number of concurrent connections\. A limit on the number of connections for each database might also apply\. For more information, see [CREATE DATABASE](r_CREATE_DATABASE.md)\. The default is UNLIMITED\. To view current connections, query the [STV\_SESSIONS](r_STV_SESSIONS.md) system view\. 
If both user and database connection limits apply, an unused connection slot must be available that is within both limits when a user attempts to connect\.
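As a combined illustration of these options, the following sketch \(the user name and password are hypothetical\) creates an account that can see all rows in user\-visible system tables and is limited to 10 concurrent connections\.

```
create user report_user password 'Report1234'
syslog access unrestricted
connection limit 10;
```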
By default, all users have CREATE and USAGE privileges on the PUBLIC schema\. To disallow users from creating objects in the PUBLIC schema of a database, use the REVOKE command to remove that privilege\. When using IAM authentication to create database user credentials, you might want to create a superuser that is able to log on only using temporary credentials\. You can't disable a superuser's password, but you can create an unknown password using a randomly generated MD5 hash string\. ``` create user iam_superuser password 'md5A1234567890123456780123456789012' createuser; ```
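To apply the first note's recommendation, a statement like the following sketch removes the default CREATE privilege on the PUBLIC schema from all users; you can then grant it back to specific users or groups as needed\.

```
revoke create on schema public from public;
```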
The following command creates a user account named dbuser, with the password "abcD1234", database creation privileges, and a connection limit of 30\.

```
create user dbuser with password 'abcD1234' createdb
connection limit 30;
```

Query the PG\_USER\_INFO catalog table to view details about a database user\.

```
select * from pg_user_info;

 usename   | usesysid | usecreatedb | usesuper | usecatupd |  passwd  | valuntil | useconfig | useconnlimit
-----------+----------+-------------+----------+-----------+----------+----------+-----------+-------------
 rdsdb     |        1 | true        | true     | true      | ******** | infinity |           |
 adminuser |      100 | true        | true     | false     | ******** |          |           | UNLIMITED
 dbuser    |      102 | true        | false    | false     | ******** |          |           | 30
```

In the following example, the account password is valid until June 10, 2017\.

```
create user dbuser with password 'abcD1234' valid until '2017-06-10';
```

The following example creates a user with a case\-sensitive password that contains special characters\.

```
create user newman with password '@AbC4321!';
```

To use a backslash \('\\'\) in your MD5 password, escape the backslash with a backslash in your source string\. The following example creates a user named `slashpass` with a single backslash \( '`\`'\) as the password\.

```
select md5('\\'||'slashpass');
md5
--------------------------------
0c983d1a624280812631c5389e60d48c

create user slashpass
password 'md50c983d1a624280812631c5389e60d48c';
```
In Amazon Redshift workload management \(WLM\), query monitoring rules define metrics\-based performance boundaries for WLM queues and specify what action to take when a query goes beyond those boundaries\. For example, for a queue dedicated to short running queries, you might create a rule that aborts queries that run for more than 60 seconds\. To track poorly designed queries, you might have another rule that logs queries that contain nested loops\.

You define query monitoring rules as part of your workload management \(WLM\) configuration\. You can define up to 25 rules for each queue, with a limit of 25 rules for all queues\. Each rule includes up to three conditions, or predicates, and one action\. A *predicate* consists of a metric, a comparison condition \(=, <, or >\), and a value\. If all of the predicates for any rule are met, that rule's action is triggered\. Possible rule actions are log, hop, and abort, as discussed following\. The rules in a given queue apply only to queries running in that queue\. A rule is independent of other rules\.
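For illustration, rules attach to a queue definition in the `wlm_json_configuration` parameter\. The following sketch uses hypothetical rule names and thresholds: the first rule aborts any query that runs for more than 60 seconds, and the second logs queries whose nested loop joins return a large number of rows\.

```
"rules": [
  {
    "rule_name": "abort_long_running",
    "predicate": [
      {"metric_name": "query_execution_time", "operator": ">", "value": 60}
    ],
    "action": "abort"
  },
  {
    "rule_name": "log_nested_loop",
    "predicate": [
      {"metric_name": "nested_loop_join_row_count", "operator": ">", "value": 100}
    ],
    "action": "log"
  }
]
```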