b894ff0ac61f-0
Use the SVCS\_UNLOAD\_LOG to get details of UNLOAD operations\. SVCS\_UNLOAD\_LOG records one row for each file created by an UNLOAD statement\. For example, if an UNLOAD creates 12 files, SVCS\_UNLOAD\_LOG contains 12 corresponding rows\. This view is derived from the STL\_UNLOAD\_LOG system table but doesn't show slice\-level detail for queries run on a concurrency scaling cluster\. **Note** System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters\. The views are similar to the tables with the prefix STL except that the STL tables provide information only for queries run on the main cluster\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_UNLOAD_LOG.md
786e3a8ab729-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_UNLOAD_LOG.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_UNLOAD_LOG.md
c49d6520ceef-0
To get a list of the files that were written to Amazon S3 by an UNLOAD command, you can call an Amazon S3 list operation after the UNLOAD completes; however, depending on how quickly you issue the call, the list might be incomplete because an Amazon S3 list operation is eventually consistent\. To get a complete, authoritative list immediately, query SVCS\_UNLOAD\_LOG\. The following query returns the path name for files that were created by an UNLOAD command for the last query executed: ``` select query, substring(path,0,40) as path from svcs_unload_log where query = pg_last_query_id() order by path; ``` This command returns the following sample output: ``` query | path ------+--------------------------------- 2320 | s3://my-bucket/venue0000_part_00 2320 | s3://my-bucket/venue0001_part_00 2320 | s3://my-bucket/venue0002_part_00 2320 | s3://my-bucket/venue0003_part_00 (4 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_UNLOAD_LOG.md
ec04ccfee563-0
The SIMILAR TO operator matches a string expression, such as a column name, with a SQL standard regular expression pattern\. A SQL regular expression pattern can include a set of pattern\-matching metacharacters, including the two supported by the [LIKE](r_patternmatching_condition_like.md) operator\. The SIMILAR TO operator returns true only if its pattern matches the entire string, unlike POSIX regular expression behavior, where the pattern can match any portion of the string\. SIMILAR TO performs a case\-sensitive match\. **Note** Regular expression matching using SIMILAR TO is computationally expensive\. We recommend using LIKE whenever possible, especially when processing a very large number of rows\. For example, the following queries are functionally identical, but the query that uses LIKE executes several times faster than the query that uses a regular expression: ``` select count(*) from event where eventname SIMILAR TO '%(Ring|Die)%'; select count(*) from event where eventname LIKE '%Ring%' OR eventname LIKE '%Die%'; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions-similar-to.md
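The full-string semantics of SIMILAR TO can be illustrated outside the database with Python's `re` module (this is an illustration only, not Redshift code): `re.fullmatch` requires the pattern to cover the entire string, much like SIMILAR TO, while a POSIX-style search matches any portion of the string.

```python
import re

def similar_to(string, pattern):
    """Rough illustration of SIMILAR TO: the pattern must match the
    ENTIRE string. The SQL pattern '%(Ring|Die)%' is hand-translated
    here to the regex '.*(Ring|Die).*' ('%' behaves like '.*')."""
    return re.fullmatch(pattern, string) is not None

# SIMILAR TO '%(Ring|Die)%' matches because the whole string fits the pattern.
print(similar_to("Ring of Fire", ".*(Ring|Die).*"))   # True

# A POSIX-style search would find 'Ring' anywhere in the string, but a
# full-string match of a non-covering pattern fails.
print(similar_to("Ring of Fire", "(Ring|Die)"))       # False
```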
bbd772a1b966-0
``` expression [ NOT ] SIMILAR TO pattern [ ESCAPE 'escape_char' ] ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions-similar-to.md
80cbeea647f4-0
*expression* A valid UTF\-8 character expression, such as a column name\. SIMILAR TO performs a case\-sensitive pattern match for the entire string in *expression*\. *pattern* A valid UTF\-8 character expression representing a SQL standard regular expression pattern\. *escape\_char* A character expression that will escape metacharacters in the pattern\. The default is two backslashes \('\\\\'\)\. If *pattern* does not contain metacharacters, then the pattern only represents the string itself\. Either of the character expressions can be CHAR or VARCHAR data types\. If they differ, Amazon Redshift converts *pattern* to the data type of *expression*\. SIMILAR TO supports the following pattern\-matching metacharacters: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/pattern-matching-conditions-similar-to.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions-similar-to.md
4df854c500af-0
The following table shows examples of pattern matching using SIMILAR TO: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/pattern-matching-conditions-similar-to.html) The following example finds all cities whose names contain "E" or "H": ``` select distinct city from users where city similar to '%E%|%H%' order by city; city ----------------------- Agoura Hills Auburn Hills Benton Harbor Beverly Hills Chicago Heights Chino Hills Citrus Heights East Hartford ``` The following example uses the default escape string \('`\\`'\) to search for strings that include "`_`": ``` select tablename, "column" from pg_table_def where "column" similar to '%start\\_%' limit 5; tablename | column -------------------+--------------- stl_s3client | start_time stl_tr_conflict | xact_start_ts stl_undone | undo_start_ts
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions-similar-to.md
4df854c500af-1
stl_tr_conflict | xact_start_ts stl_undone | undo_start_ts stl_unload_log | start_time stl_vacuum_detail | start_row (5 rows) ``` The following example specifies '`^`' as the escape string, then uses the escape string to search for strings that include "`_`": ``` select tablename, "column" from pg_table_def where "column" similar to '%start^_%' escape '^' limit 5; tablename | column -------------------+--------------- stl_s3client | start_time stl_tr_conflict | xact_start_ts stl_undone | undo_start_ts stl_unload_log | start_time stl_vacuum_detail | start_row (5 rows) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/pattern-matching-conditions-similar-to.md
bd20dbfbec89-0
**To perform a merge operation by replacing existing rows** 1. Create a staging table, and then populate it with data to be merged, as shown in the following pseudocode\. ``` create temp table stage (like target); insert into stage select * from source where source.filter = 'filter_expression'; ``` 2. Use an inner join with the staging table to delete the rows from the target table that are being updated\. Put the delete and insert operations in a single transaction block so that if there is a problem, everything will be rolled back\. ``` begin transaction; delete from target using stage where target.primarykey = stage.primarykey; ``` 3. Insert all of the rows from the staging table\. ``` insert into target select * from stage; end transaction; ``` 4. Drop the staging table\. ``` drop table stage; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/merge-replacing-existing-rows.md
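The delete-then-insert control flow above can be sketched in plain Python, with dictionaries standing in for the target and staging tables and keys playing the role of the primary key (an illustration of the logic only, not database code):

```python
def merge_replace(target, stage):
    """Replace rows in target whose primary key appears in stage, then
    insert every staged row -- mirroring the delete + insert steps that
    the SQL performs inside a single transaction."""
    merged = dict(target)
    # Step 2: delete target rows that are being updated.
    for key in stage:
        merged.pop(key, None)
    # Step 3: insert all rows from the staging table.
    merged.update(stage)
    return merged

target = {1: "old-a", 2: "old-b", 3: "old-c"}
stage = {2: "new-b", 4: "new-d"}
print(merge_replace(target, stage))
# {1: 'old-a', 3: 'old-c', 2: 'new-b', 4: 'new-d'}
```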
039f559f4b7c-0
Runs a stored procedure\. The CALL command must include the procedure name and the input argument values\. You must call a stored procedure by using the CALL statement\. CALL can't be part of any regular queries\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CALL_procedure.md
d9ef7ca2867a-0
``` CALL sp_name ( [ argument ] [, ...] ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CALL_procedure.md
2fdca2bc4acd-0
*sp\_name* The name of the procedure to run\. *argument* The value of the input argument\. This parameter can also be a function name, for example `pg_last_query_id()`\. You can't use queries as CALL arguments\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CALL_procedure.md
92af0914dafe-0
Amazon Redshift stored procedures support nested and recursive calls, as described following\. In addition, make sure that your driver support is up to date, as also described following\. **Topics** + [Nested calls](#r_CALL_procedure-nested-calls) + [Driver support](#r_CALL_procedure-driver-support)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CALL_procedure.md
45f976e536a4-0
Amazon Redshift stored procedures support nested and recursive calls\. The maximum number of nesting levels allowed is 16\. Nested calls can encapsulate business logic into smaller procedures, which can be shared by multiple callers\. If you call a nested procedure that has output parameters, the inner procedure must define INOUT arguments\. In this case, the inner procedure is passed in a nonconstant variable\. OUT arguments aren't allowed\. This behavior occurs because a variable is needed to hold the output of the inner call\. The relationship between inner and outer procedures is logged in the `from_sp_call` column of [SVL\_STORED\_PROC\_CALL](r_SVL_STORED_PROC_CALL.md)\. The following example shows passing variables to a nested procedure call through INOUT arguments\. ``` CREATE OR REPLACE PROCEDURE inner_proc(INOUT a int, b int, INOUT c int) LANGUAGE plpgsql AS $$ BEGIN a := b * a; c := b * c; END; $$; CREATE OR REPLACE PROCEDURE outer_proc(multiplier int) LANGUAGE plpgsql AS $$ DECLARE x int := 3; y int := 4; BEGIN DROP TABLE IF EXISTS test_tbl; CREATE TEMP TABLE test_tbl(a int, b varchar(256));
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CALL_procedure.md
45f976e536a4-1
DROP TABLE IF EXISTS test_tbl; CREATE TEMP TABLE test_tbl(a int, b varchar(256)); CALL inner_proc(x, multiplier, y); insert into test_tbl values (x, y::varchar); END; $$; CALL outer_proc(5); SELECT * from test_tbl; a | b ----+---- 15 | 20 (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CALL_procedure.md
a9dd88e840c6-0
We recommend that you upgrade your Java Database Connectivity \(JDBC\) and Open Database Connectivity \(ODBC\) drivers to the latest version that has support for Amazon Redshift stored procedures\. You might be able to use your existing driver if your client tool uses driver API operations that pass through the CALL statement to the server\. Output parameters, if any, are returned as a result set of one row\. The latest versions of Amazon Redshift JDBC and ODBC drivers have metadata support for stored procedure discovery\. They also have `CallableStatement` support for custom Java applications\. For more information on drivers, see [Connecting to an Amazon Redshift Cluster Using SQL Client Tools](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-to-cluster.html) in the *Amazon Redshift Cluster Management Guide\.* **Important** Currently, you can't use a `refcursor` data type in a stored procedure using a JDBC or ODBC driver\. The following examples show how to use different API operations of the JDBC driver for stored procedure calls\. ``` void statement_example(Connection conn) throws SQLException { Statement statement = conn.createStatement(); statement.execute("CALL sp_statement_example(1)"); } void prepared_statement_example(Connection conn) throws SQLException { String sql = "CALL sp_prepared_statement_example(42, 84)"; PreparedStatement pstmt = conn.prepareStatement(sql);
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CALL_procedure.md
a9dd88e840c6-1
PreparedStatement pstmt = conn.prepareStatement(sql); pstmt.execute(); } void callable_statement_example(Connection conn) throws SQLException { CallableStatement cstmt = conn.prepareCall("CALL sp_create_out_in(?,?)"); cstmt.registerOutParameter(1, java.sql.Types.INTEGER); cstmt.setInt(2, 42); cstmt.executeQuery(); Integer out_value = cstmt.getInt(1); } ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CALL_procedure.md
37e2dbdb8439-0
The following example calls the procedure named `test_sp1`\. ``` call test_sp1(3,'book'); INFO: Table "tmp_tbl" does not exist and will be skipped INFO: min_val = 3, f2 = book ``` The following example calls the procedure named `test_sp2`\. ``` call test_sp2(2,'2019'); f2 | column2 ---------------------+--------- 2019+2019+2019+2019 | 2 (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CALL_procedure.md
85686e162fe9-0
ST\_Length returns the Cartesian length of an input linear geometry\. The length units are the same as the units in which the coordinates of the input geometry are expressed\. The function returns zero \(0\) for points, multipoints, and areal geometries\. When the input is a geometry collection, the function returns the sum of the lengths of the geometries in the collection\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Length-function.md
2610386936e4-0
``` ST_Length(geom) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Length-function.md
a5962e3b903f-0
*geom* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Length-function.md
c35df8c69557-0
`DOUBLE PRECISION` If *geom* is null, then null is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Length-function.md
fd8ac9a6c91e-0
The following SQL returns the Cartesian length of a multilinestring\. ``` SELECT ST_Length(ST_GeomFromText('MULTILINESTRING((0 0,10 0,0 10),(10 0,20 0,20 10))')); ``` ``` st_length -------------------------------- 44.142135623731 ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Length-function.md
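The Cartesian length in the example above can be verified by summing the Euclidean lengths of the individual segments, as in this Python sketch (illustrative only, not Redshift code):

```python
import math

def multilinestring_length(lines):
    """Sum of Euclidean segment lengths across each linestring -- the
    Cartesian length that ST_Length returns for linear geometries."""
    total = 0.0
    for coords in lines:
        for (x1, y1), (x2, y2) in zip(coords, coords[1:]):
            total += math.hypot(x2 - x1, y2 - y1)
    return total

# Same geometry as the SQL example above:
# MULTILINESTRING((0 0,10 0,0 10),(10 0,20 0,20 10))
geom = [[(0, 0), (10, 0), (0, 10)], [(10, 0), (20, 0), (20, 10)]]
print(multilinestring_length(geom))  # 44.14213562373095
```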
f07229297bd0-0
If you deployed a cluster in order to complete this exercise, when you are finished with the exercise, you should delete the cluster so that it will stop accruing charges to your AWS account\. To delete the cluster, follow the steps in [Deleting a cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-console.html#delete-cluster) in the Amazon Redshift Cluster Management Guide\. If you want to keep the cluster, you might want to keep the sample data for reference\. Most of the examples in this guide use the tables you created in this exercise\. The size of the data will not have any significant effect on your available storage\. If you want to keep the cluster, but want to clean up the sample data, you can run the following command to drop the TICKIT database: ``` drop database tickit; ``` If you didn't create a TICKIT database, or if you don't want to drop the database, run the following commands to drop just the tables: ``` drop table testtable; drop table users; drop table venue; drop table category; drop table date; drop table event; drop table listing; drop table sales; ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/cm-dev-t-clean-up-resources.md
399bcd27a6b0-0
The SHA1 function uses the SHA1 cryptographic hash function to convert a variable\-length string into a 40\-character string that is a text representation of the hexadecimal value of a 160\-bit checksum\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA1.md
b3096fa92fe6-0
SHA1 is a synonym of [SHA function](SHA.md) and [FUNC\_SHA1 function](FUNC_SHA1.md)\. ``` SHA1(string) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA1.md
493a6410688e-0
*string* A variable\-length string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA1.md
6db004368d1c-0
The SHA1 function returns a 40\-character string that is a text representation of the hexadecimal value of a 160\-bit checksum\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA1.md
4a3504bf5988-0
The following example returns the 160\-bit value for the string 'Amazon Redshift': ``` select sha1('Amazon Redshift'); ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/SHA1.md
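Outside the database, the same 160-bit digest can be produced with Python's `hashlib`, shown here only to illustrate the 40-character hexadecimal form of the result:

```python
import hashlib

# SHA-1 of the same string as in the SQL example above.
digest = hashlib.sha1("Amazon Redshift".encode("utf-8")).hexdigest()

# A 160-bit checksum rendered as hexadecimal is always 40 characters.
print(len(digest))  # 40
print(digest)
```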
b3ae64fbcc59-0
TIMESTAMPTZ\_CMP\_TIMESTAMP compares the value of a time stamp with time zone expression with a time stamp expression\. If the time stamp with time zone and time stamp values are identical, the function returns 0\. If the time stamp with time zone is chronologically greater, the function returns 1\. If the time stamp is chronologically greater, the function returns –1\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP_TIMESTAMP.md
b249e9f5859b-0
``` TIMESTAMPTZ_CMP_TIMESTAMP(timestamptz, timestamp) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP_TIMESTAMP.md
b1c2846259d4-0
*timestamptz* A TIMESTAMPTZ column or an expression that implicitly converts to a time stamp with a time zone\. *timestamp* A TIMESTAMP column or an expression that implicitly converts to a time stamp\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP_TIMESTAMP.md
ff1b39f66456-0
INTEGER
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TIMESTAMPTZ_CMP_TIMESTAMP.md
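The three-way comparison can be sketched with Python datetimes. In this sketch the plain time stamp is assumed to be interpreted in UTC before comparing (a simplifying assumption for illustration; Redshift performs its own conversion):

```python
from datetime import datetime, timezone

def timestamptz_cmp_timestamp(tstz, ts):
    """Return 0 if equal, 1 if the time-zone-aware value is later, and
    -1 if the plain time stamp is later. The naive timestamp is treated
    as UTC here (an assumption made for this sketch)."""
    ts_utc = ts.replace(tzinfo=timezone.utc)
    if tstz == ts_utc:
        return 0
    return 1 if tstz > ts_utc else -1

a = datetime(2008, 1, 24, 13, 30, tzinfo=timezone.utc)
print(timestamptz_cmp_timestamp(a, datetime(2008, 1, 24, 13, 30)))  # 0
print(timestamptz_cmp_timestamp(a, datetime(2008, 1, 24, 13, 29)))  # 1
print(timestamptz_cmp_timestamp(a, datetime(2008, 1, 24, 13, 31)))  # -1
```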
2a38930c9ce2-0
A number of factors can affect query performance\. The following aspects of your data, cluster, and database operations all play a part in how quickly your queries process\. + **Number of nodes, processors, or slices** – A compute node is partitioned into slices\. More nodes means more processors and more slices, which enables your queries to process faster by running portions of the query concurrently across the slices\. However, more nodes also means greater expense, so you need to find the balance of cost and performance that is appropriate for your system\. For more information on Amazon Redshift cluster architecture, see [Data warehouse system architecture](c_high_level_system_architecture.md)\. + **Node types** – An Amazon Redshift cluster can use either dense storage or dense compute nodes\. The dense storage node types are recommended for substantial data storage needs, while dense compute node types are optimized for performance\-intensive workloads\. Each node type offers different sizes and limits to help you scale your cluster appropriately\. The node size determines the storage capacity, memory, CPU, and price of each node in the cluster\. For more information on node types, see [Amazon Redshift Pricing](https://aws.amazon.com/redshift/pricing/)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-performance.md
2a38930c9ce2-1
+ **Data distribution** – Amazon Redshift stores table data on the compute nodes according to a table's distribution style\. When you execute a query, the query optimizer redistributes the data to the compute nodes as needed to perform any joins and aggregations\. Choosing the right distribution style for a table helps minimize the impact of the redistribution step by locating the data where it needs to be before the joins are performed\. For more information, see [Choosing a data distribution style](t_Distributing_data.md)\. + **Data sort order** – Amazon Redshift stores table data on disk in sorted order according to a table's sort keys\. The query optimizer and the query processor use the information about where the data is located to reduce the number of blocks that need to be scanned and thereby improve query speed\. For more information, see [Choosing sort keys](t_Sorting_data.md)\. + **Dataset size** – A higher volume of data in the cluster can slow query performance, because more rows need to be scanned and redistributed\. You can mitigate this effect by regular vacuuming and archiving of data, and by using a predicate to restrict the query dataset\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-performance.md
2a38930c9ce2-2
+ **Concurrent operations** – Running multiple operations at once can affect query performance\. Each operation takes one or more slots in an available query queue and uses the memory associated with those slots\. If other operations are running, there might not be enough query queue slots available\. In this case, the query has to wait for slots to open before it can begin processing\. For more information about creating and configuring query queues, see [Implementing workload management](cm-c-implementing-workload-management.md)\. + **Query structure** – How your query is written affects its performance\. As much as possible, write queries to process and return as little data as will meet your needs\. For more information, see [Amazon Redshift best practices for designing queries](c_designing-queries-best-practices.md)\. + **Code compilation** – Amazon Redshift generates and compiles code for each query execution plan\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-performance.md
2a38930c9ce2-3
+ **Code compilation** – Amazon Redshift generates and compiles code for each query execution plan\. The compiled code runs faster because it eliminates the overhead of using an interpreter\. You generally have some overhead cost the first time code is generated and compiled\. As a result, the performance of a query the first time you run it can be misleading\. The overhead cost might be especially noticeable when you run one\-off queries\. Run the query a second time to determine its typical performance\. Amazon Redshift uses a serverless compilation service to scale query compilations beyond the compute resources of an Amazon Redshift cluster\. The compiled code segments are cached locally on the cluster and in a virtually unlimited cache\. This cache persists after cluster reboots\. Subsequent executions of the same query run faster because they can skip the compilation phase\. The cache is not compatible across Amazon Redshift versions, so the code is recompiled when queries run after a version upgrade\. By using a scalable compilation service, Amazon Redshift is able to compile code in parallel to provide consistently fast performance\. The magnitude of workload speed\-up depends on the complexity and concurrency of queries\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-performance.md
bb4bf6adfec8-0
[Tutorial: Loading data from Amazon S3](tutorial-loading-data.md) walks you through the steps, from beginning to end, to upload data to an Amazon S3 bucket and then use the COPY command to load the data into your tables\. The tutorial includes help with troubleshooting load errors and compares the performance difference between loading from a single file and loading from multiple files\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-loading-take-loading-data-tutorial.md
e25c3005485a-0
Performs compression analysis and produces a report with the suggested compression encoding for the tables analyzed\. For each column, the report includes an estimate of the potential reduction in disk space compared to the current encoding\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE_COMPRESSION.md
0c8ac89d6b55-0
``` ANALYZE COMPRESSION [ [ table_name ] [ ( column_name [, ...] ) ] ] [COMPROWS numrows] ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE_COMPRESSION.md
867d413ae382-0
*table\_name* You can analyze compression for specific tables, including temporary tables\. You can qualify the table with its schema name\. You can optionally specify a *table\_name* to analyze a single table\. If you don't specify a *table\_name*, all of the tables in the currently connected database are analyzed\. You can't specify more than one *table\_name* with a single ANALYZE COMPRESSION statement\. *column\_name* If you specify a *table\_name*, you can also specify one or more columns in the table \(as a comma\-separated list within parentheses\)\. COMPROWS Number of rows to be used as the sample size for compression analysis\. The analysis is run on rows from each data slice\. For example, if you specify COMPROWS 1000000 \(1,000,000\) and the system contains 4 total slices, no more than 250,000 rows per slice are read and analyzed\. If COMPROWS isn't specified, the sample size defaults to 100,000 per slice\. Values of COMPROWS lower than the default of 100,000 rows per slice are automatically upgraded to the default value\. However, compression analysis doesn't produce recommendations if the amount of data in the table is insufficient to produce a meaningful sample\. If the COMPROWS number is greater than the number of rows in the table, the ANALYZE COMPRESSION command still proceeds and runs the compression analysis against all of the available rows\. *numrows*
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE_COMPRESSION.md
867d413ae382-1
*numrows* Number of rows to be used as the sample size for compression analysis\. The accepted range for *numrows* is a number between 1000 and 1000000000 \(1,000,000,000\)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE_COMPRESSION.md
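The per-slice sample size described above can be expressed as a small calculation. This sketch assumes COMPROWS is divided evenly across slices and that any value falling below the documented 100,000-row per-slice default is raised to that default (a reading of the documented behavior; the helper name is hypothetical):

```python
DEFAULT_ROWS_PER_SLICE = 100_000

def sample_rows_per_slice(comprows, num_slices):
    """Rows read per slice for compression analysis: COMPROWS spread
    across slices, floored at the documented default per-slice sample."""
    if comprows is None:
        return DEFAULT_ROWS_PER_SLICE
    return max(comprows // num_slices, DEFAULT_ROWS_PER_SLICE)

# COMPROWS 1000000 on a 4-slice system: no more than 250,000 rows per slice.
print(sample_rows_per_slice(1_000_000, 4))  # 250000

# Without COMPROWS, the sample defaults to 100,000 rows per slice.
print(sample_rows_per_slice(None, 4))       # 100000
```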
1759a9ab51ba-0
Run ANALYZE COMPRESSION to get recommendations for column encoding schemes, based on a sample of the table's contents\. ANALYZE COMPRESSION is an advisory tool and doesn't modify the column encodings of the table\. You can apply the suggested encoding by recreating the table or by creating a new table with the same schema\. Recreating an uncompressed table with appropriate encoding schemes can significantly reduce its on\-disk footprint\. This approach saves disk space and improves query performance for I/O\-bound workloads\. ANALYZE COMPRESSION skips the actual analysis phase and directly returns the original encoding type on any column that is designated as a SORTKEY\. It does this because range\-restricted scans might perform poorly when SORTKEY columns are compressed much more highly than other columns\. ANALYZE COMPRESSION acquires an exclusive table lock, which prevents concurrent reads and writes against the table\. Only run the ANALYZE COMPRESSION command when the table is idle\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE_COMPRESSION.md
b9e80db44cfd-0
The following example shows the encoding and estimated percent reduction for the columns in the LISTING table only: ``` analyze compression listing; Table | Column | Encoding | Est_reduction_pct --------+----------------+----------+------------------ listing | listid | delta | 75.00 listing | sellerid | delta32k | 38.14 listing | eventid | delta32k | 5.88 listing | dateid | zstd | 31.73 listing | numtickets | zstd | 38.41 listing | priceperticket | zstd | 59.48 listing | totalprice | zstd | 37.90 listing | listtime | zstd | 13.39 ``` The following example analyzes the QTYSOLD, COMMISSION, and SALETIME columns in the SALES table\. ``` analyze compression sales(qtysold, commission, saletime); Table | Column | Encoding | Est_reduction_pct ------+------------+----------+------------------ sales | salesid | N/A | 0.00
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE_COMPRESSION.md
b9e80db44cfd-1
sales | salesid | N/A | 0.00 sales | listid | N/A | 0.00 sales | sellerid | N/A | 0.00 sales | buyerid | N/A | 0.00 sales | eventid | N/A | 0.00 sales | dateid | N/A | 0.00 sales | qtysold | zstd | 67.14 sales | pricepaid | N/A | 0.00 sales | commission | zstd | 13.94 sales | saletime | zstd | 13.38 ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ANALYZE_COMPRESSION.md
597cb5a550d5-0
Returns the name of the database where you are currently connected\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_DATABASE.md
cf5e20b22b07-0
``` current_database() ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_DATABASE.md
54a8879b56e6-0
Returns a CHAR or VARCHAR string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_DATABASE.md
61f416c4ca0e-0
The following query returns the name of the current database: ``` select current_database(); current_database ------------------ tickit (1 row) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_DATABASE.md
a7485b7320ee-0
Use SVV\_EXTERNAL\_SCHEMAS to view information about external schemas\. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\. SVV\_EXTERNAL\_SCHEMAS is visible to all users\. Superusers can see all rows; regular users can see only metadata to which they have access\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_EXTERNAL_SCHEMAS.md
a5193278d245-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_EXTERNAL_SCHEMAS.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_EXTERNAL_SCHEMAS.md
456c16aa4f13-0
The following example shows details for external schemas\. ``` select * from svv_external_schemas; esoid | eskind | schemaname | esowner | databasename | esoptions -------+--------+------------+---------+--------------+------------------------------------------------------------- 100133 | 1 | spectrum | 100 | redshift | {"IAM_ROLE":"arn:aws:iam::123456789012:role/mySpectrumRole"} ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_EXTERNAL_SCHEMAS.md
26f767d6955c-0
Stored procedures in Amazon Redshift are based on the PostgreSQL PL/pgSQL procedural language, with some important differences\. In this reference, you can find details of PL/pgSQL syntax as implemented by Amazon Redshift\. For more information about PL/pgSQL, see [PL/pgSQL \- SQL procedural language](https://www.postgresql.org/docs/8.0/plpgsql.html) in the PostgreSQL 8\.0 documentation\. **Topics** + [PL/pgSQL reference conventions](c_PL_reference_conventions.md) + [Structure of PL/pgSQL](c_PLpgSQL-structure.md) + [Supported PL/pgSQL statements](c_PLpgSQL-statements.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_pl_pgSQL_reference.md
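To give a sense of the structure documented in these topics, the following minimal sketch defines and calls a stored procedure\. The procedure name, table name, and column are hypothetical, and the sketch assumes the table already exists:

```
-- Hypothetical example: assumes a table message_log(msg varchar) exists.
CREATE OR REPLACE PROCEDURE log_message(msg varchar)
AS $$
BEGIN
  INSERT INTO message_log VALUES (msg);
END;
$$ LANGUAGE plpgsql;

CALL log_message('hello');
```

The body between the `$$` delimiters follows the PL/pgSQL block structure described in [Structure of PL/pgSQL](c_PLpgSQL-structure.md)\.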
a351d7ab05e5-0
The following queries show a few of the ways in which you can query the catalog tables to get useful information about an Amazon Redshift database\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG_examples.md
71bc7b2f767d-0
The following view definition joins the STV\_TBL\_PERM system table with the PG\_CLASS, PG\_NAMESPACE, and PG\_DATABASE system catalog tables to return the table ID, database name, schema name, and table name\.

```
create view tables_vw as
select distinct(id) table_id
,trim(datname) db_name
,trim(nspname) schema_name
,trim(relname) table_name
from stv_tbl_perm
join pg_class on pg_class.oid = stv_tbl_perm.id
join pg_namespace on pg_namespace.oid = relnamespace
join pg_database on pg_database.oid = stv_tbl_perm.db_id;
```

The following example returns the information for table ID 117855\.

```
select * from tables_vw where table_id = 117855;
```

```
table_id | db_name | schema_name | table_name
---------+---------+-------------+-----------
117855   | dev     | public      | customer
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG_examples.md
08dbf0030155-0
The following query joins some catalog tables to find out how many columns each Amazon Redshift table contains\. Amazon Redshift table names are stored in both PG\_TABLES and STV\_TBL\_PERM; where possible, use PG\_TABLES to return Amazon Redshift table names\. This query does not involve any Amazon Redshift tables\.

```
select nspname, relname, max(attnum) as num_cols
from pg_attribute a, pg_namespace n, pg_class c
where n.oid = c.relnamespace and a.attrelid = c.oid
and c.relname not like '%pkey'
and n.nspname not like 'pg%'
and n.nspname not like 'information%'
group by 1, 2
order by 1, 2;

nspname | relname  | num_cols
--------+----------+----------
public  | category |        4
public  | date     |        8
public  | event    |        6
public  | listing  |        8
public  | sales    |       10
public  | users    |       18
public  | venue    |        5
(7 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG_examples.md
a479de45dbe0-0
The following query joins STV\_TBL\_PERM to some PG tables to return a list of tables in the TICKIT database and their schema names \(NSPNAME column\)\. The query also returns the total number of rows in each table\. \(This query is helpful when multiple schemas in your system have the same table names\.\)

```
select datname, nspname, relname, sum(rows) as rows
from pg_class, pg_namespace, pg_database, stv_tbl_perm
where pg_namespace.oid = relnamespace
and pg_class.oid = stv_tbl_perm.id
and pg_database.oid = stv_tbl_perm.db_id
and datname ='tickit'
group by datname, nspname, relname
order by datname, nspname, relname;

datname | nspname | relname  | rows
--------+---------+----------+--------
tickit  | public  | category |     11
tickit  | public  | date     |    365
tickit  | public  | event    |   8798
tickit  | public  | listing  | 192497
tickit  | public  | sales    | 172456
tickit  | public  | users    |  49990
tickit  | public  | venue    |    202
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG_examples.md
a479de45dbe0-1
(7 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG_examples.md
39f5a3fdab8e-0
The following query lists some information about each user table and its columns: the table ID, the table name, its column names, and the data type of each column:

```
select distinct attrelid, rtrim(name), attname, typname
from pg_attribute a, pg_type t, stv_tbl_perm p
where t.oid=a.atttypid and a.attrelid=p.id
and a.attrelid between 100100 and 110000
and typname not in('oid','xid','tid','cid')
order by a.attrelid asc, typname, attname;

attrelid | rtrim | attname        | typname
---------+-------+----------------+---------
100133   | users | likebroadway   | bool
100133   | users | likeclassical  | bool
100133   | users | likeconcerts   | bool
...
100137   | venue | venuestate     | bpchar
100137   | venue | venueid        | int2
100137   | venue | venueseats     | int4
100137   | venue | venuecity      | varchar
...
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG_examples.md
3470a281f013-0
The following query joins the STV\_BLOCKLIST table to PG\_CLASS to return storage information for the columns in the SALES table\.

```
select col, count(*)
from stv_blocklist s, pg_class p
where s.tbl=p.oid and relname='sales'
group by col
order by col;

col | count
----+-------
  0 |     4
  1 |     4
  2 |     4
  3 |     4
  4 |     4
  5 |     4
  6 |     4
  7 |     4
  8 |     4
  9 |     8
 10 |     4
 12 |     4
 13 |     8
(13 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_join_PG_examples.md
9cf897429da4-0
Synonym of the LEN function\. See [LEN function](r_LEN.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LENGTH.md
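For example, the following call behaves exactly like LEN:

```
select length('fourscore');

length
--------
9
(1 row)
```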
0e43b3dc7f4a-0
The BOOL\_OR function operates on a single Boolean or integer column or expression\. This function applies similar logic to the BIT\_AND and BIT\_OR functions\. For this function, the return type is a Boolean value \(`true` or `false`\)\. If any value in a set is `true`, the BOOL\_OR function returns `true` \(`t`\)\. If no value in a set is `true`, the function returns `false` \(`f`\)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BOOL_OR.md
8a67e9f921a4-0
```
BOOL_OR ( [DISTINCT | ALL] expression )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BOOL_OR.md
95c97a59e551-0
*expression*
The target column or expression that the function operates on\. This expression must have a BOOLEAN or integer data type\. The return type of the function is BOOLEAN\.

DISTINCT \| ALL
With the argument DISTINCT, the function eliminates all duplicate values for the specified expression before calculating the result\. With the argument ALL, the function retains all duplicate values\. ALL is the default\. See [DISTINCT support for bit\-wise aggregations](c_bitwise_aggregate_functions.md#distinct-support-for-bit-wise-aggregations)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BOOL_OR.md
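Because BOOL\_OR only tests whether any value in the set is `true`, DISTINCT and ALL produce the same result for this function\. The following sketch, written against the TICKIT USERS table, illustrates the DISTINCT syntax:

```
-- DISTINCT removes duplicate Boolean values before aggregation,
-- which doesn't change the outcome of BOOL_OR.
select bool_or(distinct likesports) from users;
```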
a34a039e9d9a-0
You can use the Boolean functions with either Boolean expressions or integer expressions\. For example, the following query returns results from the standard USERS table in the TICKIT database, which has several Boolean columns\. The BOOL\_OR function returns `true` for all five rows\. At least one user in each of those states likes sports\.

```
select state, bool_or(likesports) from users
group by state order by state limit 5;

state | bool_or
------+--------
AB    | t
AK    | t
AL    | t
AZ    | t
BC    | t
(5 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_BOOL_OR.md
4859486b13ef-0
Converts an angle in degrees to its equivalent in radians\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RADIANS.md
0208a00f8e23-0
```
RADIANS(number)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RADIANS.md
ad029d97e443-0
*number*
The input parameter is a double precision number\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RADIANS.md
8cd2df18b2f0-0
The RADIANS function returns a double precision number\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RADIANS.md
c0cddbe23537-0
The following example returns the radian equivalent of 180 degrees:

```
select radians(180);

radians
------------------
3.14159265358979
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_RADIANS.md
ee42ac4d74ae-0
Records the current state of the query queues for the service classes\. STV\_WLM\_QUERY\_QUEUE\_STATE is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_QUERY_QUEUE_STATE.md
58c4fda8c2fb-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_WLM_QUERY_QUEUE_STATE.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_QUERY_QUEUE_STATE.md
60a12c9845c8-0
The following query shows the queries in the queue for service classes greater than 4\.

```
select * from stv_wlm_query_queue_state
where service_class > 4
order by service_class;
```

This query returns the following sample output\.

```
service_class | position | task | query | slot_count | start_time                 | queue_time
--------------+----------+------+-------+------------+----------------------------+------------
            5 |        0 |  455 |   476 |          5 | 2010-10-06 13:18:24.065838 |   20937257
            6 |        1 |  456 |   478 |          5 | 2010-10-06 13:18:26.652906 |   18350191
(2 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_QUERY_QUEUE_STATE.md
d277739f4657-0
CHANGE\_USER\_PRIORITY enables superusers to modify the priority of all queries issued by a user that are either running or waiting in workload management \(WLM\)\. Only one user, session, or query can run with the priority `CRITICAL`\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_USER_PRIORITY.md
0e5c17123a58-0
```
CHANGE_USER_PRIORITY(user_name, priority)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_USER_PRIORITY.md
728c85a4b3df-0
*user\_name*
The database user name whose query priority is changed\.

*priority*
The new priority to be assigned to all queries issued by `user_name`\. This argument must be a string with the value `CRITICAL`, `HIGHEST`, `HIGH`, `NORMAL`, `LOW`, `LOWEST`, or `RESET`\. Only superusers can change the priority to `CRITICAL`\. Changing the priority to `RESET` removes the priority setting for `user_name`\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_USER_PRIORITY.md
bba129d69527-0
None
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_USER_PRIORITY.md
0567257b3739-0
In the following example, the priority is changed for the user `analysis_user` to `LOWEST`\.

```
select change_user_priority('analysis_user', 'lowest');

change_user_priority
-------------------------------------------------------------------------------------
Succeeded to change user priority. Changed user (analysis_user) priority to lowest.
(1 row)
```

In the next statement, the priority is changed to `LOW`\.

```
select change_user_priority('analysis_user', 'low');

change_user_priority
----------------------------------------------------------------------------------------------
Succeeded to change user priority. Changed user (analysis_user) priority from Lowest to low.
(1 row)
```

In this example, the priority is reset\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_USER_PRIORITY.md
0567257b3739-1
```
select change_user_priority('analysis_user', 'reset');

change_user_priority
-------------------------------------------------------
Succeeded to reset priority for user (analysis_user).
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CHANGE_USER_PRIORITY.md
5ece58a8a85d-0
Defines a new cursor\. Use a cursor to retrieve a few rows at a time from the result set of a larger query\.

When the first row of a cursor is fetched, the entire result set is materialized on the leader node, in memory or on disk, if needed\. Because of the potential negative performance impact of using cursors with large result sets, we recommend using alternative approaches whenever possible\. For more information, see [Performance considerations when using cursors](#declare-performance)\.

You must declare a cursor within a transaction block\. Only one cursor at a time can be open per session\. For more information, see [FETCH](fetch.md) and [CLOSE](close.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/declare.md
548e045e9f99-0
```
DECLARE cursor_name CURSOR FOR query
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/declare.md
4376af893133-0
*cursor\_name*
Name of the new cursor\.

*query*
A SELECT statement that populates the cursor\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/declare.md
c5266e039c69-0
If your client application uses an ODBC connection and your query creates a result set that is too large to fit in memory, you can stream the result set to your client application by using a cursor\. When you use a cursor, the entire result set is materialized on the leader node, and then your client can fetch the results incrementally\.

**Note**
To enable cursors in ODBC for Microsoft Windows, enable the **Use Declare/Fetch** option in the ODBC DSN you use for Amazon Redshift\. We recommend setting the ODBC cache size, using the **Cache Size** field in the ODBC DSN options dialog, to 4,000 or greater on multi\-node clusters to minimize round trips\. On a single\-node cluster, set Cache Size to 1,000\.

Because of the potential negative performance impact of using cursors, we recommend using alternative approaches whenever possible\. For more information, see [Performance considerations when using cursors](#declare-performance)\.

Amazon Redshift cursors are supported with the following limitations:
+ Only one cursor at a time can be open per session\.
+ Cursors must be used within a transaction \(BEGIN … END\)\.
+ The maximum cumulative result set size for all cursors is constrained based on the cluster node type\. If you need larger result sets, you can resize to an XL or 8XL node configuration\. For more information, see [Cursor constraints](#declare-constraints)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/declare.md
37a3f8ee94eb-0
When the first row of a cursor is fetched, the entire result set is materialized on the leader node\. If the result set doesn't fit in memory, it is written to disk as needed\. To protect the integrity of the leader node, Amazon Redshift enforces constraints on the size of all cursor result sets, based on the cluster's node type\.

The following table shows the maximum total result set size for each cluster node type\. Maximum result set sizes are in megabytes\.

| Node type | Maximum result set per cluster \(MB\) |
| --- | --- |
| DS1 or DS2 XL single node | 64000 |
| DS1 or DS2 XL multiple nodes | 1800000 |
| DS1 or DS2 8XL multiple nodes | 14400000 |
| RA3 16XL multiple nodes | 14400000 |
| DC1 Large single node | 16000 |
| DC1 Large multiple nodes | 384000 |
| DC1 8XL multiple nodes | 3000000 |
| DC2 Large single node | 8000 |
| DC2 Large multiple nodes | 192000 |
| DC2 8XL multiple nodes | 3200000 |
| RA3 4XL multiple nodes | 3200000 |
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/declare.md
37a3f8ee94eb-1
To view the active cursor configuration for a cluster, query the [STV\_CURSOR\_CONFIGURATION](r_STV_CURSOR_CONFIGURATION.md) system table as a superuser\. To view the state of active cursors, query the [STV\_ACTIVE\_CURSORS](r_STV_ACTIVE_CURSORS.md) system table\. Only the rows for a user's own cursors are visible to the user, but a superuser can view all cursors\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/declare.md
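As a sketch of the checks described above, a superuser might run the following queries; the columns returned depend on the cluster and its configuration:

```
-- Maximum cursor result set size configured for this cluster
select * from stv_cursor_configuration;

-- Cursors currently open in the cluster
select * from stv_active_cursors;
```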
8719ed5e7292-0
Because cursors materialize the entire result set on the leader node before beginning to return results to the client, using cursors with very large result sets can have a negative impact on performance\. We strongly recommend against using cursors with very large result sets\. In some cases, such as when your application uses an ODBC connection, cursors might be the only feasible solution\. If possible, we recommend using these alternatives:
+ Use [UNLOAD](r_UNLOAD.md) to export a large table\. When you use UNLOAD, the compute nodes work in parallel to transfer the data directly to data files on Amazon Simple Storage Service\. For more information, see [Unloading data](c_unloading_data.md)\.
+ Set the JDBC fetch size parameter in your client application\. If you use a JDBC connection and you are encountering client\-side out\-of\-memory errors, you can enable your client to retrieve result sets in smaller batches by setting the JDBC fetch size parameter\. For more information, see [Setting the JDBC fetch size parameter](queries-troubleshooting.md#set-the-JDBC-fetch-size-parameter)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/declare.md
428d1eaa7040-0
The following example declares a cursor named LOLLAPALOOZA to select sales information for the Lollapalooza event, and then fetches rows from the result set using the cursor:

```
-- Begin a transaction
begin;

-- Declare a cursor
declare lollapalooza cursor for
select eventname, starttime, pricepaid/qtysold as costperticket, qtysold
from sales, event
where sales.eventid = event.eventid
and eventname='Lollapalooza';

-- Fetch the first 5 rows in the cursor lollapalooza:
fetch forward 5 from lollapalooza;

eventname    | starttime           | costperticket | qtysold
-------------+---------------------+---------------+---------
Lollapalooza | 2008-05-01 19:00:00 |   92.00000000 |       3
Lollapalooza | 2008-11-15 15:00:00 |  222.00000000 |       2
Lollapalooza | 2008-04-17 15:00:00 |  239.00000000 |       3
Lollapalooza | 2008-04-17 15:00:00 |  239.00000000 |       4
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/declare.md
428d1eaa7040-1
Lollapalooza | 2008-04-17 15:00:00 |  239.00000000 |       1
(5 rows)

-- Fetch the next row:
fetch next from lollapalooza;

eventname    | starttime           | costperticket | qtysold
-------------+---------------------+---------------+---------
Lollapalooza | 2008-10-06 14:00:00 |  114.00000000 |       2

-- Close the cursor and end the transaction:
close lollapalooza;
commit;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/declare.md
d0dc26c5daa5-0
Use the SVL\_S3PARTITION\_SUMMARY view to get a summary of partition processing for Redshift Spectrum queries at the segment level\.

SVL\_S3PARTITION\_SUMMARY is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.

For information about SVCS\_S3PARTITION\_SUMMARY, see [SVCS\_S3PARTITION\_SUMMARY](r_SVCS_S3PARTITION_SUMMARY.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3PARTITION_SUMMARY.md
cd4757dc004a-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_S3PARTITION_SUMMARY.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3PARTITION_SUMMARY.md
e7ed6d1f4687-0
The following example gets the partition scan details for the last query executed\.

```
select query, segment, assignment, min_starttime, max_endtime, min_duration, avg_duration
from svl_s3partition_summary
where query = pg_last_query_id()
order by query, segment;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_S3PARTITION_SUMMARY.md
e82c4e89d156-0
By default, COPY inserts values into the target table's columns in the same order as fields occur in the data files\. If the default column order doesn't work, you can specify a column list or use JSONPath expressions to map source data fields to the target columns\.
+ [Column List](#copy-column-list)
+ [JSONPaths File](#copy-column-mapping-jsonpaths)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-column-mapping.md
0d61508a945c-0
You can specify a comma\-separated list of column names to load source data fields into specific target columns\. The columns can be in any order in the COPY statement, but when loading from flat files, such as in an Amazon S3 bucket, their order must match the order of the source data\.

When loading from an Amazon DynamoDB table, order doesn't matter\. The COPY command matches attribute names in the items retrieved from the DynamoDB table to column names in the Amazon Redshift table\. For more information, see [Loading data from an Amazon DynamoDB table](t_Loading-data-from-dynamodb.md)\.

The format for a column list is as follows\.

```
COPY tablename (column1 [,column2, ...])
```

If a column in the target table is omitted from the column list, then COPY loads the target column's [DEFAULT](r_CREATE_TABLE_NEW.md#create-table-default) expression\. If the target column doesn't have a default, then COPY attempts to load NULL\. If COPY attempts to assign NULL to a column that is defined as NOT NULL, the COPY command fails\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-column-mapping.md
0d61508a945c-1
If an [IDENTITY](r_CREATE_TABLE_NEW.md#identity-clause) column is included in the column list, then [EXPLICIT_IDS](copy-parameters-data-conversion.md#copy-explicit-ids) must also be specified; if an IDENTITY column is omitted, then EXPLICIT\_IDS cannot be specified\. If no column list is specified, the command behaves as if a complete, in\-order column list was specified, with IDENTITY columns omitted if EXPLICIT\_IDS was also not specified\.

If a column is defined with GENERATED BY DEFAULT AS IDENTITY, then it can be copied\. Values are generated or updated with values that you supply\. The EXPLICIT\_IDS option isn't required\. COPY doesn't update the identity high watermark\. For more information, see [GENERATED BY DEFAULT AS IDENTITY](r_CREATE_TABLE_NEW.md#identity-generated-bydefault-clause)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-column-mapping.md
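For example, the following sketch loads three columns of the VENUE table from a pipe\-delimited file; the bucket path and IAM role are hypothetical\. Columns omitted from the list receive their DEFAULT expression or NULL\.

```
-- Hypothetical bucket and IAM role; only three target columns are loaded.
copy venue (venueid, venuename, venuecity)
from 's3://mybucket/data/venue_pipe.txt'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
delimiter '|';
```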
3685a544c6b2-0
When loading from data files in JSON or Avro format, COPY automatically maps the data elements in the JSON or Avro source data to the columns in the target table by matching field names in the Avro schema to column names in the target table or column list\. If your column names and field names don't match, or to map to deeper levels in the data hierarchy, you can use a JSONPaths file to explicitly map JSON or Avro data elements to columns\. For more information, see [JSONPaths file](copy-parameters-data-format.md#copy-json-jsonpaths)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-column-mapping.md
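As a sketch, a JSONPaths file that maps nested source fields to the first three columns of a target table might look like the following; the field names and S3 paths are illustrative, not part of any real dataset\.

```
{
  "jsonpaths": [
    "$.venue.id",
    "$.venue.name",
    "$.location.city"
  ]
}
```

The COPY command then references the JSONPaths file with the JSON option:

```
copy venue (venueid, venuename, venuecity)
from 's3://mybucket/data/venue.json'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
json 's3://mybucket/jsonpaths/venue_jsonpaths.txt';
```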
949dae9617bd-0
The REVERSE function operates on a string and returns the characters in reverse order\. For example, `reverse('abcde')` returns `edcba`\. This function works on numeric and date data types as well as character data types; however, in most cases it has practical value for character strings\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REVERSE.md
de0658641c39-0
```
REVERSE ( expression )
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REVERSE.md
a022c0b18c74-0
*expression*
An expression with a character, date, time stamp, or numeric data type that represents the target of the character reversal\. All expressions are implicitly converted to variable\-length character strings\. Trailing blanks in fixed\-width character strings are ignored\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REVERSE.md
f6a44645f64e-0
REVERSE returns a VARCHAR\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REVERSE.md
36ff7930b5fc-0
Select five distinct city names and their corresponding reversed names from the USERS table:

```
select distinct city as cityname, reverse(cityname)
from users order by city limit 5;

cityname | reverse
---------+----------
Aberdeen | needrebA
Abilene  | enelibA
Ada      | adA
Agat     | tagA
Agawam   | mawagA
(5 rows)
```

Select five sales IDs and their corresponding reversed IDs cast as character strings:

```
select salesid, reverse(salesid)::varchar
from sales order by salesid desc limit 5;

salesid | reverse
--------+---------
 172456 | 654271
 172455 | 554271
 172454 | 454271
 172453 | 354271
 172452 | 254271
(5 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REVERSE.md