e6f595fcd123-0
Records details for compression analysis operations during COPY or ANALYZE COMPRESSION commands\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_ANALYZE_COMPRESSION.md
6df445963089-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_ANALYZE_COMPRESSION.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_ANALYZE_COMPRESSION.md
871e2cf2b8b6-0
The following example inspects the details of compression analysis on the `lineitem` table by the last COPY command run in the same session\.

```
select xid, tbl, btrim(tablename) as tablename, col,
old_encoding, new_encoding, mode
from stl_analyze_compression
where xid = (select xid from stl_query where query = pg_last_copy_id())
order by col;

 xid  |  tbl   | tablename | col | old_encoding | new_encoding | mode
======+========+===========+=====+==============+==============+======
 8196 | 248126 | lineitem  |   0 | mostly32     | mostly32     | ON
 8196 | 248126 | lineitem  |   1 | mostly32     | lzo          | ON
 8196 | 248126 | lineitem  |   2 | lzo          | delta32k     | ON
 8196 | 248126 | lineitem  |   3 | delta        | delta        | ON
 8196 | 248126 | lineitem  |   4 | bytedict     | bytedict     | ON
 8196 | 248126 | lineitem  |   5 | mostly32     | mostly32     | ON
 8196 | 248126 | lineitem  |   6 | delta        | delta        | ON
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_ANALYZE_COMPRESSION.md
871e2cf2b8b6-1
 8196 | 248126 | lineitem  |   7 | delta        | delta        | ON
 8196 | 248126 | lineitem  |   8 | lzo          | zstd         | ON
 8196 | 248126 | lineitem  |   9 | runlength    | zstd         | ON
 8196 | 248126 | lineitem  |  10 | delta        | lzo          | ON
 8196 | 248126 | lineitem  |  11 | delta        | delta        | ON
 8196 | 248126 | lineitem  |  12 | delta        | delta        | ON
 8196 | 248126 | lineitem  |  13 | bytedict     | zstd         | ON
 8196 | 248126 | lineitem  |  14 | bytedict     | zstd         | ON
 8196 | 248126 | lineitem  |  15 | text255      | zstd         | ON
(16 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_ANALYZE_COMPRESSION.md
ef3bc91f8b8d-0
To create the external table for this tutorial, run the following command\.

```
CREATE EXTERNAL TABLE spectrum.customers (
  id int,
  name struct<given:varchar(20), family:varchar(20)>,
  phones array<varchar(20)>,
  orders array<struct<shipdate:timestamp, price:double precision>>
)
STORED AS PARQUET
LOCATION 's3://awssampledbuswest2/nested_example/customers/';
```

In the preceding example, the external table `spectrum.customers` uses the `struct` and `array` data types to define columns with nested data\. Amazon Redshift Spectrum supports querying nested data in Parquet, ORC, JSON, and Ion file formats\. The `LOCATION` parameter must refer to the Amazon S3 folder that contains the nested data or files\.

**Note**
Amazon Redshift doesn't support complex data types in an Amazon Redshift database table\. You can use complex data types only with Redshift Spectrum external tables\.

You can nest `array` and `struct` types at any level\. For example, you can define a column named `toparray` as shown in the following example\.

```
toparray array<struct<nestedarray: array<struct<morenestedarray: array<string>>>>>
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-nested-data-create-table.md
ef3bc91f8b8d-1
```

You can also nest `struct` types as shown for column `x` in the following example\.

```
x struct<a: string, b: struct<c: integer, d: struct<e: string> > >
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-nested-data-create-table.md
6189d76a4db8-0
The following CREATE TABLE statement demonstrates the use of VARCHAR and CHAR data types:

```
create table address(
  address_id integer,
  address1 varchar(100),
  address2 varchar(50),
  district varchar(20),
  city_name char(20),
  state char(2),
  postal_code char(5)
);
```

The following examples use this table\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_character_types.md
7d2ef7fb6c51-0
Because ADDRESS1 is a VARCHAR column, the trailing blanks in the second inserted address are semantically insignificant\. In other words, these two inserted addresses *match*\.

```
insert into address(address1) values('9516 Magnolia Boulevard');
insert into address(address1) values('9516 Magnolia Boulevard  ');
```

```
select count(*) from address
where address1='9516 Magnolia Boulevard';

count
-------
2
(1 row)
```

If the ADDRESS1 column were a CHAR column and the same values were inserted, the COUNT\(\*\) query would recognize the character strings as the same and return `2`\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_character_types.md
bee28d220143-0
The LENGTH function recognizes trailing blanks in VARCHAR columns:

```
select length(address1) from address;

length
--------
23
25
(2 rows)
```

A value of `Augusta` in the CITY\_NAME column, which is a CHAR column, would always return a length of 7 characters, regardless of any trailing blanks in the input string\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_character_types.md
58760ef6830d-0
Character strings are not truncated to fit the declared width of the column:

```
insert into address(city_name) values('City of South San Francisco');
ERROR:  value too long for type character(20)
```

A workaround for this problem is to cast the value to the size of the column:

```
insert into address(city_name) values('City of South San Francisco'::char(20));
```

In this case, the first 20 characters of the string \(`City of South San Fr`\) would be loaded into the column\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Examples_with_character_types.md
7edc7bcf2170-0
Returns the largest or smallest value from a list of any number of expressions\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GREATEST_LEAST.md
458132d6e362-0
```
GREATEST (value [, ...])
LEAST (value [, ...])
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GREATEST_LEAST.md
948cd5f6674d-0
*expression\_list*
A comma\-separated list of expressions, such as column names\. The expressions must all be convertible to a common data type\. NULL values in the list are ignored\. If all of the expressions evaluate to NULL, the result is NULL\.
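As a quick sketch of the NULL handling described above, using literal values so that no table is required:

```
-- NULL values in the list are ignored; the result is NULL only
-- if every expression in the list evaluates to NULL.
select greatest(10, null, 25);  -- returns 25
select least(10, null, 25);     -- returns 10
select greatest(null, null);    -- returns NULL
```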
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GREATEST_LEAST.md
75855052c433-0
Returns the same data type as the expressions in the list\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GREATEST_LEAST.md
6f160b7c10ec-0
The following example returns the highest value alphabetically for `firstname` or `lastname`\.

```
select firstname, lastname, greatest(firstname,lastname) from users
where userid < 10
order by 3;

 firstname | lastname  | greatest
-----------+-----------+-----------
 Lars      | Ratliff   | Ratliff
 Reagan    | Hodge     | Reagan
 Colton    | Roy       | Roy
 Barry     | Roy       | Roy
 Tamekah   | Juarez    | Tamekah
 Rafael    | Taylor    | Taylor
 Victor    | Hernandez | Victor
 Vladimir  | Humphrey  | Vladimir
 Mufutau   | Watkins   | Watkins
(9 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_GREATEST_LEAST.md
7269216b6122-0
Retrieving information from an Amazon Redshift data warehouse involves executing complex queries on extremely large amounts of data, which can take a long time to process\. To ensure queries process as quickly as possible, there are a number of tools you can use to identify potential performance issues\.

**Topics**
+ [Query analysis workflow](c-query-analysis-process.md)
+ [Reviewing query alerts](c-reviewing-query-alerts.md)
+ [Analyzing the query plan](c-analyzing-the-query-plan.md)
+ [Analyzing the query summary](c-analyzing-the-query-summary.md)
+ [Improving query performance](query-performance-improvement-opportunities.md)
+ [Diagnostic queries for query tuning](diagnostic-queries-for-query-tuning.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-tuning.md
5d3ad81c23c5-0
Provides metrics related to commit performance, including the timing of the various stages of commit and the number of blocks committed\. Query STL\_COMMIT\_STATS to determine what portion of a transaction was spent on commit and how much queuing is occurring\. This view is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_COMMIT_STATS.md
4bcf8ee8b513-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_COMMIT_STATS.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_COMMIT_STATS.md
df80d3a2c566-0
```
select node, datediff(ms,startqueue,startwork) as queue_time,
datediff(ms, startwork, endtime) as commit_time, queuelen
from stl_commit_stats
where xid = 2574
order by node;

node |  queue_time  | commit_time | queuelen
-----+--------------+-------------+---------
  -1 |            0 |         617 |        0
   0 | 444950725641 |         616 |        0
   1 | 444950725636 |         616 |        0
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_COMMIT_STATS.md
da81673d2d42-0
Contains metrics information, such as the number of rows processed, CPU usage, input/output, and disk use, for queries that have completed running in user\-defined query queues \(service classes\)\. To view metrics for active queries that are currently running, see the [STV\_QUERY\_METRICS](r_STV_QUERY_METRICS.md) system view\.

Query metrics are sampled at one second intervals\. As a result, different runs of the same query might return slightly different times\. Also, query segments that run in less than one second might not be recorded\.

STL\_QUERY\_METRICS tracks and aggregates metrics at the query, segment, and step level\. For information about query segments and steps, see [Query planning and execution workflow](c-query-planning.md)\. Many metrics \(such as `max_rows`, `cpu_time`, and so on\) are summed across node slices\. For more information about node slices, see [Data warehouse system architecture](c_high_level_system_architecture.md)\.

To determine the level at which the row reports metrics, examine the `segment` and `step_type` columns\.
+ If both `segment` and `step_type` are `-1`, then the row reports metrics at the query level\.
+ If `segment` is not `-1` and `step_type` is `-1`, then the row reports metrics at the segment level\.
+ If both `segment` and `step_type` are not `-1`, then the row reports metrics at the step level\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERY_METRICS.md
da81673d2d42-1
The [SVL\_QUERY\_METRICS](r_SVL_QUERY_METRICS.md) view and the [SVL\_QUERY\_METRICS\_SUMMARY](r_SVL_QUERY_METRICS_SUMMARY.md) view aggregate the data in this view and present the information in a more accessible form\.

This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
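For example, the level test described above means you can restrict a query of this view to one row per query \(a sketch; the selected metric columns are ones named in this topic\):

```
-- Query-level rows only: both segment and step_type are -1.
select query, max_rows, cpu_time
from stl_query_metrics
where segment = -1 and step_type = -1;
```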
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERY_METRICS.md
0c772326593c-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_QUERY_METRICS.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERY_METRICS.md
86d9255f1b9b-0
To find queries with high CPU time \(more than 1,000 seconds\), run the following query\.

```
select query, cpu_time / 1000000 as cpu_seconds
from stl_query_metrics where segment = -1 and cpu_time > 1000000000
order by cpu_time;

query | cpu_seconds
------+------------
25775 | 9540
```

To find queries with a nested loop join that returned more than one million rows, run the following query\.

```
select query, rows
from stl_query_metrics
where step_type = 15 and rows > 1000000
order by rows;

query | rows
------+-----------
25775 | 2621562702
```

To find queries that have run for more than 60 seconds and have used less than 10 seconds of CPU time, run the following query\.

```
select query, run_time/1000000 as run_time_seconds
from stl_query_metrics
where segment = -1 and run_time > 60000000 and cpu_time < 10000000;

query | run_time_seconds
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERY_METRICS.md
86d9255f1b9b-1
------+-----------------
25775 | 114
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_QUERY_METRICS.md
c04ddcd34eaf-0
You can define an Amazon Redshift stored procedure using the PostgreSQL procedural language PL/pgSQL to perform a set of SQL queries and logical operations\. The procedure is stored in the database and is available for any user with sufficient privileges to run\.

Unlike a user\-defined function \(UDF\), a stored procedure can incorporate data definition language \(DDL\) and data manipulation language \(DML\) in addition to SELECT queries\. A stored procedure doesn't need to return a value\. You can use procedural language, including looping and conditional expressions, to control logical flow\.

For details about SQL commands to create and manage stored procedures, see the following command topics:
+ [CREATE PROCEDURE](r_CREATE_PROCEDURE.md)
+ [ALTER PROCEDURE](r_ALTER_PROCEDURE.md)
+ [DROP PROCEDURE](r_DROP_PROCEDURE.md)
+ [SHOW PROCEDURE](r_SHOW_PROCEDURE.md)
+ [CALL](r_CALL_procedure.md)
+ [GRANT](r_GRANT.md)
+ [REVOKE](r_REVOKE.md)
+ [ALTER DEFAULT PRIVILEGES](r_ALTER_DEFAULT_PRIVILEGES.md)

**Topics**
+ [Overview of stored procedures in Amazon Redshift](stored-procedure-create.md)
+ [PL/pgSQL language reference](c_pl_pgSQL_reference.md)
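As a minimal sketch of how DDL, DML, and a CALL fit together \(the procedure and table names here are hypothetical, not from this guide\):

```
-- Hypothetical example: a PL/pgSQL procedure that mixes DDL and DML.
CREATE OR REPLACE PROCEDURE sp_record_event(event_name varchar(64))
AS $$
BEGIN
  -- DDL and DML can both appear inside a stored procedure.
  CREATE TABLE IF NOT EXISTS event_log (event_name varchar(64), logged_at timestamp);
  INSERT INTO event_log VALUES (event_name, getdate());
END;
$$ LANGUAGE plpgsql;

CALL sp_record_event('nightly_load');
```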
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-overview.md
0a35f84e7729-0
The following example contains a *correlated subquery* in the WHERE clause; this kind of subquery contains one or more correlations between its columns and the columns produced by the outer query\. In this case, the correlation is `where s.listid=l.listid`\. For each row that the outer query produces, the subquery is executed to qualify or disqualify the row\.

```
select salesid, listid, sum(pricepaid) from sales s
where qtysold=
(select max(numtickets) from listing l
where s.listid=l.listid)
group by 1,2
order by 1,2
limit 5;

 salesid | listid |   sum
---------+--------+----------
      27 |     28 |   111.00
      81 |    103 |   181.00
     142 |    149 |   240.00
     146 |    152 |   231.00
     194 |    210 |   144.00
(5 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_correlated_subqueries.md
32c188a268a4-0
The query planner uses a query rewrite method called subquery decorrelation to optimize several patterns of correlated subqueries for execution in an MPP environment\. A few types of correlated subqueries follow patterns that Amazon Redshift can't decorrelate and doesn't support\. Queries that contain the following correlation references return errors:
+ Correlation references that skip a query block, also known as "skip\-level correlation references\." For example, in the following query, the block containing the correlation reference and the skipped block are connected by a NOT EXISTS predicate:

```
select event.eventname from event
where not exists
(select * from listing
where not exists
(select * from sales where event.eventid=sales.eventid));
```

The skipped block in this case is the subquery against the LISTING table\. The correlation reference correlates the EVENT and SALES tables\.
+ Correlation references from a subquery that is part of an ON clause in an outer join:

```
select * from category
left join event
on category.catid=event.catid and eventid =
(select max(eventid) from sales where sales.eventid=event.eventid);
```

The ON clause contains a correlation reference from SALES in the subquery to EVENT in the outer query\.
+ Null\-sensitive correlation references to an Amazon Redshift system table\. For example:

```
select attrelid
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_correlated_subqueries.md
32c188a268a4-1
from stv_locks sl, pg_attribute
where sl.table_id=pg_attribute.attrelid and 1 not in
(select 1 from pg_opclass where sl.lock_owner = opcowner);
```
+ Correlation references from within a subquery that contains a window function\.

```
select listid, qtysold
from sales s
where qtysold not in
(select sum(numtickets) over() from listing l where s.listid=l.listid);
```
+ References in a GROUP BY column to the results of a correlated subquery\. For example:

```
select listing.listid,
(select count (sales.listid) from sales where sales.listid=listing.listid) as list
from listing
group by list, listing.listid;
```
+ Correlation references from a subquery with an aggregate function and a GROUP BY clause, connected to the outer query by an IN predicate\. \(This restriction doesn't apply to MIN and MAX aggregate functions\.\) For example:

```
select * from listing where listid in
(select sum(qtysold)
from sales
where numtickets>4
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_correlated_subqueries.md
32c188a268a4-2
group by salesid);
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_correlated_subqueries.md
14faa82311af-0
In this tutorial, you learned how to optimize the design of your tables by applying table design best practices\.

You chose sort keys for the SSB tables based on these best practices:
+ If recent data is queried most frequently, specify the timestamp column as the leading column for the sort key\.
+ If you do frequent range filtering or equality filtering on one column, specify that column as the sort key\.
+ If you frequently join a \(dimension\) table, specify the join column as the sort key\.

You applied the following best practices to improve the distribution of the tables:
+ Distribute the fact table and one dimension table on their common columns\.
+ Change some dimension tables to use ALL distribution\.

You evaluated the effects of compression on a table and determined that using automatic compression usually produces the best results\.

For more information, see the following links:
+ [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md)
+ [Choose the best sort key](c_best-practices-sort-key.md)
+ [Choosing a data distribution style](t_Distributing_data.md)
+ [Choosing a column compression type](t_Compressing_data_on_disk.md)
+ [Analyzing table design](c_analyzing-table-design.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-summary.md
cb9334d4ca9d-0
For your next step, if you haven't done so already, we recommend taking [Tutorial: Loading data from Amazon S3](tutorial-loading-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-summary.md
e79c00c46632-0
Load your data in sort key order to avoid needing to vacuum\. If each batch of new data follows the existing rows in your table, your data is properly stored in sort order, and you don't need to run a vacuum\. You don't need to presort the rows in each load because COPY sorts each batch of incoming data as it loads\. For example, suppose that you load data every day based on the current day's activity\. If your sort key is a timestamp column, your data is stored in sort order\. This order occurs because the current day's data is always appended at the end of the previous day's data\. For more information, see [Loading your data in sort key order](vacuum-load-in-sort-key-order.md)\.
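As a sketch of such a daily load \(the table name, bucket path, and IAM role below are hypothetical\), each day's COPY appends rows that sort after the existing data, so no vacuum is needed:

```
-- Hypothetical daily load: the table's sort key is its timestamp column,
-- and each day's S3 prefix holds only that day's activity, so COPY appends
-- rows after the already-sorted region of the table.
COPY sales_activity
FROM 's3://my-bucket/activity/2023-06-15/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';
```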
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-sort-key-order.md
a18fcbf9a70a-0
Drops a procedure\. To drop a procedure, both the procedure name and the input argument data types \(the signature\) are required\. Optionally, you can include the full argument data types, including OUT arguments\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_PROCEDURE.md
7ee3779c5c6a-0
``` DROP PROCEDURE sp_name ( [ [ argname ] [ argmode ] argtype [, ...] ] ) ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_PROCEDURE.md
ed7d184bbca8-0
*sp\_name*
The name of the procedure to be removed\.

*argname*
The name of an input argument\. DROP PROCEDURE ignores argument names, because only the argument data types are needed to determine the procedure's identity\.

*argmode*
The mode of an argument, which can be IN, OUT, or INOUT\. OUT arguments are optional because they aren't used to identify a stored procedure\.

*argtype*
The data type of the input argument\. For a list of the supported data types, see [Data types](c_Supported_data_types.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_PROCEDURE.md
2a522e252f4e-0
The following example drops a stored procedure named `quarterly_revenue`\.

```
DROP PROCEDURE quarterly_revenue(volume INOUT bigint, at_price IN numeric, result OUT int);
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_PROCEDURE.md
a65b27d72c79-0
By using window functions, you can enable your users to create analytic business queries more efficiently\. Window functions operate on a partition or "window" of a result set, and return a value for every row in that window\. In contrast, nonwindowed functions perform their calculations with respect to every row in the result set\. Unlike group functions, which aggregate result rows, window functions retain all rows in the table expression\. The values returned are calculated by using values from the sets of rows in that window\. For each row in the table, the window defines a set of rows that is used to compute additional attributes\.

A window is defined using a window specification \(the OVER clause\), and is based on three main concepts:
+ *Window partitioning*, which forms groups of rows \(PARTITION clause\)
+ *Window ordering*, which defines an order or sequence of rows within each partition \(ORDER BY clause\)
+ *Window frames*, which are defined relative to each row to further restrict the set of rows \(ROWS specification\)

Window functions are the last set of operations performed in a query except for the final ORDER BY clause\. All joins and all WHERE, GROUP BY, and HAVING clauses are completed before the window functions are processed\. Therefore, window functions can appear only in the select list or ORDER BY clause\. You can use multiple window functions within a single query with different frame clauses\. You can also use window functions in other scalar expressions, such as CASE\.

Amazon Redshift supports two types of window functions: aggregate and ranking\.

These are the supported aggregate functions:
+ AVG
+ COUNT
+ CUME\_DIST
+ FIRST\_VALUE
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Window_functions.md
a65b27d72c79-1
+ LAG
+ LAST\_VALUE
+ LEAD
+ MAX
+ MEDIAN
+ MIN
+ NTH\_VALUE
+ PERCENTILE\_CONT
+ PERCENTILE\_DISC
+ RATIO\_TO\_REPORT
+ STDDEV\_POP
+ STDDEV\_SAMP \(synonym for STDDEV\)
+ SUM
+ VAR\_POP
+ VAR\_SAMP \(synonym for VARIANCE\)

These are the supported ranking functions:
+ DENSE\_RANK
+ NTILE
+ PERCENT\_RANK
+ RANK
+ ROW\_NUMBER

**Topics**
+ [Window function syntax summary](r_Window_function_synopsis.md)
+ [Unique ordering of data for window functions](r_Examples_order_by_WF.md)
+ [Overview example for window functions](#r_Window_function_example)
+ [AVG window function](r_WF_AVG.md)
+ [COUNT window function](r_WF_COUNT.md)
+ [CUME\_DIST window function](r_WF_CUME_DIST.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Window_functions.md
a65b27d72c79-2
+ [DENSE\_RANK window function](r_WF_DENSE_RANK.md)
+ [FIRST\_VALUE and LAST\_VALUE window functions](r_WF_first_value.md)
+ [LAG window function](r_WF_LAG.md)
+ [LEAD window function](r_WF_LEAD.md)
+ [LISTAGG window function](r_WF_LISTAGG.md)
+ [MAX window function](r_WF_MAX.md)
+ [MEDIAN window function](r_WF_MEDIAN.md)
+ [MIN window function](r_WF_MIN.md)
+ [NTH\_VALUE window function](r_WF_NTH.md)
+ [NTILE window function](r_WF_NTILE.md)
+ [PERCENT\_RANK window function](r_WF_PERCENT_RANK.md)
+ [PERCENTILE\_CONT window function](r_WF_PERCENTILE_CONT.md)
+ [PERCENTILE\_DISC window function](r_WF_PERCENTILE_DISC.md)
+ [RANK window function](r_WF_RANK.md)
+ [RATIO\_TO\_REPORT window function](r_WF_RATIO_TO_REPORT.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Window_functions.md
a65b27d72c79-3
+ [ROW\_NUMBER window function](r_WF_ROW_NUMBER.md)
+ [STDDEV\_SAMP and STDDEV\_POP window functions](r_WF_STDDEV.md)
+ [SUM window function](r_WF_SUM.md)
+ [VAR\_SAMP and VAR\_POP window functions](r_WF_VARIANCE.md)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Window_functions.md
c10781e2ba85-0
Following, you can find an overview example demonstrating how to work with the window functions\. You can also find specific code examples with each function description\.

Some of the window function examples use a table named WINSALES, which contains 11 rows, as shown following\. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c_Window_functions.html)

The following script creates and populates the sample WINSALES table\.

```
create table winsales(
  salesid int,
  dateid date,
  sellerid int,
  buyerid char(10),
  qty int,
  qty_shipped int);

insert into winsales values
(30001, '8/2/2003', 3, 'b', 10, 10),
(10001, '12/24/2003', 1, 'c', 10, 10),
(10005, '12/24/2003', 1, 'a', 30, null),
(40001, '1/9/2004', 4, 'a', 40, null),
(10006, '1/18/2004', 1, 'c', 10, null),
(20001, '2/12/2004', 2, 'b', 20, 20),
(40005, '2/12/2004', 4, 'a', 10, 10),
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Window_functions.md
c10781e2ba85-1
(20002, '2/16/2004', 2, 'c', 20, 20),
(30003, '4/18/2004', 3, 'b', 15, null),
(30004, '4/18/2004', 3, 'b', 20, null),
(30007, '9/7/2004', 3, 'c', 30, null);
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Window_functions.md
ae607f3b7206-0
You can only COPY to `GEOMETRY` columns from data in text or CSV format\. The data must be in the hexadecimal form of the extended well\-known binary \(EWKB\) format and fit within the maximum size of a single input row to the COPY command\. For more information, see [COPY](r_COPY.md)\.
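A sketch of such a load \(the table name, S3 path, and IAM role below are hypothetical\): each input row carries the geometry as an EWKB hexadecimal string in a delimited text file\.

```
-- Hypothetical sketch: load EWKB hex values from a pipe-delimited text
-- file in Amazon S3 into a table with a GEOMETRY column.
CREATE TABLE points_of_interest (poi_id int, loc GEOMETRY);

COPY points_of_interest
FROM 's3://my-bucket/spatial/poi.txt'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
DELIMITER '|';
```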
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-spatial-data.md
f984232b07b3-0
Return a list of different category groups from the CATEGORY table:

```
select distinct catgroup from category
order by 1;

catgroup
----------
Concerts
Shows
Sports
(3 rows)
```

Return the distinct set of week numbers for December 2008:

```
select distinct week, month, year
from date
where month='DEC' and year=2008
order by 1, 2, 3;

week | month | year
-----+-------+------
  49 | DEC   | 2008
  50 | DEC   | 2008
  51 | DEC   | 2008
  52 | DEC   | 2008
  53 | DEC   | 2008
(5 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DISTINCT_examples.md
1f666f67efb1-0
PG\_PROC\_INFO is an Amazon Redshift system view built on the PostgreSQL catalog table PG\_PROC and the internal catalog table PG\_PROC\_EXTENDED\. PG\_PROC\_INFO includes details about stored procedures and functions, including information related to output arguments, if any\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_PROC_INFO.md
068e8f584f25-0
PG\_PROC\_INFO shows the following columns in addition to the columns in PG\_PROC\. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_PG_PROC_INFO.html) The field proargnames in PG\_PROC\_INFO contains the names of all types of arguments \(including OUT and INOUT\), if any\.
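As a quick sketch \(the procedure name here is hypothetical\), you can look up a stored procedure's argument names through this view by procedure name:

```
-- Hypothetical lookup: list the argument names recorded for a
-- stored procedure named quarterly_revenue.
select proname, proargnames
from pg_proc_info
where proname = 'quarterly_revenue';
```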
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_PROC_INFO.md
29646972ae48-0
Contains metrics information, such as the number of rows processed, CPU usage, input/output, and disk use, for active queries running in user\-defined query queues \(service classes\)\. To view metrics for queries that have completed, see the [STL\_QUERY\_METRICS](r_STL_QUERY_METRICS.md) system table\.

Query metrics are sampled at one second intervals\. As a result, different runs of the same query might return slightly different times\. Also, query segments that run in less than 1 second might not be recorded\.

STV\_QUERY\_METRICS tracks and aggregates metrics at the query, segment, and step level\. For information about query segments and steps, see [Query planning and execution workflow](c-query-planning.md)\. Many metrics \(such as `max_rows`, `cpu_time`, and so on\) are summed across node slices\. For more information about node slices, see [Data warehouse system architecture](c_high_level_system_architecture.md)\.

To determine the level at which the row reports metrics, examine the `segment` and `step_type` columns:
+ If both `segment` and `step_type` are `-1`, then the row reports metrics at the query level\.
+ If `segment` is not `-1` and `step_type` is `-1`, then the row reports metrics at the segment level\.
+ If both `segment` and `step_type` are not `-1`, then the row reports metrics at the step level\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_QUERY_METRICS.md
29646972ae48-1
This table is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_QUERY_METRICS.md
37a0ce420181-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_QUERY_METRICS.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_QUERY_METRICS.md
8ceddd19ef74-0
The following table lists step types relevant to database users\. The table doesn't list step types that are for internal use only\. If step type is \-1, the metric is not reported at the step level\. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_QUERY_METRICS.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_QUERY_METRICS.md
a8ce37c1bd48-0
To find active queries with high CPU time \(more than 1,000 seconds\), run the following query\.

```
select query, cpu_time / 1000000 as cpu_seconds
from stv_query_metrics
where segment = -1 and cpu_time > 1000000000
order by cpu_time;

query | cpu_seconds
------+------------
25775 | 9540
```

To find active queries with a nested loop join that returned more than one million rows, run the following query\.

```
select query, rows
from stv_query_metrics
where step_type = 15 and rows > 1000000
order by rows;

query | rows
------+-----------
25775 | 1580225854
```

To find active queries that have run for more than 60 seconds and have used less than 10 seconds of CPU time, run the following query\.

```
select query, run_time/1000000 as run_time_seconds
from stv_query_metrics
where segment = -1 and run_time > 60000000 and cpu_time < 10000000;

query | run_time_seconds
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_QUERY_METRICS.md
a8ce37c1bd48-1
------+-----------------
25775 | 114
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_QUERY_METRICS.md
d11c04e20a64-0
ST\_NPoints returns the number of points in an input geometry\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NPoints-function.md
1405048a5c60-0
```
ST_NPoints(geom)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NPoints-function.md
2fbb220cc2af-0
*geom* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NPoints-function.md
275ce6c42a15-0
`INTEGER` If *geom* is null, then null is returned\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NPoints-function.md
afdedb42861a-0
The following SQL returns the number of points in a linestring\.

```
SELECT ST_NPoints(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)'));
```

```
st_npoints
-------------
4
```
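As the linestring result shows, ST\_NPoints counts every vertex that appears in the input, including a repeated closing point\. The following additional sketch \(not from the original examples\) applies the same count to a polygon ring:

```
-- The square's ring lists five points, with the closing point
-- repeating the start point, so the expected count is 5.
SELECT ST_NPoints(ST_GeomFromText('POLYGON((0 0,10 0,10 10,0 10,0 0))'));
```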
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NPoints-function.md
d22b9f41a442-0
**1**, 1 to 50 \(cannot exceed number of available slots \(concurrency level\) for the service class\)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_wlm_query_slot_count.md
fe71d2254454-0
Sets the number of query slots a query uses\. Workload management \(WLM\) reserves slots in a service class according to the concurrency level set for the queue \(for example, if concurrency level is set to 5, then the service class has 5 slots\)\. WLM allocates the available memory for a service class equally to each slot\. For more information, see [Implementing workload management](cm-c-implementing-workload-management.md)\. **Note** If the value of wlm\_query\_slot\_count is larger than the number of available slots \(concurrency level\) for the service class, the query fails\. If you encounter an error, decrease wlm\_query\_slot\_count to an allowable value\. For operations where performance is heavily affected by the amount of memory allocated, such as Vacuum, increasing the value of wlm\_query\_slot\_count can improve performance\. In particular, for slow Vacuum commands, inspect the corresponding record in the SVV\_VACUUM\_SUMMARY view\. If you see high values \(close to or higher than 100\) for sort\_partitions and merge\_increments in the SVV\_VACUUM\_SUMMARY view, consider increasing the value for wlm\_query\_slot\_count the next time you run Vacuum against that table\.
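A sketch of the check described above\. Only the `sort_partitions` and `merge_increments` columns are named in this section, so the query selects all columns rather than assuming any others:

```
-- Look for sort_partitions or merge_increments values close to
-- or higher than 100 in the rows for the slow vacuum.
select * from svv_vacuum_summary;
```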
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_wlm_query_slot_count.md
fe71d2254454-1
Increasing the value of wlm\_query\_slot\_count limits the number of concurrent queries that can be run\. For example, suppose the service class has a concurrency level of 5 and wlm\_query\_slot\_count is set to 3\. While a query is running within the session with wlm\_query\_slot\_count set to 3, a maximum of 2 more concurrent queries can be executed within the same service class\. Subsequent queries wait in the queue until currently executing queries complete and slots are freed\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_wlm_query_slot_count.md
c2c2a194bf98-0
Use the SET command to set the value of wlm\_query\_slot\_count for the duration of the current session\.

```
set wlm_query_slot_count to 3;
```
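A common pattern, sketched here under the assumption of a table named `sales`, is to claim extra slots only for a memory\-intensive command such as VACUUM and then restore the default slot count of 1:

```
set wlm_query_slot_count to 3;  -- claim 3 of the queue's slots
vacuum sales;                   -- hypothetical table
set wlm_query_slot_count to 1;  -- restore the default for this session
```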
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_wlm_query_slot_count.md
07e05378e44b-0
The STL\_USAGE\_CONTROL view contains information that is logged when a usage limit is reached\. For more information about usage limits, see [Managing Usage Limits](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-usage-limits.html) in the *Amazon Redshift Cluster Management Guide*\. STL\_USAGE\_CONTROL is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_USAGE_CONTROL.md
5fa505ba7785-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_USAGE_CONTROL.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_USAGE_CONTROL.md
c2d6fc0b782d-0
The following SQL example returns some of the information logged when a usage limit is reached\.

```
select query, pid, eventtime, feature_type
from stl_usage_control
order by eventtime desc
limit 5;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_USAGE_CONTROL.md
f575ca4a8792-0
Changes a database user account\. If you are the current user, you can change your own password\. For all other options, you must be a database superuser to execute this command\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_USER.md
09533577b491-0
```
ALTER USER username [ WITH ] option [, ... ]

where option is

CREATEDB | NOCREATEDB
| CREATEUSER | NOCREATEUSER
| SYSLOG ACCESS { RESTRICTED | UNRESTRICTED }
| PASSWORD { 'password' | 'md5hash' | DISABLE }
  [ VALID UNTIL 'expiration_date' ]
| RENAME TO new_name
| CONNECTION LIMIT { limit | UNLIMITED }
| SET parameter { TO | = } { value | DEFAULT }
| RESET parameter
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_USER.md
583c2c79ec99-0
*username*
Name of the user account\.

WITH
Optional keyword\.

CREATEDB \| NOCREATEDB
The CREATEDB option allows the user to create new databases\. NOCREATEDB is the default\.

CREATEUSER \| NOCREATEUSER
The CREATEUSER option creates a superuser with all database privileges, including CREATE USER\. The default is NOCREATEUSER\. For more information, see [Superusers](r_superusers.md)\.

SYSLOG ACCESS \{ RESTRICTED \| UNRESTRICTED \} <a name="alter-user-syslog-access"></a>
A clause that specifies the level of access that the user has to the Amazon Redshift system tables and views\.

If RESTRICTED is specified, the user can see only the rows generated by that user in user\-visible system tables and views\. The default is RESTRICTED\.

If UNRESTRICTED is specified, the user can see all rows in user\-visible system tables and views, including rows generated by another user\. UNRESTRICTED doesn't give a regular user access to superuser\-visible tables\. Only superusers can see superuser\-visible tables\.

Giving a user unrestricted access to system tables gives the user visibility to data generated by other users\. For example, STL\_QUERY and STL\_QUERYTEXT contain the full text of INSERT, UPDATE, and DELETE statements, which might contain sensitive user\-generated data\. All rows in STV\_RECENTS and SVV\_TRANSACTIONS are visible to all users\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_USER.md
583c2c79ec99-1
For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.

PASSWORD \{ '*password*' \| '*md5hash*' \| DISABLE \}
Sets the user's password\. By default, users can change their own passwords, unless the password is disabled\. To disable a user's password, specify DISABLE\. When a user's password is disabled, the password is deleted from the system and the user can log on only using temporary AWS Identity and Access Management \(IAM\) user credentials\. For more information, see [Using IAM Authentication to Generate Database User Credentials](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html)\. Only a superuser can enable or disable passwords\. You can't disable a superuser's password\. To enable a password, run ALTER USER and specify a password\.

You can specify the password in clear text or as an MD5 hash string\. For clear text, the password must meet the following constraints:
+ It must be 8 to 64 characters in length\.
+ It must contain at least one uppercase letter, one lowercase letter, and one number\.
+ It can use any ASCII characters with ASCII codes 33–126, except ' \(single quote\), " \(double quote\), \\, /, or @\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_USER.md
583c2c79ec99-2
As a more secure alternative to passing the CREATE USER password parameter as clear text, you can specify an MD5 hash of a string that includes the password and user name\.

When you specify an MD5 hash string, the ALTER USER command checks for a valid MD5 hash string, but it doesn't validate the password portion of the string\. It is possible in this case to create a password, such as an empty string, that you can't use to log on to the database\.

To specify an MD5 password, follow these steps:

1. Concatenate the password and user name\. For example, for password `ez` and user `user1`, the concatenated string is `ezuser1`\.

1. Convert the concatenated string into a 32\-character MD5 hash string\. You can use any MD5 utility to create the hash string\. The following example uses the Amazon Redshift [MD5 function](r_MD5.md) and the concatenation operator \( \|\| \) to return a 32\-character MD5\-hash string\.

   ```
   select md5('ez' || 'user1');
   md5
   --------------------------------
   153c434b4b77c89e6b94f12c5393af5b
   ```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_USER.md
583c2c79ec99-3
1. Concatenate '`md5`' in front of the MD5 hash string and provide the concatenated string as the *md5hash* argument\.

   ```
   create user user1 password 'md5153c434b4b77c89e6b94f12c5393af5b';
   ```

1. Log on to the database using the user name and password\. For this example, log on as `user1` with password `ez`\.

VALID UNTIL '*expiration\_date*'
Specifies that the password has an expiration date\. Use the value `'infinity'` to avoid having an expiration date\. The valid data type for this parameter is timestamp\.

RENAME TO
Renames the user account\.

*new\_name*
New name of the user\. For more information about valid names, see [Names and identifiers](r_names.md)\.

When you rename a user, you must also change the user’s password\. The user name is used as part of the password encryption, so when a user is renamed, the password is cleared\. The user will not be able to log on until the password is reset\. For example:

```
alter user newuser password 'EXAMPLENewPassword11';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_USER.md
583c2c79ec99-4
CONNECTION LIMIT \{ *limit* \| UNLIMITED \}
The maximum number of database connections the user is permitted to have open concurrently\. The limit isn't enforced for superusers\. Use the UNLIMITED keyword to permit the maximum number of concurrent connections\. A limit on the number of connections for each database might also apply\. For more information, see [CREATE DATABASE](r_CREATE_DATABASE.md)\. The default is UNLIMITED\. To view current connections, query the [STV\_SESSIONS](r_STV_SESSIONS.md) system view\.

If both user and database connection limits apply, an unused connection slot must be available that is within both limits when a user attempts to connect\.

SET
Sets a configuration parameter to a new default value for all sessions run by the specified user\.

RESET
Resets a configuration parameter to the original default value for the specified user\.

*parameter*
Name of the parameter to set or reset\.

*value*
New value of the parameter\.

DEFAULT
Sets the configuration parameter to the default value for all sessions run by the specified user\.
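The following statements sketch these options\. The user name `dbuser` is a placeholder, and `statement_timeout` is simply one example of a settable session parameter:

```
alter user dbuser connection limit 10;              -- cap concurrent connections
alter user dbuser set statement_timeout to 300000;  -- new default for the user's sessions
alter user dbuser reset statement_timeout;          -- restore the original default
```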
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_USER.md
e113ba87a427-0
When using AWS Identity and Access Management \(IAM\) authentication to create database user credentials, you might want to create a superuser that is able to log on only using temporary credentials\. You can't disable a superuser's password, but you can create an unknown password using a randomly generated MD5 hash string\.

```
alter user iam_superuser password 'mdA51234567890123456780123456789012';
```

When you set the [search\_path](r_search_path.md) parameter with the ALTER USER command, the modification takes effect on the specified user's next login\. If you want to change the search\_path for the current user and session, use a SET command\.
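For example \(a sketch; `dbuser` and `myschema` are placeholders\), the first statement takes effect at the user's next login, while the second changes the current session immediately:

```
alter user dbuser set search_path to myschema, public;  -- applies on next login
set search_path to myschema, public;                    -- applies to the current session
```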
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_USER.md
bb4bd7c228fd-0
The following example gives the user ADMIN the privilege to create databases:

```
alter user admin createdb;
```

The following example sets the password of the user ADMIN to `adminPass9` and sets an expiration date and time for the password:

```
alter user admin password 'adminPass9'
valid until '2017-12-31 23:59';
```

The following example renames the user ADMIN to SYSADMIN:

```
alter user admin rename to sysadmin;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ALTER_USER.md
fae32f151c10-0
We strongly recommend that you individually compress your load files using gzip, lzop, bzip2, or Zstandard when you have large datasets\. Specify the GZIP, LZOP, BZIP2, or ZSTD option with the COPY command\. This example loads the TIME table from a pipe\-delimited lzop file\.

```
copy time
from 's3://mybucket/data/timerows.lzo'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
lzop
delimiter '|';
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-compress-data-files.md
fce1e17b005a-0
The following constraints apply when you use Amazon Redshift stored procedures\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-constraints.md
23692d55b6e1-0
The following are differences between stored procedure support in Amazon Redshift and PostgreSQL: + Amazon Redshift doesn't support subtransactions, and hence has limited support for exception handling blocks\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-constraints.md
3880c953416d-0
The following are limits on stored procedures in Amazon Redshift:
+ The maximum number of stored procedures for a database is 3000\.
+ The maximum size of the source code for a procedure is 2 MB\.
+ The maximum number of explicit and implicit cursors that you can open concurrently in a user session is one\. FOR loops that iterate over the result set of a SQL statement open implicit cursors\. Nested cursors aren't supported\.
+ Explicit and implicit cursors have the same restrictions on the result set size as standard Amazon Redshift cursors\. For more information, see [Cursor constraints](declare.md#declare-constraints)\.
+ The maximum number of levels for nested calls is 16\.
+ The maximum number of procedure parameters is 32 for input arguments and 32 for output arguments\.
+ The maximum number of variables in a stored procedure is 1,024\.
+ Any SQL command that requires its own transaction context isn't supported inside a stored procedure\. Examples are VACUUM, ALTER TABLE APPEND, and CREATE EXTERNAL TABLE\.
+ The `registerOutParameter` method call through the Java Database Connectivity \(JDBC\) driver isn't supported for the `refcursor` data type\. For an example of using the `refcursor` data type, see [Returning a result set](stored-procedure-result-set.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/stored-procedure-constraints.md
68f35f29abe0-0
To load data files that are compressed using gzip, lzop, or bzip2, include the corresponding option: GZIP, LZOP, or BZIP2\. COPY does not support files compressed using the lzop *\-\-filter* option\. For example, the following command loads from files that were compressed using lzop\.

```
copy customer
from 's3://mybucket/customer.lzo'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
lzop;
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_loading-gzip-compressed-data-files-from-S3.md
3ad8edf6d578-0
You can load data from text files in an Amazon S3 bucket, in an Amazon EMR cluster, or on a remote host that your cluster can access using an SSH connection\. You can also load data directly from a DynamoDB table\.

The maximum size of a single input row from any source is 4 MB\.

To export data from a table to a set of files in Amazon S3, use the [UNLOAD](r_UNLOAD.md) command\.

**Topics**
+ [COPY from Amazon S3](copy-parameters-data-source-s3.md)
+ [COPY from Amazon EMR](copy-parameters-data-source-emr.md)
+ [COPY from remote host \(SSH\)](copy-parameters-data-source-ssh.md)
+ [COPY from Amazon DynamoDB](copy-parameters-data-source-dynamodb.md)
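As a sketch of the most common case, a minimal COPY from Amazon S3 looks like the following\. The table name, bucket path, and IAM role are placeholders; see COPY from Amazon S3 for the full parameter list:

```
copy mytable
from 's3://mybucket/data/part_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';
```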
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-parameters-data-source.md
ff1c8d836921-0
The EXP function implements the exponential function for a numeric expression, or the base of the natural logarithm, `e`, raised to the power of expression\. The EXP function is the inverse of [LN function](r_LN.md)\.
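Because EXP and LN are inverses, composing them should return the original value \(a quick sketch, subject to floating\-point rounding\):

```
select exp(ln(2.5)) as roundtrip;  -- expected to be 2.5, up to rounding
```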
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXP.md
863d0e666aea-0
```
EXP (expression)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXP.md
ba4fae094ea1-0
*expression* The expression must be an INTEGER, DECIMAL, or DOUBLE PRECISION data type\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXP.md
b7a04ce9c002-0
EXP returns a DOUBLE PRECISION number\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXP.md
203667fe065e-0
Use the EXP function to forecast ticket sales based on a continuous growth pattern\. In this example, the subquery returns the number of tickets sold in 2008\. That result is multiplied by the result of the EXP function, which specifies a continuous growth rate of 7% over 10 years\.

```
select (select sum(qtysold) from sales, date
where sales.dateid=date.dateid
and year=2008) * exp((7::float/100)*10) qty2018;

qty2018
------------------
695447.483772222
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_EXP.md
249ecf1b490c-0
Records the details for an unload operation\. STL\_UNLOAD\_LOG records one row for each file created by an UNLOAD statement\. For example, if an UNLOAD creates 12 files, STL\_UNLOAD\_LOG will contain 12 corresponding rows\. This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_UNLOAD_LOG.md
eed0ffbeafd6-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_UNLOAD_LOG.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_UNLOAD_LOG.md
8c6361531407-0
To get a list of the files that were written to Amazon S3 by an UNLOAD command, you can call an Amazon S3 list operation after the UNLOAD completes\. However, depending on how quickly you issue the call, the list might be incomplete because an Amazon S3 list operation is eventually consistent\. To get a complete, authoritative list immediately, query STL\_UNLOAD\_LOG\.

The following query returns the pathname for files that were created by an UNLOAD for the last query executed:

```
select query, substring(path,0,40) as path
from stl_unload_log
where query = pg_last_query_id()
order by path;
```

This command returns the following sample output:

```
query |                 path
-------+--------------------------------------
  2320 | s3://my-bucket/venue0000_part_00
  2320 | s3://my-bucket/venue0001_part_00
  2320 | s3://my-bucket/venue0002_part_00
  2320 | s3://my-bucket/venue0003_part_00
(4 rows)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_UNLOAD_LOG.md
0ac6bb3d46f7-0
In this section, you can find the conventions that are used to write the syntax for the PL/pgSQL stored procedure language\. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c_PL_reference_conventions.html)
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PL_reference_conventions.md
d6a7432b1098-0
Records the time when a user cancels or terminates a process\. Each of SELECT PG\_TERMINATE\_BACKEND\(*pid*\), SELECT PG\_CANCEL\_BACKEND\(*pid*\), and CANCEL *pid* creates a log entry in SVL\_TERMINATE\. SVL\_TERMINATE is visible only to superusers\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_TERMINATE.md
7d3a24b98731-0
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVL_TERMINATE.html)

The following command shows the latest cancelled query\.

```
select * from svl_terminate order by eventtime desc limit 1;

 pid  |         eventtime          | userid |  type
------+----------------------------+--------+--------
 8324 | 2020-03-24 09:42:07.298937 |      1 | CANCEL
(1 row)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVL_TERMINATE.md
887692099ec5-0
If you specify `'auto'` as the argument for the DATEFORMAT or TIMEFORMAT parameter, Amazon Redshift will automatically recognize and convert the date format or time format in your source data\. The following shows an example\.

```
copy favoritemovies
from 'dynamodb://ProductCatalog'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
dateformat 'auto';
```

When used with the `'auto'` argument for DATEFORMAT and TIMEFORMAT, COPY recognizes and converts the date and time formats listed in the table in [ DATEFORMAT and TIMEFORMAT strings](r_DATEFORMAT_and_TIMEFORMAT_strings.md)\. In addition, the `'auto'` argument recognizes the following formats that aren't supported when using a DATEFORMAT and TIMEFORMAT string\. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/automatic-recognition.html)

Automatic recognition doesn't support epochsecs and epochmillisecs\.

To test whether a date or timestamp value will be automatically converted, use a CAST function to attempt to convert the string to a date or timestamp value\. For example, the following commands test the timestamp value `'J2345678 04:05:06.789'`:

```
create table formattest (test char(22));
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-recognition.md
887692099ec5-1
insert into formattest values('J2345678 04:05:06.789');

select test, cast(test as timestamp) as timestamp, cast(test as date) as date
from formattest;

test                   | timestamp           | date
-----------------------+---------------------+------------
J2345678 04:05:06.789  | 1710-02-23 04:05:06 | 1710-02-23
```

If the source data for a DATE column includes time information, the time component is truncated\. If the source data for a TIMESTAMP column omits time information, 00:00:00 is used for the time component\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-recognition.md
1a5598360ff5-0
ST\_AddPoint returns a linestring geometry that is the same as the input geometry with a point added\. If an index is provided, then the point is added at the index position\. If the index is \-1 or not provided, then the point is appended to the linestring\. The index is zero\-based\. The spatial reference system identifier \(SRID\) of the result is the same as that of the input geometry\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AddPoint-function.md
cc58bbc2e506-0
```
ST_AddPoint(geom1, geom2)
```

```
ST_AddPoint(geom1, geom2, index)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AddPoint-function.md
e481377b567b-0
*geom1* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. The subtype must be `LINESTRING`\. *geom2* A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. The subtype must be `POINT`\. *index* A value of data type `INTEGER` that represents the position of a zero\-based index\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AddPoint-function.md
566f577d1e38-0
`GEOMETRY` If *geom1*, *geom2*, or *index* is null, then null is returned\. If *geom1* is not a `LINESTRING`, then an error is returned\. If *geom2* is not a `POINT`, then an error is returned\. If *index* is out of range, then an error is returned\. Valid values for the index position are \-1 or a value between 0 and `ST_NumPoints(geom1)`\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AddPoint-function.md
181e0b5a0983-0
The following SQL adds a point to a linestring to make it a closed linestring\.

```
WITH tmp(g) AS (SELECT ST_GeomFromText('LINESTRING(0 0,10 0,10 10,5 5,0 5)',4326))
SELECT ST_AsEWKT(ST_AddPoint(g, ST_StartPoint(g))) FROM tmp;
```

```
st_asewkt
------------------------------------------------
SRID=4326;LINESTRING(0 0,10 0,10 10,5 5,0 5,0 0)
```

The following SQL adds a point to a specific position in a linestring\.

```
WITH tmp(g) AS (SELECT ST_GeomFromText('LINESTRING(0 0,10 0,10 10,5 5,0 5)',4326))
SELECT ST_AsEWKT(ST_AddPoint(g, ST_SetSRID(ST_Point(5, 10), 4326), 3)) FROM tmp;
```

```
st_asewkt
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AddPoint-function.md
181e0b5a0983-1
------------------------------------------------
SRID=4326;LINESTRING(0 0,10 0,10 10,5 10,5 5,0 5)
```
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AddPoint-function.md
713055566615-0
**Topics**
+ [Syntax](#r_UPDATE-synopsis)
+ [Parameters](#r_UPDATE-parameters)
+ [Usage notes](#r_UPDATE_usage_notes)
+ [Examples of UPDATE statements](c_Examples_of_UPDATE_statements.md)

Updates values in one or more table columns when a condition is satisfied\.

**Note**
The maximum size for a single SQL statement is 16 MB\.
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UPDATE.md
f99a6f9e2bba-0
```
UPDATE table_name SET column = { expression | DEFAULT } [,...]
[ FROM fromlist ]
[ WHERE condition ]
```
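A minimal sketch of this syntax, assuming the CATEGORY table from the TICKIT sample database:

```
update category
set catdesc = 'Major League Soccer'
where catname = 'MLS';
```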
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UPDATE.md