| id | text | source |
|---|---|---|
91ada9899d1e-0
|
```
ST_AsEWKT(geom)
```
```
ST_AsEWKT(geom, precision)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsEWKT-function.md
|
5b63f6af4bc6-0
|
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
*precision*
A value of data type `INTEGER`\. The coordinates of *geom* are displayed using the specified precision 1–20\. If *precision* is not specified, the default is 15\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsEWKT-function.md
|
55534db6c7bf-0
|
`VARCHAR`
If *geom* is null, then null is returned\.
If *precision* is null, then null is returned\.
If the result is larger than a 64\-KB `VARCHAR`, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsEWKT-function.md
|
f8fa0e60931c-0
|
The following SQL returns the EWKT representation of a linestring\.
```
SELECT ST_AsEWKT(ST_GeomFromText('LINESTRING(3.141592653589793 -6.283185307179586,2.718281828459045 -1.414213562373095)', 4326));
```
```
st_asewkt
--------------------------------
SRID=4326;LINESTRING(3.14159265358979 -6.28318530717959,2.71828182845905 -1.41421356237309)
```
The following SQL returns the EWKT representation of a linestring\. The coordinates of the geometries are displayed with six digits of precision\.
```
SELECT ST_AsEWKT(ST_GeomFromText('LINESTRING(3.141592653589793 -6.283185307179586,2.718281828459045 -1.414213562373095)', 4326), 6);
```
```
st_asewkt
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsEWKT-function.md
|
f8fa0e60931c-1
|
```
st_asewkt
--------------------------------
SRID=4326;LINESTRING(3.14159 -6.28319,2.71828 -1.41421)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_AsEWKT-function.md
|
6450b5a77860-0
|
You can use the COPY command to load data in parallel from an Amazon EMR cluster configured to write text files to the cluster's Hadoop Distributed File System \(HDFS\) in the form of fixed\-width files, character\-delimited files, CSV files, or JSON\-formatted files\.
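As a minimal sketch \(not an example from this guide\), such a COPY might look like the following, where the table name, the cluster ID `j-SAMPLE2B500FC`, the HDFS path, and the IAM role ARN are all placeholders\.
```
copy sales
from 'emr://j-SAMPLE2B500FC/myoutput/part-*'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '\t';
```
The `emr://` prefix names the Amazon EMR cluster ID, followed by the HDFS path of the files to load\.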
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-from-emr.md
|
fcb90fe70db8-0
|
This section walks you through the process of loading data from an Amazon EMR cluster\. The following sections provide the details you need to accomplish each step\.
+ **[Step 1: Configure IAM permissions](load-from-emr-steps-configure-iam.md)**
The users that create the Amazon EMR cluster and run the Amazon Redshift COPY command must have the necessary permissions\.
+ **[Step 2: Create an Amazon EMR cluster](load-from-emr-steps-create-cluster.md)**
Configure the cluster to output text files to the Hadoop Distributed File System \(HDFS\)\. You will need the Amazon EMR cluster ID and the cluster's master public DNS \(the endpoint for the Amazon EC2 instance that hosts the cluster\)\.
+ **[Step 3: Retrieve the Amazon Redshift cluster public key and cluster node IP addresses](load-from-emr-steps-retrieve-key-and-ips.md)**
The public key enables the Amazon Redshift cluster nodes to establish SSH connections to the hosts\. You will use the IP address for each cluster node to configure the host security groups to permit access from your Amazon Redshift cluster using these IP addresses\.
+ **[Step 4: Add the Amazon Redshift cluster public key to each Amazon EC2 host's authorized keys file](load-from-emr-steps-add-key-to-host.md)**
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-from-emr.md
|
fcb90fe70db8-1
|
You add the Amazon Redshift cluster public key to the host's authorized keys file so that the host will recognize the Amazon Redshift cluster and accept the SSH connection\.
+ **[Step 5: Configure the hosts to accept all of the Amazon Redshift cluster's IP addresses](load-from-emr-steps-configure-security-groups.md)**
Modify the Amazon EMR instance's security groups to add ingress rules to accept the Amazon Redshift IP addresses\.
+ **[Step 6: Run the COPY command to load the data](load-from-emr-steps-run-copy.md)**
From an Amazon Redshift database, run the COPY command to load the data into an Amazon Redshift table\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/loading-data-from-emr.md
|
56d1da29a425-0
|
Many functions that are not excluded have different semantics or usage\. For example, some supported functions will run only on the leader node\. Also, some unsupported functions will not return an error when run on the leader node\. The fact that these functions do not return an error in some cases should not be taken to indicate that the function is supported by Amazon Redshift\.
**Important**
Do not assume that the semantics of elements that Amazon Redshift and PostgreSQL have in common are identical\. Make sure to consult the *Amazon Redshift Database Developer Guide* [SQL commands](c_SQL_commands.md) to understand the often subtle differences\.
For more information, see [SQL functions supported on the leader node](c_sql-functions-leader-node.md)\.
These PostgreSQL functions are not supported in Amazon Redshift\.
+ Access privilege inquiry functions
+ Advisory lock functions
+ Aggregate functions
+ STRING\_AGG\(\)
+ ARRAY\_AGG\(\)
+ EVERY\(\)
+ XML\_AGG\(\)
+ CORR\(\)
+ COVAR\_POP\(\)
+ COVAR\_SAMP\(\)
+ REGR\_AVGX\(\), REGR\_AVGY\(\)
+ REGR\_COUNT\(\)
+ REGR\_INTERCEPT\(\)
+ REGR\_R2\(\)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_unsupported-postgresql-functions.md
|
56d1da29a425-1
|
+ REGR\_INTERCEPT\(\)
+ REGR\_R2\(\)
+ REGR\_SLOPE\(\)
+ REGR\_SXX\(\), REGR\_SXY\(\), REGR\_SYY\(\)
+ Array functions and operators
+ Backup control functions
+ Comment information functions
+ Database object location functions
+ Database object size functions
+ Date/Time functions and operators
+ CLOCK\_TIMESTAMP\(\)
+ JUSTIFY\_DAYS\(\), JUSTIFY\_HOURS\(\), JUSTIFY\_INTERVAL\(\)
+ PG\_SLEEP\(\)
+ TRANSACTION\_TIMESTAMP\(\)
+ ENUM support functions
+ Geometric functions and operators
+ Generic file access functions
+ IS DISTINCT FROM
+ Network address functions and operators
+ Mathematical functions
+ DIV\(\)
+ SETSEED\(\)
+ WIDTH\_BUCKET\(\)
+ Set returning functions
+ GENERATE\_SERIES\(\)
+ GENERATE\_SUBSCRIPTS\(\)
+ Range functions and operators
+ Recovery control functions
+ Recovery information functions
+ ROLLBACK TO SAVEPOINT function
+ Schema visibility inquiry functions
+ Server signaling functions
+ Snapshot synchronization functions
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_unsupported-postgresql-functions.md
|
56d1da29a425-2
|
+ ROLLBACK TO SAVEPOINT function
+ Schema visibility inquiry functions
+ Server signaling functions
+ Snapshot synchronization functions
+ Sequence manipulation functions
+ String functions
+ BIT\_LENGTH\(\)
+ OVERLAY\(\)
+ CONVERT\(\), CONVERT\_FROM\(\), CONVERT\_TO\(\)
+ ENCODE\(\)
+ FORMAT\(\)
+ QUOTE\_NULLABLE\(\)
+ REGEXP\_MATCHES\(\)
+ REGEXP\_SPLIT\_TO\_ARRAY\(\)
+ REGEXP\_SPLIT\_TO\_TABLE\(\)
+ System catalog information functions
+ System information functions
+ CURRENT\_CATALOG
+ CURRENT\_QUERY\(\)
+ INET\_CLIENT\_ADDR\(\)
+ INET\_CLIENT\_PORT\(\)
+ INET\_SERVER\_ADDR\(\)
+ INET\_SERVER\_PORT\(\)
+ PG\_CONF\_LOAD\_TIME\(\)
+ PG\_IS\_OTHER\_TEMP\_SCHEMA\(\)
+ PG\_LISTENING\_CHANNELS\(\)
+ PG\_MY\_TEMP\_SCHEMA\(\)
+ PG\_POSTMASTER\_START\_TIME\(\)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_unsupported-postgresql-functions.md
|
56d1da29a425-3
|
+ PG\_MY\_TEMP\_SCHEMA\(\)
+ PG\_POSTMASTER\_START\_TIME\(\)
+ PG\_TRIGGER\_DEPTH\(\)
+ SHOW VERSION\(\)
+ Text search functions and operators
+ Transaction IDs and snapshots functions
+ Trigger functions
+ XML functions
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_unsupported-postgresql-functions.md
|
da3af4f18f1c-0
|
Analyzes table scan steps for queries\. The step number for rows in this table is always 0 because a scan is the first step in a segment\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SCAN.md
|
6e66e70dc50f-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_SCAN.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SCAN.md
|
f92bab5e00a7-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_SCAN.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SCAN.md
|
959e037afed8-0
|
Ideally `rows` should be relatively close to `rows_pre_filter`\. A large difference between `rows` and `rows_pre_filter` is an indication that the execution engine is scanning rows that are later discarded, which is inefficient\. The difference between `rows_pre_filter` and `rows_pre_user_filter` is the number of ghost rows in the scan\. Run a VACUUM to remove rows marked for deletion\. The difference between `rows` and `rows_pre_user_filter` is the number of rows filtered by
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SCAN.md
|
959e037afed8-1
|
is the number of rows filtered by the query\. If a lot of rows are discarded by the user filter, review your choice of sort column or, if this is due to a large unsorted region, run a vacuum\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SCAN.md
|
bc355d5dd563-0
|
The following example shows that `rows_pre_filter` is larger than `rows_pre_user_filter` because the table has deleted rows that have not been vacuumed \(ghost rows\)\.
```
SELECT query, slice, segment, step, rows, rows_pre_filter, rows_pre_user_filter
from stl_scan where query = pg_last_query_id();
query | slice | segment | step | rows | rows_pre_filter | rows_pre_user_filter
-------+--------+---------+------+-------+-----------------+----------------------
42915 | 0 | 0 | 0 | 43159 | 86318 | 43159
42915 | 0 | 1 | 0 | 1 | 0 | 0
42915 | 1 | 0 | 0 | 43091 | 86182 | 43091
42915 | 1 | 1 | 0 | 1 | 0 | 0
42915 | 2 | 0 | 0 | 42778 | 85556 | 42778
42915 | 2 | 1 | 0 | 1 | 0 | 0
42915 | 3 | 0 | 0 | 43428 | 86856 | 43428
42915 | 3 | 1 | 0 | 1 | 0 | 0
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SCAN.md
|
bc355d5dd563-1
|
42915 | 3 | 1 | 0 | 1 | 0 | 0
42915 | 10000 | 2 | 0 | 4 | 0 | 0
(9 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SCAN.md
|
1002e551ab22-0
|
Amazon Redshift creates the SVV\_DISKUSAGE system view by joining the STV\_TBL\_PERM and STV\_BLOCKLIST tables\. The SVV\_DISKUSAGE view contains information about data allocation for the tables in a database\.
Use aggregate queries with SVV\_DISKUSAGE, as the following examples show, to determine the number of disk blocks allocated per database, table, slice, or column\. Each data block uses 1 MB\. You can also use [STV\_PARTITIONS](r_STV_PARTITIONS.md) to view summary information about disk utilization\.
SVV\_DISKUSAGE is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_DISKUSAGE.md
|
b82bff3ee783-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_DISKUSAGE.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_DISKUSAGE.md
|
12be28d4abf8-0
|
SVV\_DISKUSAGE contains one row per allocated disk block, so a query that selects all the rows potentially returns a very large number of rows\. We recommend using only aggregate queries with SVV\_DISKUSAGE\.
Return the highest number of blocks ever allocated to column 6 in the USERS table \(the EMAIL column\):
```
select db_id, trim(name) as tablename, max(blocknum)
from svv_diskusage
where name='users' and col=6
group by db_id, name;
db_id | tablename | max
--------+-----------+-----
175857 | users | 2
(1 row)
```
The following query returns similar results for all of the columns in a large 10\-column table called SALESNEW\. \(The last three rows, for columns 10 through 12, are for the hidden metadata columns\.\)
```
select db_id, trim(name) as tablename, col, tbl, max(blocknum)
from svv_diskusage
where name='salesnew'
group by db_id, name, col, tbl
order by db_id, name, col, tbl;
db_id | tablename | col | tbl | max
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_DISKUSAGE.md
|
12be28d4abf8-1
|
order by db_id, name, col, tbl;
db_id | tablename | col | tbl | max
--------+------------+-----+--------+-----
175857 | salesnew | 0 | 187605 | 154
175857 | salesnew | 1 | 187605 | 154
175857 | salesnew | 2 | 187605 | 154
175857 | salesnew | 3 | 187605 | 154
175857 | salesnew | 4 | 187605 | 154
175857 | salesnew | 5 | 187605 | 79
175857 | salesnew | 6 | 187605 | 79
175857 | salesnew | 7 | 187605 | 302
175857 | salesnew | 8 | 187605 | 302
175857 | salesnew | 9 | 187605 | 302
175857 | salesnew | 10 | 187605 | 3
175857 | salesnew | 11 | 187605 | 2
175857 | salesnew | 12 | 187605 | 296
(13 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_DISKUSAGE.md
|
e1b8bb900452-0
|
Compression is a column\-level operation that reduces the size of data when it is stored\. Compression conserves storage space and reduces the size of data that is read from storage, which reduces the amount of disk I/O and therefore improves query performance\.
By default, Amazon Redshift stores data in its raw, uncompressed format\. When you create tables in an Amazon Redshift database, you can define a compression type, or encoding, for the columns\. For more information, see [Compression encodings](c_Compression_encodings.md)\.
You can apply compression encodings to columns in tables manually when you create the tables, or you can use the COPY command to analyze the load data and apply compression encodings automatically\.
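For illustration only, a CREATE TABLE statement that assigns encodings manually might look like the following; the table and column names are hypothetical, and the encodings shown \(AZ64, LZO\) are examples rather than recommendations\.
```
create table sales_compressed (
    salesid     integer       encode az64,
    qtysold     smallint      encode az64,
    pricepaid   decimal(8,2)  encode az64,
    saletime    timestamp     encode az64,
    description varchar(100)  encode lzo
);
```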
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-compression.md
|
9b04e5858e2e-0
|
1. Find how much space each column uses\.
Query the STV\_BLOCKLIST system view to find the number of 1 MB blocks each column uses\. The MAX aggregate function returns the highest block number for each column\. This example uses `col < 17` in the WHERE clause to exclude system\-generated columns\.
Execute the following command\.
```
select col, max(blocknum)
from stv_blocklist b, stv_tbl_perm p
where (b.tbl=p.id) and name ='lineorder'
and col < 17
group by name, col
order by col;
```
Your results will look similar to the following\.
```
col | max
----+-----
0 | 572
1 | 572
2 | 572
3 | 572
4 | 572
5 | 572
6 | 1659
7 | 715
8 | 572
9 | 572
10 | 572
11 | 572
12 | 572
13 | 572
14 | 572
15 | 572
16 | 1185
(17 rows)
```
1. Experiment with the different encoding methods\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-compression.md
|
9b04e5858e2e-1
|
16 | 1185
(17 rows)
```
1. Experiment with the different encoding methods\.
In this step, you create a table with identical columns, except that each column uses a different compression encoding\. Then you insert a large number of rows, using data from the `p_name` column in the PART table, so that every column has the same data\. Finally, you will examine the table to compare the effects of the different encodings on column sizes\.
1. Create a table with the encodings that you want to compare\.
```
create table encodingshipmode (
moderaw varchar(22) encode raw,
modebytedict varchar(22) encode bytedict,
modelzo varchar(22) encode lzo,
moderunlength varchar(22) encode runlength,
modetext255 varchar(22) encode text255,
modetext32k varchar(22) encode text32k);
```
1. Insert the same data into all of the columns using an INSERT statement with a SELECT clause\. The command will take a couple of minutes to execute\.
```
insert into encodingshipmode
select lo_shipmode as moderaw, lo_shipmode as modebytedict, lo_shipmode as modelzo,
lo_shipmode as moderunlength, lo_shipmode as modetext255,
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-compression.md
|
9b04e5858e2e-2
|
lo_shipmode as moderunlength, lo_shipmode as modetext255,
lo_shipmode as modetext32k
from lineorder where lo_orderkey < 200000000;
```
1. Query the STV\_BLOCKLIST system table to compare the number of 1 MB disk blocks used by each column\.
```
select col, max(blocknum)
from stv_blocklist b, stv_tbl_perm p
where (b.tbl=p.id) and name = 'encodingshipmode'
and col < 6
group by name, col
order by col;
```
The query returns results similar to the following\. Depending on how your cluster is configured, your results will be different, but the relative sizes should be similar\.
```
col | max
----+-----
0 | 221
1 | 26
2 | 61
3 | 192
4 | 54
5 | 105
(6 rows)
```
The columns show the results for the following encodings:
+ Raw
+ Bytedict
+ LZO
+ Runlength
+ Text255
+ Text32K
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-compression.md
|
9b04e5858e2e-3
|
+ Bytedict
+ LZO
+ Runlength
+ Text255
+ Text32K
You can see that Bytedict encoding on the second column produced the best results for this data set, with a compression ratio of better than 8:1\. Different data sets will produce different results, of course\.
1. Use the ANALYZE COMPRESSION command to view the suggested encodings for an existing table\.
Execute the following command\.
```
analyze compression lineorder;
```
Your results should look similar to the following\.
```
Table | Column | Encoding
-----------+------------------+-------------------
lineorder  | lo_orderkey        | delta
lineorder  | lo_linenumber      | delta
lineorder  | lo_custkey         | raw
lineorder  | lo_partkey         | raw
lineorder  | lo_suppkey         | raw
lineorder  | lo_orderdate       | delta32k
lineorder  | lo_orderpriority   | bytedict
lineorder  | lo_shippriority    | runlength
lineorder  | lo_quantity        | delta
lineorder  | lo_extendedprice   | lzo
lineorder  | lo_ordertotalprice | lzo
lineorder  | lo_discount        | delta
lineorder  | lo_revenue         | lzo
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-compression.md
|
9b04e5858e2e-4
|
lineorder  | lo_ordertotalprice | lzo
lineorder  | lo_discount        | delta
lineorder  | lo_revenue         | lzo
lineorder  | lo_supplycost      | delta32k
lineorder  | lo_tax             | delta
lineorder  | lo_commitdate      | delta32k
lineorder  | lo_shipmode        | bytedict
```
Notice that ANALYZE COMPRESSION chose BYTEDICT encoding for the `lo_shipmode` column\.
For an example that walks through choosing manually applied compression encodings, see [Example: Choosing compression encodings for the CUSTOMER table](Examples__compression_encodings_in_CREATE_TABLE_statements.md)\.
1. Apply automatic compression to the SSB tables\.
By default, the COPY command automatically applies compression encodings when you load data into an empty table that has no compression encodings other than RAW encoding\. For this tutorial, you will let the COPY command automatically select and apply optimal encodings for the tables as part of the next step, Recreate the test data set\.
For more information, see [Loading tables with automatic compression](c_Loading_tables_auto_compress.md)\.
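As a sketch, a COPY that lets Amazon Redshift choose and apply encodings can state the COMPUPDATE parameter explicitly; the bucket path and IAM role ARN here are placeholders\.
```
copy lineorder
from 's3://mybucket/load/lineorder'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
compupdate on;
```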
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-compression.md
|
6bff63d19de6-0
|
[Step 6: Recreate the test data set](tutorial-tuning-tables-recreate-test-data.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-compression.md
|
c780ec0d1916-0
|
When you create a database object, you are its owner\. By default, only a superuser or the owner of an object can query, modify, or grant privileges on the object\. For users to use an object, you must grant the necessary privileges to the user or the group that contains the user\. Database superusers have the same privileges as database owners\.
Amazon Redshift supports the following privileges: SELECT, INSERT, UPDATE, DELETE, REFERENCES, CREATE, TEMPORARY, and USAGE\. Different privileges are associated with different object types\. For information on database object privileges supported by Amazon Redshift, see the [GRANT](r_GRANT.md) command\.
The right to modify or destroy an object is always the privilege of the owner only\.
To revoke a privilege that was previously granted, use the [REVOKE](r_REVOKE.md) command\. The privileges of the object owner, such as DROP, GRANT, and REVOKE privileges, are implicit and cannot be granted or revoked\. Object owners can revoke their own ordinary privileges, for example, to make a table read\-only for themselves as well as others\. Superusers retain all privileges regardless of GRANT and REVOKE commands\.
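To illustrate, assuming a hypothetical SALES table and an ANALYSTS group, granting and later revoking a privilege might look like the following\.
```
-- Allow members of the analysts group to query the table
grant select on table sales to group analysts;

-- Remove the privilege again
revoke select on table sales from group analysts;
```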
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Privileges.md
|
779b7f85423d-0
|
The COPY command is atomic and transactional\. In other words, even when the COPY command reads data from multiple files, the entire process is treated as a single transaction\. If COPY encounters an error reading a file, it automatically retries until the process times out \(see [statement\_timeout](r_statement_timeout.md)\) or until data cannot be downloaded from Amazon S3 for a prolonged period of time \(between 15 and 30 minutes\), ensuring that each file is loaded only once\. If the COPY command fails, the entire transaction is aborted and all changes are rolled back\. For more information about handling load errors, see [Troubleshooting data loads](t_Troubleshooting_load_errors.md)\.
After a COPY command is successfully initiated, it doesn't fail if the session terminates, for example when the client disconnects\. However, if the COPY command is within a BEGIN … END transaction block that doesn't complete because the session terminates, the entire transaction, including the COPY, is rolled back\. For more information about transactions, see [BEGIN](r_BEGIN.md)\.
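A sketch of a COPY inside an explicit transaction block follows; the table, bucket path, and IAM role ARN are placeholders\. If the session terminates before END completes, the entire transaction, including the COPY, is rolled back\.
```
begin;
copy sales
from 's3://mybucket/tickit/sales.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';
end;
```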
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/copy-usage_notes-multiple-files.md
|
af9e6c5a907e-0
|
A scalar SQL UDF incorporates a SQL SELECT clause that executes when the function is called and returns a single value\. The [CREATE FUNCTION](r_CREATE_FUNCTION.md) command defines the following parameters:
+ \(Optional\) Input arguments\. Each argument must have a data type\.
+ One return data type\.
+ One SQL SELECT clause\. In the SELECT clause, refer to the input arguments using $1, $2, and so on, according to the order of the arguments in the function definition\.
The input and return data types can be any standard Amazon Redshift data type\.
Don't include a FROM clause in your SELECT clause\. Instead, include the FROM clause in the SQL statement that calls the SQL UDF\.
The SELECT clause can't include any of the following types of clauses:
+ FROM
+ INTO
+ WHERE
+ GROUP BY
+ ORDER BY
+ LIMIT
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-creating-a-scalar-sql-udf.md
|
2344a787adc2-0
|
The following example creates a function that compares two numbers and returns the larger value\. For more information, see [CREATE FUNCTION](r_CREATE_FUNCTION.md)\.
```
create function f_sql_greater (float, float)
returns float
stable
as $$
select case when $1 > $2 then $1
else $2
end
$$ language sql;
```
The following query calls the new f\_sql\_greater function to query the SALES table and return either COMMISSION or 20 percent of PRICEPAID, whichever is greater\.
```
select f_sql_greater(commission, pricepaid*0.20) from sales;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-creating-a-scalar-sql-udf.md
|
7bf7fd398f5b-0
|
Bit\-wise aggregate functions compute bit operations to perform aggregation of integer columns and columns that can be converted or rounded to integer values\.
**Topics**
+ [Using NULLs in bit\-wise aggregations](#c_bitwise_aggregate_functions-nulls-in-bit-wise-aggregations)
+ [DISTINCT support for bit\-wise aggregations](#distinct-support-for-bit-wise-aggregations)
+ [Overview examples for bit\-wise functions](#r_bitwise_example)
+ [BIT\_AND function](r_BIT_AND.md)
+ [BIT\_OR function](r_BIT_OR.md)
+ [BOOL\_AND function](r_BOOL_AND.md)
+ [BOOL\_OR function](r_BOOL_OR.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_bitwise_aggregate_functions.md
|
3b71270e554f-0
|
When you apply a bit\-wise function to a column that is nullable, any NULL values are eliminated before the function result is calculated\. If no rows qualify for aggregation, the bit\-wise function returns NULL\. The same behavior applies to regular aggregate functions\. Following is an example\.
```
select sum(venueseats), bit_and(venueseats) from venue
where venueseats is null;
sum | bit_and
------+---------
null | null
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_bitwise_aggregate_functions.md
|
c371f3619f5a-0
|
As other aggregate functions do, bit\-wise functions support the DISTINCT keyword\.
However, using DISTINCT with these functions has no impact on the results\. The first instance of a value is sufficient to satisfy bit\-wise AND or OR operations\. It makes no difference if duplicate values are present in the expression being evaluated\.
Because the DISTINCT processing is likely to incur some query execution overhead, we recommend that you don't use DISTINCT with bit\-wise functions\.
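For example, the following two queries against the TICKIT VENUE table return the same value; the DISTINCT variant only adds processing overhead\.
```
select bit_and(venueseats) from venue;
select bit_and(distinct venueseats) from venue;
```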
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_bitwise_aggregate_functions.md
|
a96472cfa057-0
|
Following, you can find some overview examples demonstrating how to work with the bit\-wise functions\. You can also find specific code examples with each function description\.
Examples for the bit\-wise functions are based on the TICKIT sample database\. The USERS table in the TICKIT sample database contains several Boolean columns that indicate whether each user is known to like different types of events, such as sports, theatre, opera, and so on\. An example follows\.
```
select userid, username, lastname, city, state,
likesports, liketheatre
from users limit 10;
userid | username | lastname | city | state | likesports | liketheatre
-------+----------+-----------+--------------+-------+------------+-------------
1 | JSG99FHE | Taylor | Kent | WA | t | t
9 | MSD36KVR | Watkins | Port Orford | MD | t | f
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_bitwise_aggregate_functions.md
|
a96472cfa057-1
|
9 | MSD36KVR | Watkins | Port Orford | MD | t | f
```
Assume that a new version of the USERS table is built in a different way\. In this new version, a single integer column defines \(in binary form\) the eight types of events that each user likes or dislikes\. In this design, each bit position represents a type of event\. A user who likes all eight types has all eight bits set to 1 \(as in the first row of the following table\)\. A user who doesn't like any of these events has all eight bits set to 0 \(see the second row\)\. A user who likes only sports and jazz is represented in the third row following\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c_bitwise_aggregate_functions.html)
In the database table, these binary values can be stored in a single LIKES column as integers, as shown following\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c_bitwise_aggregate_functions.html)
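As a sketch against a hypothetical USERLIKES table with such an integer LIKES column, BIT\_OR returns the event types liked by at least one user, and BIT\_AND returns the event types liked by every user\.
```
select bit_or(likes)  as liked_by_any,
       bit_and(likes) as liked_by_all
from userlikes;
```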
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_bitwise_aggregate_functions.md
|
3544f80b27ea-0
|
IS\_VALID\_JSON\_ARRAY validates a JSON array\. The function returns Boolean `true` \(`t`\) if the array is properly formed JSON or `false` \(`f`\) if the array is malformed\. To validate a JSON string, use the [IS\_VALID\_JSON function](IS_VALID_JSON.md)\.
For more information, see [JSON functions](json-functions.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON_ARRAY.md
|
d7d7fd640900-0
|
```
is_valid_json_array('json_array')
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON_ARRAY.md
|
9ae1ceecd73b-0
|
*json\_array*
A string or expression that evaluates to a JSON array\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON_ARRAY.md
|
61324712ed61-0
|
BOOLEAN
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON_ARRAY.md
|
c5da587faced-0
|
The following example creates a table and inserts JSON strings for testing\.
```
create table test_json_arrays(id int identity(0,1), json_arrays varchar);
-- Insert valid JSON array strings --
insert into test_json_arrays(json_arrays)
values('[]'),
('["a","b"]'),
('["a",["b",1,["c",2,3,null]]]');
-- Insert invalid JSON array strings --
insert into test_json_arrays(json_arrays) values
('{"a":1}'),
('a'),
('[1,2,]');
```
The following example validates the strings in the preceding example\.
```
select json_arrays, is_valid_json_array(json_arrays)
from test_json_arrays order by id;
json_arrays | is_valid_json_array
-----------------------------+--------------------
[] | true
["a","b"] | true
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON_ARRAY.md
|
c5da587faced-1
|
[] | true
["a","b"] | true
["a",["b",1,["c",2,3,null]]] | true
{"a":1} | false
a | false
[1,2,] | false
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/IS_VALID_JSON_ARRAY.md
|
99af34b02f7e-0
|
Stores information about external schemas\.
PG\_EXTERNAL\_SCHEMA is visible to all users\. Superusers can see all rows; regular users can see only metadata to which they have access\. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_EXTERNAL_SCHEMA.md
|
919b17c9c8cb-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_PG_EXTERNAL_SCHEMA.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_EXTERNAL_SCHEMA.md
|
9b5fe7e7d750-0
|
The following example shows details for external schemas\.
```
select esoid, nspname as schemaname, nspowner, esdbname as external_db, esoptions
from pg_namespace a,pg_external_schema b where a.oid=b.esoid;
esoid | schemaname | nspowner | external_db | esoptions
-------+-----------------+----------+-------------+-------------------------------------------------------------
100134 | spectrum_schema | 100 | spectrum_db | {"IAM_ROLE":"arn:aws:iam::123456789012:role/mySpectrumRole"}
100135 | spectrum | 100 | spectrumdb | {"IAM_ROLE":"arn:aws:iam::123456789012:role/mySpectrumRole"}
100149 | external | 100 | external_db | {"IAM_ROLE":"arn:aws:iam::123456789012:role/mySpectrumRole"}
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_PG_EXTERNAL_SCHEMA.md
|
113ecc377ca4-0
|
ST\_MakePolygon returns a polygon geometry whose outer ring is the input linestring\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakePolygon-function.md
|
d99ee6626766-0
|
```
ST_MakePolygon(linestring)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakePolygon-function.md
|
53c0f19c6f4f-0
|
*linestring*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. The subtype must be `LINESTRING`\. The *linestring* value must be closed\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakePolygon-function.md
|
7ce2be77d9ab-0
|
`GEOMETRY` of subtype `POLYGON`\.
The spatial reference system identifier \(SRID\) of the returned geometry is equal to the SRID of the value input as *linestring*\.
If *linestring* is null, then null is returned\.
If *linestring* is not a linestring, then an error is returned\.
If *linestring* is not closed, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakePolygon-function.md
|
9af54a508e0a-0
|
The following SQL returns a polygon from an input linestring\.
```
SELECT ST_AsText(ST_MakePolygon(ST_GeomFromText('LINESTRING(77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07)')));
```
```
st_astext
-----------
POLYGON((77.29 29.07,77.42 29.26,77.27 29.31,77.29 29.07))
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_MakePolygon-function.md
|
b46175caaa82-0
|
Displays information about transactions that have been undone\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_UNDONE.md
|
3f336ceeace3-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_UNDONE.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_UNDONE.md
|
9bb63b918e3a-0
|
To view a concise log of all undone transactions, type the following command:
```
select xact_id, xact_id_undone, table_id from stl_undone;
```
This command returns the following sample output:
```
xact_id | xact_id_undone | table_id
---------+----------------+----------
1344 | 1344 | 100192
1326 | 1326 | 100192
1551 | 1551 | 100192
(3 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_UNDONE.md
|
4fdb42c724b0-0
|
The NTILE window function divides ordered rows in the partition into the specified number of ranked groups of as equal size as possible and returns the group that a given row falls into\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_NTILE.md
|
d8dea9503fcc-0
|
```
NTILE (expr)
OVER (
[ PARTITION BY expression_list ]
[ ORDER BY order_list ]
)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_NTILE.md
|
5c176460956a-0
|
*expr*
The number of ranking groups\. It must evaluate to a positive integer value \(greater than 0\) for each partition\. The *expr* argument must not be nullable\.
OVER
A clause that specifies the window partitioning and ordering\. The OVER clause cannot contain a window frame specification\.
PARTITION BY *window\_partition*
Optional\. The range of records for each group in the OVER clause\.
ORDER BY *window\_ordering*
Optional\. An expression that sorts the rows within each partition\. If the ORDER BY clause is omitted, rows are assigned to ranking groups in a nondeterministic order\.
If ORDER BY does not produce a unique ordering, the order of the rows is nondeterministic\. For more information, see [Unique ordering of data for window functions](r_Examples_order_by_WF.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_NTILE.md
|
19497b462c97-0
|
BIGINT
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_NTILE.md
|
54e3e0b06b22-0
|
The following example ranks into four ranking groups the price paid for Hamlet tickets on August 26, 2008\. The result set is 17 rows, divided almost evenly among the rankings 1 through 4:
```
select eventname, caldate, pricepaid, ntile(4)
over(order by pricepaid desc) from sales, event, date
where sales.eventid=event.eventid and event.dateid=date.dateid and eventname='Hamlet'
and caldate='2008-08-26'
order by 4;
eventname | caldate | pricepaid | ntile
-----------+------------+-----------+-------
Hamlet | 2008-08-26 | 1883.00 | 1
Hamlet | 2008-08-26 | 1065.00 | 1
Hamlet | 2008-08-26 | 589.00 | 1
Hamlet | 2008-08-26 | 530.00 | 1
Hamlet | 2008-08-26 | 472.00 | 1
Hamlet | 2008-08-26 | 460.00 | 2
Hamlet | 2008-08-26 | 355.00 | 2
Hamlet | 2008-08-26 | 334.00 | 2
Hamlet | 2008-08-26 | 296.00 | 2
Hamlet | 2008-08-26 | 230.00 | 3
Hamlet | 2008-08-26 | 216.00 | 3
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_NTILE.md
|
54e3e0b06b22-1
|
Hamlet | 2008-08-26 | 230.00 | 3
Hamlet | 2008-08-26 | 216.00 | 3
Hamlet | 2008-08-26 | 212.00 | 3
Hamlet | 2008-08-26 | 106.00 | 3
Hamlet | 2008-08-26 | 100.00 | 4
Hamlet | 2008-08-26 | 94.00 | 4
Hamlet | 2008-08-26 | 53.00 | 4
Hamlet | 2008-08-26 | 25.00 | 4
(17 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_NTILE.md
|
049c2a686195-0
|
PERCENTILE\_CONT is an inverse distribution function that assumes a continuous distribution model\. It takes a percentile value and a sort specification, and returns an interpolated value that would fall into the given percentile value with respect to the sort specification\.
PERCENTILE\_CONT computes a linear interpolation between values after ordering them\. Using the percentile value `(P)` and the number of not null rows `(N)` in the aggregation group, the function computes the row number after ordering the rows according to the sort specification\. This row number `(RN)` is computed according to the formula `RN = (1 + (P*(N-1)))`\. The final result of the aggregate function is computed by linear interpolation between the values from rows at row numbers `CRN = CEILING(RN)` and `FRN = FLOOR(RN)`\.
The final result will be as follows\.
If `(CRN = FRN = RN)` then the result is `(value of expression from row at RN)`
Otherwise the result is as follows:
`(CRN - RN) * (value of expression for row at FRN) + (RN - FRN) * (value of expression for row at CRN)`\.
You can specify only the PARTITION clause in the OVER clause\. If PARTITION is specified, for each row, PERCENTILE\_CONT returns the value that would fall into the specified percentile among a set of values within a given partition\.
PERCENTILE\_CONT is a compute\-node only function\. The function returns an error if the query doesn't reference a user\-defined table or Amazon Redshift system table\.
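As an illustrative walk\-through of the interpolation formula \(using a hypothetical table `t` with a single column `v` holding exactly the four rows 10, 20, 30, 40 — not part of the sample database\):
```
-- Hypothetical table t(v) containing the rows 10, 20, 30, 40.
-- With P = 0.6 and N = 4:
--   RN  = 1 + (0.6 * (4 - 1)) = 2.8
--   FRN = FLOOR(2.8)   = 2  -> 2nd ordered value, 20
--   CRN = CEILING(2.8) = 3  -> 3rd ordered value, 30
--   result = (3 - 2.8) * 20 + (2.8 - 2) * 30 = 28.0
select percentile_cont(0.6) within group (order by v) over ()
from t;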
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_CONT.md
|
d39e3d5267a4-0
|
```
PERCENTILE_CONT ( percentile )
WITHIN GROUP (ORDER BY expr)
OVER ( [ PARTITION BY expr_list ] )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_CONT.md
|
2f691633f5d6-0
|
*percentile*
Numeric constant between 0 and 1\. Nulls are ignored in the calculation\.
WITHIN GROUP \( ORDER BY *expr*\)
Specifies numeric or date/time values to sort and compute the percentile over\.
OVER
Specifies the window partitioning\. The OVER clause cannot contain a window ordering or window frame specification\.
PARTITION BY *expr*
Optional argument that sets the range of records for each group in the OVER clause\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_CONT.md
|
e9227e857145-0
|
The return type is determined by the data type of the ORDER BY expression in the WITHIN GROUP clause\. The following table shows the return type for each ORDER BY expression data type\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_WF_PERCENTILE_CONT.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_CONT.md
|
fd88caaaa201-0
|
If the ORDER BY expression is a DECIMAL data type defined with the maximum precision of 38 digits, it is possible that PERCENTILE\_CONT will return either an inaccurate result or an error\. If the return value of the PERCENTILE\_CONT function exceeds 38 digits, the result is truncated to fit, which causes a loss of precision\. If, during interpolation, an intermediate result exceeds the maximum precision, a numeric overflow occurs and the function returns an error\. To avoid these conditions, we recommend either using a data type with lower precision or casting the ORDER BY expression to a lower precision\.
For example, a SUM function with a DECIMAL argument returns a default precision of 38 digits\. The scale of the result is the same as the scale of the argument\. So, for example, a SUM of a DECIMAL\(5,2\) column returns a DECIMAL\(38,2\) data type\.
The following example uses a SUM function in the ORDER BY clause of a PERCENTILE\_CONT function\. The data type of the PRICEPAID column is DECIMAL \(8,2\), so the SUM function returns DECIMAL\(38,2\)\.
```
select salesid, sum(pricepaid), percentile_cont(0.6)
within group (order by sum(pricepaid) desc) over()
from sales where salesid < 10 group by salesid;
```
To avoid a potential loss of precision or an overflow error, cast the result to a DECIMAL data type with lower precision, as the following example shows\.
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_CONT.md
|
fd88caaaa201-1
|
```
select salesid, sum(pricepaid), percentile_cont(0.6)
within group (order by sum(pricepaid)::decimal(30,2) desc) over()
from sales where salesid < 10 group by salesid;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_CONT.md
|
451d0ddc9215-0
|
The following examples use the WINSALES table\. For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
```
select sellerid, qty, percentile_cont(0.5)
within group (order by qty)
over() as median from winsales;
sellerid | qty | median
----------+-----+--------
1 | 10 | 20.0
1 | 10 | 20.0
3 | 10 | 20.0
4 | 10 | 20.0
3 | 15 | 20.0
2 | 20 | 20.0
3 | 20 | 20.0
2 | 20 | 20.0
3 | 30 | 20.0
1 | 30 | 20.0
4 | 40 | 20.0
(11 rows)
```
```
select sellerid, qty, percentile_cont(0.5)
within group (order by qty)
over(partition by sellerid) as median from winsales;
sellerid | qty | median
----------+-----+--------
2 | 20 | 20.0
2 | 20 | 20.0
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_CONT.md
|
451d0ddc9215-1
|
2 | 20 | 20.0
2 | 20 | 20.0
4 | 10 | 25.0
4 | 40 | 25.0
1 | 10 | 10.0
1 | 10 | 10.0
1 | 30 | 10.0
3 | 10 | 17.5
3 | 15 | 17.5
3 | 20 | 17.5
3 | 30 | 17.5
(11 rows)
```
The following example calculates the PERCENTILE\_CONT and PERCENTILE\_DISC of the ticket sales for sellers in Washington state\.
```
SELECT sellerid, state, sum(qtysold*pricepaid) sales,
percentile_cont(0.6) within group (order by sum(qtysold*pricepaid::decimal(14,2) ) desc) over(),
percentile_disc(0.6) within group (order by sum(qtysold*pricepaid::decimal(14,2) ) desc) over()
from sales s, users u
where s.sellerid = u.userid and state = 'WA' and sellerid < 1000
group by sellerid, state;
sellerid | state | sales | percentile_cont | percentile_disc
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_CONT.md
|
451d0ddc9215-2
|
group by sellerid, state;
sellerid | state | sales | percentile_cont | percentile_disc
----------+-------+---------+-----------------+-----------------
127 | WA | 6076.00 | 2044.20 | 1531.00
787 | WA | 6035.00 | 2044.20 | 1531.00
381 | WA | 5881.00 | 2044.20 | 1531.00
777 | WA | 2814.00 | 2044.20 | 1531.00
33 | WA | 1531.00 | 2044.20 | 1531.00
800 | WA | 1476.00 | 2044.20 | 1531.00
1 | WA | 1177.00 | 2044.20 | 1531.00
(7 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_PERCENTILE_CONT.md
|
50038f52c0fe-0
|
**Topics**
+ [Simple expressions](#r_expressions-simple-expressions)
+ [Compound expressions](r_compound_expressions.md)
+ [Expression lists](r_expression_lists.md)
+ [Scalar subqueries](r_scalar_subqueries.md)
+ [Function expressions](r_function_expressions.md)
An expression is a combination of one or more values, operators, or functions that evaluate to a value\. The data type of an expression is generally that of its components\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_expressions.md
|
b63c6f11a9d6-0
|
A simple expression is one of the following:
+ A constant or literal value
+ A column name or column reference
+ A scalar function
+ An aggregate \(set\) function
+ A window function
+ A scalar subquery
Examples of simple expressions include:
```
5+12
dateid
sales.qtysold * 100
sqrt (4)
max (qtysold)
(select max (qtysold) from sales)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_expressions.md
|
dba76947e46f-0
|
Contains details for *return* steps in queries\. A return step returns the results of queries executed on the compute nodes to the leader node\. The leader node then merges the data and returns the results to the requesting client\. For queries executed on the leader node, a return step returns results to the client\.
A query consists of multiple segments, and each segment consists of one or more steps\. For more information, see [Query processing](c-query-processing.md)\.
STL\_RETURN is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_RETURN.md
|
be3fee01b21e-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_RETURN.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_RETURN.md
|
d40d2dea7b1b-0
|
The following query shows which steps in the most recent query were executed on each slice\. \(Slice 6411 is on the leader node\.\)
```
SELECT query, slice, segment, step, endtime, rows, packets
from stl_return where query = pg_last_query_id();
query | slice | segment | step | endtime | rows | packets
-------+--------+---------+------+----------------------------+------+---------
4 | 2 | 3 | 2 | 2013-12-27 01:43:21.469043 | 3 | 0
4 | 3 | 3 | 2 | 2013-12-27 01:43:21.473321 | 0 | 0
4 | 0 | 3 | 2 | 2013-12-27 01:43:21.469118 | 2 | 0
4 | 1 | 3 | 2 | 2013-12-27 01:43:21.474196 | 0 | 0
4 | 4 | 3 | 2 | 2013-12-27 01:43:21.47704 | 2 | 0
4 | 5 | 3 | 2 | 2013-12-27 01:43:21.478593 | 0 | 0
4 | 6411| 4 | 1 | 2013-12-27 01:43:21.480755 | 0 | 0
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_RETURN.md
|
d40d2dea7b1b-1
|
4 | 6411| 4 | 1 | 2013-12-27 01:43:21.480755 | 0 | 0
(7 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_RETURN.md
|
5d4a1484d130-0
|
In the following UNION query, rows in the SALES table are merged with rows in the LISTING table\. Three compatible columns are selected from each table; in this case, the corresponding columns have the same names and data types\.
The final result set is ordered by the first column in the LISTING table and limited to the 5 rows with the lowest LISTID values\.
```
select listid, sellerid, eventid from listing
union select listid, sellerid, eventid from sales
order by listid, sellerid, eventid desc limit 5;
listid | sellerid | eventid
--------+----------+---------
1 | 36861 | 7872
2 | 16002 | 4806
3 | 21461 | 4256
4 | 8117 | 4337
5 | 1616 | 8647
(5 rows)
```
The following example shows how you can add a literal value to the output of a UNION query so you can see which query expression produced each row in the result set\. The query identifies rows from the first query expression as "B" \(for buyers\) and rows from the second query expression as "S" \(for sellers\)\.
The query identifies buyers and sellers for ticket transactions that cost $10,000 or more\. The only difference between the two query expressions on either side of the UNION operator is the joining column for the SALES table\.
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_example_union_query.md
|
5d4a1484d130-1
|
```
select listid, lastname, firstname, username,
pricepaid as price, 'S' as buyorsell
from sales, users
where sales.sellerid=users.userid
and pricepaid >=10000
union
select listid, lastname, firstname, username, pricepaid,
'B' as buyorsell
from sales, users
where sales.buyerid=users.userid
and pricepaid >=10000
order by 1, 2, 3, 4, 5;
listid | lastname | firstname | username | price | buyorsell
--------+----------+-----------+----------+-----------+-----------
209658 | Lamb | Colette | VOR15LYI | 10000.00 | B
209658 | West | Kato | ELU81XAA | 10000.00 | S
212395 | Greer | Harlan | GXO71KOC | 12624.00 | S
212395 | Perry | Cora | YWR73YNZ | 12624.00 | B
215156 | Banks | Patrick | ZNQ69CLT | 10000.00 | S
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_example_union_query.md
|
5d4a1484d130-2
|
215156 | Banks | Patrick | ZNQ69CLT | 10000.00 | S
215156 | Hayden | Malachi | BBG56AKU | 10000.00 | B
(6 rows)
```
The following example uses a UNION ALL operator because duplicate rows, if found, need to be retained in the result\. For a specific series of event IDs, the query returns 0 or more rows for each sale associated with each event, and 0 or 1 row for each listing of that event\. Event IDs are unique to each row in the LISTING and EVENT tables, but there might be multiple sales for the same combination of event and listing IDs in the SALES table\.
The third column in the result set identifies the source of the row\. If it comes from the SALES table, it is marked "Yes" in the SALESROW column\. \(SALESROW is an alias for SALES\.LISTID\.\) If the row comes from the LISTING table, it is marked "No" in the SALESROW column\.
In this case, the result set consists of three sales rows for listing 500, event 7787\. In other words, three different transactions took place for this listing and event combination\. The other two listings, 501 and 502, did not produce any sales, so the only row that the query produces for these list IDs comes from the LISTING table \(SALESROW = 'No'\)\.
```
select eventid, listid, 'Yes' as salesrow
from sales
where listid in(500,501,502)
union all
select eventid, listid, 'No'
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_example_union_query.md
|
5d4a1484d130-3
|
from sales
where listid in(500,501,502)
union all
select eventid, listid, 'No'
from listing
where listid in(500,501,502)
order by listid asc;
eventid | listid | salesrow
---------+--------+----------
7787 | 500 | No
7787 | 500 | Yes
7787 | 500 | Yes
7787 | 500 | Yes
6473 | 501 | No
5108 | 502 | No
(6 rows)
```
If you run the same query without the ALL keyword, the result retains only one of the sales transactions\.
```
select eventid, listid, 'Yes' as salesrow
from sales
where listid in(500,501,502)
union
select eventid, listid, 'No'
from listing
where listid in(500,501,502)
order by listid asc;
eventid | listid | salesrow
---------+--------+----------
7787 | 500 | No
7787 | 500 | Yes
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_example_union_query.md
|
5d4a1484d130-4
|
7787 | 500 | No
7787 | 500 | Yes
6473 | 501 | No
5108 | 502 | No
(4 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_example_union_query.md
|
ee7bc06e4be2-0
|
Any built\-in function can be used as an expression\. The syntax for a function call is the name of a function followed by its argument list in parentheses\.
```
function ( [expression [, expression...]] )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_function_expressions.md
|
c2d806fa468d-0
|
*function*
Any built\-in function\.
*expression*
Any expression\(s\) matching the data type and parameter count expected by the function\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_function_expressions.md
|
ee47b016790c-0
|
```
abs (variable)
select avg (qtysold + 3) from sales;
select dateadd (day,30,caldate) as plus30days from date;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_function_expressions.md
|
25ffbe10a35a-0
|
The users who create the Amazon EMR cluster and run the Amazon Redshift COPY command must have the necessary permissions\.
**To configure IAM permissions**
1. Add the following permissions for the IAM user that will create the Amazon EMR cluster\.
```
ec2:DescribeSecurityGroups
ec2:RevokeSecurityGroupIngress
ec2:AuthorizeSecurityGroupIngress
redshift:DescribeClusters
```
1. Add the following permission for the IAM role or IAM user that will execute the COPY command\.
```
elasticmapreduce:ListInstances
```
1. Add the following permission to the Amazon EMR cluster's IAM role\.
```
redshift:DescribeClusters
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-emr-steps-configure-iam.md
|
f5e714a1a2c1-0
|
Records an alert when the query optimizer identifies conditions that might indicate performance issues\. This view is derived from the STL\_ALERT\_EVENT\_LOG system table but doesn't show slice\-level information for queries run on a concurrency scaling cluster\. Use the SVCS\_ALERT\_EVENT\_LOG view to identify opportunities to improve query performance\.
A query consists of multiple segments, and each segment consists of one or more steps\. For more information, see [Query processing](c-query-processing.md)\.
**Note**
System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters\. The views are similar to the tables with the prefix STL except that the STL tables provide information only for queries run on the main cluster\.
SVCS\_ALERT\_EVENT\_LOG is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_ALERT_EVENT_LOG.md
|
d98269eea6e2-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_ALERT_EVENT_LOG.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_ALERT_EVENT_LOG.md
|
59bc55fee67f-0
|
You can use the SVCS\_ALERT\_EVENT\_LOG view to identify potential issues in your queries, then follow the practices in [Tuning query performance](c-optimizing-query-performance.md) to optimize your database design and rewrite your queries\. SVCS\_ALERT\_EVENT\_LOG records the following alerts:
+ **Missing statistics**
Statistics are missing\. Run ANALYZE following data loads or significant updates and use STATUPDATE with COPY operations\. For more information, see [Amazon Redshift best practices for designing queries](c_designing-queries-best-practices.md)\.
+ **Nested loop**
A nested loop is usually a Cartesian product\. Evaluate your query to ensure that all participating tables are joined efficiently\.
+ **Very selective filter**
The ratio of rows returned to rows scanned is less than 0\.05\. Rows scanned is the value of `rows_pre_user_filter` and rows returned is the value of rows in the [STL\_SCAN](r_STL_SCAN.md) system table\. Indicates that the query is scanning an unusually large number of rows to determine the result set\. This can be caused by missing or incorrect sort keys\. For more information, see [Choosing sort keys](t_Sorting_data.md)\.
+ **Excessive ghost rows**
A scan skipped a relatively large number of rows that are marked as deleted but not vacuumed, or rows that have been inserted but not committed\. For more information, see [Vacuuming tables](t_Reclaiming_storage_space202.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_ALERT_EVENT_LOG.md
|
59bc55fee67f-1
|
+ **Large distribution**
More than 1,000,000 rows were redistributed for hash join or aggregation\. For more information, see [Choosing a data distribution style](t_Distributing_data.md)\.
+ **Large broadcast**
More than 1,000,000 rows were broadcast for hash join\. For more information, see [Choosing a data distribution style](t_Distributing_data.md)\.
+ **Serial execution**
A DS\_DIST\_ALL\_INNER redistribution style was indicated in the query plan, which forces serial execution because the entire inner table was redistributed to a single node\. For more information, see [Choosing a data distribution style](t_Distributing_data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_ALERT_EVENT_LOG.md
|
bfaa63102188-0
|
The following query shows alert events for four queries\.
```
SELECT query, substring(event,0,25) as event,
substring(solution,0,25) as solution,
trim(event_time) as event_time from svcs_alert_event_log order by query;
query | event | solution | event_time
-------+-------------------------------+------------------------------+---------------------
6567 | Missing query planner statist | Run the ANALYZE command | 2014-01-03 18:20:58
7450 | Scanned a large number of del | Run the VACUUM command to rec| 2014-01-03 21:19:31
8406 | Nested Loop Join in the query | Review the join predicates to| 2014-01-04 00:34:22
29512 | Very selective query filter:r | Review the choice of sort key| 2014-01-06 22:00:00
(4 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_ALERT_EVENT_LOG.md
|
3b87bbf4c173-0
|
Use the SVV\_INTERLEAVED\_COLUMNS view to help determine whether a table that uses interleaved sort keys should be reindexed using [VACUUM REINDEX](r_VACUUM_command.md#vacuum-reindex)\. For more information about how to determine how often to run VACUUM and when to run a VACUUM REINDEX, see [Managing vacuum times](vacuum-managing-vacuum-times.md)\.
SVV\_INTERLEAVED\_COLUMNS is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_INTERLEAVED_COLUMNS.md
|
3c2b3fe04231-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_INTERLEAVED_COLUMNS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_INTERLEAVED_COLUMNS.md
|
b609981d44c1-0
|
To identify tables that might need to be reindexed, execute the following query\.
```
select tbl as tbl_id, stv_tbl_perm.name as table_name,
col, interleaved_skew, last_reindex
from svv_interleaved_columns, stv_tbl_perm
where svv_interleaved_columns.tbl = stv_tbl_perm.id
and interleaved_skew is not null;
tbl_id | table_name | col | interleaved_skew | last_reindex
--------+------------+-----+------------------+--------------------
100068 | lineorder | 0 | 3.65 | 2015-04-22 22:05:45
100068 | lineorder | 1 | 2.65 | 2015-04-22 22:05:45
100072 | customer | 0 | 1.65 | 2015-04-22 22:05:45
100072 | lineorder | 1 | 1.00 | 2015-04-22 22:05:45
(4 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_INTERLEAVED_COLUMNS.md
|
c4f5933f30cc-0
|
Use the SVCS\_S3PARTITION\_SUMMARY view to get a summary of partition processing for Redshift Spectrum queries at the segment level\. One segment can perform one external table scan\.
**Note**
System views with the prefix SVCS provide details about queries on both the main and concurrency scaling clusters\. The views are similar to the views with the prefix SVL except that the SVL views provide information only for queries run on the main cluster\.
SVCS\_S3PARTITION\_SUMMARY is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
For information about SVL\_S3PARTITION, see [SVL\_S3PARTITION](r_SVL_S3PARTITION.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3PARTITION_SUMMARY.md
|
f423bc748b4c-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVCS_S3PARTITION_SUMMARY.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3PARTITION_SUMMARY.md
|
316204bbfead-0
|
The following example gets the partition scan details for the last query executed\.
```
select query, segment, assignment, min_starttime, max_endtime, min_duration, avg_duration
from svcs_s3partition_summary
where query = pg_last_query_id()
order by query,segment;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVCS_S3PARTITION_SUMMARY.md
|
f7f7a14c952c-0
|
Returns the number of rows that were loaded by the last COPY command executed in the current session\. PG\_LAST\_COPY\_COUNT is updated with the last COPY ID, which is the query ID of the last COPY that began the load process, even if the load failed\. The query ID and COPY ID are updated when the COPY command begins the load process\.
If the COPY fails because of a syntax error or because of insufficient privileges, the COPY ID is not updated and PG\_LAST\_COPY\_COUNT returns the count for the previous COPY\. If no COPY commands were executed in the current session, or if the last COPY failed during loading, PG\_LAST\_COPY\_COUNT returns 0\. For more information, see [PG\_LAST\_COPY\_ID](PG_LAST_COPY_ID.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_COPY_COUNT.md
|
ed435b90a0d6-0
|
```
pg_last_copy_count()
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_COPY_COUNT.md
|
531903bcccc7-0
|
Returns BIGINT\.
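A minimal usage sketch \(the table name, Amazon S3 path, and IAM role ARN are placeholders\):
```
-- Load data, then check how many rows the COPY loaded.
copy listing
from 's3://mybucket/data/listings_pipe.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';

select pg_last_copy_count();
```
If the COPY failed during loading, the same call returns 0\.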
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_LAST_COPY_COUNT.md
|