| id | text | source |
|---|---|---|
ca927831c842-4
|
```
select * from loadvenuenulls where venuestate is null or venueseats is null;
venueid | venuename | venuecity | venuestate | venueseats
---------+--------------------------+-----------+------------+------------
253 | Mirage Hotel | Las Vegas | NV |
255 | Venetian Hotel | Las Vegas | NV |
251 | Paris Hotel | Las Vegas | NV |
...
```
To load empty strings to non\-numeric columns as NULL, include the EMPTYASNULL or BLANKSASNULL options\. It's OK to use both\.
```
unload ('select * from venue')
to 's3://mybucket/nulls/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' allowoverwrite;
truncate loadvenuenulls;
copy loadvenuenulls from 's3://mybucket/nulls/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' EMPTYASNULL;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD_command_examples.md
|
ca927831c842-5
|
To verify that the columns contain NULL, not just whitespace or empty, select from LOADVENUENULLS and filter for null\.
```
select * from loadvenuenulls where venuestate is null or venueseats is null;
venueid | venuename | venuecity | venuestate | venueseats
---------+--------------------------+-----------+------------+------------
72 | Cleveland Browns Stadium | Cleveland | | 73200
253 | Mirage Hotel | Las Vegas | NV |
255 | Venetian Hotel | Las Vegas | NV |
22 | Quicken Loans Arena | Cleveland | | 0
101 | Progressive Field | Cleveland | | 43345
251 | Paris Hotel | Las Vegas | NV |
...
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD_command_examples.md
|
74032a353dd8-0
|
By default, UNLOAD doesn't overwrite existing files in the destination bucket\. For example, if you run the same UNLOAD statement twice without modifying the files in the destination bucket, the second UNLOAD fails\. To overwrite the existing files, including the manifest file, specify the ALLOWOVERWRITE option\.
```
unload ('select * from venue')
to 's3://mybucket/venue_pipe_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest allowoverwrite;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_UNLOAD_command_examples.md
|
ed31dc92cb93-0
|
ST\_Touches returns true if the two input geometries touch\. The two geometries touch if they are nonempty, intersect, and have no interior points in common\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Touches-function.md
|
cb3de0655ad3-0
|
```
ST_Touches(geom1, geom2)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Touches-function.md
|
fed2a67feccc-0
|
*geom1*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
*geom2*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Touches-function.md
|
150768ce6720-0
|
`BOOLEAN`
If *geom1* or *geom2* is null, then null is returned\.
If *geom1* and *geom2* don't have the same value for the spatial reference system identifier \(SRID\), then an error is returned\.
If *geom1* or *geom2* is a geometry collection, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Touches-function.md
|
44865e2067e3-0
|
The following SQL checks if a polygon touches a linestring\.
```
SELECT ST_Touches(ST_GeomFromText('POLYGON((0 0,10 0,0 10,0 0))'), ST_GeomFromText('LINESTRING(20 10,20 0,10 0)'));
```
```
st_touches
-------------
t
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Touches-function.md
|
da80983453c5-0
|
**Topics**
+ [Names and identifiers](r_names.md)
+ [Literals](r_Literals.md)
+ [Nulls](r_Nulls.md)
+ [Data types](c_Supported_data_types.md)
+ [Collation sequences](c_collation_sequences.md)
This section covers the rules for working with database object names, literals, nulls, and data types\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Basic_elements.md
|
99fc0bd4a2f3-0
|
STL views are generated from logs that have been persisted to disk to provide a history of the system\. STV views are virtual views that contain snapshots of the current system data\. They are based on transient in\-memory data and are not persisted to disk\-based logs or regular tables\. System views that contain any reference to a transient STV table are called SVV views\. Views containing only references to STL views are called SVL views\.
System tables and views do not use the same consistency model as regular tables\. It is important to be aware of this issue when querying them, especially for STV tables and SVV views\. For example, given a regular table t1 with a column c1, you would expect the following query to return no rows:
```
select * from t1
where c1 > (select max(c1) from t1)
```
However, the following query against a system table might well return rows:
```
select * from stv_exec_state
where currenttime > (select max(currenttime) from stv_exec_state)
```
The reason this query might return rows is that currenttime is transient and the two references in the query might not return the same value when evaluated\.
On the other hand, the following query might well return no rows:
```
select * from stv_exec_state
where currenttime = (select max(currenttime) from stv_exec_state)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_types-of-system-tables-and-views.md
|
1905e593d51f-0
|
With automatic workload management \(WLM\), Amazon Redshift manages query concurrency and memory allocation\. Up to eight queues are created with the service class identifiers 100–107\. Each queue has a priority\. For more information, see [Query priority](query-priority.md)\.
In contrast, manual WLM requires you to specify values for query concurrency and memory allocation\. The default for manual WLM is concurrency of five queries, and memory is divided equally between all five\. Automatic WLM determines the amount of resources that queries need, and adjusts the concurrency based on the workload\. When queries requiring large amounts of resources are in the system \(for example, hash joins between large tables\), the concurrency is lower\. When lighter queries \(such as inserts, deletes, scans, or simple aggregations\) are submitted, concurrency is higher\.
For details about how to migrate from manual WLM to automatic WLM, see [Migrating from manual WLM to automatic WLM](cm-c-modifying-wlm-configuration.md#wlm-manual-to-automatic)\.
Automatic WLM is separate from short query acceleration \(SQA\) and it evaluates queries differently\. Automatic WLM and SQA work together to allow short running and lightweight queries to complete even while long running, resource intensive queries are active\. For more information about SQA, see [Working with short query acceleration](wlm-short-query-acceleration.md)\.
Amazon Redshift enables automatic WLM through parameter groups:
+ If your clusters use the default parameter group, Amazon Redshift enables automatic WLM for them\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
1905e593d51f-1
|
Amazon Redshift enables automatic WLM through parameter groups:
+ If your clusters use the default parameter group, Amazon Redshift enables automatic WLM for them\.
+ If your clusters use custom parameter groups, you can configure the clusters to enable automatic WLM\. We recommend that you create a separate parameter group for your automatic WLM configuration\.
To configure WLM, edit the `wlm_json_configuration` parameter in a parameter group that can be associated with one or more clusters\. For more information, see [Modifying the WLM configuration](cm-c-modifying-wlm-configuration.md)\.
You define query queues within the WLM configuration\. You can add additional query queues to the default WLM configuration, up to a total of eight user queues\. You can configure the following for each query queue:
+ Priority
+ Concurrency scaling mode
+ User groups
+ Query groups
+ Query monitoring rules
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
1368cf70b084-0
|
You can define the relative importance of queries in a workload by setting a priority value\. The priority is specified for a queue and inherited by all queries associated with the queue\. For more information, see [Query priority](query-priority.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
e7910e35d4af-0
|
When concurrency scaling is enabled, Amazon Redshift automatically adds additional cluster capacity when you need it to process an increase in concurrent read queries\. Write operations continue as normal on your main cluster\. Users see the most current data, whether the queries run on the main cluster or on a concurrency scaling cluster\.
You manage which queries are sent to the concurrency scaling cluster by configuring WLM queues\. When you enable concurrency scaling for a queue, eligible queries are sent to the concurrency scaling cluster instead of waiting in line\. For more information, see [Working with concurrency scaling](concurrency-scaling.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
46c3f5854d5a-0
|
You can assign a set of user groups to a queue by specifying each user group name or by using wildcards\. When a member of a listed user group runs a query, that query runs in the corresponding queue\. There is no set limit on the number of user groups that can be assigned to a queue\. For more information, see [Wildcards](#wlm-auto-wildcards)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
8bc88e5d2557-0
|
You can assign a set of query groups to a queue by specifying each query group name or by using wildcards\. A *query group* is simply a label\. At runtime, you can assign the query group label to a series of queries\. Any queries that are assigned to a listed query group run in the corresponding queue\. There is no set limit to the number of query groups that can be assigned to a queue\. For
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
8bc88e5d2557-1
|
that can be assigned to a queue\. For more information, see [Wildcards](#wlm-auto-wildcards)\.
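For illustration, the following is a minimal sketch of assigning a query group label at runtime so that subsequent queries run in the queue that lists that label\. The label `report_group` is a hypothetical example, and the SALES table is the sample table used elsewhere in this guide\.
```
set query_group to 'report_group';
select count(*) from sales;
reset query_group;
```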
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
34c03e9c7f7b-0
|
If wildcards are enabled in the WLM queue configuration, you can assign user groups and query groups to a queue either individually or by using Unix shell–style wildcards\. The pattern matching is case\-insensitive\.
For example, the '\*' wildcard character matches any number of characters\. Thus, if you add `dba_*` to the list of user groups for a queue, any user\-run query that belongs to a group with a name that begins with `dba_` is assigned to that queue\. Examples are `dba_admin` or `DBA_primary`\. The '?' wildcard character matches any single character\. Thus, if the queue includes user\-group `dba?1`, then user groups named `dba11` and `dba21` match, but `dba12` doesn't match\.
By default, wildcards aren't enabled\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
131d61681321-0
|
Query monitoring rules define metrics\-based performance boundaries for WLM queues and specify what action to take when a query goes beyond those boundaries\. For example, for a queue dedicated to short running queries, you might create a rule that aborts queries that run for more than 60 seconds\. To track poorly designed queries, you might have another rule that logs queries that contain nested loops\. For more information, see [WLM query monitoring
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
131d61681321-1
|
For more information, see [WLM query monitoring rules](cm-c-wlm-query-monitoring-rules.md)\.
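As a hedged illustration only, assuming the STL\_WLM\_RULE\_ACTION system table logs the actions taken by query monitoring rules, a query such as the following could show which rules fired recently and the action that was taken\.
```
select query, service_class, trim(rule) as rule, trim(action) as action, recordtime
from stl_wlm_rule_action
order by recordtime desc limit 10;
```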
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
0b7e00afad8a-0
|
To check whether automatic WLM is enabled, run the following query\. If the query returns at least one row, then automatic WLM is enabled\.
```
select * from stv_wlm_service_class_config
where service_class >= 100;
```
The following query shows the number of queries that went through each query queue \(service class\)\. It also shows the average execution time, the number of queries with wait time at the 90th percentile, and the average wait time\. Automatic WLM queries use service classes 100 to 107\.
```
select final_state, service_class, count(*), avg(total_exec_time),
percentile_cont(0.9) within group (order by total_queue_time), avg(total_queue_time)
from stl_wlm_query where userid >= 100 group by 1,2 order by 2,1;
```
To find which queries were run by automatic WLM, and completed successfully, run the following query\.
```
select a.queue_start_time, a.total_exec_time, label, trim(querytxt)
from stl_wlm_query a, stl_query b
where a.query = b.query and a.service_class >= 100 and a.final_state = 'Completed'
order by b.query desc limit 5;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/automatic-wlm.md
|
50f47efa9584-0
|
PL/pgSQL is a procedural language with many of the same constructs as other procedural languages\.
**Topics**
+ [Block](#r_PLpgSQL-block)
+ [Variable declaration](#r_PLpgSQL-variable-declaration)
+ [Alias declaration](#r_PLpgSQL-alias-declaration)
+ [Built\-in variables](#r_PLpgSQL-builtin-variables)
+ [Record types](#r_PLpgSQL-record-type)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-structure.md
|
1dbc6c08fa70-0
|
PL/pgSQL is a block\-structured language\. The complete body of a procedure is defined in a block, which contains variable declarations and PL/pgSQL statements\. A statement can also be a nested block, or subblock\.
End declarations and statements with a semicolon\. Follow the END keyword in a block or subblock with a semicolon\. Don't use semicolons after the keywords DECLARE and BEGIN\.
You can write all keywords and identifiers in mixed uppercase and lowercase\. Identifiers are implicitly converted to lowercase unless enclosed in double quotation marks\.
A double hyphen \(\-\-\) starts a comment that extends to the end of the line\. A /\* starts a block comment that extends to the next occurrence of \*/\. You can't nest block comments\. However, you can enclose double\-hyphen comments in a block comment, and a double hyphen can hide the block comment delimiters /\* and \*/\.
Any statement in the statement section of a block can be a subblock\. You can use subblocks for logical grouping or to localize variables to a small group of statements\.
```
[ <<label>> ]
[ DECLARE
declarations ]
BEGIN
statements
END [ label ];
```
The variables declared in the declarations section preceding a block are initialized to their default values every time the block is entered\. In other words, they're not initialized only once per function call\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-structure.md
|
1dbc6c08fa70-1
|
The following shows an example\.
```
CREATE PROCEDURE update_value() AS $$
DECLARE
value integer := 20;
BEGIN
RAISE NOTICE 'Value here is %', value; -- Value here is 20
value := 50;
--
-- Create a subblock
--
DECLARE
value integer := 80;
BEGIN
RAISE NOTICE 'Value here is %', value; -- Value here is 80
END;
RAISE NOTICE 'Value here is %', value; -- Value here is 50
END;
$$ LANGUAGE plpgsql;
```
Use a label to identify the block to use in an EXIT statement or to qualify the names of the variables declared in the block\.
Don't confuse the use of BEGIN/END for grouping statements in PL/pgSQL with the database commands for transaction control\. The BEGIN and END in PL/pgSQL are only for grouping\. They don't start or end a transaction\.
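The following minimal sketch \(a hypothetical procedure\) shows a label used with an EXIT statement to name the construct to leave\.
```
CREATE OR REPLACE PROCEDURE exit_label_demo() AS $$
BEGIN
  <<counting_loop>>
  FOR i IN 1..5 LOOP
    EXIT counting_loop WHEN i > 2;  -- leave the labeled loop early
    RAISE INFO 'i = %', i;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
```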
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-structure.md
|
cb8ac116a579-0
|
Declare all variables in a block, with the exception of loop variables, in the block's DECLARE section\. Variables can use any valid Amazon Redshift data type\. For supported data types, see [Data types](c_Supported_data_types.md)\.
PL/pgSQL variables can be any Amazon Redshift supported data type, plus `RECORD` and `refcursor`\. For more information about `RECORD`, see [Record types](#r_PLpgSQL-record-type)\. For more information about `refcursor`, see [Cursors](c_PLpgSQL-statements.md#r_PLpgSQL-cursors)\.
```
DECLARE
name [ CONSTANT ] type [ NOT NULL ] [ { DEFAULT | := } expression ];
```
Following, you can find example variable declarations\.
```
customerID integer;
numberofitems numeric(6);
link varchar;
onerow RECORD;
```
The loop variable of a FOR loop iterating over a range of integers is automatically declared as an integer variable\.
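For example, in the following minimal sketch \(the procedure name is hypothetical\), the loop variable `i` is not listed in the DECLARE section\.
```
CREATE OR REPLACE PROCEDURE loop_var_demo() AS $$
DECLARE
  total integer := 0;
BEGIN
  FOR i IN 1..3 LOOP  -- i is implicitly declared as an integer
    total := total + i;
  END LOOP;
  RAISE INFO 'total = %', total;  -- total = 6
END;
$$ LANGUAGE plpgsql;
```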
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-structure.md
|
cb8ac116a579-1
|
The loop variable of a FOR loop iterating over a range of integers is automatically declared as an integer variable\.
The DEFAULT clause, if given, specifies the initial value assigned to the variable when the block is entered\. If the DEFAULT clause is not given, then the variable is initialized to the SQL NULL value\. The CONSTANT option prevents the variable from being assigned to, so that its value remains constant for the duration of the block\. If NOT NULL is specified, an assignment of a null value results in a runtime error\. All variables declared as NOT NULL must have a non\-null default value specified\.
The default value is evaluated every time the block is entered\. So, for example, assigning `now()` to a variable of type `timestamp` causes the variable to have the time of the current function call, not the time when the function was precompiled\.
```
quantity INTEGER DEFAULT 32;
url VARCHAR := 'http://mysite.com';
user_id CONSTANT INTEGER := 10;
```
The `refcursor` data type is the data type of cursor variables within stored procedures\. A `refcursor` value can be returned from within a stored procedure\. For more information, see [Returning a result set](stored-procedure-result-set.md)\.
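A hedged sketch of that pattern follows; the procedure and cursor names are hypothetical, and the SALES table is the sample table used elsewhere in this guide\.
```
CREATE OR REPLACE PROCEDURE get_sales_rows(rs_out INOUT refcursor) AS $$
BEGIN
  -- Open the cursor on a result set; the caller fetches from it by name.
  OPEN rs_out FOR SELECT salesid, pricepaid, commission FROM sales;
END;
$$ LANGUAGE plpgsql;

BEGIN;
CALL get_sales_rows('mycursor');
FETCH ALL FROM mycursor;
COMMIT;
```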
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-structure.md
|
6bb4d7088542-0
|
If a stored procedure's signature omits the argument name, you can declare an alias for the argument\.
```
name ALIAS FOR $n;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-structure.md
|
70fb15a18674-0
|
The following built\-in variables are supported:
+ FOUND
+ SQLSTATE
+ SQLERRM
+ GET DIAGNOSTICS integer\_var := ROW\_COUNT;
FOUND is a special variable of type Boolean\. FOUND starts out false within each procedure call\. FOUND is set by the following types of statements:
+ SELECT INTO
Sets FOUND to true if it returns a row, false if no row is returned\.
+ UPDATE, INSERT, and DELETE
Sets FOUND to true if at least one row is affected, false if no row is affected\.
+ FETCH
Sets FOUND to true if it returns a row, false if no row is returned\.
+ FOR statement
Sets FOUND to true if the FOR statement iterates one or more times, and otherwise false\. This applies to all three variants of the FOR statement: integer FOR loops, record\-set FOR loops, and dynamic record\-set FOR loops\.
FOUND is set when the FOR loop exits\. Inside the execution of the loop, FOUND isn't modified by the FOR statement\. However, it can be changed by the execution of other statements within the loop body\.
The following shows an example\.
```
CREATE TABLE employee(empname varchar);
CREATE OR REPLACE PROCEDURE show_found()
AS $$
DECLARE
myrec record;
BEGIN
SELECT INTO myrec * FROM employee WHERE empname = 'John';
IF NOT FOUND THEN
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-structure.md
|
70fb15a18674-1
|
BEGIN
SELECT INTO myrec * FROM employee WHERE empname = 'John';
IF NOT FOUND THEN
RAISE EXCEPTION 'employee John not found';
END IF;
END;
$$ LANGUAGE plpgsql;
```
Within an exception handler, the special variable SQLSTATE contains the error code that corresponds to the exception that was raised\. The special variable SQLERRM contains the error message associated with the exception\. These variables are undefined outside exception handlers and throw an error if used\.
The following shows an example\.
```
CREATE OR REPLACE PROCEDURE sqlstate_sqlerrm() AS
$$
BEGIN
UPDATE employee SET firstname = 'Adam' WHERE lastname = 'Smith';
EXECUTE 'select invalid';
EXCEPTION WHEN OTHERS THEN
RAISE INFO 'error message SQLERRM %', SQLERRM;
RAISE INFO 'error message SQLSTATE %', SQLSTATE;
END;
$$ LANGUAGE plpgsql;
```
ROW\_COUNT is used with the GET DIAGNOSTICS command\. It shows the number of rows processed by the last SQL command sent down to the SQL engine\.
The following shows an example\.
```
CREATE OR REPLACE PROCEDURE sp_row_count() AS
$$
DECLARE
integer_var int;
BEGIN
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-structure.md
|
70fb15a18674-2
|
CREATE OR REPLACE PROCEDURE sp_row_count() AS
$$
DECLARE
integer_var int;
BEGIN
INSERT INTO tbl_row_count VALUES(1);
GET DIAGNOSTICS integer_var := ROW_COUNT;
RAISE INFO 'rows inserted = %', integer_var;
END;
$$ LANGUAGE plpgsql;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-structure.md
|
5290337536b2-0
|
A RECORD type is not a true data type, only a placeholder\. Record type variables assume the actual row structure of the row that they are assigned during a SELECT or FOR command\. The substructure of a record variable can change each time it is assigned a value\. Until a record variable is first assigned to, it has no substructure\. Any attempt to access a field in it throws a runtime error\.
```
name RECORD;
```
The following shows an example\.
```
CREATE TABLE tbl_record(a int, b int);
INSERT INTO tbl_record VALUES(1, 2);
CREATE OR REPLACE PROCEDURE record_example()
LANGUAGE plpgsql
AS $$
DECLARE
rec RECORD;
BEGIN
FOR rec IN SELECT a FROM tbl_record
LOOP
RAISE INFO 'a = %', rec.a;
END LOOP;
END;
$$;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_PLpgSQL-structure.md
|
48a6570a05c5-0
|
Logs information about errors recorded when the disk is full\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DISK_FULL_DIAG.md
|
b55d05a8561b-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_DISK_FULL_DIAG.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DISK_FULL_DIAG.md
|
0324172eddd7-0
|
The following example returns details about the data stored when there is a disk\-full error\.
```
select * from stl_disk_full_diag
```
The following example converts the `currenttime` to a timestamp\.
```
select '2000-01-01'::timestamp + (currenttime/1000000.0) * interval '1 second' as currenttime,
       node_num, query_id, temp_blocks
from pg_catalog.stl_disk_full_diag;
```
```
currenttime | node_num | query_id | temp_blocks
----------------------------+----------+----------+-------------
2019-05-18 19:19:18.609338 | 0 | 569399 | 70982
2019-05-18 19:37:44.755548 | 0 | 569580 | 70982
2019-05-20 13:37:20.566916 | 0 | 597424 | 70869
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DISK_FULL_DIAG.md
|
74d4154b90be-0
|
Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk\.
The following series of illustrations describe how columnar data storage implements efficiencies and how that translates into efficiencies when retrieving data into memory\.
This first illustration shows how records from database tables are typically stored into disk blocks by row\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/03a-Rows-vs-Columns.png)
In a typical relational database table, each row contains field values for a single record\. In row\-wise database storage, data blocks store values sequentially for each consecutive column making up the entire row\. If block size is smaller than the size of a record, storage for an entire record may take more than one block\. If block size is larger than the size of a record, storage for an entire record may take less than one block, resulting in an inefficient use of disk space\. In online transaction processing \(OLTP\) applications, most transactions involve frequently reading and writing all of the values for entire records, typically one record or a small number of records at a time\. As a result, row\-wise storage is optimal for OLTP databases\.
The next illustration shows how with columnar storage, the values for each column are stored sequentially into disk blocks\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_columnar_storage_disk_mem_mgmnt.md
|
74d4154b90be-1
|
The next illustration shows how with columnar storage, the values for each column are stored sequentially into disk blocks\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/03b-Rows-vs-Columns.png)
Using columnar storage, each data block stores values of a single column for multiple rows\. As records enter the system, Amazon Redshift transparently converts the data to columnar storage for each of the columns\.
In this simplified example, using columnar storage, each data block holds column field values for as many as three times as many records as row\-based storage\. This means that reading the same number of column field values for the same number of records requires a third of the I/O operations compared to row\-wise storage\. In practice, using tables with very large numbers of columns and very large row counts, storage efficiency is even greater\.
An added advantage is that, since each block holds the same type of data, block data can use a compression scheme selected specifically for the column data type, further reducing disk space and I/O\. For more information about compression encodings based on data types, see [Compression encodings](c_Compression_encodings.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_columnar_storage_disk_mem_mgmnt.md
|
74d4154b90be-2
|
The savings in space for storing data on disk also carries over to retrieving and then storing that data in memory\. Since many database operations only need to access or operate on one or a small number of columns at a time, you can save memory space by only retrieving blocks for columns you actually need for a query\. Where OLTP transactions typically involve most or all of the columns in a row for a small number of records, data warehouse queries commonly read only a few columns for a very large number of rows\. This means that reading the same number of column field values for the same number of rows requires a fraction of the I/O operations and uses a fraction of the memory that would be required for processing row\-wise blocks\. In practice, using tables with very large numbers of columns and very large row counts, the efficiency gains are proportionally greater\. For example, suppose a table contains 100 columns\. A query that uses five columns will only need to read about five percent of the data contained in the table\. This savings is repeated for possibly billions or even trillions of records for large databases\. In contrast, a row\-wise database would read the blocks that contain the 95 unneeded columns as well\.
Typical database block sizes range from 2 KB to 32 KB\. Amazon Redshift uses a block size of 1 MB, which is more efficient and further reduces the number of I/O requests needed to perform any database loading or other operations that are part of query execution\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_columnar_storage_disk_mem_mgmnt.md
|
0fdbce91a188-0
|
Records the service class configurations for WLM\.
STV\_WLM\_SERVICE\_CLASS\_CONFIG is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_SERVICE_CLASS_CONFIG.md
|
21a9c6159511-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STV_WLM_SERVICE_CLASS_CONFIG.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_SERVICE_CLASS_CONFIG.md
|
656f3075b601-0
|
The first user\-defined service class is service class 6, which is named Service class \#1\. The following query displays the current configuration for service classes greater than 4\. For a list of service class IDs, see [WLM service class IDs](cm-c-wlm-system-tables-and-views.md#wlm-service-class-ids)\.
```
select rtrim(name) as name,
num_query_tasks as slots,
query_working_mem as mem,
max_execution_time as max_time,
user_group_wild_card as user_wildcard,
query_group_wild_card as query_wildcard
from stv_wlm_service_class_config
where service_class > 4;
name | slots | mem | max_time | user_wildcard | query_wildcard
-----------------------------+-------+-----+----------+---------------+---------------
Service class for super user | 1 | 535 | 0 | false | false
Queue 1 | 5 | 125 | 0 | false | false
Queue 2 | 5 | 125 | 0 | false | false
Queue 3 | 5 | 125 | 0 | false | false
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_SERVICE_CLASS_CONFIG.md
|
656f3075b601-1
|
Queue 2 | 5 | 125 | 0 | false | false
Queue 3 | 5 | 125 | 0 | false | false
Queue 4 | 5 | 627 | 0 | false | false
Queue 5 | 5 | 125 | 0 | true | true
Default queue | 5 | 125 | 0 | false | false
```
The following query shows the status of a dynamic WLM transition\. While the transition is in process, `num_query_tasks` and `query_working_mem` are updated until they equal the target values\. For more information, see [WLM dynamic and static configuration properties](cm-c-wlm-dynamic-properties.md)\.
```
select rtrim(name) as name,
num_query_tasks as slots,
target_num_query_tasks as target_slots,
query_working_mem as memory,
target_query_working_mem as target_memory
from stv_wlm_service_class_config
where num_query_tasks > target_num_query_tasks
or query_working_mem > target_query_working_mem
and service_class > 5;
name | slots | target_slots | memory | target_mem
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_SERVICE_CLASS_CONFIG.md
|
656f3075b601-2
|
and service_class > 5;
name | slots | target_slots | memory | target_mem
------------------+-------+--------------+--------+------------
Queue 3 | 5 | 15 | 125 | 375
Queue 5 | 10 | 5 | 250 | 125
(2 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STV_WLM_SERVICE_CLASS_CONFIG.md
|
78b75af123e7-0
|
Now that you have chosen the sort keys and distribution styles for each of the tables, you can create the tables using those attributes and reload the data\. You will allow the COPY command to analyze the load data and apply compression encodings automatically\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-recreate-test-data.md
|
49217924152b-0
|
1. You need to drop the SSB tables before you run the CREATE TABLE commands\.
Execute the following commands\.
```
drop table part cascade;
drop table supplier cascade;
drop table customer cascade;
drop table dwdate cascade;
drop table lineorder cascade;
```
1. Create the tables with sort keys and distribution styles\.
Execute the following set of SQL CREATE TABLE commands\.
```
CREATE TABLE part (
p_partkey integer not null sortkey distkey,
p_name varchar(22) not null,
p_mfgr varchar(6) not null,
p_category varchar(7) not null,
p_brand1 varchar(9) not null,
p_color varchar(11) not null,
p_type varchar(25) not null,
p_size integer not null,
p_container varchar(10) not null
);
CREATE TABLE supplier (
s_suppkey integer not null sortkey,
s_name varchar(25) not null,
s_address varchar(25) not null,
s_city varchar(10) not null,
s_nation varchar(15) not null,
s_region varchar(12) not null,
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-recreate-test-data.md
|
49217924152b-1
|
s_nation varchar(15) not null,
s_region varchar(12) not null,
s_phone varchar(15) not null)
diststyle all;
CREATE TABLE customer (
c_custkey integer not null sortkey,
c_name varchar(25) not null,
c_address varchar(25) not null,
c_city varchar(10) not null,
c_nation varchar(15) not null,
c_region varchar(12) not null,
c_phone varchar(15) not null,
c_mktsegment varchar(10) not null)
diststyle all;
CREATE TABLE dwdate (
d_datekey integer not null sortkey,
d_date varchar(19) not null,
d_dayofweek varchar(10) not null,
d_month varchar(10) not null,
d_year integer not null,
d_yearmonthnum integer not null,
d_yearmonth varchar(8) not null,
d_daynuminweek integer not null,
d_daynuminmonth integer not null,
d_daynuminyear integer not null,
d_monthnuminyear integer not null,
d_weeknuminyear integer not null,
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-recreate-test-data.md
|
49217924152b-2
|
d_daynuminyear integer not null,
d_monthnuminyear integer not null,
d_weeknuminyear integer not null,
d_sellingseason varchar(13) not null,
d_lastdayinweekfl varchar(1) not null,
d_lastdayinmonthfl varchar(1) not null,
d_holidayfl varchar(1) not null,
d_weekdayfl varchar(1) not null)
diststyle all;
CREATE TABLE lineorder (
lo_orderkey integer not null,
lo_linenumber integer not null,
lo_custkey integer not null,
lo_partkey integer not null distkey,
lo_suppkey integer not null,
lo_orderdate integer not null sortkey,
lo_orderpriority varchar(15) not null,
lo_shippriority varchar(1) not null,
lo_quantity integer not null,
lo_extendedprice integer not null,
lo_ordertotalprice integer not null,
lo_discount integer not null,
lo_revenue integer not null,
lo_supplycost integer not null,
lo_tax integer not null,
lo_commitdate integer not null,
lo_shipmode varchar(10) not null
);
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-recreate-test-data.md
|
49217924152b-3
|
lo_commitdate integer not null,
lo_shipmode varchar(10) not null
);
```
1. Load the tables using the same sample data\.
1. Open the `loadssb.sql` script that you created in the first step\.
1. Delete `compupdate off` from each COPY statement\. This time, you will allow COPY to apply compression encodings\.
For reference, the edited script should look like the following:
```
copy customer from 's3://awssampledbuswest2/ssbgz/customer'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip region 'us-west-2';
copy dwdate from 's3://awssampledbuswest2/ssbgz/dwdate'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip region 'us-west-2';
copy lineorder from 's3://awssampledbuswest2/ssbgz/lineorder'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-recreate-test-data.md
|
49217924152b-4
|
gzip region 'us-west-2';
copy part from 's3://awssampledbuswest2/ssbgz/part'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip region 'us-west-2';
copy supplier from 's3://awssampledbuswest2/ssbgz/supplier'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip region 'us-west-2';
```
1. Save the file\.
1. Execute the COPY commands either by running the SQL script or by copying and pasting the commands into your SQL client\.
**Note**
The load operation will take about 10 to 15 minutes\. This might be a good time to get another cup of tea or feed the fish\.
Your results should look similar to the following\.
```
Warnings:
Load into table 'customer' completed, 3000000 record(s) loaded successfully.
...
...
Script execution finished
Total script execution time: 12m 15s
```
1. Record the load time in the benchmarks table\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-recreate-test-data.md
|
49217924152b-5
|
Total script execution time: 12m 15s
```
1. Record the load time in the benchmarks table\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables-recreate-test-data.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-recreate-test-data.md
|
94a41d661c44-0
|
[Step 7: Retest system performance after tuning](tutorial-tuning-tables-retest.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-recreate-test-data.md
|
ef106b2c5a11-0
|
Returns an array of the names of any schemas in the current search path\. The current search path is defined in the search\_path parameter\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SCHEMAS.md
|
e102ba11750c-0
|
**Note**
This is a leader\-node function\. This function returns an error if it references a user\-created table, an STL or STV system table, or an SVV or SVL system view\.
```
current_schemas(include_implicit)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SCHEMAS.md
|
f84e1a65e592-0
|
*include\_implicit*
If true, specifies that the search path should include any implicitly included system schemas\. Valid values are `true` and `false`\. Typically, if `true`, this parameter returns the `pg_catalog` schema in addition to the current schema\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SCHEMAS.md
|
189eba104442-0
|
Returns a CHAR or VARCHAR string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SCHEMAS.md
|
9ffabc130bbc-0
|
The following example returns the names of the schemas in the current search path, not including implicitly included system schemas:
```
select current_schemas(false);
current_schemas
-----------------
{public}
(1 row)
```
The following example returns the names of the schemas in the current search path, including implicitly included system schemas:
```
select current_schemas(true);
current_schemas
---------------------
{pg_catalog,public}
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_SCHEMAS.md
|
fd873bd549c0-0
|
Creates a new scalar user\-defined function \(UDF\) using either a SQL SELECT clause or a Python program\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
92a142c4afe7-0
|
```
CREATE [ OR REPLACE ] FUNCTION f_function_name
( { [py_arg_name py_arg_data_type |
sql_arg_data_type } [ , ... ] ] )
RETURNS data_type
{ VOLATILE | STABLE | IMMUTABLE }
AS $$
{ python_program | SELECT_clause }
$$ LANGUAGE { plpythonu | sql }
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
6ec4572b7261-0
|
OR REPLACE
Specifies that if a function with the same name and input argument data types, or *signature*, as this one already exists, the existing function is replaced\. You can only replace a function with a new function that defines an identical set of data types\. You must be a superuser to replace a function\.
If you define a function with the same name as an existing function but a different signature, you create a new function\. In other words, the function name is overloaded\. For more information, see [Overloading function names](udf-naming-udfs.md#udf-naming-overloading-function-names)\.
*f\_function\_name*
The name of the function\. If you specify a schema name \(such as `myschema.myfunction`\), the function is created using the specified schema\. Otherwise, the function is created in the current schema\. For more information about valid names, see [Names and identifiers](r_names.md)\.
We recommend that you prefix all UDF names with `f_`\. Amazon Redshift reserves the `f_` prefix for UDF names, so by using the `f_` prefix, you ensure that your UDF name will not conflict with any existing or future Amazon Redshift built\-in SQL function names\. For more information, see [Naming UDFs](udf-naming-udfs.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
6ec4572b7261-1
|
You can define more than one function with the same function name if the data types for the input arguments are different\. In other words, the function name is overloaded\. For more information, see [Overloading function names](udf-naming-udfs.md#udf-naming-overloading-function-names)\.
*py\_arg\_name py\_arg\_data\_type \| sql\_arg\_data\_type*
For a Python UDF, a list of input argument names and data types\. For a SQL UDF, a list of data types, without argument names\. In a Python UDF, refer to arguments using the argument names\. In a SQL UDF, refer to arguments using $1, $2, and so on, based on the order of the arguments in the argument list\.
For a SQL UDF, the input and return data types can be any standard Amazon Redshift data type\. For a Python UDF, the input and return data types can be SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, DOUBLE PRECISION, BOOLEAN, CHAR, VARCHAR, DATE, or TIMESTAMP\. In addition, Python user\-defined functions \(UDFs\) support a data type of ANYELEMENT\. This is automatically converted to a standard data type based on the data type of the corresponding argument supplied at runtime\. If multiple arguments use ANYELEMENT, they will all resolve to the same data type at runtime, based on the first ANYELEMENT argument in the list\. For more information, see [Python UDF data types](udf-data-types.md) and [Data types](c_Supported_data_types.md)\.
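As a hedged sketch of ANYELEMENT \(the function name is hypothetical\), both arguments below resolve to the data type of the first argument supplied at runtime\.
```
create function f_first_not_null (a ANYELEMENT, b ANYELEMENT)
returns ANYELEMENT
stable
as $$
  if a is not None:
      return a
  return b
$$ language plpythonu;
```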
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
6ec4572b7261-2
|
You can specify a maximum of 32 arguments\.
RETURNS *data\_type*
The data type of the value returned by the function\. The RETURNS data type can be any standard Amazon Redshift data type\. In addition, Python UDFs can use a data type of ANYELEMENT, which is automatically converted to a standard data type based on the argument supplied at runtime\. If you specify ANYELEMENT for the return data type, at least one argument must use ANYELEMENT\. The actual return data type matches the data type supplied for the ANYELEMENT argument when the function is called\. For more information, see [Python UDF data types](udf-data-types.md)\.
VOLATILE \| STABLE \| IMMUTABLE
Informs the query optimizer about the volatility of the function\.
You will get the best optimization if you label your function with the strictest volatility category that is valid for it\. However, if the category is too strict, there is a risk that the optimizer will erroneously skip some calls, resulting in an incorrect result set\. In order of strictness, beginning with the least strict, the volatility categories are as follows:
+ VOLATILE
+ STABLE
+ IMMUTABLE
VOLATILE
Given the same arguments, the function can return different results on successive calls, even for the rows in a single statement\. The query optimizer can't make any assumptions about the behavior of a volatile function, so a query that uses a volatile function must reevaluate the function for every input row\.
STABLE
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
6ec4572b7261-3
|
STABLE
Given the same arguments, the function is guaranteed to return the same results for all rows processed within a single statement\. The function can return different results when called in different statements\. This category allows the optimizer to optimize multiple calls of the function within a single statement to a single call for the statement\.
IMMUTABLE
Given the same arguments, the function always returns the same result, forever\. When a query calls an `IMMUTABLE` function with constant arguments, the optimizer pre\-evaluates the function\.
AS $$ *statement* $$
A construct that encloses the statement to be executed\. The literal keywords `AS $$` and `$$` are required\.
Amazon Redshift requires you to enclose the statement in your function by using a format called dollar quoting\. Anything within the enclosure is passed exactly as is\. You don't need to escape any special characters because the contents of the string are written literally\.
With *dollar quoting*, you use a pair of dollar signs \($$\) to signify the start and the end of the statement to execute, as shown in the following example\.
```
$$ my statement $$
```
Optionally, between the dollar signs in each pair, you can specify a string to help identify the statement\. The string that you use must be the same in both the start and the end of the enclosure pairs\. This string is case\-sensitive, and it follows the same constraints as an unquoted identifier except that it can't contain dollar signs\. The following example uses the string `test`\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
6ec4572b7261-4
|
```
$test$ my statement $test$
```
For more information about dollar quoting, see "Dollar\-quoted String Constants" under [ Lexical Structure](https://www.postgresql.org/docs/9.4/static/sql-syntax-lexical.html) in the PostgreSQL documentation\.
*python\_program*
A valid executable Python program that returns a value\. The statement that you pass in with the function must conform to indentation requirements as specified in the [Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/#indentation) on the Python website\. For more information, see [Python language support for UDFs](udf-python-language-support.md)\.
*SQL\_clause*
A SQL SELECT clause\.
The SELECT clause can't include any of the following types of clauses:
+ FROM
+ INTO
+ WHERE
+ GROUP BY
+ ORDER BY
+ LIMIT
LANGUAGE \{ plpythonu \| sql \}
For Python, specify `plpythonu`\. For SQL, specify `sql`\. You must have permission for usage on language for SQL or plpythonu\. For more information, see [UDF security and privileges](udf-security-and-privileges.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
74e49c3b7ff0-0
|
You can call another SQL user\-defined function \(UDF\) from within a SQL UDF\. The nested function must exist when you run the CREATE FUNCTION command\. Amazon Redshift doesn't track dependencies for UDFs, so if you drop the nested function, Amazon Redshift doesn't return an error\. However, the UDF will fail if the nested function doesn't exist\. For example, the following function calls the `f_sql_greater` function in the SELECT clause\.
```
create function f_sql_commission (float, float )
returns float
stable
as $$
select f_sql_greater ($1, $2)
$$ language sql;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
7808f9e62c2a-0
|
To create a UDF, you must have permission for usage on language for SQL or plpythonu \(Python\)\. By default, USAGE ON LANGUAGE SQL is granted to PUBLIC\. However, you must explicitly grant USAGE ON LANGUAGE PLPYTHONU to specific users or groups\.
To revoke usage for SQL, first revoke usage from PUBLIC\. Then grant usage on SQL only to the specific users or groups permitted to create SQL UDFs\. The following example revokes usage on SQL from PUBLIC then grants usage to the user group `udf_devs`\.
```
revoke usage on language sql from PUBLIC;
grant usage on language sql to group udf_devs;
```
To execute a UDF, you must have execute permission for each function\. By default, execute permission for new UDFs is granted to PUBLIC\. To restrict usage, revoke execute from PUBLIC for the function\. Then grant the privilege to specific individuals or groups\.
The following example revokes execution on function `f_py_greater` from PUBLIC then grants usage to the user group `udf_devs`\.
```
revoke execute on function f_py_greater(a float, b float) from PUBLIC;
grant execute on function f_py_greater(a float, b float) to group udf_devs;
```
Superusers have all privileges by default\.
For more information, see [GRANT](r_GRANT.md) and [REVOKE](r_REVOKE.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
52c7d3ce6a45-0
|
The following example creates a Python UDF that compares two numbers and returns the larger value\.
```
create function f_py_greater (a float, b float)
returns float
stable
as $$
if a > b:
    return a
return b
$$ language plpythonu;
```
The following example queries the SALES table and calls the new `f_py_greater` function to return either COMMISSION or 20 percent of PRICEPAID, whichever is greater\.
```
select f_py_greater (commission, pricepaid*0.20) from sales;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
4f7c1d5d186b-0
|
The following example creates a function that compares two numbers and returns the larger value\.
```
create function f_sql_greater (float, float)
returns float
stable
as $$
select case when $1 > $2 then $1
else $2
end
$$ language sql;
```
The following query calls the new `f_sql_greater` function to query the SALES table and returns either COMMISSION or 20 percent of PRICEPAID, whichever is greater\.
```
select f_sql_greater (commission, pricepaid*0.20) from sales;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CREATE_FUNCTION.md
|
69908ba3aacc-0
|
The RANK window function determines the rank of a value in a group of values, based on the ORDER BY expression in the OVER clause\. If the optional PARTITION BY clause is present, the rankings are reset for each group of rows\. Rows with equal values for the ranking criteria receive the same rank\. Amazon Redshift adds the number of tied rows to the tied rank to calculate the next rank and thus the ranks might not be consecutive numbers\. For example, if two rows are ranked 1, the next rank is 3\.
RANK differs from the [DENSE\_RANK window function](r_WF_DENSE_RANK.md) in one respect: For DENSE\_RANK, if two or more rows tie, there is no gap in the sequence of ranked values\. For example, if two rows are ranked 1, the next rank is 2\.
You can have ranking functions with different PARTITION BY and ORDER BY clauses in the same query\.
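As a minimal sketch of that difference, the following query against the WINSALES table computes both functions side by side; ties on QTY produce gaps in `rnk` but not in `dense_rnk`\.
```
select salesid, qty,
rank() over (order by qty) as rnk,
dense_rank() over (order by qty) as dense_rnk
from winsales
order by 2,1;
```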
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RANK.md
|
89e3854546a0-0
|
```
RANK () OVER
(
[ PARTITION BY expr_list ]
[ ORDER BY order_list ]
)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RANK.md
|
963a489e7c90-0
|
\( \)
The function takes no arguments, but the empty parentheses are required\.
OVER
The window clauses for the RANK function\.
PARTITION BY *expr\_list*
Optional\. One or more expressions that define the window\.
ORDER BY *order\_list*
Optional\. Defines the columns on which the ranking values are based\. If no PARTITION BY is specified, ORDER BY uses the entire table\. If ORDER BY is omitted, the return value is 1 for all rows\.
If ORDER BY does not produce a unique ordering, the order of the rows is nondeterministic\. For more information, see [Unique ordering of data for window functions](r_Examples_order_by_WF.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RANK.md
|
64b58ec05228-0
|
INTEGER
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RANK.md
|
1f4a105aadaf-0
|
The following example orders the table by the quantity sold \(default ascending\) and assigns a rank to each row\. A rank value of 1 is the highest ranked value\. The results are sorted after the window function results are applied:
```
select salesid, qty,
rank() over (order by qty) as rnk
from winsales
order by 2,1;
salesid | qty | rnk
--------+-----+-----
10001 | 10 | 1
10006 | 10 | 1
30001 | 10 | 1
40005 | 10 | 1
30003 | 15 | 5
20001 | 20 | 6
20002 | 20 | 6
30004 | 20 | 6
10005 | 30 | 9
30007 | 30 | 9
40001 | 40 | 11
(11 rows)
```
Note that the outer ORDER BY clause in this example includes columns 2 and 1 to make sure that Amazon Redshift returns consistently sorted results each time this query is run\. For example, rows with sales IDs 10001 and 10006 have identical QTY and RNK values\. Ordering the final result set by column 1 ensures that row 10001 always falls before 10006\. For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RANK.md
|
1f4a105aadaf-1
|
In the following example, the ordering is reversed for the window function \(`order by qty desc`\)\. Now the highest rank value applies to the largest QTY value\.
```
select salesid, qty,
rank() over (order by qty desc) as rank
from winsales
order by 2,1;
salesid | qty | rank
---------+-----+-----
10001 | 10 | 8
10006 | 10 | 8
30001 | 10 | 8
40005 | 10 | 8
30003 | 15 | 7
20001 | 20 | 4
20002 | 20 | 4
30004 | 20 | 4
10005 | 30 | 2
30007 | 30 | 2
40001 | 40 | 1
(11 rows)
```
For a description of the WINSALES table, see [Overview example for window functions](c_Window_functions.md#r_Window_function_example)\.
The following example partitions the table by SELLERID, orders each partition by the quantity \(in descending order\), and assigns a rank to each row\. The results are sorted after the window function results are applied\.
```
select salesid, sellerid, qty, rank() over
(partition by sellerid
order by qty desc) as rank
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RANK.md
|
1f4a105aadaf-2
|
select salesid, sellerid, qty, rank() over
(partition by sellerid
order by qty desc) as rank
from winsales
order by 2,3,1;
salesid | sellerid | qty | rank
--------+----------+-----+-----
10001 | 1 | 10 | 2
10006 | 1 | 10 | 2
10005 | 1 | 30 | 1
20001 | 2 | 20 | 1
20002 | 2 | 20 | 1
30001 | 3 | 10 | 4
30003 | 3 | 15 | 3
30004 | 3 | 20 | 2
30007 | 3 | 30 | 1
40005 | 4 | 10 | 2
40001 | 4 | 40 | 1
(11 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WF_RANK.md
|
57c6e1b8e1b5-0
|
Synonym for the REPEAT function\.
See [REPEAT function](r_REPEAT.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_REPLICATE.md
|
c0eb6182b3da-0
|
Uniqueness, primary key, and foreign key constraints are informational only; *they are not enforced by Amazon Redshift*\. Nonetheless, primary keys and foreign keys are used as planning hints and they should be declared if your ETL process or some other process in your application enforces their integrity\.
For example, the query planner uses primary and foreign keys in certain statistical computations, to infer uniqueness and referential relationships that affect subquery decorrelation techniques, to order large numbers of joins, and to eliminate redundant joins\.
The planner leverages these key relationships, but it assumes that all keys in Amazon Redshift tables are valid as loaded\. If your application allows invalid foreign keys or primary keys, some queries could return incorrect results\. For example, a SELECT DISTINCT query might return duplicate rows if the primary key is not unique\. Do not define key constraints for your tables if you doubt their validity\. On the other hand, you should always declare primary and foreign keys and uniqueness constraints when you know that they are valid\.
Amazon Redshift *does* enforce NOT NULL column constraints\.
For more information about table constraints, see [CREATE TABLE](r_CREATE_TABLE_NEW.md)\. A brief sketch follows\.
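As an illustration only, the following simplified definitions \(hypothetical tables, not the full sample schema\) show how informational primary key and foreign key constraints, and an enforced NOT NULL constraint, might be declared\.
```
create table category_demo (
  catid smallint primary key,     -- informational only; not enforced
  catname varchar(20) not null    -- NOT NULL is enforced
);

create table event_demo (
  eventid integer primary key,                       -- informational only
  catid smallint references category_demo (catid),   -- informational foreign key
  eventname varchar(200) not null
);
```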
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Defining_constraints.md
|
3a1e9c014323-0
|
You can create a custom UDF based on the Python programming language\. The [Python 2\.7 standard library](https://docs.python.org/2/library/index.html) is available for use in UDFs, with the exception of the following modules:
+ ScrolledText
+ Tix
+ Tkinter
+ tk
+ turtle
+ smtpd
In addition to the Python Standard Library, the following modules are part of the Amazon Redshift implementation \(a brief usage sketch follows this list\):
+ [numpy 1\.8\.2](http://www.numpy.org/)
+ [pandas 0\.14\.1](https://pandas.pydata.org/)
+ [python\-dateutil 2\.2](https://dateutil.readthedocs.org/en/latest/)
+ [pytz 2014\.7](https://pypi.org/project/pytz/2014.7/)
+ [scipy 0\.12\.1](https://www.scipy.org/)
+ [six 1\.3\.0](https://pypi.org/project/six/1.3.0/)
+ [wsgiref 0\.1\.2](https://pypi.python.org/pypi/wsgiref)
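For example, a scalar UDF can import one of the preinstalled modules directly in its body\. The following is a minimal sketch; the function name `f_hypotenuse` is hypothetical and used only for illustration\.
```
CREATE OR REPLACE FUNCTION f_hypotenuse (a float, b float)
RETURNS float
IMMUTABLE
AS $$
    import numpy as np
    # numpy.hypot computes sqrt(a*a + b*b)
    return float(np.hypot(a, b))
$$ LANGUAGE plpythonu;
```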
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-python-language-support.md
|
3a1e9c014323-1
|
+ [wsgiref 0\.1\.2](https://pypi.python.org/pypi/wsgiref)
You can also import your own custom Python modules and make them available for use in UDFs by executing a [CREATE LIBRARY](r_CREATE_LIBRARY.md) command\. For more information, see [Importing custom Python library modules](#udf-importing-custom-python-library-modules)\.
**Important**
Amazon Redshift blocks all network access and write access to the file system through UDFs\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-python-language-support.md
|
99655b153ce7-0
|
You define scalar functions using Python language syntax\. In addition to the native Python Standard Library modules and Amazon Redshift preinstalled modules, you can create your own custom Python library modules and import the libraries into your clusters, or use existing libraries provided by Python or third parties\.
You cannot create a library that contains a module with the same name as a Python Standard Library module or an Amazon Redshift preinstalled Python module\. If an existing user\-installed library uses the same Python package as a library you create, you must drop the existing library before installing the new library\.
You must be a superuser or have `USAGE ON LANGUAGE plpythonu` privilege to install custom libraries; however, any user with sufficient privileges to create functions can use the installed libraries\. You can query the [PG\_LIBRARY](r_PG_LIBRARY.md) system catalog to view information about the libraries installed on your cluster\.
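For example, the following query \(a minimal sketch\) lists the libraries currently installed on the cluster\.
```
select * from pg_library;
```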
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-python-language-support.md
|
1cc83e96c80e-0
|
This section provides an example of importing a custom Python module into your cluster\. To perform the steps in this section, you must have an Amazon S3 bucket, where you upload the library package\. You then install the package in your cluster\. For more information about creating buckets, go to [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/CreatingaBucket.html) in the *Amazon Simple Storage Service Console User Guide*\.
In this example, let's suppose that you create UDFs to work with positions and distances in your data\. Connect to your Amazon Redshift cluster from a SQL client tool, and run the following commands to create the functions\.
```
CREATE FUNCTION f_distance (x1 float, y1 float, x2 float, y2 float) RETURNS float IMMUTABLE as $$
def distance(x1, y1, x2, y2):
import math
return math.sqrt((y2 - y1) ** 2 + (x2 - x1) ** 2)
return distance(x1, y1, x2, y2)
$$ LANGUAGE plpythonu;
CREATE FUNCTION f_within_range (x1 float, y1 float, x2 float, y2 float) RETURNS bool IMMUTABLE as $$
def distance(x1, y1, x2, y2):
import math
return math.sqrt((y2 - y1) ** 2 + (x2 - x1) ** 2)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-python-language-support.md
|
1cc83e96c80e-1
|
import math
return math.sqrt((y2 - y1) ** 2 + (x2 - x1) ** 2)
return distance(x1, y1, x2, y2) < 20
$$ LANGUAGE plpythonu;
```
Note that a few lines of code are duplicated in the previous functions\. This duplication is necessary because a UDF cannot reference the contents of another UDF, and both functions require the same functionality\. However, instead of duplicating code in multiple functions, you can create a custom library and configure your functions to use it\.
To do so, first create the library package by following these steps:
1. Create a folder named **geometry**\. This folder is the top level package of the library\.
1. In the **geometry** folder, create a file named `__init__.py`\. Note that the file name contains two double underscore characters\. This file indicates to Python that the package can be initialized\.
1. Also in the **geometry** folder, create a folder named **trig**\. This folder is the subpackage of the library\.
1. In the **trig** folder, create another file named `__init__.py` and a file named `line.py`\. In this folder, `__init__.py` indicates to Python that the subpackage can be initialized and that `line.py` is the file that contains library code\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-python-language-support.md
|
1cc83e96c80e-2
|
Your folder and file structure should be the same as the following:
```
geometry/
__init__.py
trig/
__init__.py
line.py
```
For more information about package structure, go to [Modules](https://docs.python.org/2/tutorial/modules.html) in the Python tutorial on the Python website\.
1. The following code contains a class and member functions for the library\. Copy and paste it into `line.py`\.
```
class LineSegment:
def __init__(self, x1, y1, x2, y2):
self.x1 = x1
self.y1 = y1
self.x2 = x2
self.y2 = y2
def angle(self):
import math
return math.atan2(self.y2 - self.y1, self.x2 - self.x1)
def distance(self):
import math
return math.sqrt((self.y2 - self.y1) ** 2 + (self.x2 - self.x1) ** 2)
```
After you have created the package, do the following to prepare the package and upload it to Amazon S3\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-python-language-support.md
|
1cc83e96c80e-3
|
```
After you have created the package, do the following to prepare the package and upload it to Amazon S3\.
1. Compress the contents of the **geometry** folder into a \.zip file named **geometry\.zip**\. Do not include the **geometry** folder itself; only include the contents of the folder as shown following:
```
geometry.zip
__init__.py
trig/
__init__.py
line.py
```
1. Upload **geometry\.zip** to your Amazon S3 bucket\.
**Important**
If the Amazon S3 bucket does not reside in the same region as your Amazon Redshift cluster, you must use the REGION option to specify the region in which the data is located\. For more information, see [CREATE LIBRARY](r_CREATE_LIBRARY.md)\.
1. From your SQL client tool, run the following command to install the library\. Replace *<bucket\_name>* with the name of your bucket, and replace *<access key id>* and *<secret key>* with an access key and secret access key from your AWS Identity and Access Management \(IAM\) user credentials\.
```
CREATE LIBRARY geometry LANGUAGE plpythonu FROM 's3://<bucket_name>/geometry.zip' CREDENTIALS 'aws_access_key_id=<access key id>;aws_secret_access_key=<secret key>';
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-python-language-support.md
|
1cc83e96c80e-4
|
```
After you install the library in your cluster, you need to configure your functions to use the library\. To do this, run the following commands\.
```
CREATE OR REPLACE FUNCTION f_distance (x1 float, y1 float, x2 float, y2 float) RETURNS float IMMUTABLE as $$
from trig.line import LineSegment
return LineSegment(x1, y1, x2, y2).distance()
$$ LANGUAGE plpythonu;
CREATE OR REPLACE FUNCTION f_within_range (x1 float, y1 float, x2 float, y2 float) RETURNS bool IMMUTABLE as $$
from trig.line import LineSegment
return LineSegment(x1, y1, x2, y2).distance() < 20
$$ LANGUAGE plpythonu;
```
In the preceding commands, the `from trig.line import LineSegment` statement eliminates the duplicated code from the original functions in this section\. You can reuse the functionality provided by this library in multiple UDFs\. Note that to import the module, you only need to specify the path to the subpackage and module name \(`trig.line`\)\.
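As a usage sketch, the rewritten functions can be called like any other scalar function\. The table name `points` and its columns are hypothetical and used only for illustration\.
```
select x1, y1, x2, y2,
       f_distance(x1, y1, x2, y2) as dist
from points
where f_within_range(x1, y1, x2, y2);
```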
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-python-language-support.md
|
2e6f22a549a8-0
|
If a scheduled maintenance occurs while a query is running, the query is terminated and rolled back, and you need to restart it\. Schedule long\-running operations, such as large data loads or VACUUM operations, to avoid maintenance windows\. You can also minimize the risk, and make restarts easier when they are needed, by performing data loads in smaller increments and managing the size of your VACUUM operations\. For more information, see [Load data in sequential blocks](c_best-practices-load-data-in-sequential-blocks.md) and [Vacuuming tables](t_Reclaiming_storage_space202.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-avoid-maintenance.md
|
2e6f22a549a8-1
|
For more information, see [Load data in sequential blocks](c_best-practices-load-data-in-sequential-blocks.md) and [Vacuuming tables](t_Reclaiming_storage_space202.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_best-practices-avoid-maintenance.md
|
bca9e424401f-0
|
The DLOG10 function returns the base 10 logarithm of the input parameter\.
Synonym of [LOG function](r_LOG.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DLOG10.md
|
2959c26675e0-0
|
```
DLOG10(number)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DLOG10.md
|
e2715e710e81-0
|
*number*
The input parameter is a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DLOG10.md
|
87578f79f209-0
|
The DLOG10 function returns a double precision number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DLOG10.md
|
c27f8d23de78-0
|
The following example returns the base 10 logarithm of the number 100:
```
select dlog10(100);
dlog10
--------
2
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DLOG10.md
|
9bb32e480495-0
|
Before you use this guide, you should complete these tasks\.
+ Install a SQL client\.
+ Launch an Amazon Redshift cluster\.
+ Connect your SQL client to the cluster master database\.
For step\-by\-step instructions, see [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/)\.
You should also know how to use your SQL client and should have a fundamental understanding of the SQL language\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-dev-guide-prereqs.md
|
3563acd37e40-0
|
Amazon Redshift database security is distinct from other types of Amazon Redshift security\. In addition to database security, which is described in this section, Amazon Redshift provides these features to manage security:
+ **Sign\-in credentials** — Access to your Amazon Redshift Management Console is controlled by your AWS account privileges\. For more information, see [Sign\-in credentials](https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html)\.
+ **Access management** — To control access to specific Amazon Redshift resources, you define AWS Identity and Access Management \(IAM\) accounts\. For more information, see [Controlling access to Amazon Redshift resources](https://docs.aws.amazon.com/redshift/latest/mgmt/iam-redshift-user-mgmt.html)\.
+ **Cluster security groups** — To grant other users inbound access to an Amazon Redshift cluster, you define a cluster security group and associate it with a cluster\. For more information, see [Amazon Redshift cluster security groups](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-security-groups.html)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_security-overview.md
|
3563acd37e40-1
|
+ **VPC** — To protect access to your cluster by using a virtual networking environment, you can launch your cluster in an Amazon Virtual Private Cloud \(VPC\)\. For more information, see [Managing clusters in Virtual Private Cloud \(VPC\)](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-vpc.html)\.
+ **Cluster encryption** — To encrypt the data in all your user\-created tables, you can enable cluster encryption when you launch the cluster\. For more information, see [Amazon Redshift clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html)\.
+ **SSL connections** — To encrypt the connection between your SQL client and your cluster, you can use secure sockets layer \(SSL\) encryption\. For more information, see [Connect to your cluster using SSL](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_security-overview.md
|
3563acd37e40-2
|
+ **Load data encryption** — To encrypt your table load data files when you upload them to Amazon S3, you can use either server\-side encryption or client\-side encryption\. When you load from server\-side encrypted data, Amazon S3 handles decryption transparently\. When you load from client\-side encrypted data, the Amazon Redshift COPY command decrypts the data as it loads the table\. For more information, see [Uploading encrypted data to Amazon S3](t_uploading-encrypted-data.md)\.
+ **Data in transit** — To protect your data in transit within the AWS cloud, Amazon Redshift uses hardware accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY, UNLOAD, backup, and restore operations\.
+ **Column\-level access control** — To have column\-level access control for data in Amazon Redshift, use column\-level grant and revoke statements without having to implement views\-based access control or use another system\. A brief sketch follows this list\.
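As a brief sketch only, column\-level privileges can be granted and revoked with statements similar to the following\. The VENUE table is from the sample database; the group name `analysts` is hypothetical\.
```
grant select(venuename, venuecity) on venue to group analysts;
revoke select(venueseats) on venue from group analysts;
```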
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_security-overview.md
|
983c9e6ff897-0
|
Whenever you add, delete, or modify a significant number of rows, you should run a VACUUM command and then an ANALYZE command\. A *vacuum* recovers the space from deleted rows and restores the sort order\. The ANALYZE command updates the statistics metadata, which enables the query optimizer to generate more accurate query plans\. For more information, see [Vacuuming tables](t_Reclaiming_storage_space202.md)\.
If you load the data in sort key order, a vacuum is fast\. In this tutorial, you added a significant number of rows, but you added them to empty tables\. That being the case, there is no need to resort, and you didn't delete any rows\. COPY automatically updates statistics after loading an empty table, so your statistics should be up\-to\-date\. However, as a matter of good housekeeping, you complete this tutorial by vacuuming and analyzing your database\.
To vacuum and analyze the database, execute the following commands\.
```
vacuum;
analyze;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-vacuum.md
|
bb216f1a8147-0
|
[Step 7: Clean up your resources](tutorial-loading-data-clean-up.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-loading-data-vacuum.md
|
7ce4951229f1-0
|
To create a UDF, you must have permission for usage on language for SQL or plpythonu \(Python\)\. By default, USAGE ON LANGUAGE SQL is granted to PUBLIC, but you must explicitly grant USAGE ON LANGUAGE PLPYTHONU to specific users or groups\.
To revoke usage for SQL, first revoke usage from PUBLIC\. Then grant usage on SQL only to the specific users or groups permitted to create SQL UDFs\. The following example revokes usage on SQL from PUBLIC\. Then it grants usage to the user group `udf_devs`\.
```
revoke usage on language sql from PUBLIC;
grant usage on language sql to group udf_devs;
```
To execute a UDF, you must have execute permission for each function\. By default, execute permission for new UDFs is granted to PUBLIC\. To restrict usage, revoke execute from PUBLIC for the function\. Then grant the privilege to specific individuals or groups\.
The following example revokes execution on function `f_py_greater` from PUBLIC\. Then it grants usage to the user group `udf_devs`\.
```
revoke execute on function f_py_greater(a float, b float) from PUBLIC;
grant execute on function f_py_greater(a float, b float) to group udf_devs;
```
Superusers have all privileges by default\.
For more information, see [GRANT](r_GRANT.md) and [REVOKE](r_REVOKE.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-security-and-privileges.md
|
bde4ec4c11b7-0
|
ST\_Covers returns true if the first input geometry covers the second input geometry\. Geometry `A` covers geometry `B` if both are nonempty and every point in `B` is a point in `A`\.
ST\_Covers\(`A`, `B`\) is equivalent to ST\_CoveredBy\(`B`, `A`\)\.
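For example, the following query \(a minimal sketch\) checks whether a polygon covers a point that lies in its interior; it should return true\.
```
select st_covers(
    st_geomfromtext('POLYGON((0 0,10 0,10 10,0 10,0 0))'),
    st_geomfromtext('POINT(5 5)'));
```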
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Covers-function.md
|
74a7bb1a7d9c-0
|
```
ST_Covers(geom1, geom2)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Covers-function.md
|
b8575cb76df8-0
|
*geom1*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
*geom2*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. This value is compared with *geom1* to determine whether it is covered by *geom1*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_Covers-function.md
|