| id | text | source |
|---|---|---|
7a11963bffcf-0
|
The following example returns the degree equivalent of 0\.5 radians:
```
select degrees(.5);
degrees
------------------
28.6478897565412
(1 row)
```
The following example converts PI radians to degrees:
```
select degrees(pi());
degrees
---------
180
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DEGREES.md
|
2139c0735f89-0
|
In byte dictionary encoding, a separate dictionary of unique values is created for each block of column values on disk\. \(An Amazon Redshift disk block occupies 1 MB\.\) The dictionary contains up to 256 one\-byte values that are stored as indexes to the original data values\. If more than 256 values are stored in a single block, the extra values are written into the block in raw, uncompressed form\. The process repeats for each disk block\.
This encoding is very effective when a column contains a limited number of unique values\. This encoding is optimal when the data domain of a column is fewer than 256 unique values\. Byte\-dictionary encoding is especially space\-efficient if a CHAR column holds long character strings\.
**Note**
Byte\-dictionary encoding is not always effective when used with VARCHAR columns\. Using BYTEDICT with large VARCHAR columns might cause excessive disk usage\. We strongly recommend using a different encoding, such as LZO, for VARCHAR columns\.
Suppose a table has a COUNTRY column with a CHAR\(30\) data type\. As data is loaded, Amazon Redshift creates the dictionary and populates the COUNTRY column with the index value\. The dictionary contains the indexed unique values, and the table itself contains only the one\-byte subscripts of the corresponding values\.
**Note**
Trailing blanks are stored for fixed\-length character columns\. Therefore, in a CHAR\(30\) column, every compressed value saves 29 bytes of storage when you use the byte\-dictionary encoding\.
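The encoding is assigned when the table is created\. The following is a minimal sketch \(the table and column names are illustrative\):
```
create table venue_country
(
venueid integer,
country char(30) encode bytedict
);
```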
The following table represents the dictionary for the COUNTRY column:
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Byte_dictionary_encoding.md
|
2139c0735f89-1
|
The following table represents the dictionary for the COUNTRY column:
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c_Byte_dictionary_encoding.html)
The following table represents the values in the COUNTRY column:
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/c_Byte_dictionary_encoding.html)
The total compressed size in this example is calculated as follows: 6 different entries are stored in the dictionary \(6 \* 30 = 180\), and the table contains 10 1\-byte compressed values, for a total of 190 bytes\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_Byte_dictionary_encoding.md
|
b2d1dd31b824-0
|
TO\_DATE converts a date represented in a character string to a DATE data type\.
The second argument is a format string that indicates how the character string should be parsed to create the date value\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_DATE_function.md
|
f7f7fc4761cc-0
|
```
TO_DATE (string, format)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_DATE_function.md
|
df482927a46d-0
|
*string*
String to be converted\.
*format*
A string literal that defines the format of the input *string*, in terms of its date parts\. For a list of valid formats, see [Datetime format strings](r_FORMAT_strings.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_DATE_function.md
|
1e00e5d981ab-0
|
TO\_DATE returns a DATE whose value depends on how *string* is parsed using *format*\.
If the conversion to *format* fails, then an error is returned\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_DATE_function.md
|
3c5924d86dd6-0
|
The following command converts the date `02 Oct 2001` into the default date format:
```
select to_date ('02 Oct 2001', 'DD Mon YYYY');
to_date
------------
2001-10-02
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_TO_DATE_function.md
|
c44445346180-0
|
Records the occurrence, timestamp, XID, and other useful information when a schema quota is exceeded\.
Superusers can see all the records\. Schema owners can only see records related to the schemas they own\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SCHEMA_QUOTA_VIOLATIONS.md
|
e464561f6596-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_SCHEMA_QUOTA_VIOLATIONS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SCHEMA_QUOTA_VIOLATIONS.md
|
e379ccf779af-0
|
The following query shows the result of quota violation:
```
SELECT userid, TRIM(SCHEMA_NAME) "schema_name", quota, disk_usage, disk_usage_pct, timestamp FROM
stl_schema_quota_violations WHERE SCHEMA_NAME = 'sales_schema' ORDER BY timestamp DESC;
```
This query returns the following sample output for the specified schema:
```
userid | schema_name  | quota | disk_usage | disk_usage_pct | timestamp
-------+--------------+-------+------------+----------------+----------------------------
   104 | sales_schema |  2048 |       2798 |         136.62 | 2020-04-20 20:09:25.494723
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_SCHEMA_QUOTA_VIOLATIONS.md
|
6e8b20ba1222-0
|
Now you have an IAM role that authorizes Amazon Redshift to access the external Data Catalog and Amazon S3 for you\. At this point, you must associate that role with your Amazon Redshift cluster\.
**Note**
A new console is available for Amazon Redshift\. Choose either the **New console** or the **Original console** instructions based on the console that you are using\. The **New console** instructions are open by default\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-add-role.md
|
a42bd4ed1991-0
|
**To associate an IAM role with a cluster**
1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.
1. On the navigation menu, choose **CLUSTERS**, then choose the name of the cluster that you want to update\.
1. For **Actions**, choose **Manage IAM roles**\. The **IAM roles** page appears\.
1. Either choose **Enter ARN** and then enter the ARN of an IAM role, or choose an IAM role from the list\. Then choose **Add IAM role** to add it to the list of **Attached IAM roles**\.
1. Choose **Done** to associate the IAM role with the cluster\. The cluster is modified to complete the change\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-add-role.md
|
3165ebffa9e3-0
|
**To associate the IAM role with your cluster**
1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console\.aws\.amazon\.com/redshift/](https://console.aws.amazon.com/redshift/)\.
1. In the navigation pane, choose **Clusters**\.
1. In the list, choose the cluster that you want to manage IAM role associations for\.
1. Choose **Manage IAM Roles**\.
1. For **Available roles**, choose your IAM role\.
1. Choose **Apply Changes** to update the IAM roles that are associated with the cluster\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-getting-started-using-spectrum-add-role.md
|
6804dba536bb-0
|
If a column in a row is missing, unknown, or not applicable, it is a null value or is said to contain null\. Nulls can appear in fields of any data type that are not restricted by primary key or NOT NULL constraints\. A null is not equivalent to the value zero or to an empty string\.
Any arithmetic expression containing a null always evaluates to a null\. All operators except concatenation return a null when given a null argument or operand\.
To test for nulls, use the comparison conditions IS NULL and IS NOT NULL\. Because null represents a lack of data, a null is not equal or unequal to any value or to another null\.
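For example, the following query is a minimal sketch that uses the VENUE table from the sample database to count the rows in which VENUESEATS contains null:
```
select count(*) from venue
where venueseats is null;
```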
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Nulls.md
|
01f83d45bfd7-0
|
A scalar Python UDF incorporates a Python program that executes when the function is called and returns a single value\. The [CREATE FUNCTION](r_CREATE_FUNCTION.md) command defines the following parameters:
+ \(Optional\) Input arguments\. Each argument must have a name and a data type\.
+ One return data type\.
+ One executable Python program\.
The input and return data types can be SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, DOUBLE PRECISION, BOOLEAN, CHAR, VARCHAR, DATE, or TIMESTAMP\. In addition, Python UDFs can use the data type ANYELEMENT, which Amazon Redshift automatically converts to a standard data type based on the arguments supplied at run time\. For more information, see [ANYELEMENT data type](#udf-anyelement-data-type)\.
When an Amazon Redshift query calls a scalar UDF, the following steps occur at run time\.
1. The function converts the input arguments to Python data types\.
For a mapping of Amazon Redshift data types to Python data types, see [Python UDF data types](udf-data-types.md)\.
1. The function executes the Python program, passing the converted input arguments\.
1. The Python code returns a single value\. The data type of the return value must correspond to the RETURNS data type specified by the function definition\.
1. The function converts the Python return value to the specified Amazon Redshift data type, then returns that value to the query\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-creating-a-scalar-udf.md
|
58bc2a364245-0
|
The following example creates a function that compares two numbers and returns the larger value\. Note that the indentation of the code between the double dollar signs \($$\) is a Python requirement\. For more information, see [CREATE FUNCTION](r_CREATE_FUNCTION.md)\.
```
create function f_py_greater (a float, b float)
returns float
stable
as $$
if a > b:
return a
return b
$$ language plpythonu;
```
The following query calls the new `f_py_greater` function to query the SALES table and return either COMMISSION or 20 percent of PRICEPAID, whichever is greater\.
```
select f_py_greater (commission, pricepaid*0.20) from sales;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-creating-a-scalar-udf.md
|
6aacdfb268cf-0
|
ANYELEMENT is a *polymorphic data type*, which means that if a function is declared using ANYELEMENT for an argument's data type, the function can accept any standard Amazon Redshift data type as input for that argument when the function is called\. The ANYELEMENT argument is set to the data type actually passed to it when the function is called\.
If a function uses multiple ANYELEMENT data types, they must all resolve to the same actual data type when the function is called\. All ANYELEMENT argument data types are set to the actual data type of the first argument passed to an ANYELEMENT\. For example, a function declared as `f_equal(anyelement, anyelement)` will take any two input values, so long as they are of the same data type\.
If the return value of a function is declared as ANYELEMENT, at least one input argument must be ANYELEMENT\. The actual data type for the return value will be the same as the actual data type supplied for the ANYELEMENT input argument\.
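The following sketch illustrates this resolution rule\. Here `f_py_first` is a hypothetical function that returns its first argument, so the return data type always matches the data type of the arguments passed at run time:
```
create function f_py_first (a anyelement, b anyelement)
returns anyelement
stable
as $$
  return a
$$ language plpythonu;
```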
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/udf-creating-a-scalar-udf.md
|
cb6992ecf92d-0
|
A literal or constant is a fixed data value, composed of a sequence of characters or a numeric constant\. Amazon Redshift supports several types of literals, including:
+ Numeric literals for integer, decimal, and floating\-point numbers\. For more information, see [Integer and floating\-point literals](r_numeric_literals201.md)\.
+ Character literals, also referred to as strings, character strings, or character constants\.
+ Datetime and interval literals, used with datetime data types\. For more information, see [Date and timestamp literals](r_Date_and_time_literals.md) and [Interval literals](r_interval_literals.md)\.
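The following query is a minimal sketch that shows one literal of each kind in a select list:
```
select 101 as integer_literal,
       15.5 as decimal_literal,
       'scalar string' as character_literal,
       date '2008-06-18' as date_literal;
```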
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_Literals.md
|
b06442bc5e5f-0
|
ST\_NRings returns the number of rings in an input geometry\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NRings-function.md
|
ebc240dc4ab3-0
|
```
ST_NRings(geom)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NRings-function.md
|
59348d0ead54-0
|
*geom*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NRings-function.md
|
0b6ee21f4b6d-0
|
`INTEGER`
If *geom* is null, then null is returned\.
The values returned are as follows\.
| Returned value | Geometry subtype |
| --- | --- |
| 0 | Returned if *geom* is a `POINT`, `LINESTRING`, `MULTIPOINT`, or `MULTILINESTRING` subtype |
| The number of rings | Returned if *geom* is a `POLYGON` or `MULTIPOLYGON` subtype |
| The number of rings in all components | Returned if *geom* is a `GEOMETRYCOLLECTION` subtype |
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NRings-function.md
|
bd8409fd25b2-0
|
The following SQL returns the number of rings in a multipolygon\.
```
SELECT ST_NRings(ST_GeomFromText('MULTIPOLYGON(((0 0,10 0,0 10,0 0)),((0 0,-10 0,0 -10,0 0)))'));
```
```
st_nrings
-------------
2
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/ST_NRings-function.md
|
ea7d14c3c83d-0
|
To analyze query summary information by slice, do the following:
1. Run the following to determine your query ID:
```
select query, elapsed, substring
from svl_qlog
order by query desc
limit 5;
```
Examine the truncated query text in the `substring` field to determine which `query` value represents your query\. If you have run the query more than once, use the `query` value from the row with the lower `elapsed` value\. That is the row for the compiled version\. If you have been running many queries, you can raise the value of the LIMIT clause to make sure that your query is included\.
1. Select rows from SVL\_QUERY\_REPORT for your query\. Order the results by segment, step, elapsed\_time, and rows:
```
select *
from svl_query_report
where query = MyQueryID
order by segment, step, elapsed_time, rows;
```
1. For each step, check to see that all slices are processing approximately the same number of rows:
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/SVL_QUERY_REPORT_rows.png)
Also check to see that all slices are taking approximately the same amount of time:
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/using-SVL-Query-Report.md
|
ea7d14c3c83d-1
|
Also check to see that all slices are taking approximately the same amount of time:
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/SVL_QUERY_REPORT_elapsed_time.png)
Large discrepancies in these values can indicate data distribution skew due to a suboptimal distribution style for this particular query\. For recommended solutions, see [Suboptimal data distribution](query-performance-improvement-opportunities.md#suboptimal-data-distribution)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/using-SVL-Query-Report.md
|
072d044aa3c1-0
|
**Topics**
+ [Syntax](#r_ORDER_BY_clause-synopsis)
+ [Parameters](#r_ORDER_BY_clause-parameters)
+ [Usage notes](#r_ORDER_BY_usage_notes)
+ [Examples with ORDER BY](r_Examples_with_ORDER_BY.md)
The ORDER BY clause sorts the result set of a query\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ORDER_BY_clause.md
|
0710531b4528-0
|
```
[ ORDER BY expression [ ASC | DESC ] ]
[ NULLS FIRST | NULLS LAST ]
[ LIMIT { count | ALL } ]
[ OFFSET start ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ORDER_BY_clause.md
|
d58f322cd876-0
|
*expression*
Expression that defines the sort order of the query result set, typically by specifying one or more columns in the select list\. Results are returned based on binary UTF\-8 ordering\. You can also specify the following:
+ Columns that aren't in the select list
+ Expressions formed from one or more columns that exist in the tables referenced by the query
+ Ordinal numbers that represent the position of select list entries \(or the position of columns in the table if no select list exists\)
+ Aliases that define select list entries
When the ORDER BY clause contains multiple expressions, the result set is sorted according to the first expression, then the second expression is applied to rows that have matching values from the first expression, and so on\.
ASC \| DESC
Option that defines the sort order for the expression, as follows:
+ ASC: ascending \(for example, low to high for numeric values and 'A' to 'Z' for character strings\)\. If no option is specified, data is sorted in ascending order by default\.
+ DESC: descending \(high to low for numeric values; 'Z' to 'A' for strings\)\.
NULLS FIRST \| NULLS LAST
Option that specifies whether NULL values should be ordered first, before non\-null values, or last, after non\-null values\. By default, NULL values are sorted and ranked last in ASC ordering, and sorted and ranked first in DESC ordering\.
LIMIT *number* \| ALL <a name="order-by-clause-limit"></a>
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ORDER_BY_clause.md
|
d58f322cd876-1
|
LIMIT *number* \| ALL <a name="order-by-clause-limit"></a>
Option that controls the number of sorted rows that the query returns\. The LIMIT number must be a positive integer; the maximum value is `2147483647`\.
LIMIT 0 returns no rows\. You can use this syntax for testing purposes: to check that a query runs \(without displaying any rows\) or to return a column list from a table\. An ORDER BY clause is redundant if you are using LIMIT 0 to return a column list\. The default is LIMIT ALL\.
OFFSET *start* <a name="order-by-clause-offset"></a>
Option that specifies the number of rows to skip before starting to return rows\. The OFFSET number must be a positive integer; the maximum value is `2147483647`\. When used with the LIMIT option, OFFSET rows are skipped before starting to count the LIMIT rows that are returned\. If the LIMIT option isn't used, the number of rows in the result set is reduced by the number of rows that are skipped\. The rows skipped by an OFFSET clause still have to be scanned, so it might be inefficient to use a large OFFSET value\.
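For example, the following query is a sketch that uses the CATEGORY table from the sample database; it sorts by CATNAME, skips the first two rows, and returns the next five:
```
select catid, catname
from category
order by catname
limit 5 offset 2;
```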
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ORDER_BY_clause.md
|
b1e60687f0e4-0
|
Note the following expected behavior with ORDER BY clauses:
+ NULL values are considered "higher" than all other values\. With the default ascending sort order, NULL values sort at the end\. To change this behavior, use the NULLS FIRST option\.
+ When a query doesn't contain an ORDER BY clause, the system returns result sets with no predictable ordering of the rows\. The same query executed twice might return the result set in a different order\.
+ The LIMIT and OFFSET options can be used without an ORDER BY clause; however, to return a consistent set of rows, use these options in conjunction with ORDER BY\.
+ In any parallel system like Amazon Redshift, when ORDER BY doesn't produce a unique ordering, the order of the rows is nondeterministic\. That is, if the ORDER BY expression produces duplicate values, the return order of those rows might vary from other systems or from one run of Amazon Redshift to the next\.
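For example, the following sketch uses the VENUE table from the sample database to show how NULLS FIRST overrides the default placement of NULL values in ascending order:
```
select venuename, venueseats
from venue
order by venueseats nulls first
limit 5;
```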
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_ORDER_BY_clause.md
|
6dc9fe953621-0
|
Removes a view from the database\. Multiple views can be dropped with a single DROP VIEW command\. This command isn't reversible\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_VIEW.md
|
8a43e50f8a73-0
|
```
DROP VIEW [ IF EXISTS ] name [, ... ] [ CASCADE | RESTRICT ]
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_VIEW.md
|
6f7cfe9c435f-0
|
IF EXISTS
Clause that indicates that if the specified view doesn’t exist, the command should make no changes and return a message that the view doesn't exist, rather than terminating with an error\.
This clause is useful when scripting, so the script doesn’t fail if DROP VIEW runs against a nonexistent view\.
*name*
Name of the view to be removed\.
CASCADE
Clause that indicates to automatically drop objects that depend on the view, such as other views\.
To create a view that isn't dependent on other database objects, such as views and tables, include the WITH NO SCHEMA BINDING clause in the view definition\. For more information, see [CREATE VIEW](r_CREATE_VIEW.md)\.
RESTRICT
Clause that indicates not to drop the view if any objects depend on it\. This action is the default\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_VIEW.md
|
11a7be9a1c49-0
|
The following example drops the view called *event*:
```
drop view event;
```
To remove a view that has dependencies, use the CASCADE option\. For example, say we start with a table called EVENT\. We then create the eventview view of the EVENT table, using the CREATE VIEW command, as shown in the following example:
```
create view eventview as
select dateid, eventname, catid
from event where catid = 1;
```
Now, we create a second view called *myeventview*, that is based on the first view *eventview*:
```
create view myeventview as
select eventname, catid
from eventview where eventname <> ' ';
```
At this point, two views have been created: *eventview* and *myeventview*\.
The *myeventview* view is a child view with *eventview* as its parent\.
To delete the *eventview* view, the obvious command to use is the following:
```
drop view eventview;
```
Notice that if you run this command in this case, you get the following error:
```
drop view eventview;
ERROR: can't drop view eventview because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_VIEW.md
|
11a7be9a1c49-1
|
ERROR: can't drop view eventview because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too.
```
To remedy this, execute the following command \(as suggested in the error message\):
```
drop view eventview cascade;
```
Both *eventview* and *myeventview* have now been dropped successfully\.
The following example either drops the *eventview* view if it exists, or does nothing and returns a message if it doesn't exist:
```
drop view if exists eventview;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DROP_VIEW.md
|
6e36ad53b418-0
|
Use SVV\_EXTERNAL\_PARTITIONS to view details for partitions in external tables\.
SVV\_EXTERNAL\_PARTITIONS is visible to all users\. Superusers can see all rows; regular users can see only metadata to which they have access\. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)\.
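For example, a query such as the following sketch lists the partition locations for a single external table \(the table name `sales` is illustrative\):
```
select schemaname, tablename, location
from svv_external_partitions
where tablename = 'sales';
```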
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_EXTERNAL_PARTITIONS.md
|
85ac5f293629-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_EXTERNAL_PARTITIONS.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_EXTERNAL_PARTITIONS.md
|
12b03920dbb4-0
|
**Topics**
+ [Data distribution concepts](#t_data_distribution_concepts)
+ [Distribution styles](c_choosing_dist_sort.md)
+ [Viewing distribution styles](viewing-distribution-styles.md)
+ [Evaluating query patterns](t_evaluating_query_patterns.md)
+ [Designating distribution styles](t_designating_distribution_styles.md)
+ [Evaluating the query plan](c_data_redistribution.md)
+ [Query plan example](t_explain_plan_example.md)
+ [Distribution examples](c_Distribution_examples.md)
When you load data into a table, Amazon Redshift distributes the rows of the table to each of the compute nodes according to the table's distribution style\. When you run a query, the query optimizer redistributes the rows to the compute nodes as needed to perform any joins and aggregations\. The goal in selecting a table distribution style is to minimize the impact of the redistribution step by locating the data where it needs to be before the query is executed\.
This section will introduce you to the principles of data distribution in an Amazon Redshift database and give you a methodology to choose the best distribution style for each of your tables\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Distributing_data.md
|
3bc9032ff10f-0
|
**Nodes and slices**
An Amazon Redshift cluster is a set of nodes\. Each node in the cluster has its own operating system, dedicated memory, and dedicated disk storage\. One node is the *leader node*, which manages the distribution of data and query processing tasks to the *compute nodes*\.
The disk storage for a compute node is divided into a number of *slices*\. The number of slices per node depends on the node size of the cluster\. For example, each DS2\.XL compute node has two slices, and each DS2\.8XL compute node has 16 slices\. The nodes all participate in parallel query execution, working on data that is distributed as evenly as possible across the slices\. For more information about the number of slices that each node size has, go to [About clusters and nodes](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-about-clusters-and-nodes) in the *Amazon Redshift Cluster Management Guide*\.
**Data redistribution**
When you load data into a table, Amazon Redshift distributes the rows of the table to each of the node slices according to the table's distribution style\. As part of a query plan, the optimizer determines where blocks of data need to be located to best execute the query\. The data is then physically moved, or redistributed, during execution\. Redistribution might involve either sending specific rows to nodes for joining or broadcasting an entire table to all of the nodes\.
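You can see which form of redistribution a query requires by inspecting its query plan\. The following sketch uses the SALES and EVENT tables from the sample database; in the EXPLAIN output, join steps are labeled with the redistribution chosen, such as DS\_DIST\_NONE or DS\_BCAST\_INNER:
```
explain
select e.eventname, sum(s.pricepaid)
from sales s
join event e on s.eventid = e.eventid
group by e.eventname;
```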
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Distributing_data.md
|
3bc9032ff10f-1
|
Data redistribution can account for a substantial portion of the cost of a query plan, and the network traffic it generates can affect other database operations and slow overall system performance\. To the extent that you anticipate where best to locate data initially, you can minimize the impact of data redistribution\.
**Data distribution goals**
When you load data into a table, Amazon Redshift distributes the table's rows to the compute nodes and slices according to the distribution style that you chose when you created the table\. Data distribution has two primary goals:
+ To distribute the workload uniformly among the nodes in the cluster\. Uneven distribution, or data distribution skew, forces some nodes to do more work than others, which impairs query performance\.
+ To minimize data movement during query execution\. If the rows that participate in joins or aggregates are already collocated on the nodes with their joining rows in other tables, the optimizer does not need to redistribute as much data during query execution\.
The distribution strategy that you choose for your database has important consequences for query performance, storage requirements, data loading, and maintenance\. By choosing the best distribution style for each table, you can balance your data distribution and significantly improve overall system performance\.
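To check the distribution style currently assigned to each of your tables, you can query the SVV\_TABLE\_INFO system view, as in the following sketch:
```
select "table", diststyle
from svv_table_info;
```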
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Distributing_data.md
|
1a8852428c82-0
|
A DECODE expression replaces a specific value with either another specific value or a default value, depending on the result of an equality condition\. This operation is equivalent to the operation of a simple CASE expression or an IF\-THEN\-ELSE statement\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DECODE_expression.md
|
4f5180401642-0
|
```
DECODE ( expression, search, result [, search, result ]... [ ,default ] )
```
This type of expression is useful for replacing abbreviations or codes that are stored in tables with meaningful business values that are needed for reports\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DECODE_expression.md
|
c37ce0a5516b-0
|
*expression*
The source of the value that you want to compare, such as a column in a table\.
*search*
The target value that is compared against the source expression, such as a numeric value or a character string\. The search expression must evaluate to a single fixed value\. You cannot specify an expression that evaluates to a range of values, such as `age between 20 and 29`; you need to specify separate search/result pairs for each value that you want to replace\.
The data type of all instances of the search expression must be the same or compatible\. The *expression* and *search* parameters must also be compatible\.
*result*
The replacement value that the query returns when the expression matches the search value\. You must include at least one search/result pair in the DECODE expression\.
The data types of all instances of the result expression must be the same or compatible\. The *result* and *default* parameters must also be compatible\.
*default*
An optional default value that is used for cases when the search condition fails\. If you do not specify a default value, the DECODE expression returns NULL\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DECODE_expression.md
|
d4ef37dc6613-0
|
If the *expression* value and the *search* value are both NULL, the DECODE result is the corresponding *result* value\. For an illustration of this use of the function, see the Examples section\.
When used this way, DECODE is similar to [NVL2 expression](r_NVL2.md), but there are some differences\. For a description of these differences, see the NVL2 usage notes\.
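The following sketch shows this NULL\-matching behavior; the cast is only there to give the NULL literal a data type\. Because both the expression and the first search value are NULL, the query returns the first result, `both null`:
```
select decode(null::varchar, null, 'both null', 'not null');
```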
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DECODE_expression.md
|
96707ae525db-0
|
When the value `2008-06-01` exists in the START\_DATE column of DATETABLE, the following example replaces it with `June 1st, 2008`\. The example replaces all other START\_DATE values with NULL\.
```
select decode(caldate, '2008-06-01', 'June 1st, 2008')
from date where month='JUN' order by caldate;
case
----------------
June 1st, 2008
...
(30 rows)
```
The following example uses a DECODE expression to convert five abbreviated CATNAME values in the CATEGORY table to full names and to convert other values in the column to `Unknown`\.
```
select catid, decode(catname,
'NHL', 'National Hockey League',
'MLB', 'Major League Baseball',
'MLS', 'Major League Soccer',
'NFL', 'National Football League',
'NBA', 'National Basketball Association',
'Unknown')
from category
order by catid;
catid | case
-------+---------------------------------
1 | Major League Baseball
2 | National Hockey League
3 | National Football League
4 | National Basketball Association
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DECODE_expression.md
|
96707ae525db-1
|
1 | Major League Baseball
2 | National Hockey League
3 | National Football League
4 | National Basketball Association
5 | Major League Soccer
6 | Unknown
7 | Unknown
8 | Unknown
9 | Unknown
10 | Unknown
11 | Unknown
(11 rows)
```
Use a DECODE expression to find venues in Colorado and Nevada with NULL in the VENUESEATS column; convert the NULLs to zeroes\. If the VENUESEATS column is not NULL, return 1 as the result\.
```
select venuename, venuestate, decode(venueseats,null,0,1)
from venue
where venuestate in('NV','CO')
order by 2,3,1;
venuename | venuestate | case
------------------------------+----------------+-----------
Coors Field | CO | 1
Dick's Sporting Goods Park | CO | 1
Ellie Caulkins Opera House | CO | 1
INVESCO Field | CO | 1
Pepsi Center | CO | 1
Ballys Hotel | NV | 0
Bellagio Hotel | NV | 0
Caesars Palace | NV | 0
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DECODE_expression.md
|
96707ae525db-2
|
Ballys Hotel | NV | 0
Bellagio Hotel | NV | 0
Caesars Palace | NV | 0
Harrahs Hotel | NV | 0
Hilton Hotel | NV | 0
...
(20 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DECODE_expression.md
|
490095c47316-0
|
Displays the quota and the current disk usage for each schema\.
Regular users can see information for schemas for which they have USAGE permission\. Superusers can see information for all schemas in the current database\.
SVV\_SCHEMA\_QUOTA\_STATE is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_SCHEMA_QUOTA_STATE.md
|
a2354424eaa3-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_SVV_SCHEMA_QUOTA_STATE.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_SCHEMA_QUOTA_STATE.md
|
bb34eef2c1c1-0
|
The following example displays the quota and the current disk usage for the schema\.
```
SELECT TRIM(SCHEMA_NAME) "schema_name", QUOTA, disk_usage, disk_usage_pct FROM svv_schema_quota_state
WHERE SCHEMA_NAME = 'sales_schema';
schema_name | quota | disk_usage | disk_usage_pct
--------------+-------+------------+----------------
sales_schema | 2048 | 30 | 1.46
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_SVV_SCHEMA_QUOTA_STATE.md
|
13d5a8ae495c-0
|
Refreshes a materialized view\.
When you create a materialized view, its contents reflect the state of the underlying database table or tables at that time\. The data in the materialized view remains unchanged, even when applications make changes to the data in the underlying tables\. To update the data in the materialized view, you can use the `REFRESH MATERIALIZED VIEW` statement at any time\. When you use this statement, Amazon Redshift identifies changes that have taken place in the base table or tables, and then applies those changes to the materialized view\.
Amazon Redshift has two strategies for refreshing a materialized view:
+ In many cases, Amazon Redshift can perform an incremental refresh\. In an *incremental refresh*, Amazon Redshift quickly identifies the changes to the data in the base tables since the last refresh and updates the data in the materialized view\. Incremental refresh is supported when the query defining the materialized view uses the following SQL constructs:
+ Contains clauses SELECT, FROM, \[INNER\] JOIN, WHERE, GROUP BY, HAVING\.
+ Contains aggregations, such as SUM and COUNT\.
+ Most built\-in SQL functions, specifically those that are immutable, meaning that given the same input arguments they always produce the same output\. For a list of SQL functions for which incremental refresh is not supported, see [Limitations for incremental refresh](#mv_REFRESH_MARTERIALIZED_VIEW_limitations)\.
+ If an incremental refresh isn't possible, then Amazon Redshift performs a full refresh\. A *full refresh* reruns the underlying SQL statement, replacing all of the data in the materialized view\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-refresh-sql-command.md
|
13d5a8ae495c-1
|
+ Amazon Redshift automatically picks the refresh method for a materialized view depending on the SELECT query used to define the materialized view\.
For more information about materialized views, see [Creating materialized views in Amazon Redshift](materialized-view-overview.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-refresh-sql-command.md
|
237f1289e5db-0
|
```
REFRESH MATERIALIZED VIEW mv_name
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-refresh-sql-command.md
|
adc3ea139e61-0
|
*mv\_name*
The name of the materialized view to be refreshed\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-refresh-sql-command.md
|
1dc302669783-0
|
Only the owner of a materialized view can perform a `REFRESH MATERIALIZED VIEW` operation on that materialized view\. Furthermore, the owner must have SELECT privilege on the underlying base tables to successfully run `REFRESH MATERIALIZED VIEW`\.
The `REFRESH MATERIALIZED VIEW` command runs as a transaction of its own\. Amazon Redshift transaction semantics are followed to determine what data from base tables is visible to the `REFRESH` command, or when the changes made by the `REFRESH` command are made visible to other transactions running in Amazon Redshift\.
+ For incremental materialized views, `REFRESH MATERIALIZED VIEW` uses only those base table rows that are already committed\. Therefore, if the refresh operation runs after a data manipulation language \(DML\) statement in the same transaction, then changes of that DML statement aren't visible to refresh\.
+ Furthermore, take a case where a transaction B follows a transaction A\. In such a case, `REFRESH MATERIALIZED VIEW` issued after committing B doesn't see some committed base table rows that are updated by transaction B while the older transaction A is in progress\. These omitted rows are updated by subsequent refresh operations, after transaction A is committed\.
+ For a full refresh of a materialized view, `REFRESH MATERIALIZED VIEW` sees all base table rows visible to the refresh transaction, according to usual Amazon Redshift transaction semantics\.
+ Depending on the input argument type, Amazon Redshift still supports incremental refresh for materialized views for the following functions with specific input argument types: DATE \(timestamp\), DATE\_PART \(date, time, interval, time\-tz\), DATE\_TRUNC \(timestamp, interval\)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-refresh-sql-command.md
|
1dc302669783-1
|
Some operations in Amazon Redshift interact with materialized views\. Some of these operations might force a `REFRESH MATERIALIZED VIEW` operation to fully recompute the materialized view even though the query defining the materialized view only uses the SQL features eligible for incremental refresh\. For example:
+ Background vacuum operations might be blocked if materialized views aren't refreshed\. After an internally defined threshold period, a vacuum operation is allowed to run\. When this vacuum operation happens, any dependent materialized views are marked for recomputation upon the next refresh \(even if they are incremental\)\. For information about VACUUM, see [VACUUM](r_VACUUM_command.md)\. For more information about events and state changes, see [STL\_MV\_STATE](r_STL_MV_STATE.md)\.
+ Some user\-initiated operations on base tables force a materialized view to be fully recomputed next time that a REFRESH operation is run\. Examples of such operations are a manually invoked VACUUM, a classic resize, an ALTER DISTKEY operation, an ALTER SORTKEY operation, and a truncate operation\. For more information about events and state changes, see [STL\_MV\_STATE](r_STL_MV_STATE.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-refresh-sql-command.md
|
ba69af5e00bf-0
|
Amazon Redshift currently does not support incremental refresh for materialized views that are defined with a query using any of the following SQL elements:
+ OUTER JOIN \(RIGHT, LEFT, or FULL\)\.
+ Set operations: UNION, INTERSECT, EXCEPT, MINUS\.
+ Aggregate functions: AVG, MEDIAN, PERCENTILE\_CONT, MAX, MIN, LISTAGG, STDDEV\_SAMP, STDDEV\_POP, APPROXIMATE COUNT, APPROXIMATE PERCENTILE, and bitwise aggregate functions\.
**Note**
The COUNT and SUM aggregate functions are supported\.
+ DISTINCT aggregate functions, such as DISTINCT COUNT, DISTINCT SUM, and so on\.
+ Window functions\.
+ A query that uses temporary tables for query optimization, such as optimizing common subexpressions\.
+ Subqueries in any place other than the FROM clause\.
+ External tables referenced as base tables in the query that defines the materialized view\.
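By contrast, a materialized view defined with only eligible constructs remains eligible for incremental refresh\. The following is a sketch using the SALES and EVENT tables from the sample database; it uses only SELECT, FROM, an inner JOIN, GROUP BY, and the COUNT and SUM aggregates:
```
create materialized view mv_sales_by_event as
select e.eventid, count(*) as num_sales, sum(s.pricepaid) as total_paid
from sales s
join event e on s.eventid = e.eventid
group by e.eventid;

refresh materialized view mv_sales_by_event;
```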
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-refresh-sql-command.md
|
8b3dc03be366-0
|
The following example refreshes the `tickets_mv` materialized view\.
```
REFRESH MATERIALIZED VIEW tickets_mv;
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/materialized-view-refresh-sql-command.md
|
6b882b325976-0
|
The following table shows the mapping of an Amazon Redshift data type to a corresponding Amazon RDS PostgreSQL or Aurora PostgreSQL data type\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/federated-data-types.html)
The following RDS PostgreSQL and Aurora PostgreSQL data types are converted to VARCHAR\(64K\) in Amazon Redshift:
+ JSON, JSONB
+ Arrays
+ BIT, BIT VARYING
+ BYTEA
+ Composite types
+ Date and time types INTERVAL and TIME
+ Enumerated types
+ Monetary types
+ Network address types
+ Numeric types SERIAL, BIGSERIAL, SMALLSERIAL, and MONEY
+ Object identifier types
+ pg\_lsn type
+ Pseudo\-types
+ Range types
+ Text search types
+ TXID\_SNAPSHOT
+ UUID
+ XML type
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/federated-data-types.md
|
7bb6b8e32031-0
|
The LTRIM function trims a specified set of characters from the beginning of a string\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LTRIM.md
|
8c70f209f417-0
|
```
LTRIM( string, 'trim_chars' )
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LTRIM.md
|
217a18c86c00-0
|
*string*
The string column or expression to be trimmed\.
*trim\_chars*
A string column or expression representing the characters to be trimmed from the beginning of *string*\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LTRIM.md
|
729094dc8460-0
|
The LTRIM function returns a character string that is the same data type as the input string \(CHAR or VARCHAR\)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LTRIM.md
|
160acdc90ce3-0
|
The following example trims the year from LISTTIME:
```
select listid, listtime, ltrim(listtime, '2008-')
from listing
order by 1, 2, 3
limit 10;
listid | listtime | ltrim
-------+---------------------+----------------
1 | 2008-01-24 06:43:29 | 1-24 06:43:29
2 | 2008-03-05 12:25:29 | 3-05 12:25:29
3 | 2008-11-01 07:35:33 | 11-01 07:35:33
4 | 2008-05-24 01:18:37 | 5-24 01:18:37
5 | 2008-05-17 02:29:11 | 5-17 02:29:11
6 | 2008-08-15 02:08:13 | 15 02:08:13
7 | 2008-11-15 09:38:15 | 11-15 09:38:15
8 | 2008-11-09 05:07:30 | 11-09 05:07:30
9 | 2008-09-09 08:03:36 | 9-09 08:03:36
10 | 2008-06-17 09:44:54 | 6-17 09:44:54
(10 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LTRIM.md
|
160acdc90ce3-1
|
10 | 2008-06-17 09:44:54 | 6-17 09:44:54
(10 rows)
```
LTRIM removes any of the characters in *trim\_chars* when they appear at the beginning of *string*\. The following example trims the characters 'C', 'D', and 'G' when they appear at the beginning of VENUENAME\.
```
select venueid, venuename, ltrim(venuename, 'CDG')
from venue
where venuename like '%Park'
order by 2
limit 7;
venueid | venuename | ltrim
--------+----------------------------+--------------------------
121 | ATT Park | ATT Park
109 | Citizens Bank Park | itizens Bank Park
102 | Comerica Park | omerica Park
9 | Dick's Sporting Goods Park | ick's Sporting Goods Park
97 | Fenway Park | Fenway Park
112 | Great American Ball Park | reat American Ball Park
114 | Miller Park | Miller Park
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_LTRIM.md
|
16339f1c5c67-0
|
Returns the files that Amazon Redshift read while loading data via the COPY command\.
Querying this view can help troubleshoot data load errors\. STL\_FILE\_SCAN can be particularly helpful with pinpointing issues in parallel data loads because parallel data loads typically load many files with a single COPY command\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_FILE_SCAN.md
|
98c92d3f2517-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_FILE_SCAN.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_FILE_SCAN.md
|
08e8a06260cb-0
|
The following query retrieves the names and load times of any files that took over 1000000 microseconds for Amazon Redshift to read:
```
select trim(name)as name, loadtime from stl_file_scan
where loadtime > 1000000;
```
This query returns the following example output:
```
name | loadtime
---------------------------+----------
listings_pipe.txt | 9458354
allusers_pipe.txt | 2963761
allevents_pipe.txt | 1409135
tickit/listings_pipe.txt | 7071087
tickit/allevents_pipe.txt | 1237364
tickit/allusers_pipe.txt | 2535138
listings_pipe.txt | 6706370
allusers_pipe.txt | 3579461
allevents_pipe.txt | 1313195
tickit/allusers_pipe.txt | 3236060
tickit/listings_pipe.txt | 4980108
(11 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_FILE_SCAN.md
|
6876955ca86e-0
|
Compares a date to a time stamp and returns `0` if the values are identical, `1` if *date* is chronologically greater, and `-1` if *timestamp* is greater\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP_TIMESTAMP.md
|
fa974bc94a55-0
|
```
DATE_CMP_TIMESTAMP(date, timestamp)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP_TIMESTAMP.md
|
61b0031e595b-0
|
*date*
A date column or an expression that implicitly converts to a date\.
*timestamp*
A timestamp column or an expression that implicitly converts to a time stamp\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP_TIMESTAMP.md
|
321dbc34e2b5-0
|
INTEGER
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP_TIMESTAMP.md
|
c543180cc918-0
|
The following example compares the date `2008-06-18` to LISTTIME\. Listings made before this date return `1`; listings made after this date return `-1`\.
```
select listid, '2008-06-18', listtime,
date_cmp_timestamp('2008-06-18', listtime)
from listing
order by 1, 2, 3, 4
limit 10;
listid | ?column? | listtime | date_cmp_timestamp
--------+------------+---------------------+--------------------
1 | 2008-06-18 | 2008-01-24 06:43:29 | 1
2 | 2008-06-18 | 2008-03-05 12:25:29 | 1
3 | 2008-06-18 | 2008-11-01 07:35:33 | -1
4 | 2008-06-18 | 2008-05-24 01:18:37 | 1
5 | 2008-06-18 | 2008-05-17 02:29:11 | 1
6 | 2008-06-18 | 2008-08-15 02:08:13 | -1
7 | 2008-06-18 | 2008-11-15 09:38:15 | -1
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP_TIMESTAMP.md
|
c543180cc918-1
|
7 | 2008-06-18 | 2008-11-15 09:38:15 | -1
8 | 2008-06-18 | 2008-11-09 05:07:30 | -1
9 | 2008-06-18 | 2008-09-09 08:03:36 | -1
10 | 2008-06-18 | 2008-06-17 09:44:54 | 1
(10 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_DATE_CMP_TIMESTAMP.md
|
b3eb64082306-0
|
Analyzes hash execution steps for queries\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_HASH.md
|
1514f3471dff-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_HASH.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_HASH.md
|
0ebd86e96e97-0
|
The following example returns information about the number of partitions that were used in a hash for query 720, and indicates that none of the steps ran on disk\.
```
select slice, rows, bytes, occupied, workmem, num_parts, est_rows, num_blocks_permitted, is_diskbased
from stl_hash
where query=720 and segment=5
order by slice;
```
```
slice | rows | bytes  | occupied | workmem  | num_parts | est_rows | num_blocks_permitted | is_diskbased
------+------+--------+----------+----------+-----------+----------+----------------------+-------------
    0 |  145 | 585800 |        1 | 88866816 |        16 |        1 |                   52 | f
    1 |    0 |      0 |        0 |        0 |        16 |        1 |                   52 | f
(2 rows)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_HASH.md
|
dfad87a327d0-0
|
The FLOOR function rounds a number down to the next whole number\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FLOOR.md
|
835396ef5353-0
|
```
FLOOR (number)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FLOOR.md
|
82b8daf4bdd2-0
|
*number*
DOUBLE PRECISION number to be rounded down\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FLOOR.md
|
7ad42027dcf2-0
|
FLOOR returns an integer\.
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FLOOR.md
|
7c9f4fe60a4d-0
|
The example shows the value of the commission paid for a given sales transaction before and after using the FLOOR function\.
```
select commission from sales
where salesid=10000;
commission
------------
28.05
(1 row)
select floor(commission) from sales
where salesid=10000;
floor
-------
28
(1 row)
```
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_FLOOR.md
|
c1ae31085cc0-0
|
Data warehouse databases commonly use a star schema design, in which a central fact table contains the core data for the database and several dimension tables provide descriptive attribute information for the fact table\. The fact table joins each dimension table on a foreign key that matches the dimension's primary key\.
**Star Schema Benchmark \(SSB\) **
For this tutorial, you will use a set of five tables based on the Star Schema Benchmark \(SSB\) schema\. The following diagram shows the SSB data model\.
![\[Image NOT FOUND\]](http://docs.aws.amazon.com/redshift/latest/dg/images/tutorial-optimize-tables-ssb-data-model.png)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-create-test-data.md
|
fffac985597e-0
|
You will create a set of tables without sort keys, distribution styles, or compression encodings\. Then you will load the tables with data from the SSB data set\.
1. \(Optional\) Launch a cluster\.
If you already have a cluster that you want to use, you can skip this step\. Your cluster should have at least two nodes\. For the exercises in this tutorial, you will use a four\-node cluster\.
To launch a dc2\.large cluster with four nodes, follow the steps in [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/), but select **Multi Node** for **Cluster Type** and set **Number of Compute Nodes** to **4**\.
Follow the steps to connect to your cluster from a SQL client and test a connection\. You do not need to complete the remaining steps to create tables, upload data, and try example queries\.
1. Create the SSB test tables using minimum attributes\.
**Note**
If the SSB tables already exist in the current database, you will need to drop the tables first\. See [Step 6: Recreate the test data set](tutorial-tuning-tables-recreate-test-data.md) for the DROP TABLE commands\.
For the purposes of this tutorial, the first time you create the tables, they will not have sort keys, distribution styles, or compression encodings\.
Execute the following CREATE TABLE commands\.
```
CREATE TABLE part
(
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-create-test-data.md
|
fffac985597e-1
|
Execute the following CREATE TABLE commands\.
```
CREATE TABLE part
(
p_partkey INTEGER NOT NULL,
p_name VARCHAR(22) NOT NULL,
p_mfgr VARCHAR(6) NOT NULL,
p_category VARCHAR(7) NOT NULL,
p_brand1 VARCHAR(9) NOT NULL,
p_color VARCHAR(11) NOT NULL,
p_type VARCHAR(25) NOT NULL,
p_size INTEGER NOT NULL,
p_container VARCHAR(10) NOT NULL
);
CREATE TABLE supplier
(
s_suppkey INTEGER NOT NULL,
s_name VARCHAR(25) NOT NULL,
s_address VARCHAR(25) NOT NULL,
s_city VARCHAR(10) NOT NULL,
s_nation VARCHAR(15) NOT NULL,
s_region VARCHAR(12) NOT NULL,
s_phone VARCHAR(15) NOT NULL
);
CREATE TABLE customer
(
c_custkey INTEGER NOT NULL,
c_name VARCHAR(25) NOT NULL,
c_address VARCHAR(25) NOT NULL,
c_city VARCHAR(10) NOT NULL,
c_nation VARCHAR(15) NOT NULL,
c_region VARCHAR(12) NOT NULL,
c_phone VARCHAR(15) NOT NULL,
c_mktsegment VARCHAR(10) NOT NULL
);
CREATE TABLE dwdate
(
d_datekey INTEGER NOT NULL,
d_date VARCHAR(19) NOT NULL,
d_dayofweek VARCHAR(10) NOT NULL,
d_month VARCHAR(10) NOT NULL,
d_year INTEGER NOT NULL,
d_yearmonthnum INTEGER NOT NULL,
d_yearmonth VARCHAR(8) NOT NULL,
d_daynuminweek INTEGER NOT NULL,
d_daynuminmonth INTEGER NOT NULL,
d_daynuminyear INTEGER NOT NULL,
d_monthnuminyear INTEGER NOT NULL,
d_weeknuminyear INTEGER NOT NULL,
d_sellingseason VARCHAR(13) NOT NULL,
d_lastdayinweekfl VARCHAR(1) NOT NULL,
d_lastdayinmonthfl VARCHAR(1) NOT NULL,
d_holidayfl VARCHAR(1) NOT NULL,
d_weekdayfl VARCHAR(1) NOT NULL
);
CREATE TABLE lineorder
(
lo_orderkey INTEGER NOT NULL,
lo_linenumber INTEGER NOT NULL,
lo_custkey INTEGER NOT NULL,
lo_partkey INTEGER NOT NULL,
lo_suppkey INTEGER NOT NULL,
lo_orderdate INTEGER NOT NULL,
lo_orderpriority VARCHAR(15) NOT NULL,
lo_shippriority VARCHAR(1) NOT NULL,
lo_quantity INTEGER NOT NULL,
lo_extendedprice INTEGER NOT NULL,
lo_ordertotalprice INTEGER NOT NULL,
lo_discount INTEGER NOT NULL,
lo_revenue INTEGER NOT NULL,
lo_supplycost INTEGER NOT NULL,
lo_tax INTEGER NOT NULL,
lo_commitdate INTEGER NOT NULL,
lo_shipmode VARCHAR(10) NOT NULL
);
```
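To confirm that the tables were created with minimum attributes, you can inspect PG\_TABLE\_DEF\. This is a minimal sketch that assumes the tables are in your default search path:
```
select "column", type, encoding, distkey, sortkey
from pg_table_def
where tablename = 'lineorder';
```
Each column should report no distribution key \(distkey is false\) and no sort key \(sortkey is 0\)\.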
1. Load the tables using SSB sample data\.
The sample data for this tutorial is provided in an Amazon S3 bucket that gives read access to all authenticated AWS users, so any valid AWS credentials that permit access to Amazon S3 will work\.
1. Create a new text file named `loadssb.sql` containing the following SQL\.
```
copy customer from 's3://awssampledbuswest2/ssbgz/customer'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip compupdate off region 'us-west-2';
copy dwdate from 's3://awssampledbuswest2/ssbgz/dwdate'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip compupdate off region 'us-west-2';
copy lineorder from 's3://awssampledbuswest2/ssbgz/lineorder'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip compupdate off region 'us-west-2';
copy part from 's3://awssampledbuswest2/ssbgz/part'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip compupdate off region 'us-west-2';
copy supplier from 's3://awssampledbuswest2/ssbgz/supplier'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
gzip compupdate off region 'us-west-2';
```
1. Replace *<Your\-Access\-Key\-ID>* and *<Your\-Secret\-Access\-Key>* with your own AWS account credentials\. The segment of the credentials string that is enclosed in single quotes must not contain any spaces or line breaks\.
1. Execute the COPY commands either by running the SQL script or by copying and pasting the commands into your SQL client\.
**Note**
The load operation will take about 10 to 15 minutes for all five tables\.
Your results should look similar to the following\.
```
Load into table 'customer' completed, 3000000 record(s) loaded successfully.
0 row(s) affected.
copy executed successfully
Execution time: 10.28s
(Statement 1 of 5 finished)
...
...
Script execution finished
Total script execution time: 9m 51s
```
1. Sum the execution times for all five tables, or note the total script execution time\. You will record that number as the load time in the benchmarks table in Step 2\.
1. To verify that each table loaded correctly, execute the following commands\.
```
select count(*) from LINEORDER;
select count(*) from PART;
select count(*) from CUSTOMER;
select count(*) from SUPPLIER;
select count(*) from DWDATE;
```
The following results table shows the number of rows for each SSB table\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables-create-test-data.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-create-test-data.md
|
1f9933082f10-0
|
[Step 2: Test system performance to establish a baseline](tutorial-tuning-tables-test-performance.md)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/tutorial-tuning-tables-create-test-data.md
|
369fcceb98b4-0
|
The COPY command can connect to multiple hosts using SSH, and can create multiple SSH connections to each host\. COPY executes a command through each host connection, and then loads the output from the commands in parallel into the table\. The manifest file is a text file in JSON format that Amazon Redshift uses to connect to the host\. The manifest file specifies the SSH host endpoints and the commands that are executed on the hosts to return data to Amazon Redshift\. Optionally, you can include the host public key, the login user name, and a mandatory flag for each entry\.
Create the manifest file on your local computer\. In a later step, you upload the file to Amazon S3\.
The manifest file is in the following format:
```
{
"entries": [
{"endpoint":"<ssh_endpoint_or_IP>",
"command": "<remote_command>",
"mandatory":true,
"publickey": "<public_key>",
"username": "<host_user_name>"},
{"endpoint":"<ssh_endpoint_or_IP>",
"command": "<remote_command>",
"mandatory":true,
"publickey": "<public_key>",
"username": "host_user_name"}
]
}
```
The manifest file contains one "entries" construct for each SSH connection\. Each entry represents a single SSH connection\. You can have multiple connections to a single host or multiple connections to multiple hosts\. The double quotes are required as shown, both for the field names and the values\. The only value that does not need double quotes is the Boolean value **true** or **false** for the mandatory field\.
The following describes the fields in the manifest file\.
endpoint
The URL or IP address of the host\. For example, "`ec2-111-222-333.compute-1.amazonaws.com`" or "`22.33.44.56`"\.
command
The command that will be executed by the host to generate text or binary \(gzip, lzop, or bzip2\) output\. The command can be any command that the user *"host\_user\_name"* has permission to run\. The command can be as simple as printing a file, or it could query a database or launch a script\. The output \(text file, gzip binary file, lzop binary file, or bzip2 binary file\) must be in a form the Amazon Redshift COPY command can ingest\. For more information, see [Preparing your input data](t_preparing-input-data.md)\.
publickey
\(Optional\) The public key of the host\. If provided, Amazon Redshift will use the public key to identify the host\. If the public key is not provided, Amazon Redshift will not attempt host identification\. For example, if the remote host's public key is: `ssh-rsa AbcCbaxxx…xxxDHKJ root@amazon.com` enter the following text in the publickey field: `AbcCbaxxx…xxxDHKJ`\.
mandatory
\(Optional\) Indicates whether the COPY command should fail if the connection fails\. The default is `false`\. If Amazon Redshift does not successfully make at least one connection, the COPY command fails\.
username
\(Optional\) The username that will be used to log on to the host system and execute the remote command\. The user login name must be the same as the login that was used to add the public key to the host's authorized keys file in Step 2\. The default username is "redshift"\.
The following example shows a completed manifest to open four connections to the same host and execute a different command through each connection:
```
{
"entries": [
{"endpoint":"ec2-184-72-204-112.compute-1.amazonaws.com",
"command": "cat loaddata1.txt",
"mandatory":true,
"publickey": "ec2publickeyportionoftheec2keypair",
"username": "ec2-user"},
{"endpoint":"ec2-184-72-204-112.compute-1.amazonaws.com",
"command": "cat loaddata2.txt",
"mandatory":true,
"publickey": "ec2publickeyportionoftheec2keypair",
"username": "ec2-user"},
{"endpoint":"ec2-184-72-204-112.compute-1.amazonaws.com",
"command": "cat loaddata3.txt",
"mandatory":true,
"publickey": "ec2publickeyportionoftheec2keypair",
"username": "ec2-user"},
{"endpoint":"ec2-184-72-204-112.compute-1.amazonaws.com",
"command": "cat loaddata4.txt",
"mandatory":true,
"publickey": "ec2publickeyportionoftheec2keypair",
"username": "ec2-user"}
]
}
```
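After you upload the manifest file to Amazon S3, a COPY command can reference it by using the SSH parameter\. The following is a hedged sketch; the table name, bucket, and IAM role are hypothetical placeholders:
```
copy loadtable
from 's3://mybucket/ssh_manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|'
ssh;
```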
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/load-from-host-steps-create-manifest.md
|
c0c00c423ed9-0
|
To reload the results of an unload operation, you can use a COPY command\.
The following example shows a simple case in which the VENUE table is unloaded using a manifest file, truncated, and reloaded\.
```
unload ('select * from venue order by venueid')
to 's3://mybucket/tickit/venue/reload_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest
delimiter '|';
truncate venue;
copy venue
from 's3://mybucket/tickit/venue/reload_manifest'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
manifest
delimiter '|';
```
After it is reloaded, the VENUE table looks like this:
```
select * from venue order by venueid limit 5;
venueid | venuename | venuecity | venuestate | venueseats
---------+---------------------------+-------------+------------+-----------
1 | Toyota Park | Bridgeview | IL | 0
2 | Columbus Crew Stadium | Columbus | OH | 0
3 | RFK Stadium | Washington | DC | 0
4 | CommunityAmerica Ballpark | Kansas City | KS | 0
5 | Gillette Stadium | Foxborough | MA | 68756
(5 rows)
```
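For reference, the manifest file that UNLOAD writes is a small JSON document that lists the unloaded data files\. The following sketch shows what `reload_manifest` might contain; the file names are hypothetical:
```
{
  "entries": [
    {"url": "s3://mybucket/tickit/venue/reload_0000_part_00"},
    {"url": "s3://mybucket/tickit/venue/reload_0001_part_00"}
  ]
}
```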
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/t_Reloading_unload_files.md
|
d25e7603105e-0
|
Logs authentication attempts, connections, and disconnections\.
This view is visible only to superusers\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
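For example, a superuser can review recent connection events with a query like the following sketch \(the column list is abbreviated, not exhaustive\):
```
select event, recordtime, remotehost, username
from stl_connection_log
order by recordtime desc
limit 10;
```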
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_CONNECTION_LOG.md
|
405acd445182-0
|
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_CONNECTION_LOG.html)
|
https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_CONNECTION_LOG.md
|