The query
```sql
SELECT a, b, toTypeName(a), toTypeName(b) FROM t_1 FULL JOIN t_2 USING (a, b);
```
returns the set:
```response
┌──a─┬────b─┬─toTypeName(a)─┬─toTypeName(b)───┐
│  1 │    1 │ Int32         │ Nullable(Int64) │
│  2 │    2 │ Int32         │ Nullable(Int64) │
│ -1 │    1 │ Int32         │ Nullable(Int64) │
│  1 │   -1 │ Int32         │ Nullable(Int64) │
└────┴──────┴───────────────┴─────────────────┘
```
## Usage recommendations {#usage-recommendations}

### Processing of empty or NULL cells {#processing-of-empty-or-null-cells}
While joining tables, empty cells may appear. The `join_use_nulls` setting defines how ClickHouse fills these cells.

If the `JOIN` keys are `Nullable` fields, the rows where at least one of the keys has the value `NULL` are not joined.
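To illustrate why `NULL` keys never match, here is a minimal Python sketch of hash-join behaviour, with `None` standing in for `NULL` (an illustration of the documented semantics, not ClickHouse internals):

```python
# Minimal hash-join sketch: None stands in for NULL.
# A NULL key is never equal to anything, including another NULL,
# so such rows never enter or probe the hash table.
left = [(1, "a"), (None, "b")]
right = [(1, "x"), (None, "y")]

index = {}
for key, value in right:
    if key is not None:  # NULL keys are skipped when building the table
        index.setdefault(key, []).append(value)

joined = [(key, lv, rv)
          for key, lv in left if key is not None
          for rv in index.get(key, [])]
print(joined)  # [(1, 'a', 'x')] -- the rows with NULL keys are not joined
```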
### Syntax {#syntax}
The columns specified in `USING` must have the same names in both subqueries, and the other columns must be named differently. You can use aliases to change the names of columns in subqueries.

The `USING` clause specifies one or more columns to join, which establishes the equality of these columns. The list of columns is set without brackets. More complex join conditions are not supported.
### Syntax Limitations {#syntax-limitations}
For multiple `JOIN` clauses in a single `SELECT` query:

- Taking all the columns via `*` is available only if tables are joined, not subqueries.
- The `PREWHERE` clause is not available.
- The `USING` clause is not available.
For `ON`, `WHERE`, and `GROUP BY` clauses:

- Arbitrary expressions cannot be used in `ON`, `WHERE`, and `GROUP BY` clauses, but you can define an expression in a `SELECT` clause and then use it in these clauses via an alias.
### Performance {#performance}
When running a `JOIN`, there is no optimization of the order of execution in relation to other stages of the query. The join (a search in the right table) is run before filtering in `WHERE` and before aggregation.

Each time a query is run with the same `JOIN`, the subquery is run again because the result is not cached. To avoid this, use the special `Join` table engine, which is a prepared array for joining that is always in RAM.

In some cases, it is more efficient to use `IN` instead of `JOIN`.
If you need a `JOIN` for joining with dimension tables (relatively small tables that contain dimension properties, such as names for advertising campaigns), a `JOIN` might not be very convenient because the right table is re-accessed for every query. For such cases, there is a "dictionaries" feature that you should use instead of `JOIN`. For more information, see the Dictionaries section.
## Memory limitations {#memory-limitations}
By default, ClickHouse uses the hash join algorithm: ClickHouse takes the right table and creates a hash table for it in RAM. If `join_algorithm = 'auto'` is enabled, then after some threshold of memory consumption ClickHouse falls back to the merge join algorithm. For a description of `JOIN` algorithms, see the `join_algorithm` setting.
If you need to restrict `JOIN` operation memory consumption, use the following settings:

- `max_rows_in_join` — Limits the number of rows in the hash table.
- `max_bytes_in_join` — Limits the size of the hash table.

When any of these limits is reached, ClickHouse acts as the `join_overflow_mode` setting instructs.
## Examples {#examples}

Example:
```sql
SELECT
    CounterID,
    hits,
    visits
FROM
(
    SELECT
        CounterID,
        count() AS hits
    FROM test.hits
    GROUP BY CounterID
) ANY LEFT JOIN
(
    SELECT
        CounterID,
        sum(Sign) AS visits
    FROM test.visits
    GROUP BY CounterID
) USING CounterID
ORDER BY hits DESC
LIMIT 10
```
```text
┌─CounterID─┬───hits─┬─visits─┐
│   1143050 │ 523264 │  13665 │
│    731962 │ 475698 │ 102716 │
│    722545 │ 337212 │ 108187 │
│    722889 │ 252197 │  10547 │
│   2237260 │ 196036 │   9522 │
│  23057320 │ 147211 │   7689 │
│    722818 │  90109 │  17847 │
│     48221 │  85379 │   4652 │
│  19762435 │  77807 │   7026 │
│    722884 │  77492 │  11056 │
└───────────┴────────┴────────┘
```
## Related content {#related-content}
- Blog: ClickHouse: A Blazingly Fast DBMS with Full SQL Join Support - Part 1
- Blog: ClickHouse: A Blazingly Fast DBMS with Full SQL Join Support - Under the Hood - Part 2
- Blog: ClickHouse: A Blazingly Fast DBMS with Full SQL Join Support - Under the Hood - Part 3
- Blog: ClickHouse: A Blazingly Fast DBMS with Full SQL Join Support - Under the Hood - Part 4
---
description: 'Documentation for Offset'
sidebar_label: 'OFFSET'
slug: /sql-reference/statements/select/offset
title: 'OFFSET FETCH Clause'
doc_type: 'reference'
---
`OFFSET` and `FETCH` allow you to retrieve data by portions. They specify a row block which you want to get by a single query.
```sql
[OFFSET offset_row_count {ROW | ROWS}] [FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}]
```
The `offset_row_count` or `fetch_row_count` value can be a number or a literal constant. You can omit `fetch_row_count`; by default, it equals 1.

`OFFSET` specifies the number of rows to skip before starting to return rows from the query result set. `OFFSET n` skips the first `n` rows from the result.

Negative `OFFSET` is supported: `OFFSET -n` skips the last `n` rows from the result.
Fractional `OFFSET` is also supported: `OFFSET n` - if 0 < n < 1, then the first n * 100% of the result is skipped.

Example:

- `OFFSET 0.1` - skips the first 10% of the result.

Note

- The fraction must be a `Float64` number less than 1 and greater than zero.
- If a fractional number of rows results from the calculation, it is rounded up to the next whole number.
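The rounding rule can be sketched in Python; this is an emulation of the documented behaviour, not ClickHouse code:

```python
import math

def fractional_offset(rows, n):
    # OFFSET n with 0 < n < 1: skip the first n * 100% of rows,
    # rounding the row count up to the next whole number.
    skip = math.ceil(n * len(rows))
    return rows[skip:]

rows = list(range(1, 11))             # 10 rows
print(fractional_offset(rows, 0.1))   # skips ceil(1.0) = 1 row
print(fractional_offset(rows, 0.25))  # skips ceil(2.5) = 3 rows
```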
`FETCH` specifies the maximum number of rows that can be in the result of a query.

The `ONLY` option is used to return rows that immediately follow the rows omitted by the `OFFSET`. In this case, `FETCH` is an alternative to the `LIMIT` clause. For example, the following query
```sql
SELECT * FROM test_fetch ORDER BY a OFFSET 1 ROW FETCH FIRST 3 ROWS ONLY;
```
is identical to the query
```sql
SELECT * FROM test_fetch ORDER BY a LIMIT 3 OFFSET 1;
```
The `WITH TIES` option is used to return any additional rows that tie for the last place in the result set according to the `ORDER BY` clause. For example, if `fetch_row_count` is set to 5 but two additional rows match the values of the `ORDER BY` columns in the fifth row, the result set will contain seven rows.
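A small Python emulation of the `WITH TIES` rule (a sketch of the documented semantics; `fetch_with_ties` is a hypothetical helper, not a ClickHouse API):

```python
def fetch_with_ties(rows, key, n):
    # Keep the first n rows after sorting, plus any following rows
    # whose ORDER BY key equals that of the n-th row.
    ordered = sorted(rows, key=key)
    if n >= len(ordered):
        return ordered
    boundary = key(ordered[n - 1])
    result = ordered[:n]
    for row in ordered[n:]:
        if key(row) != boundary:
            break
        result.append(row)
    return result

# Same (a, b) data as the input table below; column a is the sort key.
rows = [(1, 1), (2, 1), (3, 4), (1, 3), (5, 4), (0, 6), (5, 7)]
print(len(fetch_with_ties(rows, key=lambda r: r[0], n=6)))  # 7: row 7 ties on a = 5
```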
:::note
According to the standard, the `OFFSET` clause must come before the `FETCH` clause if both are present.
:::

:::note
The real offset can also depend on the `offset` setting.
:::
## Examples {#examples}
Input table:
```text
┌─a─┬─b─┐
│ 1 │ 1 │
│ 2 │ 1 │
│ 3 │ 4 │
│ 1 │ 3 │
│ 5 │ 4 │
│ 0 │ 6 │
│ 5 │ 7 │
└───┴───┘
```
Usage of the `ONLY` option:

```sql
SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS ONLY;
```
Result:
```text
┌─a─┬─b─┐
│ 2 │ 1 │
│ 3 │ 4 │
│ 5 │ 4 │
└───┴───┘
```
Usage of the `WITH TIES` option:

```sql
SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS WITH TIES;
```
Result:
```text
┌─a─┬─b─┐
│ 2 │ 1 │
│ 3 │ 4 │
│ 5 │ 4 │
│ 5 │ 7 │
└───┴───┘
```
---
description: 'Documentation for ALL Clause'
sidebar_label: 'ALL'
slug: /sql-reference/statements/select/all
title: 'ALL Clause'
doc_type: 'reference'
---
# ALL Clause
If there are multiple matching rows in a table, then `ALL` returns all of them. `SELECT ALL` is identical to `SELECT` without `DISTINCT`. If both `ALL` and `DISTINCT` are specified, an exception will be thrown.

`ALL` can be specified inside aggregate functions, although it has no practical effect on the query's result.
For example:
```sql
SELECT sum(ALL number) FROM numbers(10);
```
Is equivalent to:
```sql
SELECT sum(number) FROM numbers(10);
```
---
description: 'Documentation for LIMIT Clause'
sidebar_label: 'LIMIT'
slug: /sql-reference/statements/select/limit
title: 'LIMIT Clause'
doc_type: 'reference'
---
# LIMIT Clause
`LIMIT m` allows selecting the first `m` rows from the result.

`LIMIT n, m` allows selecting `m` rows from the result after skipping the first `n` rows. The `LIMIT m OFFSET n` syntax is equivalent.

In the standard forms above, `n` and `m` are non-negative integers.
Additionally, negative limits are supported:

- `LIMIT -m` selects the last `m` rows from the result.
- `LIMIT -m OFFSET -n` selects the last `m` rows after skipping the last `n` rows. The `LIMIT -n, -m` syntax is equivalent.

Moreover, selecting a fraction of the result is also supported:

- `LIMIT m` - if 0 < m < 1, then the first m * 100% of rows are returned.
- `LIMIT m OFFSET n` - if 0 < m < 1 and 0 < n < 1, then the first m * 100% of the result is returned after skipping the first n * 100% of rows. The `LIMIT n, m` syntax is equivalent.
Examples:

- `LIMIT 0.1` - selects the first 10% of the result.
- `LIMIT 1 OFFSET 0.5` - selects the median row.
- `LIMIT 0.25 OFFSET 0.5` - selects the 3rd quartile of the result.

Note

- The fraction must be a `Float64` number less than 1 and greater than zero.
- If a fractional number of rows results from the calculation, it is rounded up to the next whole number.

Note

- You can combine a standard limit with a fractional offset and vice versa.
- You can combine a standard limit with a negative offset and vice versa.
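As a sketch of how fractions translate to row counts (an emulation of the documented rounding rules, not ClickHouse code):

```python
import math

def fractional_limit(rows, m, n=0):
    # LIMIT m OFFSET n where fractions in (0, 1) mean a share of the
    # result, rounded up to the next whole number of rows.
    total = len(rows)
    skip = math.ceil(n * total) if 0 < n < 1 else int(n)
    take = math.ceil(m * total) if 0 < m < 1 else int(m)
    return rows[skip:skip + take]

rows = list(range(1, 11))                 # 10 rows
print(fractional_limit(rows, 0.1))        # first 10% -> [1]
print(fractional_limit(rows, 0.25, 0.5))  # skip 50%, take ceil(2.5) rows -> [6, 7, 8]
```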
If there is no `ORDER BY` clause that explicitly sorts results, the choice of rows for the result may be arbitrary and non-deterministic.

:::note
The number of rows in the result set can also depend on the `limit` setting.
:::
## LIMIT ... WITH TIES Modifier {#limit--with-ties-modifier}
When you set the `WITH TIES` modifier for `LIMIT n[,m]` and specify `ORDER BY expr_list`, the result contains the first `n` (or `n,m`) rows plus all additional rows whose `ORDER BY` field values equal those of the row at position `n` for `LIMIT n`, or `m` for `LIMIT n,m`.

Note

- `WITH TIES` is currently not supported with a negative `LIMIT`.

This modifier can also be combined with the `ORDER BY ... WITH FILL` modifier.
For example, the following query
```sql
SELECT * FROM (
    SELECT number%50 AS n FROM numbers(100)
) ORDER BY n LIMIT 0, 5
```
returns
```text
┌─n─┐
│ 0 │
│ 0 │
│ 1 │
│ 1 │
│ 2 │
└───┘
```
but after applying the `WITH TIES` modifier

```sql
SELECT * FROM (
    SELECT number%50 AS n FROM numbers(100)
) ORDER BY n LIMIT 0, 5 WITH TIES
```
it returns a different set of rows

```text
┌─n─┐
│ 0 │
│ 0 │
│ 1 │
│ 1 │
│ 2 │
│ 2 │
└───┘
```
because row number 6 has the same value "2" for field `n` as row number 5.
---
description: 'Documentation for INTO OUTFILE Clause'
sidebar_label: 'INTO OUTFILE'
slug: /sql-reference/statements/select/into-outfile
title: 'INTO OUTFILE Clause'
doc_type: 'reference'
---
# INTO OUTFILE Clause
The `INTO OUTFILE` clause redirects the result of a `SELECT` query to a file on the **client** side.

Compressed files are supported. The compression type is detected from the extension of the file name (mode `'auto'` is used by default), or it can be explicitly specified in a `COMPRESSION` clause. The compression level for a certain compression type can be specified in a `LEVEL` clause.
Syntax

```sql
SELECT <expr_list> INTO OUTFILE file_name [AND STDOUT] [APPEND | TRUNCATE] [COMPRESSION type [LEVEL level]]
```
`file_name` and `type` are string literals. Supported compression types are: `'none'`, `'gzip'`, `'deflate'`, `'br'`, `'xz'`, `'zstd'`, `'lz4'`, `'bz2'`.

`level` is a numeric literal. Positive integers in the following ranges are supported: `1-12` for the `lz4` type, `1-22` for the `zstd` type, and `1-9` for other compression types.
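The level ranges can be summarised in a short validation sketch (the helper and table names here are illustrative, not part of ClickHouse):

```python
# Supported LEVEL ranges per compression type, as listed above.
LEVEL_RANGES = {"lz4": range(1, 13), "zstd": range(1, 23)}
DEFAULT_RANGE = range(1, 10)  # 1-9 for the other compression types

def is_valid_level(compression_type, level):
    return level in LEVEL_RANGES.get(compression_type, DEFAULT_RANGE)

print(is_valid_level("zstd", 22))  # True
print(is_valid_level("gzip", 10))  # False: gzip levels stop at 9
```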
## Implementation Details {#implementation-details}
This functionality is available in the command-line client and clickhouse-local. Thus, a query sent via the HTTP interface will fail.
The query will fail if a file with the same file name already exists.
The default output format is `TabSeparated` (as in the command-line client batch mode). Use the `FORMAT` clause to change it.

If `AND STDOUT` is mentioned in the query, the output that is written to the file is also displayed on standard output. If used with compression, the plaintext is displayed on standard output.

If `APPEND` is mentioned in the query, the output is appended to an existing file. If compression is used, append cannot be used.

When writing to a file that already exists, `APPEND` or `TRUNCATE` must be used.
Example

Execute the following query using the command-line client:

```bash
clickhouse-client --query="SELECT 1,'ABC' INTO OUTFILE 'select.gz' FORMAT CSV;"
zcat select.gz
```
Result:
```text
1,"ABC"
```
---
description: 'Documentation for the WHERE clause in ClickHouse'
sidebar_label: 'WHERE'
slug: /sql-reference/statements/select/where
title: 'WHERE clause'
doc_type: 'reference'
keywords: ['WHERE']
---
# WHERE clause
The `WHERE` clause allows you to filter the data that comes from the `FROM` clause of `SELECT`.

If there is a `WHERE` clause, it must be followed by an expression of type `UInt8`. Rows where this expression evaluates to `0` are excluded from further transformations or the result.

The expression following the `WHERE` clause is often used with comparison and logical operators, or one of the many regular functions.

The `WHERE` expression is analyzed for the ability to use indexes and partition pruning, if the underlying table engine supports that.
:::note PREWHERE
There is also a filtering optimization called `PREWHERE`. Prewhere applies filtering more efficiently. It is enabled by default even if the `PREWHERE` clause is not specified explicitly.
:::
## Testing for NULL {#testing-for-null}

If you need to test a value for `NULL`, use:

- `IS NULL` or `isNull`
- `IS NOT NULL` or `isNotNull`

An expression with `NULL` will otherwise never pass.
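The behaviour can be illustrated in Python, with `None` standing in for `NULL` (an analogy for the documented semantics, not ClickHouse internals):

```python
# Rows of (x, y) where y may be NULL, as in the t_null example below.
rows = [(1, None), (2, 3)]

# WHERE y != 0: a comparison with NULL never passes, so (1, NULL) is dropped.
print([row for row in rows if row[1] is not None and row[1] != 0])  # [(2, 3)]

# WHERE y IS NULL: an explicit NULL test is needed to match such rows.
print([row for row in rows if row[1] is None])  # [(1, None)]
```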
## Filtering data with logical operators {#filtering-data-with-logical-operators}

You can use the following logical functions together with the `WHERE` clause to combine multiple conditions:

- `and()` or `AND`
- `or()` or `OR`
- `not()` or `NOT`
- `xor()`
## Using UInt8 columns as a condition {#using-uint8-columns-as-a-condition}

In ClickHouse, `UInt8` columns can be used directly as boolean conditions, where `0` is `false` and any non-zero value (typically `1`) is `true`.

An example of this is given in the section below.
## Using comparison operators {#using-comparison-operators}

The following comparison operators can be used:
| Operator | Function | Description | Example |
|----------|----------|-------------|---------|
| `a = b` | `equals(a, b)` | Equal to | `price = 100` |
| `a == b` | `equals(a, b)` | Equal to (alternative syntax) | `price == 100` |
| `a != b` | `notEquals(a, b)` | Not equal to | `category != 'Electronics'` |
| `a <> b` | `notEquals(a, b)` | Not equal to (alternative syntax) | `category <> 'Electronics'` |
| `a < b` | `less(a, b)` | Less than | `price < 200` |
| `a <= b` | `lessOrEquals(a, b)` | Less than or equal to | `price <= 200` |
| `a > b` | `greater(a, b)` | Greater than | `price > 500` |
| `a >= b` | `greaterOrEquals(a, b)` | Greater than or equal to | `price >= 500` |
| `a LIKE s` | `like(a, b)` | Pattern matching (case-sensitive) | `name LIKE '%top%'` |
| `a NOT LIKE s` | `notLike(a, b)` | Pattern not matching (case-sensitive) | `name NOT LIKE '%top%'` |
| `a ILIKE s` | `ilike(a, b)` | Pattern matching (case-insensitive) | `name ILIKE '%LAPTOP%'` |
| `a BETWEEN b AND c` | `a >= b AND a <= c` | Range check (inclusive) | `price BETWEEN 100 AND 500` |
| `a NOT BETWEEN b AND c` | `a < b OR a > c` | Outside range check | `price NOT BETWEEN 100 AND 500` |
## Pattern matching and conditional expressions {#pattern-matching-and-conditional-expressions}

Beyond comparison operators, you can use pattern matching and conditional expressions in the `WHERE` clause.
| Feature | Syntax | Case-Sensitive | Performance | Best For |
| ----------- | ------------------------------ | -------------- | ----------- | ------------------------------ |
| `LIKE` | `col LIKE '%pattern%'` | Yes | Fast | Exact case pattern matching |
| `ILIKE` | `col ILIKE '%pattern%'` | No | Slower | Case-insensitive searching |
| `if()` | `if(cond, a, b)` | N/A | Fast | Simple binary conditions |
| `multiIf()` | `multiIf(c1, r1, c2, r2, def)` | N/A | Fast | Multiple conditions |
| `CASE` | `CASE WHEN ... THEN ... END` | N/A | Fast | SQL-standard conditional logic |

See "Pattern matching and conditional expressions" for usage examples.
## Expressions with literals, columns or subqueries {#expressions-with-literals-columns-subqueries}

The expression following the `WHERE` clause can also include literals, columns or subqueries, which are nested `SELECT` statements that return values used in conditions.
| Type | Definition | Evaluation | Performance | Example |
|------|------------|------------|-------------|---------|
| Literal | Fixed constant value | Query write time | Fastest | `WHERE price > 100` |
| Column | Table data reference | Per row | Fast | `WHERE price > cost` |
| Subquery | Nested SELECT | Query execution time | Varies | `WHERE id IN (SELECT ...)` |
You can mix literals, columns, and subqueries in complex conditions:
```sql
-- Literal + Column
WHERE price > 100 AND category = 'Electronics'
-- Column + Subquery
WHERE price > (SELECT AVG(price) FROM products) AND in_stock = true
-- Literal + Column + Subquery
WHERE category = 'Electronics'
AND price < 500
AND id IN (SELECT product_id FROM bestsellers)
-- All three with logical operators
WHERE (price > 100 OR category IN (SELECT category FROM featured))
AND in_stock = true
AND name LIKE '%Special%'
```
## Examples {#examples}

### Testing for NULL {#examples-testing-for-null}

Queries with `NULL` values:
```sql
CREATE TABLE t_null(x Int8, y Nullable(Int8)) ENGINE=MergeTree() ORDER BY x;
INSERT INTO t_null VALUES (1, NULL), (2, 3);
SELECT * FROM t_null WHERE y IS NULL;
SELECT * FROM t_null WHERE y != 0;
```
```response
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘
┌─x─┬─y─┐
│ 2 │ 3 │
└───┴───┘
```
### Filtering data with logical operators {#example-filtering-with-logical-operators}
Given the following table and data:
```sql
CREATE TABLE products (
id UInt32,
name String,
price Float32,
category String,
in_stock Bool
) ENGINE = MergeTree()
ORDER BY id;
INSERT INTO products VALUES
(1, 'Laptop', 999.99, 'Electronics', true),
(2, 'Mouse', 25.50, 'Electronics', true),
(3, 'Desk', 299.00, 'Furniture', false),
(4, 'Chair', 150.00, 'Furniture', true),
(5, 'Monitor', 350.00, 'Electronics', true),
(6, 'Lamp', 45.00, 'Furniture', false);
```
1. `AND` - both conditions must be true:

```sql
SELECT * FROM products
WHERE category = 'Electronics' AND price < 500;
```

```response
┌─id─┬─name────┬─price─┬─category────┬─in_stock─┐
1. │  2 │ Mouse   │  25.5 │ Electronics │ true     │
2. │  5 │ Monitor │   350 │ Electronics │ true     │
   └────┴─────────┴───────┴─────────────┴──────────┘
```
2. `OR` - at least one condition must be true:

```sql
SELECT * FROM products
WHERE category = 'Furniture' OR price > 500;
```

```response
┌─id─┬─name───┬──price─┬─category────┬─in_stock─┐
1. │  1 │ Laptop │ 999.99 │ Electronics │ true     │
2. │  3 │ Desk   │    299 │ Furniture   │ false    │
3. │  4 │ Chair  │    150 │ Furniture   │ true     │
4. │  6 │ Lamp   │     45 │ Furniture   │ false    │
   └────┴────────┴────────┴─────────────┴──────────┘
```
3. `NOT` - negates a condition:

```sql
SELECT * FROM products
WHERE NOT in_stock;
```
```response
┌─id─┬─name─┬─price─┬─category──┬─in_stock─┐
1. │  3 │ Desk │   299 │ Furniture │ false    │
2. │  6 │ Lamp │    45 │ Furniture │ false    │
   └────┴──────┴───────┴───────────┴──────────┘
```
4. `XOR` - exactly one condition must be true (not both):

```sql
SELECT *
FROM products
WHERE xor(price > 200, category = 'Electronics')
```

```response
┌─id─┬─name──┬─price─┬─category────┬─in_stock─┐
1. │  2 │ Mouse │  25.5 │ Electronics │ true     │
2. │  3 │ Desk  │   299 │ Furniture   │ false    │
   └────┴───────┴───────┴─────────────┴──────────┘
```
5. Combining multiple operators:

```sql
SELECT * FROM products
WHERE (category = 'Electronics' OR category = 'Furniture')
  AND in_stock = true
  AND price < 400;
```

```response
┌─id─┬─name────┬─price─┬─category────┬─in_stock─┐
1. │  2 │ Mouse   │  25.5 │ Electronics │ true     │
2. │  4 │ Chair   │   150 │ Furniture   │ true     │
3. │  5 │ Monitor │   350 │ Electronics │ true     │
   └────┴─────────┴───────┴─────────────┴──────────┘
```
6. Using function syntax:

```sql
SELECT * FROM products
WHERE and(or(category = 'Electronics', price > 100), in_stock);
```

```response
┌─id─┬─name────┬──price─┬─category────┬─in_stock─┐
1. │  1 │ Laptop  │ 999.99 │ Electronics │ true     │
2. │  2 │ Mouse   │   25.5 │ Electronics │ true     │
3. │  4 │ Chair   │    150 │ Furniture   │ true     │
4. │  5 │ Monitor │    350 │ Electronics │ true     │
   └────┴─────────┴────────┴─────────────┴──────────┘
```
The SQL keyword syntax (`AND`, `OR`, `NOT`, `XOR`) is generally more readable, but the function syntax can be useful in complex expressions or when building dynamic queries.
### Using UInt8 columns as a condition {#example-uint8-column-as-condition}

Taking the table from the previous example, you can use a column name directly as a condition:

```sql
SELECT * FROM products
WHERE in_stock
```
```response
┌─id─┬─name────┬──price─┬─category────┬─in_stock─┐
1. │  1 │ Laptop  │ 999.99 │ Electronics │ true     │
2. │  2 │ Mouse   │   25.5 │ Electronics │ true     │
3. │  4 │ Chair   │    150 │ Furniture   │ true     │
4. │  5 │ Monitor │    350 │ Electronics │ true     │
   └────┴─────────┴────────┴─────────────┴──────────┘
```
### Using comparison operators {#example-using-comparison-operators}

The examples below use the table and data from the example above. Results are omitted for the sake of brevity.
1. Explicit equality with true (`= 1` or `= true`):

```sql
SELECT * FROM products
WHERE in_stock = true;
-- or
WHERE in_stock = 1;
```
2. Explicit equality with false (`= 0` or `= false`):

```sql
SELECT * FROM products
WHERE in_stock = false;
-- or
WHERE in_stock = 0;
```
3. Inequality (`!= 0` or `!= false`):

```sql
SELECT * FROM products
WHERE in_stock != false;
-- or
WHERE in_stock != 0;
```
4. Greater than:

```sql
SELECT * FROM products
WHERE in_stock > 0;
```

5. Less than or equal:

```sql
SELECT * FROM products
WHERE in_stock <= 0;
```
6. Combining with other conditions:

```sql
SELECT * FROM products
WHERE in_stock AND price < 400;
```
7. Using the `IN` operator:

In the example below, `(1, true)` is a tuple.

```sql
SELECT * FROM products
WHERE in_stock IN (1, true);
```

You can also use an array to do this:

```sql
SELECT * FROM products
WHERE in_stock IN [1, true];
```
8. Mixing comparison styles:

```sql
SELECT * FROM products
WHERE category = 'Electronics' AND in_stock = true;
```
### Pattern matching and conditional expressions {#examples-pattern-matching-and-conditional-expressions}

The examples below use the table and data from the example above. Results are omitted for the sake of brevity.
#### LIKE examples {#like-examples}
```sql
-- Find products with 'o' in the name
SELECT * FROM products WHERE name LIKE '%o%';
-- Result: Laptop, Monitor
-- Find products starting with 'L'
SELECT * FROM products WHERE name LIKE 'L%';
-- Result: Laptop, Lamp
-- Find products with exactly 4 characters
SELECT * FROM products WHERE name LIKE '____';
-- Result: Desk, Lamp
```
#### ILIKE examples {#ilike-examples}
```sql
-- Case-insensitive search for 'LAPTOP'
SELECT * FROM products WHERE name ILIKE '%laptop%';
-- Result: Laptop
-- Case-insensitive prefix match
SELECT * FROM products WHERE name ILIKE 'l%';
-- Result: Laptop, Lamp
```
#### IF examples {#if-examples}
```sql
-- Different price thresholds by category
SELECT * FROM products
WHERE if(category = 'Electronics', price < 500, price < 200);
-- Result: Mouse, Chair, Monitor
-- (Electronics under $500 OR Furniture under $200)
-- Filter based on stock status
SELECT * FROM products
WHERE if(in_stock, price > 100, true);
-- Result: Laptop, Chair, Monitor, Desk, Lamp
-- (In stock items over $100 OR all out-of-stock items)
```
#### multiIf examples {#multiif-examples}
```sql
-- Multiple category-based conditions
SELECT * FROM products
WHERE multiIf(
category = 'Electronics', price < 600,
category = 'Furniture', in_stock = true,
false
);
-- Result: Mouse, Monitor, Chair
-- (Electronics < $600 OR in-stock Furniture)
-- Tiered filtering
SELECT * FROM products
WHERE multiIf(
price > 500, category = 'Electronics',
price > 100, in_stock = true,
true
);
-- Result: Laptop, Chair, Monitor, Lamp
```
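The evaluation order of `multiIf` can be sketched in Python (`multi_if` here is a hypothetical emulation of the documented condition/result pairing, not the ClickHouse function itself, and it evaluates its arguments eagerly):

```python
def multi_if(*args):
    # multiIf(cond1, result1, cond2, result2, ..., default):
    # conditions are checked in order; the first true one wins.
    *pairs, default = args
    for cond, result in zip(pairs[::2], pairs[1::2]):
        if cond:
            return result
    return default

# Mirrors the first query above: Electronics rows use price < 600,
# Furniture rows use in_stock, everything else is filtered out.
row = {"category": "Electronics", "price": 550, "in_stock": True}
keep = multi_if(row["category"] == "Electronics", row["price"] < 600,
                row["category"] == "Furniture", row["in_stock"],
                False)
print(keep)  # True
```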
#### CASE examples {#case-examples}

Simple CASE:

```sql
-- Different rules per category
SELECT * FROM products
WHERE CASE category
    WHEN 'Electronics' THEN price < 400
    WHEN 'Furniture' THEN in_stock = true
    ELSE false
END;
-- Result: Mouse, Monitor, Chair
```
Searched CASE:

```sql
-- Price-based tiered logic
SELECT * FROM products
WHERE CASE
    WHEN price > 500 THEN in_stock = true
    WHEN price > 100 THEN category = 'Electronics'
    ELSE true
END;
-- Result: Laptop, Monitor, Mouse, Lamp
```
---
description: 'Documentation for CREATE VIEW'
sidebar_label: 'VIEW'
sidebar_position: 37
slug: /sql-reference/statements/create/view
title: 'CREATE VIEW'
doc_type: 'reference'
---
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import DeprecatedBadge from '@theme/badges/DeprecatedBadge';
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
# CREATE VIEW

Creates a new view. Views can be normal, materialized, refreshable materialized, or window views.
## Normal View {#normal-view}

Syntax:

```sql
CREATE [OR REPLACE] VIEW [IF NOT EXISTS] [db.]table_name [(alias1 [, alias2 ...])] [ON CLUSTER cluster_name]
[DEFINER = { user | CURRENT_USER }] [SQL SECURITY { DEFINER | INVOKER | NONE }]
AS SELECT ...
[COMMENT 'comment']
```
Normal views do not store any data. They just perform a read from another table on each access. In other words, a normal view is nothing more than a saved query. When reading from a view, this saved query is used as a subquery in the `FROM` clause.
As an example, assume you've created a view:

```sql
CREATE VIEW view AS SELECT ...
```

and written a query:

```sql
SELECT a, b, c FROM view
```

This query is fully equivalent to using the subquery:

```sql
SELECT a, b, c FROM (SELECT ...)
```
## Parameterized View {#parameterized-view}

Parameterized views are similar to normal views but can be created with parameters which are not resolved immediately. These views can be used with table functions, which specify the name of the view as the function name and the parameter values as its arguments.

```sql
CREATE VIEW view AS SELECT * FROM TABLE WHERE Column1={column1:datatype1} and Column2={column2:datatype2} ...
```

The above creates a view for a table which can be used as a table function by substituting parameters as shown below.

```sql
SELECT * FROM view(column1=value1, column2=value2 ...)
```
## Materialized View {#materialized-view}

```sql
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster_name] [TO [db.]name [(columns)]] [ENGINE = engine] [POPULATE]
[DEFINER = { user | CURRENT_USER }] [SQL SECURITY { DEFINER | NONE }]
AS SELECT ...
[COMMENT 'comment']
```
:::tip
Here is a step-by-step guide on using Materialized views.
:::
Materialized views store data transformed by the corresponding `SELECT` query.

When creating a materialized view without `TO [db].[table]`, you must specify `ENGINE` – the table engine for storing data.

When creating a materialized view with `TO [db].[table]`, you can't also use `POPULATE`.

A materialized view is implemented as follows: when inserting data to the table specified in `SELECT`, part of the inserted data is converted by this `SELECT` query, and the result is inserted in the view.
:::note
Materialized views in ClickHouse use **column names** instead of column order during insertion into the destination table. If some column names are not present in the `SELECT` query result, ClickHouse uses a default value, even if the column is not `Nullable`. A safe practice is to add aliases for every column when using materialized views.
Materialized views in ClickHouse are implemented more like insert triggers. If there's some aggregation in the view query, it's applied only to the batch of freshly inserted data. Any changes to the existing data of the source table (like update, delete, drop partition, etc.) do not change the materialized view.

Materialized views in ClickHouse do not have deterministic behaviour in case of errors. This means that blocks that had already been written will be preserved in the destination table, but all blocks after the error will not.
By default, if pushing to one of the views fails, then the `INSERT` query fails too, and some blocks may not be written to the destination table. This can be changed using the `materialized_views_ignore_errors` setting (set it for the `INSERT` query): if you set `materialized_views_ignore_errors=true`, then any errors while pushing to views are ignored and all blocks are written to the destination table.
Also note that `materialized_views_ignore_errors` is set to `true` by default for `system.*_log` tables.
:::
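For example, the `materialized_views_ignore_errors` setting described above can be passed for a single insert (the table name is illustrative):

```sql
INSERT INTO source_table
SETTINGS materialized_views_ignore_errors = 1
VALUES (1, 'a'), (2, 'b');
```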
If you specify `POPULATE`, the existing table data is inserted into the view when creating it, as if making a `CREATE TABLE ... AS SELECT ...`. Otherwise, the query contains only the data inserted in the table after creating the view. We **do not recommend** using `POPULATE`, since data inserted in the table during the view creation will not be inserted in it.
:::note
Given that `POPULATE` works like `CREATE TABLE ... AS SELECT ...` it has limitations:
- It is not supported with Replicated database
- It is not supported in ClickHouse Cloud
Instead, a separate `INSERT ... SELECT` can be used.
:::
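The alternative the note recommends can be sketched as follows (names are illustrative): create the view with `TO`, then backfill pre-existing rows with a separate `INSERT ... SELECT`:

```sql
CREATE MATERIALIZED VIEW mv TO dst
AS SELECT key, count() AS c FROM src GROUP BY key;
-- Backfill rows that existed before the view was created.
INSERT INTO dst SELECT key, count() AS c FROM src GROUP BY key;
```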
A `SELECT` query can contain `DISTINCT`, `GROUP BY`, `ORDER BY`, `LIMIT`. Note that the corresponding conversions are performed independently on each block of inserted data. For example, if `GROUP BY` is set, data is aggregated during insertion, but only within a single packet of inserted data. The data won't be further aggregated. The exception is when using an `ENGINE` that independently performs data aggregation, such as `SummingMergeTree`.
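For instance, a target table using `SummingMergeTree` (an illustrative setup, not from the original) merges the partial per-block aggregates in the background:

```sql
CREATE TABLE totals (key UInt32, c UInt64) ENGINE = SummingMergeTree ORDER BY key;
CREATE MATERIALIZED VIEW mv_totals TO totals
AS SELECT key, count() AS c FROM src GROUP BY key;
-- Read with explicit aggregation, since background merges
-- happen at an unspecified time:
SELECT key, sum(c) FROM totals GROUP BY key;
```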
The execution of `ALTER` queries on materialized views has limitations, for example, you can not update the `SELECT` query, so this might be inconvenient. If the materialized view uses the construction `TO [db.]name`, you can `DETACH` the view, run `ALTER` for the target table, and then `ATTACH` the previously detached (`DETACH`) view.
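That workaround might look like this (view and table names are illustrative):

```sql
DETACH TABLE mv;                                  -- stop the insert trigger
ALTER TABLE dst ADD COLUMN extra String DEFAULT ''; -- change the target table
ATTACH TABLE mv;                                  -- re-attach the view
```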
Note that materialized view is influenced by
optimize_on_insert
setting. The data is merged before the insertion into a view.
Views look the same as normal tables. For example, they are listed in the result of the
SHOW TABLES
query.
To delete a view, use `DROP VIEW`, although `DROP TABLE` works for views as well.
SQL security {#sql_security}
DEFINER
and
SQL SECURITY
allow you to specify which ClickHouse user to use when executing the view's underlying query.
SQL SECURITY
has three legal values:
DEFINER
,
INVOKER
, or
NONE
. You can specify any existing user or
CURRENT_USER
in the
DEFINER
clause.
The following table explains which rights are required for each user in order to select from a view.
Note that regardless of the SQL security option, in every case it is still required to have
GRANT SELECT ON <view>
in order to read from it.
| SQL security option | View | Materialized View |
|---------------------|------|-------------------|
| `DEFINER alice` | `alice` must have a `SELECT` grant for the view's source table. | `alice` must have a `SELECT` grant for the view's source table and an `INSERT` grant for the view's target table. |
| `INVOKER` | User must have a `SELECT` grant for the view's source table. | `SQL SECURITY INVOKER` can't be specified for materialized views. |
| `NONE` | - | - |
:::note
SQL SECURITY NONE
is a deprecated option. Any user with the rights to create views with
SQL SECURITY NONE
will be able to execute any arbitrary query.
Thus, it is required to have
GRANT ALLOW SQL SECURITY NONE TO <user>
in order to create a view with this option.
:::
If
DEFINER
/
SQL SECURITY
aren't specified, the default values are used:
-
SQL SECURITY
:
INVOKER
for normal views and
DEFINER
for materialized views (
configurable by settings
)
-
DEFINER
:
CURRENT_USER
(
configurable by settings
)
If a view is attached without
DEFINER
/
SQL SECURITY
specified, the default value is
SQL SECURITY NONE
for the materialized view and
SQL SECURITY INVOKER
for the normal view.
To change SQL security for an existing view, use:
```sql
ALTER TABLE MODIFY SQL SECURITY { DEFINER | INVOKER | NONE } [DEFINER = { user | CURRENT_USER }]
```
Examples {#examples}
```sql
CREATE VIEW test_view
DEFINER = alice SQL SECURITY DEFINER
AS SELECT ...
```

```sql
CREATE VIEW test_view
SQL SECURITY INVOKER
AS SELECT ...
```
Live View {#live-view}
This feature is deprecated and will be removed in the future.
For your convenience, the old documentation is located
here
Refreshable Materialized View {#refreshable-materialized-view}
```sql
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
REFRESH EVERY|AFTER interval [OFFSET interval]
[RANDOMIZE FOR interval]
[DEPENDS ON [db.]name [, [db.]name [, ...]]]
[SETTINGS name = value [, name = value [, ...]]]
[APPEND]
[TO[db.]name] [(columns)] [ENGINE = engine]
[EMPTY]
[DEFINER = { user | CURRENT_USER }] [SQL SECURITY { DEFINER | NONE }]
AS SELECT ...
[COMMENT 'comment']
```
where
interval
is a sequence of simple intervals:
```sql
number SECOND|MINUTE|HOUR|DAY|WEEK|MONTH|YEAR
```
Periodically runs the corresponding query and stores its result in a table.
* If the query says
APPEND
, each refresh inserts rows into the table without deleting existing rows. The insert is not atomic, just like a regular INSERT SELECT.
* Otherwise each refresh atomically replaces the table's previous contents.
Differences from regular non-refreshable materialized views:
* No insert trigger. I.e. when new data is inserted into the table specified in SELECT, it's *not* automatically pushed to the refreshable materialized view. The periodic refresh runs the entire query.
* No restrictions on the SELECT query. Table functions (e.g. `url()`), views, UNION, JOIN, are all allowed.
:::note
The settings in the
REFRESH ... SETTINGS
part of the query are refresh settings (e.g.
refresh_retries
), distinct from regular settings (e.g.
max_threads
). Regular settings can be specified using
SETTINGS
at the end of the query.
:::
Refresh Schedule {#refresh-schedule}
Example refresh schedules:
```sql
REFRESH EVERY 1 DAY -- every day, at midnight (UTC)
REFRESH EVERY 1 MONTH -- on 1st day of every month, at midnight
REFRESH EVERY 1 MONTH OFFSET 5 DAY 2 HOUR -- on 6th day of every month, at 2:00 am
REFRESH EVERY 2 WEEK OFFSET 5 DAY 15 HOUR 10 MINUTE -- every other Saturday, at 3:10 pm
REFRESH EVERY 30 MINUTE -- at 00:00, 00:30, 01:00, 01:30, etc
REFRESH AFTER 30 MINUTE -- 30 minutes after the previous refresh completes, no alignment with time of day
-- REFRESH AFTER 1 HOUR OFFSET 1 MINUTE -- syntax error, OFFSET is not allowed with AFTER
REFRESH EVERY 1 WEEK 2 DAYS -- every 9 days, not on any particular day of the week or month;
                            -- specifically, when day number (since 1969-12-29) is divisible by 9
REFRESH EVERY 5 MONTHS -- every 5 months, different months each year (as 12 is not divisible by 5);
                       -- specifically, when month number (since 1970-01) is divisible by 5
```
RANDOMIZE FOR
randomly adjusts the time of each refresh, e.g.: | {"source_file": "view.md"} | [
-0.06609680503606796,
-0.08673928678035736,
-0.11530402302742004,
0.09039624780416489,
-0.025402534753084183,
0.010346143506467342,
0.044229693710803986,
-0.056174859404563904,
-0.03223534673452377,
0.025836709886789322,
0.026127560064196587,
-0.03159118443727493,
0.08346115052700043,
-0.0... |
5f5c9cf9-b267-4756-9a2d-61ae6c4c2429 | RANDOMIZE FOR
randomly adjusts the time of each refresh, e.g.:
```sql
REFRESH EVERY 1 DAY OFFSET 2 HOUR RANDOMIZE FOR 1 HOUR -- every day at random time between 01:30 and 02:30
```
At most one refresh may be running at a time, for a given view. E.g. if a view with
REFRESH EVERY 1 MINUTE
takes 2 minutes to refresh, it'll just be refreshing every 2 minutes. If it then becomes faster and starts refreshing in 10 seconds, it'll go back to refreshing every minute. (In particular, it won't refresh every 10 seconds to catch up with a backlog of missed refreshes - there's no such backlog.)
Additionally, a refresh is started immediately after the materialized view is created, unless
EMPTY
is specified in the
CREATE
query. If
EMPTY
is specified, the first refresh happens according to schedule.
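For example, a refreshable view created with `EMPTY` so the first refresh waits for the schedule (names and schedule are illustrative):

```sql
CREATE MATERIALIZED VIEW stats
REFRESH EVERY 1 DAY OFFSET 3 HOUR
ENGINE = MergeTree ORDER BY day
EMPTY
AS SELECT toDate(ts) AS day, count() AS hits FROM events GROUP BY day;
```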
In Replicated DB {#in-replicated-db}
If the refreshable materialized view is in a
Replicated database
, the replicas coordinate with each other such that only one replica performs the refresh at each scheduled time.
ReplicatedMergeTree
table engine is required, so that all replicas see the data produced by the refresh.
In
APPEND
mode, coordination can be disabled using
SETTINGS all_replicas = 1
. This makes replicas do refreshes independently of each other. In this case ReplicatedMergeTree is not required.
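A sketch of uncoordinated refreshes in `APPEND` mode, assuming illustrative names and following the clause order from the syntax above:

```sql
CREATE MATERIALIZED VIEW snapshots
REFRESH EVERY 10 MINUTE
SETTINGS all_replicas = 1  -- each replica refreshes independently
APPEND
TO snapshot_log
AS SELECT now() AS ts, count() AS n FROM src;
```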
In non-
APPEND
mode, only coordinated refreshing is supported. For uncoordinated, use
Atomic
database and
CREATE ... ON CLUSTER
query to create refreshable materialized views on all replicas.
The coordination is done through Keeper. The znode path is determined by
default_replica_path
server setting.
Dependencies {#refresh-dependencies}
DEPENDS ON
synchronizes refreshes of different tables. By way of example, suppose there's a chain of two refreshable materialized views:
```sql
CREATE MATERIALIZED VIEW source REFRESH EVERY 1 DAY AS SELECT * FROM url(...)
CREATE MATERIALIZED VIEW destination REFRESH EVERY 1 DAY AS SELECT ... FROM source
```
Without
DEPENDS ON
, both views will start a refresh at midnight, and
destination
typically will see yesterday's data in
source
. If we add dependency:
```sql
CREATE MATERIALIZED VIEW destination REFRESH EVERY 1 DAY DEPENDS ON source AS SELECT ... FROM source
```
then
destination
's refresh will start only after
source
's refresh finished for that day, so
destination
will be based on fresh data.
Alternatively, the same result can be achieved with:
```sql
CREATE MATERIALIZED VIEW destination REFRESH AFTER 1 HOUR DEPENDS ON source AS SELECT ... FROM source
```
where
1 HOUR
can be any duration less than
source
's refresh period. The dependent table won't be refreshed more frequently than any of its dependencies. This is a valid way to set up a chain of refreshable views without specifying the real refresh period more than once.
A few more examples:
*
REFRESH EVERY 1 DAY OFFSET 10 MINUTE
(
destination
) depends on
REFRESH EVERY 1 DAY
(
source
)
If
source
refresh takes more than 10 minutes,
destination
will wait for it.
*
REFRESH EVERY 1 DAY OFFSET 1 HOUR
depends on
REFRESH EVERY 1 DAY OFFSET 23 HOUR
Similar to the above, even though the corresponding refreshes happen on different calendar days.
destination
's refresh on day X+1 will wait for
source
's refresh on day X (if it takes more than 2 hours).
*
REFRESH EVERY 2 HOUR
depends on
REFRESH EVERY 1 HOUR
The 2 HOUR refresh happens after the 1 HOUR refresh for every other hour, e.g. after the midnight
refresh, then after the 2am refresh, etc.
*
REFRESH EVERY 1 MINUTE
depends on
REFRESH EVERY 2 HOUR
REFRESH AFTER 1 MINUTE
depends on
REFRESH EVERY 2 HOUR
REFRESH AFTER 1 MINUTE
depends on
REFRESH AFTER 2 HOUR
destination
is refreshed once after every
source
refresh, i.e. every 2 hours. The
1 MINUTE
is effectively ignored.
*
REFRESH AFTER 1 HOUR
depends on
REFRESH AFTER 1 HOUR
Currently this is not recommended.
:::note
DEPENDS ON
only works between refreshable materialized views. Listing a regular table in the
DEPENDS ON
list will prevent the view from ever refreshing (dependencies can be removed with
ALTER
, see below).
:::
Settings {#settings}
Available refresh settings:
*
refresh_retries
- How many times to retry if the refresh query fails with an exception. If all retries fail, skip to the next scheduled refresh time. 0 means no retries, -1 means infinite retries. Default: 0.
*
refresh_retry_initial_backoff_ms
- Delay before the first retry, if
refresh_retries
is not zero. Each subsequent retry doubles the delay, up to
refresh_retry_max_backoff_ms
. Default: 100 ms.
*
refresh_retry_max_backoff_ms
- Limit on the exponential growth of delay between refresh attempts. Default: 60000 ms (1 minute).
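For example, retries with exponential backoff can be configured per view (the view name and source are illustrative):

```sql
CREATE MATERIALIZED VIEW remote_data
REFRESH EVERY 1 HOUR
SETTINGS refresh_retries = 3, refresh_retry_initial_backoff_ms = 500
ENGINE = MergeTree ORDER BY tuple()
AS SELECT * FROM url('https://example.com/data.csv', CSV);
```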
Changing Refresh Parameters {#changing-refresh-parameters}
To change refresh parameters:
```sql
ALTER TABLE [db.]name MODIFY REFRESH EVERY|AFTER ... [RANDOMIZE FOR ...] [DEPENDS ON ...] [SETTINGS ...]
```
:::note
This replaces
all
refresh parameters at once: schedule, dependencies, settings, and APPEND-ness. E.g. if the table had a
DEPENDS ON
, doing a
MODIFY REFRESH
without
DEPENDS ON
will remove the dependencies.
:::
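So changing only the schedule of a view that has a dependency still requires restating the dependency (names are illustrative):

```sql
-- Restate DEPENDS ON, or it will be dropped along with the old schedule.
ALTER TABLE destination MODIFY REFRESH EVERY 12 HOUR DEPENDS ON source;
```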
Other operations {#other-operations}
The status of all refreshable materialized views is available in table
system.view_refreshes
. In particular, it contains refresh progress (if running), last and next refresh time, exception message if a refresh failed.
To manually stop, start, trigger, or cancel refreshes use
SYSTEM STOP|START|REFRESH|WAIT|CANCEL VIEW
.
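For example (the view name is illustrative):

```sql
SYSTEM STOP VIEW mv;     -- pause scheduled refreshes
SYSTEM START VIEW mv;    -- resume them
SYSTEM REFRESH VIEW mv;  -- trigger an out-of-schedule refresh
SYSTEM CANCEL VIEW mv;   -- cancel a running refresh
```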
To wait for a refresh to complete, use
SYSTEM WAIT VIEW
. In particular, useful for waiting for initial refresh after creating a view.
:::note
Fun fact: the refresh query is allowed to read from the view that's being refreshed, seeing pre-refresh version of the data. This means you can implement Conway's game of life: https://pastila.nl/?00021a4b/d6156ff819c83d490ad2dcec05676865#O0LGWTO7maUQIA4AcGUtlA==
:::
Window View {#window-view}
:::info
This is an experimental feature that may change in backwards-incompatible ways in future releases. Enable usage of window views and the `WATCH` query using the `allow_experimental_window_view` setting: `SET allow_experimental_window_view = 1`.
:::
```sql
CREATE WINDOW VIEW [IF NOT EXISTS] [db.]table_name [TO [db.]table_name] [INNER ENGINE engine] [ENGINE engine] [WATERMARK strategy] [ALLOWED_LATENESS interval_function] [POPULATE]
AS SELECT ...
GROUP BY time_window_function
[COMMENT 'comment']
```
Window view can aggregate data by time window and output the results when the window is ready to fire. It stores the partial aggregation results in an inner (or specified) table to reduce latency and can push the processing result to a specified table or push notifications using the WATCH query.
Creating a window view is similar to creating a `MATERIALIZED VIEW`. A window view needs an inner storage engine to store intermediate data. The inner storage can be specified using the `INNER ENGINE` clause; the window view uses `AggregatingMergeTree` as the default inner engine.
When creating a window view without
TO [db].[table]
, you must specify
ENGINE
– the table engine for storing data.
Time Window Functions {#time-window-functions}
Time window functions
are used to get the lower and upper window bound of records. The window view needs to be used with a time window function.
TIME ATTRIBUTES {#time-attributes}
Window view supports
processing time
and
event time
process.
Processing time
allows window view to produce results based on the local machine's time and is used by default. It is the most straightforward notion of time but does not provide determinism. The processing time attribute can be defined by setting the
time_attr
of the time window function to a table column or using the function
now()
. The following query creates a window view with processing time.
```sql
CREATE WINDOW VIEW wv AS SELECT count(number), tumbleStart(w_id) AS w_start FROM date GROUP BY tumble(now(), INTERVAL '5' SECOND) AS w_id
```
Event time
is the time that each individual event occurred on its producing device. This time is typically embedded within the records when it is generated. Event time processing allows for consistent results even in case of out-of-order events or late events. Window view supports event time processing by using
WATERMARK
syntax.
Window view provides three watermark strategies:
* `STRICTLY_ASCENDING`: Emits a watermark of the maximum observed timestamp so far. Rows that have a timestamp smaller than the max timestamp are not late.
* `ASCENDING`: Emits a watermark of the maximum observed timestamp so far minus 1. Rows that have a timestamp equal to or smaller than the max timestamp are not late.
* `BOUNDED`: `WATERMARK=INTERVAL`. Emits watermarks, which are the maximum observed timestamp minus the specified delay.
The following queries are examples of creating a window view with
WATERMARK
:
```sql
CREATE WINDOW VIEW wv WATERMARK=STRICTLY_ASCENDING AS SELECT count(number) FROM date GROUP BY tumble(timestamp, INTERVAL '5' SECOND);
CREATE WINDOW VIEW wv WATERMARK=ASCENDING AS SELECT count(number) FROM date GROUP BY tumble(timestamp, INTERVAL '5' SECOND);
CREATE WINDOW VIEW wv WATERMARK=INTERVAL '3' SECOND AS SELECT count(number) FROM date GROUP BY tumble(timestamp, INTERVAL '5' SECOND);
```
By default, the window will be fired when the watermark comes, and elements that arrived behind the watermark will be dropped. Window view supports late event processing by setting
ALLOWED_LATENESS=INTERVAL
. An example of lateness handling is:
```sql
CREATE WINDOW VIEW test.wv TO test.dst WATERMARK=ASCENDING ALLOWED_LATENESS=INTERVAL '2' SECOND AS SELECT count(a) AS count, tumbleEnd(wid) AS w_end FROM test.mt GROUP BY tumble(timestamp, INTERVAL '5' SECOND) AS wid;
```
Note that elements emitted by a late firing should be treated as updated results of a previous computation. Instead of firing at the end of windows, the window view will fire immediately when the late event arrives. Thus, it will result in multiple outputs for the same window. Users need to take these duplicated results into account or deduplicate them.
You can modify the `SELECT` query that was specified in the window view by using the `ALTER TABLE ... MODIFY QUERY` statement. The data structure resulting from the new `SELECT` query should be the same as that of the original `SELECT` query, with or without the `TO [db.]name` clause. Note that the data in the current window will be lost because the intermediate state cannot be reused.
Monitoring New Windows {#monitoring-new-windows}
Window view supports the `WATCH` query to monitor changes, or the `TO` syntax to output the results to a table.
```sql
WATCH [db.]window_view
[EVENTS]
[LIMIT n]
[FORMAT format]
```
A
LIMIT
can be specified to set the number of updates to receive before terminating the query. The
EVENTS
clause can be used to obtain a short form of the
WATCH
query where instead of the query result you will just get the latest query watermark.
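For example, to receive only watermarks and stop after two updates (the view name is illustrative):

```sql
WATCH wv EVENTS LIMIT 2;
```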
Settings {#settings-1}
window_view_clean_interval
: The clean interval of window view in seconds to free outdated data. The system will retain the windows that have not been fully triggered according to the system time or
WATERMARK
configuration, and the other data will be deleted.
window_view_heartbeat_interval
: The heartbeat interval in seconds to indicate the watch query is alive.
wait_for_window_view_fire_signal_timeout
: Timeout for waiting for window view fire signal in event time processing.
Example {#example}
Suppose we need to count the number of click logs per 10 seconds in a log table called
data
, and its table structure is:
```sql
CREATE TABLE data ( `id` UInt64, `timestamp` DateTime) ENGINE = Memory;
```
First, we create a window view with tumble window of 10 seconds interval:
```sql
CREATE WINDOW VIEW wv AS SELECT count(id), tumbleStart(w_id) AS window_start FROM data GROUP BY tumble(timestamp, INTERVAL '10' SECOND) AS w_id
```
Then, we use the
WATCH
query to get the results.
```sql
WATCH wv
```
When logs are inserted into table
data
,
```sql
INSERT INTO data VALUES(1,now())
```
The
WATCH
query should print the results as follows:
```text
┌─count(id)─┬────────window_start─┐
│ 1 │ 2020-01-14 16:56:40 │
└───────────┴─────────────────────┘
```
Alternatively, we can attach the output to another table using
TO
syntax.
```sql
CREATE WINDOW VIEW wv TO dst AS SELECT count(id), tumbleStart(w_id) AS window_start FROM data GROUP BY tumble(timestamp, INTERVAL '10' SECOND) AS w_id
```
Additional examples can be found among stateful tests of ClickHouse (they are named
*window_view*
there).
Window View Usage {#window-view-usage}
The window view is useful in the following scenarios:
Monitoring
: Aggregate and calculate the metrics logs by time, and output the results to a target table. The dashboard can use the target table as a source table.
Analyzing
: Automatically aggregate and preprocess data in the time window. This can be useful when analyzing a large number of logs. The preprocessing eliminates repeated calculations in multiple queries and reduces query latency.
Related Content {#related-content}
Blog:
Working with time series data in ClickHouse
Blog:
Building an Observability Solution with ClickHouse - Part 2 - Traces
Temporary Views {#temporary-views}
ClickHouse supports
temporary views
with the following characteristics (matching temporary tables where applicable):
Session-lifetime
A temporary view exists only for the duration of the current session. It is dropped automatically when the session ends.
No database
You
cannot
qualify a temporary view with a database name. It lives outside databases (session namespace).
Not replicated / no ON CLUSTER
Temporary objects are local to the session and
cannot
be created with
ON CLUSTER
.
Name resolution
If a temporary object (table or view) has the same name as a persistent object and a query references the name
without
a database, the
temporary
object is used.
Logical object (no storage)
A temporary view stores only its
SELECT
text (uses the
View
storage internally). It does not persist data and cannot accept
INSERT
.
Engine clause
You do
not
need to specify
ENGINE
; if provided as
ENGINE = View
, it’s ignored/treated as the same logical view.
Security / privileges
Creating a temporary view requires the privilege
CREATE TEMPORARY VIEW
which is implicitly granted by
CREATE VIEW
.
SHOW CREATE
Use
SHOW CREATE TEMPORARY VIEW view_name;
to print the DDL of a temporary view.
Syntax {#temporary-views-syntax}
```sql
CREATE TEMPORARY VIEW [IF NOT EXISTS] view_name AS <select_query>
```
OR REPLACE
is
not
supported for temporary views (to match temporary tables). If you need to “replace” a temporary view, drop it and create it again.
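So a "replace" is done in two steps (the view name and query are illustrative):

```sql
DROP TEMPORARY VIEW IF EXISTS tview;
CREATE TEMPORARY VIEW tview AS SELECT 1 AS x;
```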
Examples {#temporary-views-examples}
Create a temporary source table and a temporary view on top:
```sql
CREATE TEMPORARY TABLE t_src (id UInt32, val String);
INSERT INTO t_src VALUES (1, 'a'), (2, 'b');
CREATE TEMPORARY VIEW tview AS
SELECT id, upper(val) AS u
FROM t_src
WHERE id <= 2;
SELECT * FROM tview ORDER BY id;
```
Show its DDL:
```sql
SHOW CREATE TEMPORARY VIEW tview;
```
Drop it:
```sql
DROP TEMPORARY VIEW IF EXISTS tview; -- temporary views are dropped with TEMPORARY TABLE syntax
```
Disallowed / limitations {#temporary-views-limitations}
CREATE OR REPLACE TEMPORARY VIEW ...
→
not allowed
(use
DROP
+
CREATE
).
CREATE TEMPORARY MATERIALIZED VIEW ...
/
WINDOW VIEW
→
not allowed
.
CREATE TEMPORARY VIEW db.view AS ...
→
not allowed
(no database qualifier).
CREATE TEMPORARY VIEW view ON CLUSTER 'name' AS ...
→
not allowed
(temporary objects are session-local).
POPULATE
,
REFRESH
,
TO [db.table]
, inner engines, and all MV-specific clauses →
not applicable
to temporary views.
Notes on distributed queries {#temporary-views-distributed-notes}
A temporary
view
is just a definition; there’s no data to pass around. If your temporary view references temporary
tables
(e.g.,
Memory
), their data can be shipped to remote servers during distributed query execution the same way temporary tables work.
Example {#temporary-views-distributed-example}
```sql
-- A session-scoped, in-memory table
CREATE TEMPORARY TABLE temp_ids (id UInt64) ENGINE = Memory;
INSERT INTO temp_ids VALUES (1), (5), (42);
-- A session-scoped view over the temp table (purely logical)
CREATE TEMPORARY VIEW v_ids AS
SELECT id FROM temp_ids;
-- Replace 'test' with your cluster name.
-- GLOBAL JOIN forces ClickHouse to ship the small join side (temp_ids via v_ids)
-- to every remote server that executes the left side.
SELECT count()
FROM cluster('test', system.numbers) AS n
GLOBAL ANY INNER JOIN v_ids USING (id)
WHERE n.number < 100;
```
description: 'Documentation for Function'
sidebar_label: 'FUNCTION'
sidebar_position: 38
slug: /sql-reference/statements/create/function
title: 'CREATE FUNCTION -user defined function (UDF)'
doc_type: 'reference'
Creates a user defined function (UDF) from a lambda expression. The expression must consist of function parameters, constants, operators, or other function calls.
Syntax
```sql
CREATE FUNCTION name [ON CLUSTER cluster] AS (parameter0, ...) -> expression
```
A function can have an arbitrary number of parameters.
There are a few restrictions:
The name of a function must be unique among user defined and system functions.
Recursive functions are not allowed.
All variables used by a function must be specified in its parameter list.
If any restriction is violated then an exception is raised.
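Non-recursive composition is another matter: assuming that calling a previously defined user-defined function from another one is accepted (an assumption for illustration, not stated above), a sketch would be:

```sql
-- Hypothetical helper functions; quadruple_it calls double_it twice,
-- which is composition, not recursion.
CREATE FUNCTION double_it AS (x) -> x * 2;
CREATE FUNCTION quadruple_it AS (x) -> double_it(double_it(x));
SELECT quadruple_it(3); -- 3*2*2 = 12
```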
Example
Query:
```sql
CREATE FUNCTION linear_equation AS (x, k, b) -> k*x + b;
SELECT number, linear_equation(number, 2, 1) FROM numbers(3);
```
Result:
```text
┌─number─┬─plus(multiply(2, number), 1)─┐
│ 0 │ 1 │
│ 1 │ 3 │
│ 2 │ 5 │
└────────┴──────────────────────────────┘
```
A
conditional function
is called in a user defined function in the following query:
```sql
CREATE FUNCTION parity_str AS (n) -> if(n % 2, 'odd', 'even');
SELECT number, parity_str(number) FROM numbers(3);
```
Result:
```text
┌─number─┬─if(modulo(number, 2), 'odd', 'even')─┐
│ 0 │ even │
│ 1 │ odd │
│ 2 │ even │
└────────┴──────────────────────────────────────┘
```
Related Content {#related-content}
Executable UDFs
. {#executable-udfs}
User-defined functions in ClickHouse Cloud
{#user-defined-functions-in-clickhouse-cloud}
description: 'Documentation for Table'
keywords: ['compression', 'codec', 'schema', 'DDL']
sidebar_label: 'TABLE'
sidebar_position: 36
slug: /sql-reference/statements/create/table
title: 'CREATE TABLE'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Creates a new table. This query can have various syntax forms depending on the use case.
By default, tables are created only on the current server. Distributed DDL queries are implemented as
ON CLUSTER
clause, which is
described separately
.
Syntax Forms {#syntax-forms}
With Explicit Schema {#with-explicit-schema}
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [NULL|NOT NULL] [DEFAULT|MATERIALIZED|EPHEMERAL|ALIAS expr1] [COMMENT 'comment for column'] [compression_codec] [TTL expr1],
    name2 [type2] [NULL|NOT NULL] [DEFAULT|MATERIALIZED|EPHEMERAL|ALIAS expr2] [COMMENT 'comment for column'] [compression_codec] [TTL expr2],
    ...
) ENGINE = engine
[COMMENT 'comment for table']
```
Creates a table named
table_name
in the
db
database or the current database if
db
is not set, with the structure specified in brackets and the
engine
engine.
The structure of the table is a list of column descriptions, secondary indexes, and constraints. If a primary key is supported by the engine, it will be indicated as a parameter for the table engine.
A column description is
name type
in the simplest case. Example:
RegionID UInt32
.
Expressions can also be defined for default values (see below).
If necessary, primary key can be specified, with one or more key expressions.
Comments can be added for columns and for the table.
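A hypothetical example combining these column options (names, codec, and expressions are illustrative):

```sql
CREATE TABLE hits
(
    `EventDate` Date COMMENT 'date of the event',
    `UserID` UInt64,
    `URL` String CODEC(ZSTD(3)),            -- per-column compression codec
    `Domain` String MATERIALIZED domain(URL), -- computed on insert, stored
    `Visits` UInt32 DEFAULT 1                 -- default value expression
)
ENGINE = MergeTree
ORDER BY (EventDate, UserID)
COMMENT 'example table';
```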
With a Schema Similar to Other Table {#with-a-schema-similar-to-other-table}
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name AS [db2.]name2 [ENGINE = engine]
```
Creates a table with the same structure as another table. You can specify a different engine for the table. If the engine is not specified, the same engine will be used as for the
db2.name2
table.
With a Schema and Data Cloned from Another Table {#with-a-schema-and-data-cloned-from-another-table}
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name CLONE AS [db2.]name2 [ENGINE = engine]
```
Creates a table with the same structure as another table. You can specify a different engine for the table. If the engine is not specified, the same engine will be used as for the
db2.name2
table. After the new table is created, all partitions from
db2.name2
are attached to it. In other words, the data of
db2.name2
is cloned into
db.table_name
upon creation. This query is equivalent to the following:
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name AS [db2.]name2 [ENGINE = engine];
ALTER TABLE [db.]table_name ATTACH PARTITION ALL FROM [db2].name2;
From a Table Function {#from-a-table-function}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name AS table_function()
Creates a table with the same result as that of the
table function
specified. The created table will also work in the same way as the corresponding table function that was specified.
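For example, using the `numbers` table function (the table name `first_ten` is arbitrary):

```sql
CREATE TABLE first_ten AS numbers(10);

SELECT count() FROM first_ten; -- returns 10
```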
From SELECT query {#from-select-query}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name[(name1 [type1], name2 [type2], ...)] ENGINE = engine AS SELECT ...
Creates a table with a structure like the result of the
SELECT
query, with the
engine
engine, and fills it with data from
SELECT
. You can also explicitly specify a description of the columns.
If the table already exists and
IF NOT EXISTS
is specified, the query won't do anything.
There can be other clauses after the
ENGINE
clause in the query. See detailed documentation on how to create tables in the descriptions of
table engines
.
Example
Query:
sql
CREATE TABLE t1 (x String) ENGINE = Memory AS SELECT 1;
SELECT x, toTypeName(x) FROM t1;
Result:
text
┌─x─┬─toTypeName(x)─┐
│ 1 │ String │
└───┴───────────────┘
NULL Or NOT NULL Modifiers {#null-or-not-null-modifiers}
NULL
and
NOT NULL
modifiers after a data type in a column definition allow or disallow it to be
Nullable
.
If the type is not
Nullable
and if
NULL
is specified, it will be treated as
Nullable
; if
NOT NULL
is specified, then it is not. For example,
INT NULL
is the same as
Nullable(INT)
. If the type is
Nullable
and
NULL
or
NOT NULL
modifiers are specified, an exception will be thrown.
See also
data_type_default_nullable
setting.
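To illustrate, the first two column definitions below are equivalent, while the third stays non-nullable:

```sql
CREATE TABLE null_modifiers
(
    a Int32 NULL,       -- same as Nullable(Int32)
    b Nullable(Int32),
    c Int32 NOT NULL    -- plain Int32
)
ENGINE = Memory;
```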
Default Values {#default_values}
The column description can specify a default value expression in the form of
DEFAULT expr
,
MATERIALIZED expr
, or
ALIAS expr
. Example:
URLDomain String DEFAULT domain(URL)
.
The expression
expr
is optional. If it is omitted, the column type must be specified explicitly and the default value will be
0
for numeric columns,
''
(the empty string) for string columns,
[]
(the empty array) for array columns,
1970-01-01
for date columns, or
NULL
for nullable columns.
The column type of a default value column can be omitted, in which case it is inferred from
expr
's type. For example the type of column
EventDate DEFAULT toDate(EventTime)
will be Date.
If both a data type and a default value expression are specified, an implicit type casting function is inserted, which converts the expression to the specified type. Example:
Hits UInt32 DEFAULT 0
is internally represented as
Hits UInt32 DEFAULT toUInt32(0)
.
A default value expression
expr
may reference arbitrary table columns and constants. ClickHouse checks that changes of the table structure do not introduce loops in the expression calculation. For INSERT, it checks that expressions are resolvable – that all columns they can be calculated from have been passed.
DEFAULT {#default}
DEFAULT expr
Normal default value. If the value of such a column is not specified in an INSERT query, it is computed from
expr
.
Example:
```sql
CREATE OR REPLACE TABLE test
(
id UInt64,
updated_at DateTime DEFAULT now(),
updated_at_date Date DEFAULT toDate(updated_at)
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO test (id) VALUES (1);
SELECT * FROM test;
┌─id─┬──────────updated_at─┬─updated_at_date─┐
│ 1 │ 2023-02-24 17:06:46 │ 2023-02-24 │
└────┴─────────────────────┴─────────────────┘
```
MATERIALIZED {#materialized}
MATERIALIZED expr
Materialized expression. Values of such columns are automatically calculated according to the specified materialized expression when rows are inserted. Values cannot be explicitly specified during
INSERT
s.
Also, default value columns of this type are not included in the result of
SELECT *
. This is to preserve the invariant that the result of a
SELECT *
can always be inserted back into the table using
INSERT
. This behavior can be disabled with setting
asterisk_include_materialized_columns
.
Example:
```sql
CREATE OR REPLACE TABLE test
(
id UInt64,
updated_at DateTime MATERIALIZED now(),
updated_at_date Date MATERIALIZED toDate(updated_at)
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO test VALUES (1);
SELECT * FROM test;
┌─id─┐
│ 1 │
└────┘
SELECT id, updated_at, updated_at_date FROM test;
┌─id─┬──────────updated_at─┬─updated_at_date─┐
│ 1 │ 2023-02-24 17:08:08 │ 2023-02-24 │
└────┴─────────────────────┴─────────────────┘
SELECT * FROM test SETTINGS asterisk_include_materialized_columns=1;
┌─id─┬──────────updated_at─┬─updated_at_date─┐
│ 1 │ 2023-02-24 17:08:08 │ 2023-02-24 │
└────┴─────────────────────┴─────────────────┘
```
EPHEMERAL {#ephemeral}
EPHEMERAL [expr]
Ephemeral column. Columns of this type are not stored in the table and it is not possible to SELECT from them. The only purpose of ephemeral columns is to build default value expressions of other columns from them.
An insert without explicitly specified columns will skip columns of this type. This is to preserve the invariant that the result of a
SELECT *
can always be inserted back into the table using
INSERT
.
Example:
```sql
CREATE OR REPLACE TABLE test
(
id UInt64,
unhexed String EPHEMERAL,
hexed FixedString(4) DEFAULT unhex(unhexed)
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO test (id, unhexed) VALUES (1, '5a90b714');
SELECT
id,
hexed,
hex(hexed)
FROM test
FORMAT Vertical;
Row 1:
──────
id: 1
hexed: Z��
hex(hexed): 5A90B714
```
ALIAS {#alias}
ALIAS expr
Calculated columns (synonym). Columns of this type are not stored in the table and it is not possible to INSERT values into them.
When SELECT queries explicitly reference columns of this type, the value is computed at query time from
expr
. By default,
SELECT *
excludes ALIAS columns. This behavior can be disabled with setting
asterisk_include_alias_columns
.
When using the ALTER query to add new columns, old data for these columns is not written. Instead, when reading old data that does not have values for the new columns, expressions are computed on the fly by default. However, if running the expressions requires different columns that are not indicated in the query, these columns will additionally be read, but only for the blocks of data that need it.
If you add a new column to a table but later change its default expression, the values used for old data will change (for data where values were not stored on the disk). Note that when running background merges, data for columns that are missing in one of the merging parts is written to the merged part.
It is not possible to set default values for elements in nested data structures.
```sql
CREATE OR REPLACE TABLE test
(
id UInt64,
size_bytes Int64,
size String ALIAS formatReadableSize(size_bytes)
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO test VALUES (1, 4678899);
SELECT id, size_bytes, size FROM test;
┌─id─┬─size_bytes─┬─size─────┐
│ 1 │ 4678899 │ 4.46 MiB │
└────┴────────────┴──────────┘
SELECT * FROM test SETTINGS asterisk_include_alias_columns=1;
┌─id─┬─size_bytes─┬─size─────┐
│ 1 │ 4678899 │ 4.46 MiB │
└────┴────────────┴──────────┘
```
Primary Key {#primary-key}
You can define a
primary key
when creating a table. Primary key can be specified in two ways:
Inside the column list
sql
CREATE TABLE db.table_name
(
name1 type1, name2 type2, ...,
PRIMARY KEY(expr1[, expr2,...])
)
ENGINE = engine;
Outside the column list
sql
CREATE TABLE db.table_name
(
name1 type1, name2 type2, ...
)
ENGINE = engine
PRIMARY KEY(expr1[, expr2,...]);
:::tip
You can't combine both ways in one query.
:::
Constraints {#constraints}
Along with column descriptions, constraints can be defined:
CONSTRAINT {#constraint}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [compression_codec] [TTL expr1],
...
CONSTRAINT constraint_name_1 CHECK boolean_expr_1,
...
) ENGINE = engine
boolean_expr_1
can be any Boolean expression. If constraints are defined for the table, each of them is checked for every row in an
INSERT
query. If any constraint is not satisfied, the server raises an exception with the constraint name and the checking expression.
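As a sketch (table and constraint names are illustrative), an INSERT that violates a constraint is rejected:

```sql
CREATE TABLE users
(
    uid UInt64,
    age UInt8,
    CONSTRAINT age_is_plausible CHECK age < 150
)
ENGINE = MergeTree
ORDER BY uid;

INSERT INTO users VALUES (1, 30);  -- accepted
INSERT INTO users VALUES (2, 200); -- raises an exception naming `age_is_plausible`
```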
Adding a large amount of constraints can negatively affect the performance of big
INSERT
queries.
ASSUME {#assume}
The
ASSUME
clause is used to define a
CONSTRAINT
on a table that is assumed to be true. This constraint can then be used by the optimizer to enhance the performance of SQL queries.
Take this example where
ASSUME CONSTRAINT
is used in the creation of the
users_a
table:
sql
CREATE TABLE users_a (
uid Int16,
name String,
age Int16,
name_len UInt8 MATERIALIZED length(name),
CONSTRAINT c1 ASSUME length(name) = name_len
)
ENGINE=MergeTree
ORDER BY (name_len, name);
Here,
ASSUME CONSTRAINT
is used to assert that the
length(name)
function always equals the value of the
name_len
column. This means that whenever
length(name)
is called in a query, ClickHouse can replace it with
name_len
, which should be faster because it avoids calling the
length()
function.
Then, when executing the query
SELECT name FROM users_a WHERE length(name) < 5;
, ClickHouse can optimize it to
SELECT name FROM users_a WHERE name_len < 5
; because of the
ASSUME CONSTRAINT
. This can make the query run faster because it avoids calculating the length of
name
for each row.
ASSUME CONSTRAINT
does not enforce the constraint
, it merely informs the optimizer that the constraint holds true. If the constraint is not actually true, the results of the queries may be incorrect. Therefore, you should only use
ASSUME CONSTRAINT
if you are sure that the constraint is true.
TTL Expression {#ttl-expression}
Defines storage time for values. Can be specified only for MergeTree-family tables. For the detailed description, see
TTL for columns and tables
.
Column Compression Codecs {#column_compression_codec}
By default, ClickHouse applies
lz4
compression in the self-managed version, and
zstd
in ClickHouse Cloud.
For
MergeTree
-engine family you can change the default compression method in the
compression
section of a server configuration.
You can also define the compression method for each individual column in the
CREATE TABLE
query.
sql
CREATE TABLE codec_example
(
dt Date CODEC(ZSTD),
ts DateTime CODEC(LZ4HC),
float_value Float32 CODEC(NONE),
double_value Float64 CODEC(LZ4HC(9)),
value Float32 CODEC(Delta, ZSTD)
)
ENGINE = <Engine>
...
The
Default
codec can be specified to reference the default compression, which may depend on different settings (and properties of the data) at runtime.
Example:
value UInt64 CODEC(Default)
— the same as lack of codec specification.
You can also remove the current CODEC from the column and use the default compression from config.xml:
sql
ALTER TABLE codec_example MODIFY COLUMN float_value CODEC(Default);
Codecs can be combined in a pipeline, for example,
CODEC(Delta, Default)
.
:::tip
You can't decompress ClickHouse database files with external utilities like
lz4
. Instead, use the special
clickhouse-compressor
utility.
:::
Compression is supported for the following table engines:
MergeTree
family. Supports column compression codecs and selecting the default compression method by
compression
settings.
Log
family. Uses the
lz4
compression method by default and supports column compression codecs.
Set
. Only supports the default compression.
Join
. Only supports the default compression.
ClickHouse supports general purpose codecs and specialized codecs.
General Purpose Codecs {#general-purpose-codecs}
NONE {#none}
NONE
— No compression.
LZ4 {#lz4}
LZ4
— Lossless
data compression algorithm
used by default. Applies LZ4 fast compression.
LZ4HC {#lz4hc}
LZ4HC[(level)]
— LZ4 HC (high compression) algorithm with configurable level. Default level: 9. Setting
level <= 0
applies the default level. Possible levels: [1, 12]. Recommended level range: [4, 9].
ZSTD {#zstd}
ZSTD[(level)]
—
ZSTD compression algorithm
with configurable
level
. Possible levels: [1, 22]. Default level: 1.
High compression levels are useful for asymmetric scenarios, like compress once, decompress repeatedly. Higher levels mean better compression and higher CPU usage.
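For example, a higher level can be chosen for a rarely read archival column (names are illustrative):

```sql
CREATE TABLE archive
(
    id UInt64,
    payload String CODEC(ZSTD(12))
)
ENGINE = MergeTree
ORDER BY id;
```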
ZSTD_QAT {#zstd_qat}
ZSTD_QAT[(level)]
—
ZSTD compression algorithm
with configurable level, implemented by
Intel® QATlib
and
Intel® QAT ZSTD Plugin
. Possible levels: [1, 12]. Default level: 1. Recommended level range: [6, 12]. Some limitations apply:
ZSTD_QAT is disabled by default and can only be used after enabling configuration setting
enable_zstd_qat_codec
.
For compression, ZSTD_QAT tries to use an Intel® QAT offloading device (
QuickAssist Technology
). If no such device was found, it will fall back to ZSTD compression in software.
Decompression is always performed in software.
DEFLATE_QPL {#deflate_qpl}
DEFLATE_QPL
—
Deflate compression algorithm
implemented by Intel® Query Processing Library. Some limitations apply:
DEFLATE_QPL is disabled by default and can only be used after enabling configuration setting
enable_deflate_qpl_codec
.
DEFLATE_QPL requires a ClickHouse build compiled with SSE 4.2 instructions (by default, this is the case). Refer to
Build Clickhouse with DEFLATE_QPL
for more details.
DEFLATE_QPL works best if the system has an Intel® IAA (In-Memory Analytics Accelerator) offloading device. Refer to
Accelerator Configuration
and
Benchmark with DEFLATE_QPL
for more details.
DEFLATE_QPL-compressed data can only be transferred between ClickHouse nodes compiled with SSE 4.2 enabled.
Specialized Codecs {#specialized-codecs}
These codecs are designed to make compression more effective by exploiting specific features of the data. Some of these codecs do not compress data themselves, they instead preprocess the data such that a second compression stage using a general-purpose codec can achieve a higher data compression rate.
Delta {#delta}
Delta(delta_bytes)
— Compression approach in which raw values are replaced by the difference of two neighboring values, except for the first value that stays unchanged.
delta_bytes
is the maximum size of raw values, the default value is
sizeof(type)
. Specifying
delta_bytes
as an argument is deprecated and support will be removed in a future release. Delta is a data preparation codec, i.e. it cannot be used stand-alone.
DoubleDelta {#doubledelta}
DoubleDelta(bytes_size)
— Calculates delta of deltas and writes it in compact binary form. The
bytes_size
has a similar meaning to
delta_bytes
in
Delta
codec. Specifying
bytes_size
as an argument is deprecated and support will be removed in a future release. Optimal compression rates are achieved for monotonic sequences with a constant stride, such as time series data. Can be used with any numeric type. Implements the algorithm used in Gorilla TSDB, extending it to support 64-bit types. Uses 1 extra bit for 32-bit deltas: 5-bit prefixes instead of 4-bit prefixes. For additional information, see Compressing Time Stamps in
Gorilla: A Fast, Scalable, In-Memory Time Series Database
. DoubleDelta is a data preparation codec, i.e. it cannot be used stand-alone.
GCD {#gcd}
GCD()
- Calculates the greatest common divisor (GCD) of the values in the column, then divides each value by the GCD. Can be used with integer, decimal and date/time columns. The codec is well suited for columns with values that change (increase or decrease) in multiples of the GCD, e.g. 24, 28, 16, 24, 8, 24 (GCD = 4). GCD is a data preparation codec, i.e. it cannot be used stand-alone.
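Because GCD is a data preparation codec, it is combined with a general-purpose codec in practice, for example (the column name is illustrative):

```sql
CREATE TABLE gcd_example
(
    n UInt64 CODEC(GCD, LZ4) -- e.g. values like 8, 24, 16 share the divisor 8
)
ENGINE = MergeTree
ORDER BY n;
```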
Gorilla {#gorilla}
Gorilla(bytes_size)
— Calculates XOR between current and previous floating point value and writes it in compact binary form. The smaller the difference between consecutive values is, i.e. the slower the values of the series changes, the better the compression rate. Implements the algorithm used in Gorilla TSDB, extending it to support 64-bit types. Possible
bytes_size
values: 1, 2, 4, 8, the default value is
sizeof(type)
if equal to 1, 2, 4, or 8. In all other cases, it's 1. For additional information, see section 4.1 in
Gorilla: A Fast, Scalable, In-Memory Time Series Database
.
FPC {#fpc}
FPC(level, float_size)
- Repeatedly predicts the next floating point value in the sequence using the better of two predictors, then XORs the actual with the predicted value, and leading-zero compresses the result. Similar to Gorilla, this is efficient when storing a series of floating point values that change slowly. For 64-bit values (double), FPC is faster than Gorilla, for 32-bit values your mileage may vary. Possible
level
values: 1-28, the default value is 12. Possible
float_size
values: 4, 8, the default value is
sizeof(type)
if type is Float. In all other cases, it's 4. For a detailed description of the algorithm see
High Throughput Compression of Double-Precision Floating-Point Data
.
T64 {#t64}
T64
— Compression approach that crops unused high bits of values in integer data types (including
Enum
,
Date
and
DateTime
). At each step of its algorithm, the codec takes a block of 64 values, puts them into a 64x64 bit matrix, transposes it, crops the unused bits of the values, and returns the rest as a sequence. Unused bits are the bits that do not differ between the maximum and minimum values in the whole data part for which the compression is used.
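For example, T64 can be chained with a general-purpose codec for integer columns whose values occupy only a narrow range (names are illustrative):

```sql
CREATE TABLE t64_example
(
    id UInt64,
    status UInt64 CODEC(T64, ZSTD) -- values far below 2^64 leave many croppable bits
)
ENGINE = MergeTree
ORDER BY id;
```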
DoubleDelta
and
Gorilla
codecs are used in Gorilla TSDB as the components of its compressing algorithm. The Gorilla approach is effective in scenarios when there is a sequence of slowly changing values with their timestamps. Timestamps are effectively compressed by the
DoubleDelta
codec, and values are effectively compressed by the
Gorilla
codec. For example, to get an effectively stored table, you can create it in the following configuration:
sql
CREATE TABLE codec_example
(
timestamp DateTime CODEC(DoubleDelta),
slow_values Float32 CODEC(Gorilla)
)
ENGINE = MergeTree()
Encryption Codecs {#encryption-codecs}
These codecs don't actually compress data, but instead encrypt data on disk. These are only available when an encryption key is specified by
encryption
settings. Note that encryption only makes sense at the end of codec pipelines, because encrypted data usually can't be compressed in any meaningful way.
Encryption codecs:
AES_128_GCM_SIV {#aes_128_gcm_siv}
CODEC('AES-128-GCM-SIV')
— Encrypts data with AES-128 in
RFC 8452
GCM-SIV mode.
AES-256-GCM-SIV {#aes-256-gcm-siv}
CODEC('AES-256-GCM-SIV')
— Encrypts data with AES-256 in GCM-SIV mode.
These codecs use a fixed nonce and encryption is therefore deterministic. This makes it compatible with deduplicating engines such as
ReplicatedMergeTree
but has a weakness: when the same data block is encrypted twice, the resulting ciphertext will be exactly the same so an adversary who can read the disk can see this equivalence (although only the equivalence, without getting its content).
:::note
Most engines including the "*MergeTree" family create index files on disk without applying codecs. This means plaintext will appear on disk if an encrypted column is indexed.
:::
:::note
If you perform a SELECT query mentioning a specific value in an encrypted column (such as in its WHERE clause), the value may appear in
system.query_log
. You may want to disable the logging.
:::
Example
sql
CREATE TABLE mytable
(
x String CODEC(AES_128_GCM_SIV)
)
ENGINE = MergeTree ORDER BY x;
:::note
If compression needs to be applied, it must be explicitly specified. Otherwise, only encryption will be applied to data.
:::
Example
sql
CREATE TABLE mytable
(
x String CODEC(Delta, LZ4, AES_128_GCM_SIV)
)
ENGINE = MergeTree ORDER BY x;
Temporary Tables {#temporary-tables}
:::note
Please note that temporary tables are not replicated. As a result, there is no guarantee that data inserted into a temporary table will be available in other replicas. The primary use case where temporary tables can be useful is for querying or joining small external datasets during a single session.
:::
ClickHouse supports temporary tables which have the following characteristics:
- Temporary tables disappear when the session ends, including if the connection is lost.
- A temporary table uses the Memory table engine when an engine is not specified, and it may use any table engine except the Replicated and KeeperMap engines.
- The DB can't be specified for a temporary table. It is created outside of databases.
- It is impossible to create a temporary table with a distributed DDL query on all cluster servers (by using ON CLUSTER): this table exists only in the current session.
- If a temporary table has the same name as another one and a query specifies the table name without specifying the DB, the temporary table will be used.
- For distributed query processing, temporary tables with the Memory engine used in a query are passed to remote servers.
To create a temporary table, use the following syntax:
sql
CREATE [OR REPLACE] TEMPORARY TABLE [IF NOT EXISTS] table_name
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) [ENGINE = engine]
In most cases, temporary tables are not created manually, but when using external data for a query, or for distributed
(GLOBAL) IN
. For more information, see the appropriate sections.
It's possible to use tables with
ENGINE = Memory
instead of temporary tables.
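A minimal sketch of the syntax above, creating and querying a small in-session dataset (names are hypothetical):

```sql
CREATE TEMPORARY TABLE lookup
(
    id UInt64,
    label String
);

INSERT INTO lookup VALUES (1, 'one'), (2, 'two');

SELECT * FROM lookup; -- visible only in the current session
```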
REPLACE TABLE {#replace-table}
The
REPLACE
statement allows you to update a table
atomically
.
:::note
This statement is supported for the
Atomic
and
Replicated
database engines,
which are the default database engines for ClickHouse and ClickHouse Cloud respectively.
:::
Ordinarily, if you need to delete some data from a table,
you can create a new table and fill it with a
SELECT
statement that does not retrieve unwanted data,
then drop the old table and rename the new one.
This approach is demonstrated in the example below:
```sql
CREATE TABLE myNewTable AS myOldTable;
INSERT INTO myNewTable
SELECT * FROM myOldTable
WHERE CounterID <12345;
DROP TABLE myOldTable;
RENAME TABLE myNewTable TO myOldTable;
```
Instead of the approach above, it is also possible to use
REPLACE
(given you are using the default database engines) to achieve the same result:
sql
REPLACE TABLE myOldTable
ENGINE = MergeTree()
ORDER BY CounterID
AS
SELECT * FROM myOldTable
WHERE CounterID <12345;
Syntax {#syntax}
sql
{CREATE [OR REPLACE] | REPLACE} TABLE [db.]table_name
:::note
All syntax forms for the
CREATE
statement also work for this statement. Invoking
REPLACE
for a non-existent table will cause an error.
:::
Examples: {#examples}
Consider the following table:
```sql
CREATE DATABASE base
ENGINE = Atomic;
CREATE OR REPLACE TABLE base.t1
(
n UInt64,
s String
)
ENGINE = MergeTree
ORDER BY n;
INSERT INTO base.t1 VALUES (1, 'test');
SELECT * FROM base.t1;
┌─n─┬─s────┐
│ 1 │ test │
└───┴──────┘
```
We can use the
REPLACE
statement to clear all the data:
```sql
CREATE OR REPLACE TABLE base.t1
(
n UInt64,
s Nullable(String)
)
ENGINE = MergeTree
ORDER BY n;
INSERT INTO base.t1 VALUES (2, null);
SELECT * FROM base.t1;
┌─n─┬─s──┐
│ 2 │ \N │
└───┴────┘
```
Or we can use the
REPLACE
statement to change the table structure:
```sql
REPLACE TABLE base.t1 (n UInt64)
ENGINE = MergeTree
ORDER BY n;
INSERT INTO base.t1 VALUES (3);
SELECT * FROM base.t1;
┌─n─┐
│ 3 │
└───┘
```
Consider the following table on ClickHouse Cloud:
```sql
CREATE DATABASE base;
CREATE OR REPLACE TABLE base.t1
(
n UInt64,
s String
)
ENGINE = MergeTree
ORDER BY n;
INSERT INTO base.t1 VALUES (1, 'test');
SELECT * FROM base.t1;
1 test
```
We can use the
REPLACE
statement to clear all the data:
```sql
CREATE OR REPLACE TABLE base.t1
(
n UInt64,
s Nullable(String)
)
ENGINE = MergeTree
ORDER BY n;
INSERT INTO base.t1 VALUES (2, null);
SELECT * FROM base.t1;
2
```
Or we can use the
REPLACE
statement to change the table structure:
```sql
REPLACE TABLE base.t1 (n UInt64)
ENGINE = MergeTree
ORDER BY n;
INSERT INTO base.t1 VALUES (3);
SELECT * FROM base.t1;
3
```
COMMENT Clause {#comment-clause}
You can add a comment to the table when creating it.
Syntax
sql
CREATE TABLE db.table_name
(
name1 type1, name2 type2, ...
)
ENGINE = engine
COMMENT 'Comment'
Example
Query:
sql
CREATE TABLE t1 (x String) ENGINE = Memory COMMENT 'The temporary table';
SELECT name, comment FROM system.tables WHERE name = 't1';
Result:
text
┌─name─┬─comment─────────────┐
│ t1 │ The temporary table │
└──────┴─────────────────────┘
Related content {#related-content}
Blog:
Optimizing ClickHouse with Schemas and Codecs
Blog:
Working with time series data in ClickHouse
description: 'Documentation for Dictionary'
sidebar_label: 'DICTIONARY'
sidebar_position: 38
slug: /sql-reference/statements/create/dictionary
title: 'CREATE DICTIONARY'
doc_type: 'reference'
Creates a new
dictionary
with given
structure
,
source
,
layout
and
lifetime
.
Syntax {#syntax}
sql
CREATE [OR REPLACE] DICTIONARY [IF NOT EXISTS] [db.]dictionary_name [ON CLUSTER cluster]
(
key1 type1 [DEFAULT|EXPRESSION expr1] [IS_OBJECT_ID],
key2 type2 [DEFAULT|EXPRESSION expr2],
attr1 type2 [DEFAULT|EXPRESSION expr3] [HIERARCHICAL|INJECTIVE],
attr2 type2 [DEFAULT|EXPRESSION expr4] [HIERARCHICAL|INJECTIVE]
)
PRIMARY KEY key1, key2
SOURCE(SOURCE_NAME([param1 value1 ... paramN valueN]))
LAYOUT(LAYOUT_NAME([param_name param_value]))
LIFETIME({MIN min_val MAX max_val | max_val})
SETTINGS(setting_name = setting_value, setting_name = setting_value, ...)
COMMENT 'Comment'
The dictionary structure consists of attributes. Dictionary attributes are specified similarly to table columns. The only required attribute property is its type, all other properties may have default values.
ON CLUSTER
clause allows creating dictionary on a cluster, see
Distributed DDL
.
Depending on dictionary
layout
one or more attributes can be specified as dictionary keys.
SOURCE {#source}
The source for a dictionary can be a:
- table in the current ClickHouse service
- table in a remote ClickHouse service
- file available by HTTP(S)
- another database
Create a dictionary from a table in the current ClickHouse service {#create-a-dictionary-from-a-table-in-the-current-clickhouse-service}
Input table
source_table
:
text
┌─id─┬─value──┐
│ 1 │ First │
│ 2 │ Second │
└────┴────────┘
Creating the dictionary:
sql
CREATE DICTIONARY id_value_dictionary
(
id UInt64,
value String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(TABLE 'source_table'))
LAYOUT(FLAT())
LIFETIME(MIN 0 MAX 1000)
Output the dictionary:
sql
SHOW CREATE DICTIONARY id_value_dictionary;
response
CREATE DICTIONARY default.id_value_dictionary
(
`id` UInt64,
`value` String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(TABLE 'source_table'))
LIFETIME(MIN 0 MAX 1000)
LAYOUT(FLAT())
:::note
When using the SQL console in ClickHouse Cloud, you must specify a user (default or any other user with the role default_role) and password when creating a dictionary.
:::
```sql
CREATE USER IF NOT EXISTS clickhouse_admin
IDENTIFIED WITH sha256_password BY 'passworD43$x';
GRANT default_role TO clickhouse_admin;
CREATE DATABASE foo_db;
CREATE TABLE foo_db.source_table (
id UInt64,
value String
) ENGINE = MergeTree
PRIMARY KEY id;
CREATE DICTIONARY foo_db.id_value_dictionary
(
id UInt64,
value String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(TABLE 'source_table' USER 'clickhouse_admin' PASSWORD 'passworD43$x' DB 'foo_db' ))
LAYOUT(FLAT())
LIFETIME(MIN 0 MAX 1000);
``` | {"source_file": "dictionary.md"} | [
-0.004427952691912651,
-0.005005029961466789,
-0.07762733101844788,
0.04902404919266701,
-0.05578070133924484,
-0.01120967511087656,
0.03483959287405014,
0.01413073018193245,
-0.08447781205177307,
0.01014167070388794,
0.07149441540241241,
-0.03680317476391792,
0.1460280865430832,
-0.041895... |
fbc839f4-d5b9-4867-9dd7-14ca2ed27519 | Create a dictionary from a table in a remote ClickHouse service {#create-a-dictionary-from-a-table-in-a-remote-clickhouse-service}
Input table (in the remote ClickHouse service) source_table:
text
┌─id─┬─value──┐
│ 1 │ First │
│ 2 │ Second │
└────┴────────┘
Creating the dictionary:
sql
CREATE DICTIONARY id_value_dictionary
(
id UInt64,
value String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(HOST 'HOSTNAME' PORT 9000 USER 'default' PASSWORD 'PASSWORD' TABLE 'source_table' DB 'default'))
LAYOUT(FLAT())
LIFETIME(MIN 0 MAX 1000)
Create a dictionary from a file available by HTTP(S) {#create-a-dictionary-from-a-file-available-by-https}
sql
CREATE DICTIONARY default.taxi_zone_dictionary
(
`LocationID` UInt16 DEFAULT 0,
`Borough` String,
`Zone` String,
`service_zone` String
)
PRIMARY KEY LocationID
SOURCE(HTTP(URL 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/taxi_zone_lookup.csv' FORMAT 'CSVWithNames'))
LIFETIME(MIN 0 MAX 0)
LAYOUT(HASHED())
Create a dictionary from another database {#create-a-dictionary-from-another-database}
Please see the details in Dictionary sources.
See Also
For more information, see the Dictionaries section.
system.dictionaries — This table contains information about Dictionaries
. | {"source_file": "dictionary.md"} | [
0.021319622173905373,
-0.04707934334874153,
-0.1515442579984665,
-0.0005266311345621943,
-0.04738462343811989,
-0.06881009042263031,
0.04052939638495445,
-0.0337802916765213,
-0.04701676964759827,
0.05931546911597252,
0.04519730433821678,
-0.022898906841874123,
0.08505783975124359,
-0.0498... |
1fb2ca8c-0e10-4f74-8196-5a6aa203e4ff | description: 'Documentation for CREATE NAMED COLLECTION'
sidebar_label: 'NAMED COLLECTION'
slug: /sql-reference/statements/create/named-collection
title: 'CREATE NAMED COLLECTION'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
CREATE NAMED COLLECTION
Creates a new named collection.
Syntax
sql
CREATE NAMED COLLECTION [IF NOT EXISTS] name [ON CLUSTER cluster] AS
key_name1 = 'some value' [[NOT] OVERRIDABLE],
key_name2 = 'some value' [[NOT] OVERRIDABLE],
key_name3 = 'some value' [[NOT] OVERRIDABLE],
...
Example
sql
CREATE NAMED COLLECTION foobar AS a = '1', b = '2' OVERRIDABLE;
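Created collections can be inspected through a system table. A sketch (the exact column set may vary by version, and reading it requires appropriate access rights):

```sql
-- List named collections and their key-value pairs
SELECT name, collection FROM system.named_collections;
```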
Related statements
CREATE NAMED COLLECTION
DROP NAMED COLLECTION
See Also
Named collections guide | {"source_file": "named-collection.md"} | [
-0.036305297166109085,
-0.021322118118405342,
-0.057857491075992584,
0.06375658512115479,
-0.06840959936380386,
0.004707366693764925,
0.02604687213897705,
-0.05379166826605797,
0.03331947699189186,
0.06324958056211472,
0.051500823348760605,
-0.06591358780860901,
0.10054301470518112,
-0.060... |
55d49dbd-f150-46d8-b5d6-496c99d6d3ef | description: 'Documentation for CREATE Queries'
sidebar_label: 'CREATE'
sidebar_position: 34
slug: /sql-reference/statements/create/
title: 'CREATE Queries'
doc_type: 'reference'
CREATE Queries
CREATE queries create (for example) new databases, tables and views
. | {"source_file": "index.md"} | [
-0.008502923883497715,
-0.0424555167555809,
-0.07719875872135162,
0.08701630681753159,
-0.09642838686704636,
0.00494189839810133,
0.05096187815070152,
0.053707223385572433,
-0.023415936157107353,
0.03670715168118477,
0.0037561035715043545,
-0.015208618715405464,
0.09316179901361465,
-0.047... |
54fadbc9-1663-4c28-9b71-4ae0e34a571c | description: 'Documentation for Settings Profile'
sidebar_label: 'SETTINGS PROFILE'
sidebar_position: 43
slug: /sql-reference/statements/create/settings-profile
title: 'CREATE SETTINGS PROFILE'
doc_type: 'reference'
Creates settings profiles that can be assigned to a user or a role.
Syntax:
sql
CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] name1 [, name2 [,...]]
[ON CLUSTER cluster_name]
[IN access_storage_type]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | INHERIT 'profile_name'] [,...]
[TO {{role1 | user1 [, role2 | user2 ...]} | NONE | ALL | ALL EXCEPT {role1 | user1 [, role2 | user2 ...]}}]
The ON CLUSTER clause allows creating settings profiles on a cluster, see Distributed DDL.
Example {#example}
Create a user:
sql
CREATE USER robin IDENTIFIED BY 'password';
Create the max_memory_usage_profile settings profile with value and constraints for the max_memory_usage setting, and assign it to user robin:
sql
CREATE
SETTINGS PROFILE max_memory_usage_profile SETTINGS max_memory_usage = 100000001 MIN 90000000 MAX 110000000
TO robin | {"source_file": "settings-profile.md"} | [
0.018710022792220116,
-0.026725174859166145,
-0.06962009519338608,
0.10281243175268173,
-0.09279485791921616,
0.05795132741332054,
0.06779354810714722,
0.08406226336956024,
-0.1469312459230423,
-0.037398889660835266,
0.0040701450780034065,
-0.05463476479053497,
0.13771682977676392,
-0.0171... |
981f4109-80cf-4958-ac38-66f9b4c4e2dc | description: 'Documentation for User'
sidebar_label: 'USER'
sidebar_position: 39
slug: /sql-reference/statements/create/user
title: 'CREATE USER'
doc_type: 'reference'
Creates user accounts.
Syntax:
sql
CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [, name2 [,...]] [ON CLUSTER cluster_name]
[NOT IDENTIFIED | IDENTIFIED {[WITH {plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | WITH NO_PASSWORD | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'} | {WITH ssh_key BY KEY 'public_key' TYPE 'ssh-rsa|...'} | {WITH http SERVER 'server_name' [SCHEME 'Basic']} [VALID UNTIL datetime]
[, {[{plaintext_password | sha256_password | sha256_hash | ...}] BY {'password' | 'hash'}} | {ldap SERVER 'server_name'} | {...} | ... [,...]]]
[HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[VALID UNTIL datetime]
[IN access_storage_type]
[DEFAULT ROLE role [,...]]
[DEFAULT DATABASE database | NONE]
[GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...]
The ON CLUSTER clause allows creating users on a cluster, see Distributed DDL.
Identification {#identification}
There are multiple ways of user identification:
IDENTIFIED WITH no_password
IDENTIFIED WITH plaintext_password BY 'qwerty'
IDENTIFIED WITH sha256_password BY 'qwerty' or IDENTIFIED BY 'password'
IDENTIFIED WITH sha256_hash BY 'hash' or IDENTIFIED WITH sha256_hash BY 'hash' SALT 'salt'
IDENTIFIED WITH double_sha1_password BY 'qwerty'
IDENTIFIED WITH double_sha1_hash BY 'hash'
IDENTIFIED WITH bcrypt_password BY 'qwerty'
IDENTIFIED WITH bcrypt_hash BY 'hash'
IDENTIFIED WITH ldap SERVER 'server_name'
IDENTIFIED WITH kerberos or IDENTIFIED WITH kerberos REALM 'realm'
IDENTIFIED WITH ssl_certificate CN 'mysite.com:user'
IDENTIFIED WITH ssh_key BY KEY 'public_key' TYPE 'ssh-rsa', KEY 'another_public_key' TYPE 'ssh-ed25519'
IDENTIFIED WITH http SERVER 'http_server' or IDENTIFIED WITH http SERVER 'http_server' SCHEME 'basic'
IDENTIFIED BY 'qwerty'
Password complexity requirements can be edited in config.xml. Below is an example configuration that requires passwords to be at least 12 characters long and contain 1 number. Each password complexity rule requires a regex to match against passwords and a description of the rule.
xml
<clickhouse>
<password_complexity>
<rule>
<pattern>.{12}</pattern>
<message>be at least 12 characters long</message>
</rule>
<rule>
<pattern>\p{N}</pattern>
<message>contain at least 1 numeric character</message>
</rule>
</password_complexity>
</clickhouse> | {"source_file": "user.md"} | [
0.057844843715429306,
-0.020564822480082512,
-0.1296044886112213,
0.020993785932660103,
-0.04434159770607948,
0.008557640947401524,
0.02589450031518936,
-0.015523414127528667,
-0.06134593114256859,
-0.008410955779254436,
0.04243491217494011,
-0.07723214477300644,
0.09514506906270981,
-0.02... |
aa8d4c39-ea4c-4a4b-9e1a-8b0b84362cc9 | :::note
In ClickHouse Cloud, by default, passwords must meet the following complexity requirements:
- Be at least 12 characters long
- Contain at least 1 numeric character
- Contain at least 1 uppercase character
- Contain at least 1 lowercase character
- Contain at least 1 special character
:::
Examples {#examples}
The following username is name1 and does not require a password - which obviously doesn't provide much security:
sql
CREATE USER name1 NOT IDENTIFIED
To specify a plaintext password:
sql
CREATE USER name2 IDENTIFIED WITH plaintext_password BY 'my_password'
:::tip
The password is stored in a SQL text file in /var/lib/clickhouse/access, so it's not a good idea to use plaintext_password. Try sha256_password instead, as demonstrated next...
:::
The most common option is to use a password that is hashed using SHA-256. ClickHouse will hash the password for you when you specify IDENTIFIED WITH sha256_password. For example:
sql
CREATE USER name3 IDENTIFIED WITH sha256_password BY 'my_password'
The name3 user can now log in using my_password, but the password is stored as the hashed value above. The following SQL file was created in /var/lib/clickhouse/access and gets executed at server startup:
bash
/var/lib/clickhouse/access $ cat 3843f510-6ebd-a52d-72ac-e021686d8a93.sql
ATTACH USER name3 IDENTIFIED WITH sha256_hash BY '0C268556C1680BEF0640AAC1E7187566704208398DA31F03D18C74F5C5BE5053' SALT '4FB16307F5E10048196966DD7E6876AE53DE6A1D1F625488482C75F14A5097C7';
:::tip
If you have already created a hash value and corresponding salt value for a username, then you can use IDENTIFIED WITH sha256_hash BY 'hash' or IDENTIFIED WITH sha256_hash BY 'hash' SALT 'salt'. For identification with sha256_hash using SALT, the hash must be calculated from the concatenation of 'password' and 'salt'.
:::
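As a sketch of the rule above, the salted hash can be computed in ClickHouse itself by hashing the concatenation of the password and the salt (the literal values here are placeholders):

```sql
-- SHA-256 over password || salt, rendered as the uppercase hex ClickHouse stores
SELECT upper(hex(SHA256(concat('my_password', 'my_salt')))) AS sha256_hash_with_salt;
```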
The double_sha1_password is not typically needed, but comes in handy when working with clients that require it (like the MySQL interface):
sql
CREATE USER name4 IDENTIFIED WITH double_sha1_password BY 'my_password'
ClickHouse generates and runs the following query:
response
CREATE USER name4 IDENTIFIED WITH double_sha1_hash BY 'CCD3A959D6A004B9C3807B728BC2E55B67E10518'
The bcrypt_password is the most secure option for storing passwords. It uses the bcrypt algorithm, which is resilient against brute force attacks even if the password hash is compromised.
sql
CREATE USER name5 IDENTIFIED WITH bcrypt_password BY 'my_password'
The length of the password is limited to 72 characters with this method.
The bcrypt work factor parameter, which defines the amount of computations and time needed to compute the hash and verify the password, can be modified in the server configuration:
xml
<bcrypt_workfactor>12</bcrypt_workfactor>
The work factor must be between 4 and 31, with a default value of 12. | {"source_file": "user.md"} | [
-0.033627379685640335,
-0.08150333166122437,
-0.11687745153903961,
-0.05408424139022827,
-0.09129606187343597,
0.0038416797760874033,
0.11422356963157654,
-0.0498950369656086,
0.007766162510961294,
0.015341350808739662,
0.043485842645168304,
-0.00888095237314701,
0.11619308590888977,
-0.00... |
6511f043-3d77-47e6-812e-9992aceea486 | xml
<bcrypt_workfactor>12</bcrypt_workfactor>
The work factor must be between 4 and 31, with a default value of 12.
:::warning
For applications with high-frequency authentication,
consider alternative authentication methods due to
bcrypt's computational overhead at higher work factors.
:::
6. The type of the password can also be omitted:
```sql
CREATE USER name6 IDENTIFIED BY 'my_password'
```
In this case, ClickHouse will use the default password type specified in the server configuration:
```xml
<default_password_type>sha256_password</default_password_type>
```
The available password types are: `plaintext_password`, `sha256_password`, `double_sha1_password`.
Multiple authentication methods can be specified:
sql
CREATE USER user1 IDENTIFIED WITH plaintext_password by '1', bcrypt_password by '2', plaintext_password by '3'
Notes:
1. Older versions of ClickHouse might not support the syntax of multiple authentication methods. Therefore, if the ClickHouse server contains such users and is downgraded to a version that does not support it, such users become unusable and some user-related operations break. To downgrade gracefully, set all users to contain a single authentication method prior to downgrading. Alternatively, if the server was downgraded without the proper procedure, the faulty users should be dropped.
2. no_password cannot co-exist with other authentication methods for security reasons. Therefore, you can only specify no_password if it is the only authentication method in the query.
User Host {#user-host}
A user host is a host from which a connection to the ClickHouse server can be established. The host can be specified in the HOST query section in the following ways:
HOST IP 'ip_address_or_subnetwork' — User can connect to the ClickHouse server only from the specified IP address or a subnetwork. Examples: HOST IP '192.168.0.0/16', HOST IP '2001:DB8::/32'. For use in production, only specify HOST IP elements (IP addresses and their masks), since using host and host_regexp might cause extra latency.
HOST ANY — User can connect from any location. This is the default option.
HOST LOCAL — User can connect only locally.
HOST NAME 'fqdn' — User host can be specified as an FQDN. For example, HOST NAME 'mysite.com'.
HOST REGEXP 'regexp' — You can use pcre regular expressions when specifying user hosts. For example, HOST REGEXP '.*\.mysite\.com'.
HOST LIKE 'template' — Allows you to use the LIKE operator to filter the user hosts. For example, HOST LIKE '%' is equivalent to HOST ANY, and HOST LIKE '%.mysite.com' filters all the hosts in the mysite.com domain.
Another way of specifying host is to use the @ syntax following the username. Examples:
CREATE USER mira@'127.0.0.1' — Equivalent to the HOST IP syntax.
CREATE USER mira@'localhost' — Equivalent to the HOST LOCAL
syntax. | {"source_file": "user.md"} | [
-0.017076872289180756,
-0.05881175398826599,
-0.1283082813024521,
-0.05237339809536934,
-0.11357667297124863,
0.0025274893268942833,
0.07723138481378555,
-0.03520434722304344,
-0.01713375188410282,
-0.054797615855932236,
-0.0297815203666687,
-0.06498484313488007,
0.09867352992296219,
-0.02... |
1e55b426-2cca-44cd-9b3f-6600a873a30a | CREATE USER mira@'127.0.0.1' — Equivalent to the HOST IP syntax.
CREATE USER mira@'localhost' — Equivalent to the HOST LOCAL syntax.
CREATE USER mira@'192.168.%.%' — Equivalent to the HOST LIKE syntax.
:::tip
ClickHouse treats user_name@'address' as a username as a whole. Thus, technically you can create multiple users with the same user_name and different constructions after @. However, we do not recommend doing so.
:::
VALID UNTIL Clause {#valid-until-clause}
Allows you to specify the expiration date and, optionally, the time for an authentication method. It accepts a string as a parameter. It is recommended to use the YYYY-MM-DD [hh:mm:ss] [timezone] format for datetime. By default, this parameter equals 'infinity'.
The VALID UNTIL clause can only be specified along with an authentication method, except for the case where no authentication method has been specified in the query. In this scenario, the VALID UNTIL clause will be applied to all existing authentication methods.
Examples:
CREATE USER name1 VALID UNTIL '2025-01-01'
CREATE USER name1 VALID UNTIL '2025-01-01 12:00:00 UTC'
CREATE USER name1 VALID UNTIL 'infinity'
CREATE USER name1 VALID UNTIL '2025-01-01 12:00:00 `Asia/Tokyo`'
CREATE USER name1 IDENTIFIED WITH plaintext_password BY 'no_expiration', bcrypt_password BY 'expiration_set' VALID UNTIL '2025-01-01'
GRANTEES Clause {#grantees-clause}
Specifies users or roles which are allowed to receive privileges from this user, on the condition that this user also has all the required access granted with GRANT OPTION. Options of the GRANTEES clause:
user — Specifies a user this user can grant privileges to.
role — Specifies a role this user can grant privileges to.
ANY — This user can grant privileges to anyone. It's the default setting.
NONE — This user can grant privileges to none.
You can exclude any user or role by using the EXCEPT expression. For example, CREATE USER user1 GRANTEES ANY EXCEPT user2. It means that if user1 has some privileges granted with GRANT OPTION, it will be able to grant those privileges to anyone except user2.
Examples {#examples-1}
Create the user account mira protected by the password qwerty:
sql
CREATE USER mira HOST IP '127.0.0.1' IDENTIFIED WITH sha256_password BY 'qwerty';
mira should start the client app on the host where the ClickHouse server runs.
Create the user account john, assign roles to it and make these roles default:
sql
CREATE USER john DEFAULT ROLE role1, role2;
Create the user account john and make all his future roles default:
sql
CREATE USER john DEFAULT ROLE ALL;
When some role is assigned to john in the future, it will become default automatically.
Create the user account john and make all his future roles default except role1 and role2:
sql
CREATE USER john DEFAULT ROLE ALL EXCEPT role1, role2; | {"source_file": "user.md"} | [
-0.038025882095098495,
-0.05649835988879204,
-0.025321844965219498,
-0.017802070826292038,
-0.08455583453178406,
-0.018720543012022972,
0.0690109059214592,
-0.021728897467255592,
-0.0041269417852163315,
-0.02211923524737358,
0.08855268359184265,
-0.03704036399722099,
0.06390822678804398,
0... |
4a064e45-3ff0-49e5-a5eb-e9de8d2c7648 | Create the user account john and make all his future roles default except role1 and role2:
sql
CREATE USER john DEFAULT ROLE ALL EXCEPT role1, role2;
Create the user account john and allow him to grant his privileges to the user with the jack account:
sql
CREATE USER john GRANTEES jack;
Use a query parameter to create the user account john:
sql
SET param_user=john;
CREATE USER {user:Identifier}; | {"source_file": "user.md"} | [
-0.0181864146143198,
-0.05038786306977272,
-0.03601302206516266,
-0.02876264601945877,
-0.2078673392534256,
0.022106042131781578,
0.07068408280611038,
0.029231004416942596,
-0.07828518003225327,
-0.02841968648135662,
-0.02086726948618889,
-0.04586397856473923,
0.09806343168020248,
0.058954... |
ddfca873-3002-4c30-b888-695c4e2d085a | description: 'Documentation for Row Policy'
sidebar_label: 'ROW POLICY'
sidebar_position: 41
slug: /sql-reference/statements/create/row-policy
title: 'CREATE ROW POLICY'
doc_type: 'reference'
Creates a row policy, i.e. a filter used to determine which rows a user can read from a table.
:::tip
Row policies make sense only for users with readonly access. If a user can modify a table or copy partitions between tables, it defeats the restrictions of row policies.
:::
Syntax:
sql
CREATE [ROW] POLICY [IF NOT EXISTS | OR REPLACE] policy_name1 [ON CLUSTER cluster_name1] ON [db1.]table1|db1.*
[, policy_name2 [ON CLUSTER cluster_name2] ON [db2.]table2|db2.* ...]
[IN access_storage_type]
[FOR SELECT] USING condition
[AS {PERMISSIVE | RESTRICTIVE}]
[TO {role1 [, role2 ...] | ALL | ALL EXCEPT role1 [, role2 ...]}]
USING Clause {#using-clause}
Allows specifying a condition to filter rows. A user will see a row if the condition evaluates to a non-zero value for the row.
TO Clause {#to-clause}
In the TO section you can provide a list of users and roles this policy should work for. For example, CREATE ROW POLICY ... TO accountant, john@localhost.
Keyword ALL means all the ClickHouse users, including the current user. Keyword ALL EXCEPT allows excluding some users from the all-users list, for example, CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost.
:::note
If there are no row policies defined for a table, then any user can SELECT all the rows from the table. Defining one or more row policies for the table makes access to the table dependent on the row policies, no matter if those row policies are defined for the current user or not. For example, the following policy:
CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
forbids the users mira and peter from seeing the rows with b != 1, and any non-mentioned user (e.g., the user paul) will see no rows from mydb.table1 at all.
If that's not desirable, it can be fixed by adding one more row policy, like the following:
CREATE ROW POLICY pol2 ON mydb.table1 USING 1 TO ALL EXCEPT mira, peter
:::
AS Clause {#as-clause}
It's allowed to have more than one policy enabled on the same table for the same user at one time. So we need a way to combine the conditions from multiple policies.
By default, policies are combined using the boolean OR operator. For example, the following policies:
sql
CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 TO peter, antonio
enable the user peter to see rows with either b=1 or c=2.
The AS clause specifies how policies should be combined with other policies. Policies can be either permissive or restrictive. By default, policies are permissive, which means they are combined using the boolean OR
operator. | {"source_file": "row-policy.md"} | [
-0.03506740927696228,
0.009752788580954075,
-0.0574980191886425,
0.03651697561144829,
0.01999448612332344,
0.029513293877243996,
0.10747943818569183,
-0.03813813626766205,
-0.0520583875477314,
0.061529140919446945,
0.05846719816327095,
-0.00514743197709322,
0.10081122070550919,
-0.03106880... |
9c8684ba-64bc-409e-b8e9-e3708f84b836 | A policy can be defined as restrictive as an alternative. Restrictive policies are combined using the boolean AND operator.
Here is the general formula:
text
row_is_visible = (one or more of the permissive policies' conditions are non-zero) AND
                 (all of the restrictive policies' conditions are non-zero)
For example, the following policies:
sql
CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio
enable the user peter to see rows only if both b=1 AND c=2.
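The general formula can be checked with plain boolean expressions; for a row with b=1 and c=3, the permissive condition passes but the restrictive one fails, so the row stays hidden (illustrative query only):

```sql
-- 1 AND 0 = 0: the row would not be visible to peter
SELECT (b = 1) AND (c = 2) AS row_is_visible
FROM (SELECT 1 AS b, 3 AS c);
```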
Database policies are combined with table policies.
For example, the following policies:
sql
CREATE ROW POLICY pol1 ON mydb.* USING b=1 TO mira, peter
CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio
enable the user peter to see table1 rows only if both b=1 AND c=2, although any other table in mydb would have only the b=1 policy applied for the user.
ON CLUSTER Clause {#on-cluster-clause}
Allows creating row policies on a cluster, see Distributed DDL.
Examples {#examples}
CREATE ROW POLICY filter1 ON mydb.mytable USING a<1000 TO accountant, john@localhost
CREATE ROW POLICY filter2 ON mydb.mytable USING a<1000 AND b=5 TO ALL EXCEPT mira
CREATE ROW POLICY filter3 ON mydb.mytable USING 1 TO admin
CREATE ROW POLICY filter4 ON mydb.* USING 1 TO admin | {"source_file": "row-policy.md"} | [
0.0053175403736531734,
-0.031189775094389915,
-0.054537758231163025,
-0.006409044377505779,
-0.014862088486552238,
0.0767161175608635,
0.08522553741931915,
-0.05345239117741585,
-0.0644766241312027,
0.05836787447333336,
0.015400328673422337,
-0.03511969745159149,
0.11401453614234924,
0.026... |
5a028fe9-cbfa-41fb-b596-b6b7cd5f54b2 | description: 'Documentation for Role'
sidebar_label: 'ROLE'
sidebar_position: 40
slug: /sql-reference/statements/create/role
title: 'CREATE ROLE'
doc_type: 'reference'
Creates new roles. A role is a set of privileges. A user assigned a role gets all the privileges of this role.
Syntax:
sql
CREATE ROLE [IF NOT EXISTS | OR REPLACE] name1 [, name2 [,...]] [ON CLUSTER cluster_name]
[IN access_storage_type]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | PROFILE 'profile_name'] [,...]
Managing Roles {#managing-roles}
A user can be assigned multiple roles. Users can apply their assigned roles in arbitrary combinations by the SET ROLE statement. The final scope of privileges is a combined set of all the privileges of all the applied roles. If a user has privileges granted directly to its user account, they are also combined with the privileges granted by roles.
A user can have default roles which apply at user login. To set default roles, use the SET DEFAULT ROLE statement or the ALTER USER statement.
To revoke a role, use the REVOKE statement.
To delete a role, use the DROP ROLE statement. The deleted role is automatically revoked from all the users and roles to which it was assigned.
Examples {#examples}
sql
CREATE ROLE accountant;
GRANT SELECT ON db.* TO accountant;
This sequence of queries creates the role accountant that has the privilege of reading data from the db database.
Assigning the role to the user mira:
sql
GRANT accountant TO mira;
After the role is assigned, the user can apply it and execute the allowed queries. For example:
sql
SET ROLE accountant;
SELECT * FROM db.*; | {"source_file": "role.md"} | [
0.023291470482945442,
-0.03265994042158127,
-0.047449130564928055,
0.08117854595184326,
-0.08563563972711563,
0.03341853246092796,
0.09928413480520248,
0.03271956369280815,
-0.09231589734554291,
-0.033472124487161636,
0.007138030603528023,
-0.036302659660577774,
0.10624418407678604,
-0.004... |
10ebff74-792f-4b2c-8f9b-09e818698bf2 | description: 'Documentation for Quota'
sidebar_label: 'QUOTA'
sidebar_position: 42
slug: /sql-reference/statements/create/quota
title: 'CREATE QUOTA'
doc_type: 'reference'
Creates a quota that can be assigned to a user or a role.
Syntax:
sql
CREATE QUOTA [IF NOT EXISTS | OR REPLACE] name [ON CLUSTER cluster_name]
[IN access_storage_type]
[KEYED BY {user_name | ip_address | client_key | client_key,user_name | client_key,ip_address} | NOT KEYED]
[FOR [RANDOMIZED] INTERVAL number {second | minute | hour | day | week | month | quarter | year}
{MAX { {queries | query_selects | query_inserts | errors | result_rows | result_bytes | read_rows | read_bytes | written_bytes | execution_time | failed_sequential_authentications} = number } [,...] |
NO LIMITS | TRACKING ONLY} [,...]]
[TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
Keys user_name, ip_address, client_key, client_key,user_name and client_key,ip_address correspond to the fields in the system.quotas table.
Parameters queries, query_selects, query_inserts, errors, result_rows, result_bytes, read_rows, read_bytes, written_bytes, execution_time and failed_sequential_authentications correspond to the fields in the system.quotas_usage table.
The ON CLUSTER clause allows creating quotas on a cluster, see Distributed DDL.
Examples
Limit the maximum number of queries for the current user to 123 queries within a 15-month interval:
sql
CREATE QUOTA qA FOR INTERVAL 15 month MAX queries = 123 TO CURRENT_USER;
For the default user, limit the maximum execution time to half a second within 30 minutes, and limit the maximum number of queries to 321 and the maximum number of errors to 10 within 5 quarters:
sql
CREATE QUOTA qB FOR INTERVAL 30 minute MAX execution_time = 0.5, FOR INTERVAL 5 quarter MAX queries = 321, errors = 10 TO default;
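Current consumption against the intervals of an applied quota can be checked through a system table (a sketch; the exact column set may vary by version):

```sql
-- Usage of the quota applied to the current session
SELECT quota_name, duration, queries, max_queries, errors, max_errors
FROM system.quota_usage;
```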
Further examples, using the XML configuration (not supported in ClickHouse Cloud), can be found in the Quotas guide.
Related Content {#related-content}
Blog:
Building single page applications with ClickHouse | {"source_file": "quota.md"} | [
-0.002236379077658057,
0.015857119113206863,
-0.05616258829832077,
0.06977575272321701,
-0.07859410345554352,
0.018251756206154823,
0.06089222803711891,
0.02041724883019924,
-0.015417364425957203,
0.03874224051833153,
-0.027555666863918304,
-0.07213297486305237,
0.10696543753147125,
-0.027... |
eaee57db-94b9-4732-a684-537ecddb084b | description: 'Documentation for CREATE DATABASE'
sidebar_label: 'DATABASE'
sidebar_position: 35
slug: /sql-reference/statements/create/database
title: 'CREATE DATABASE'
doc_type: 'reference'
CREATE DATABASE
Creates a new database.
sql
CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster] [ENGINE = engine(...)] [COMMENT 'Comment']
Clauses {#clauses}
IF NOT EXISTS {#if-not-exists}
If the db_name database already exists, then ClickHouse does not create a new database and:
Doesn't throw an exception if the clause is specified.
Throws an exception if the clause isn't specified.
ON CLUSTER {#on-cluster}
ClickHouse creates the db_name database on all the servers of a specified cluster. More details in the Distributed DDL article.
ENGINE {#engine}
By default, ClickHouse uses its own Atomic database engine. There are also Lazy, MySQL, PostgreSQL, MaterializedPostgreSQL, Replicated and SQLite.
COMMENT {#comment}
You can add a comment to the database when you are creating it.
The comment is supported for all database engines.
Syntax
sql
CREATE DATABASE db_name ENGINE = engine(...) COMMENT 'Comment'
Example
Query:
sql
CREATE DATABASE db_comment ENGINE = Memory COMMENT 'The temporary database';
SELECT name, comment FROM system.databases WHERE name = 'db_comment';
Result:
text
┌─name───────┬─comment────────────────┐
│ db_comment │ The temporary database │
└────────────┴────────────────────────┘ | {"source_file": "database.md"} | [
-0.020442204549908638,
-0.13006645441055298,
-0.06794171780347824,
0.07623781263828278,
-0.04029698297381401,
-0.053189534693956375,
0.03154360130429268,
-0.02649707719683647,
-0.01002641674131155,
-0.022162847220897675,
0.04960920289158821,
-0.028420787304639816,
0.13102614879608154,
-0.0... |
b8b7a428-8445-4ce1-8c0c-d99bd083134c | description: 'Documentation for Geohash'
sidebar_label: 'Geohash'
slug: /sql-reference/functions/geo/geohash
title: 'Functions for Working with Geohash'
doc_type: 'reference'
Geohash {#geohash}
Geohash is a geocoding system that subdivides the Earth's surface into a grid of buckets and encodes each cell into a short string of letters and digits. It is a hierarchical data structure, so the longer the geohash string is, the more precise the geographic location will be.
If you need to manually convert geographic coordinates to geohash strings, you can use geohash.org.
Encodes latitude and longitude as a geohash string.
Syntax
sql
geohashEncode(longitude, latitude, [precision])
Input values
longitude — Longitude part of the coordinate you want to encode. Floating in range [-180°, 180°]. Float.
latitude — Latitude part of the coordinate you want to encode. Floating in range [-90°, 90°]. Float.
precision (optional) — Length of the resulting encoded string. Defaults to 12. Integer in the range [1, 12]. Int8.
:::note
- All coordinate parameters must be of the same type: either Float32 or Float64.
- For the precision parameter, any value less than 1 or greater than 12 is silently converted to 12.
:::
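The silent clamping described in the note can be observed directly: per the rule above, an out-of-range precision behaves like precision 12 (illustrative query):

```sql
-- precision 20 is outside [1, 12] and is treated as 12
SELECT length(geohashEncode(-5.60302734375, 42.593994140625, 20)) AS len;
```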
Returned values
Alphanumeric string of the encoded coordinate (a modified version of the base32-encoding alphabet is used). String.
Example
Query:
sql
SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res;
Result:
text
┌─res──────────┐
│ ezs42d000000 │
└──────────────┘
geohashDecode {#geohashdecode}
Decodes any
geohash
-encoded string into longitude and latitude.
Syntax
sql
geohashDecode(hash_str)
Input values
hash_str
— Geohash-encoded string.
Returned values
Tuple
(longitude, latitude)
of
Float64
values of longitude and latitude.
Tuple
(
Float64
)
Example
sql
SELECT geohashDecode('ezs42') AS res;
text
┌─res─────────────────────────────┐
│ (-5.60302734375,42.60498046875) │
└─────────────────────────────────┘
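Decoding returns the center of the geohash cell, not the originally encoded point, so an encode/decode round trip is only approximate. A sketch:

```sql
-- The decoded tuple is the center of the 5-character cell, so it will be
-- close to, but generally not equal to, the encoded coordinates.
SELECT geohashDecode(geohashEncode(-5.60302734375, 42.593994140625, 5)) AS roundtrip;
```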
geohashesInBox {#geohashesinbox}
Returns an array of
geohash
-encoded strings of the given precision that fall inside or intersect the boundaries of the given box; essentially, a 2D grid flattened into an array.
Syntax
sql
geohashesInBox(longitude_min, latitude_min, longitude_max, latitude_max, precision)
Arguments
longitude_min
— Minimum longitude. Range:
[-180°, 180°]
.
Float
.
latitude_min
— Minimum latitude. Range:
[-90°, 90°]
.
Float
.
longitude_max
— Maximum longitude. Range:
[-180°, 180°]
.
Float
.
latitude_max
— Maximum latitude. Range:
[-90°, 90°]
.
Float
.
precision
— Geohash precision. Range:
[1, 12]
.
UInt8
.
:::note
All coordinate parameters must be of the same type: either
Float32
or
Float64
.
:::
Returned values
Array of precision-long strings of geohash boxes covering the provided area. You should not rely on the order of the items.
Array
(
String
).
[]
- Empty array if the minimum latitude and longitude values aren't less than the corresponding maximum values.
:::note
The function throws an exception if the resulting array contains more than 10,000,000 items.
:::
Example
Query:
sql
SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 4) AS thasos;
Result:
text
┌─thasos──────────────────────────────────────┐
│ ['sx1q','sx1r','sx32','sx1w','sx1x','sx38'] │
└─────────────────────────────────────────────┘
description: 'Documentation for Svg'
sidebar_label: 'SVG'
slug: /sql-reference/functions/geo/svg
title: 'Functions for Generating SVG images from Geo data'
doc_type: 'reference'
Svg {#svg}
Returns a string of select SVG element tags from Geo data.
Syntax
sql
Svg(geometry, [style])
Aliases:
SVG
,
svg
Parameters
geometry
— Geo data.
Geo
.
style
— Optional style name.
String
.
Returned value
The SVG representation of the geometry.
String
.
SVG circle
SVG polygon
SVG path
Examples
Circle
Query:
sql
SELECT SVG((0., 0.))
Result:
response
<circle cx="0" cy="0" r="5" style=""/>
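The optional style argument is written into the tag's style attribute. A sketch with an illustrative CSS string (the particular styling is an assumption, not from the original page):

```sql
-- Should produce the same <circle> tag as above, with style="fill:red".
SELECT SVG((0., 0.), 'fill:red');
```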
Polygon
Query:
sql
SELECT SVG([(0., 0.), (10, 0), (10, 10), (0, 10)])
Result:
response
<polygon points="0,0 0,10 10,10 10,0 0,0" style=""/>
Path
Query:
sql
SELECT SVG([[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]])
Result:
response
<g fill-rule="evenodd"><path d="M 0,0 L 0,10 L 10,10 L 10,0 L 0,0M 4,4 L 5,4 L 5,5 L 4,5 L 4,4 z " style=""/></g>
slug: /sql-reference/functions/geo/s2
sidebar_label: 'S2 Geometry'
title: 'Functions for Working with S2 Index'
description: 'Documentation for functions for working with S2 indexes'
doc_type: 'reference'
Functions for Working with S2 Index
S2Index {#s2index}
S2
is a geographical indexing system where all geographical data is represented on a sphere (similar to a globe).
In the S2 library points are represented as the S2 Index - a specific number which encodes internally a point on the surface of a unit sphere, unlike traditional (latitude, longitude) pairs. To get the S2 point index for a given point specified in the format (latitude, longitude) use the
geoToS2
function. Also, you can use the
s2ToGeo
function for getting geographical coordinates corresponding to the specified S2 point index.
geoToS2 {#geotos2}
Returns
S2
point index corresponding to the provided coordinates
(longitude, latitude)
.
Syntax
sql
geoToS2(lon, lat)
Arguments
lon
— Longitude.
Float64
.
lat
— Latitude.
Float64
.
Returned values
S2 point index.
UInt64
.
Example
Query:
sql
SELECT geoToS2(37.79506683, 55.71290588) AS s2Index;
Result:
text
┌─────────────s2Index─┐
│ 4704772434919038107 │
└─────────────────────┘
s2ToGeo {#s2togeo}
Returns geo coordinates
(longitude, latitude)
corresponding to the provided
S2
point index.
Syntax
sql
s2ToGeo(s2index)
Arguments
s2index
— S2 Index.
UInt64
.
Returned values
A
tuple
consisting of two values:
lon
.
Float64
.
lat
.
Float64
.
Example
Query:
sql
SELECT s2ToGeo(4704772434919038107) AS s2Coordinates;
Result:
text
┌─s2Coordinates────────────────────────┐
│ (37.79506681471008,55.7129059052841) │
└──────────────────────────────────────┘
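Since s2ToGeo is the inverse of geoToS2, feeding the decoded (lon, lat) tuple back into geoToS2 should reproduce the original index. A sketch:

```sql
-- g.1 is the decoded longitude, g.2 the latitude; the round trip should
-- land back on the same S2 cell.
WITH s2ToGeo(4704772434919038107) AS g
SELECT geoToS2(g.1, g.2) AS roundtrip;
```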
s2GetNeighbors {#s2getneighbors}
Returns S2 neighbor indexes corresponding to the provided
S2
. Each cell in the S2 system is a quadrilateral bounded by four geodesics. So, each cell has 4 neighbors.
Syntax
sql
s2GetNeighbors(s2index)
Arguments
s2index
— S2 Index.
UInt64
.
Returned value
An array consisting of 4 neighbor indexes:
array[s2index1, s2index3, s2index2, s2index4]
.
Array
(
UInt64
).
Example
Query:
sql
SELECT s2GetNeighbors(5074766849661468672) AS s2Neighbors;
Result:
text
┌─s2Neighbors───────────────────────────────────────────────────────────────────────┐
│ [5074766987100422144,5074766712222515200,5074767536856236032,5074767261978329088] │
└───────────────────────────────────────────────────────────────────────────────────┘
s2CellsIntersect {#s2cellsintersect}
Determines if the two provided
S2
cells intersect or not.
Syntax
sql
s2CellsIntersect(s2index1, s2index2)
Arguments
s2index1
,
s2index2
— S2 Index.
UInt64
.
Returned value
1
— If the cells intersect.
UInt8
.
0
— If the cells don't intersect.
UInt8
.
Example
Query:
sql
SELECT s2CellsIntersect(9926595209846587392, 9926594385212866560) AS intersect;
Result:
text
┌─intersect─┐
│ 1 │
└───────────┘
s2CapContains {#s2capcontains}
Determines if a cap contains an S2 point. A cap represents a part of the sphere that has been cut off by a plane. It is defined by a point on a sphere and a radius in degrees.
Syntax
sql
s2CapContains(center, degrees, point)
Arguments
center
— S2 point index corresponding to the cap.
UInt64
.
degrees
— Radius of the cap in degrees.
Float64
.
point
— S2 point index.
UInt64
.
Returned value
1
— If the cap contains the S2 point index.
UInt8
.
0
— If the cap doesn't contain the S2 point index.
UInt8
.
Example
Query:
sql
SELECT s2CapContains(1157339245694594829, 1.0, 1157347770437378819) AS capContains;
Result:
text
┌─capContains─┐
│ 1 │
└─────────────┘
s2CapUnion {#s2capunion}
Determines the smallest cap that contains the given two input caps. A cap represents a portion of the sphere that has been cut off by a plane. It is defined by a point on a sphere and a radius in degrees.
Syntax
sql
s2CapUnion(center1, radius1, center2, radius2)
Arguments
center1
,
center2
— S2 point indexes corresponding to the two input caps.
UInt64
.
radius1
,
radius2
— Radius of the two input caps in degrees.
Float64
.
Returned values
center
— S2 point index corresponding to the center of the smallest cap containing the two input caps.
UInt64
.
radius
— Radius of the smallest cap containing the two input caps.
Float64
.
Example
Query:
sql
SELECT s2CapUnion(3814912406305146967, 1.0, 1157347770437378819, 1.0) AS capUnion;
Result:
text
┌─capUnion───────────────────────────────┐
│ (4534655147792050737,60.2088283994957) │
└────────────────────────────────────────┘
s2RectAdd {#s2rectadd}
Increases the size of the bounding rectangle to include the given S2 point. In the S2 system, a rectangle is represented by a type of S2Region called a
S2LatLngRect
that represents a rectangle in latitude-longitude space.
Syntax
sql
s2RectAdd(s2pointLow, s2pointHigh, s2Point)
Arguments
s2PointLow
— Low S2 point index corresponding to the rectangle.
UInt64
.
s2PointHigh
— High S2 point index corresponding to the rectangle.
UInt64
.
s2Point
— Target S2 point index that the bound rectangle should be grown to include.
UInt64
.
Returned values
s2PointLow
— Low S2 cell id corresponding to the grown rectangle.
UInt64
.
s2PointHigh
— High S2 cell id corresponding to the grown rectangle.
UInt64
.
Example
Query:
sql
SELECT s2RectAdd(5178914411069187297, 5177056748191934217, 5179056748191934217) AS rectAdd;
Result:
text
┌─rectAdd───────────────────────────────────┐
│ (5179062030687166815,5177056748191934217) │
└───────────────────────────────────────────┘
s2RectContains {#s2rectcontains}
Determines if a given rectangle contains an S2 point. In the S2 system, a rectangle is represented by a type of S2Region called a
S2LatLngRect
that represents a rectangle in latitude-longitude space.
Syntax
sql
s2RectContains(s2PointLow, s2PointHigh, s2Point)
Arguments
s2PointLow
— Low S2 point index corresponding to the rectangle.
UInt64
.
s2PointHigh
— High S2 point index corresponding to the rectangle.
UInt64
.
s2Point
— Target S2 point index.
UInt64
.
Returned value
1
— If the rectangle contains the given S2 point.
0
— If the rectangle doesn't contain the given S2 point.
Example
Query:
sql
SELECT s2RectContains(5179062030687166815, 5177056748191934217, 5177914411069187297) AS rectContains;
Result:
text
┌─rectContains─┐
│ 0 │
└──────────────┘
s2RectUnion {#s2rectunion}
Returns the smallest rectangle containing the union of this rectangle and the given rectangle. In the S2 system, a rectangle is represented by a type of S2Region called a
S2LatLngRect
that represents a rectangle in latitude-longitude space.
Syntax
sql
s2RectUnion(s2Rect1PointLow, s2Rect1PointHi, s2Rect2PointLow, s2Rect2PointHi)
Arguments
s2Rect1PointLow
,
s2Rect1PointHi
— Low and High S2 point indexes corresponding to the first rectangle.
UInt64
.
s2Rect2PointLow
,
s2Rect2PointHi
— Low and High S2 point indexes corresponding to the second rectangle.
UInt64
.
Returned values
s2UnionRect2PointLow
— Low S2 cell id corresponding to the union rectangle.
UInt64
.
s2UnionRect2PointHi
— High S2 cell id corresponding to the union rectangle.
UInt64
.
Example
Query:
sql
SELECT s2RectUnion(5178914411069187297, 5177056748191934217, 5179062030687166815, 5177056748191934217) AS rectUnion;
Result:
text
┌─rectUnion─────────────────────────────────┐
│ (5179062030687166815,5177056748191934217) │
└───────────────────────────────────────────┘
s2RectIntersection {#s2rectintersection}
Returns the smallest rectangle containing the intersection of this rectangle and the given rectangle. In the S2 system, a rectangle is represented by a type of S2Region called a
S2LatLngRect
that represents a rectangle in latitude-longitude space.
Syntax
sql
s2RectIntersection(s2Rect1PointLow, s2Rect1PointHi, s2Rect2PointLow, s2Rect2PointHi)
Arguments
s2Rect1PointLow
,
s2Rect1PointHi
— Low and High S2 point indexes corresponding to the first rectangle.
UInt64
.
s2Rect2PointLow
,
s2Rect2PointHi
— Low and High S2 point indexes corresponding to the second rectangle.
UInt64
.
Returned values
s2UnionRect2PointLow
— Low S2 cell id corresponding to the rectangle containing the intersection of the given rectangles.
UInt64
.
s2UnionRect2PointHi
— High S2 cell id corresponding to the rectangle containing the intersection of the given rectangles.
UInt64
.
Example
Query:
sql
SELECT s2RectIntersection(5178914411069187297, 5177056748191934217, 5179062030687166815, 5177056748191934217) AS rectIntersection;
Result:
text
┌─rectIntersection──────────────────────────┐
│ (5178914411069187297,5177056748191934217) │
└───────────────────────────────────────────┘
description: 'Documentation for Coordinates'
sidebar_label: 'Geographical Coordinates'
slug: /sql-reference/functions/geo/coordinates
title: 'Functions for Working with Geographical Coordinates'
doc_type: 'reference'
greatCircleDistance {#greatcircledistance}
Calculates the distance between two points on the Earth's surface using
the great-circle formula
.
sql
greatCircleDistance(lon1Deg, lat1Deg, lon2Deg, lat2Deg)
Input parameters
lon1Deg
— Longitude of the first point in degrees. Range:
[-180°, 180°]
.
lat1Deg
— Latitude of the first point in degrees. Range:
[-90°, 90°]
.
lon2Deg
— Longitude of the second point in degrees. Range:
[-180°, 180°]
.
lat2Deg
— Latitude of the second point in degrees. Range:
[-90°, 90°]
.
Positive values correspond to North latitude and East longitude, and negative values correspond to South latitude and West longitude.
Returned value
The distance between two points on the Earth's surface, in meters.
Generates an exception when the input parameter values fall outside of the range.
Example
sql
SELECT greatCircleDistance(55.755831, 37.617673, -55.755831, -37.617673) AS greatCircleDistance
text
┌─greatCircleDistance─┐
│ 14128352 │
└─────────────────────┘
geoDistance {#geodistance}
Similar to
greatCircleDistance
but calculates the distance on the WGS-84 ellipsoid instead of a sphere. This is a more precise approximation of the Earth's geoid.
The performance is the same as for
greatCircleDistance
(no performance drawback). It is recommended to use
geoDistance
to calculate the distances on Earth.
Technical note: for close enough points we calculate the distance using planar approximation with the metric on the tangent plane at the midpoint of the coordinates.
sql
geoDistance(lon1Deg, lat1Deg, lon2Deg, lat2Deg)
Input parameters
lon1Deg
— Longitude of the first point in degrees. Range:
[-180°, 180°]
.
lat1Deg
— Latitude of the first point in degrees. Range:
[-90°, 90°]
.
lon2Deg
— Longitude of the second point in degrees. Range:
[-180°, 180°]
.
lat2Deg
— Latitude of the second point in degrees. Range:
[-90°, 90°]
.
Positive values correspond to North latitude and East longitude, and negative values correspond to South latitude and West longitude.
Returned value
The distance between two points on the Earth's surface, in meters.
Generates an exception when the input parameter values fall outside of the range.
Example
sql
SELECT geoDistance(38.8976, -77.0366, 39.9496, -75.1503) AS geoDistance
text
┌─geoDistance─┐
│ 212458.73 │
└─────────────┘
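Running both distance functions on the same pair of points shows the spherical vs. ellipsoidal difference. A sketch; the exact figures are not asserted here:

```sql
SELECT
    greatCircleDistance(38.8976, -77.0366, 39.9496, -75.1503) AS sphere_m,
    geoDistance(38.8976, -77.0366, 39.9496, -75.1503)         AS ellipsoid_m;
```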
greatCircleAngle {#greatcircleangle}
Calculates the central angle between two points on the Earth's surface using
the great-circle formula
.
sql
greatCircleAngle(lon1Deg, lat1Deg, lon2Deg, lat2Deg)
Input parameters
lon1Deg
— Longitude of the first point in degrees.
lat1Deg
— Latitude of the first point in degrees.
lon2Deg
— Longitude of the second point in degrees.
lat2Deg
— Latitude of the second point in degrees.
Returned value
The central angle between two points in degrees.
Example
sql
SELECT greatCircleAngle(0, 0, 45, 0) AS arc
text
┌─arc─┐
│ 45 │
└─────┘
pointInEllipses {#pointinellipses}
Checks whether the point belongs to at least one of the ellipses.
Coordinates are geometric in the Cartesian coordinate system.
sql
pointInEllipses(x, y, x₀, y₀, a₀, b₀,...,xₙ, yₙ, aₙ, bₙ)
Input parameters
x, y
— Coordinates of a point on the plane.
xᵢ, yᵢ
— Coordinates of the center of the
i
-th ellipse.
aᵢ, bᵢ
— Axes of the
i
-th ellipse in units of x, y coordinates.
The number of input parameters must be
2+4⋅n
, where
n
is the number of ellipses.
Returned values
1
if the point is inside at least one of the ellipses;
0
if it is not.
Example
sql
SELECT pointInEllipses(10., 10., 10., 9.1, 1., 0.9999)
text
┌─pointInEllipses(10., 10., 10., 9.1, 1., 0.9999)─┐
│ 1 │
└─────────────────────────────────────────────────┘
pointInPolygon {#pointinpolygon}
Checks whether the point belongs to the polygon on the plane.
sql
pointInPolygon((x, y), [(a, b), (c, d) ...], ...)
Input values
(x, y)
— Coordinates of a point on the plane. Data type —
Tuple
— A tuple of two numbers.
[(a, b), (c, d) ...]
— Polygon vertices. Data type —
Array
. Each vertex is represented by a pair of coordinates
(a, b)
. Vertices should be specified in a clockwise or counterclockwise order. The minimum number of vertices is 3. The polygon must be constant.
The function supports polygon with holes (cut-out sections). Data type —
Polygon
. Either pass the entire
Polygon
as the second argument, or pass the outer ring first and then each hole as separate additional arguments.
The function also supports multipolygon. Data type —
MultiPolygon
. Either pass the entire
MultiPolygon
as the second argument, or list each component polygon as its own argument.
Returned values
1
if the point is inside the polygon,
0
if it is not.
If the point is on the polygon boundary, the function may return either 0 or 1.
Example
sql
SELECT pointInPolygon((3., 3.), [(6, 0), (8, 4), (5, 8), (0, 2)]) AS res
text
┌─res─┐
│ 1 │
└─────┘
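The polygon-with-holes form described above passes the outer ring first and each hole as an additional argument. A sketch with illustrative coordinates:

```sql
-- (3, 3) lies inside the outer square but also inside the hole,
-- so the function should return 0.
SELECT pointInPolygon((3., 3.),
                      [(0., 0.), (10., 0.), (10., 10.), (0., 10.)],
                      [(2., 2.), (4., 2.), (4., 4.), (2., 4.)]) AS res;
```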
Note
• You can set
validate_polygons = 0
to bypass geometry validation.
•
pointInPolygon
assumes every polygon is well-formed. If the input is self-intersecting, has mis-ordered rings, or overlapping edges, results become unreliable, especially for points that sit exactly on an edge, a vertex, or inside a self-intersection where the notion of "inside" vs. "outside" is undefined.
description: 'Documentation for H3'
sidebar_label: 'H3 Indexes'
slug: /sql-reference/functions/geo/h3
title: 'Functions for Working with H3 Indexes'
doc_type: 'reference'
H3 Index {#h3-index}
H3
is a geographical indexing system where the Earth's surface is divided into a grid of even hexagonal cells. This system is hierarchical, i.e. each hexagon on the top level ("parent") can be split into seven even but smaller ones ("children"), and so on.
The level of the hierarchy is called
resolution
and can receive a value from
0
to
15
, where
0
is the
base
level with the largest and coarsest cells.
A latitude and longitude pair can be transformed to a 64-bit H3 index, identifying a grid cell.
The H3 index is used primarily for bucketing locations and other geospatial manipulations.
The full description of the H3 system is available at
the Uber Engineering site
.
h3IsValid {#h3isvalid}
Verifies whether the number is a valid
H3
index.
Syntax
sql
h3IsValid(h3index)
Parameter
h3index
— Hexagon index number.
UInt64
.
Returned values
1 — The number is a valid H3 index.
UInt8
.
0 — The number is not a valid H3 index.
UInt8
.
Example
Query:
sql
SELECT h3IsValid(630814730351855103) AS h3IsValid;
Result:
text
┌─h3IsValid─┐
│ 1 │
└───────────┘
h3GetResolution {#h3getresolution}
Returns the resolution of the given
H3
index.
Syntax
sql
h3GetResolution(h3index)
Parameter
h3index
— Hexagon index number.
UInt64
.
Returned values
Index resolution. Range:
[0, 15]
.
UInt8
.
If the index is not valid, the function returns a random value. Use
h3IsValid
to verify the index.
UInt8
.
Example
Query:
sql
SELECT h3GetResolution(639821929606596015) AS resolution;
Result:
text
┌─resolution─┐
│ 14 │
└────────────┘
h3EdgeAngle {#h3edgeangle}
Calculates the average length of an
H3
hexagon edge in grades.
Syntax
sql
h3EdgeAngle(resolution)
Parameter
resolution
— Index resolution.
UInt8
. Range:
[0, 15]
.
Returned values
The average length of an
H3
hexagon edge in grades.
Float64
.
Example
Query:
sql
SELECT h3EdgeAngle(10) AS edgeAngle;
Result:
text
┌───────h3EdgeAngle(10)─┐
│ 0.0005927224846720883 │
└───────────────────────┘
h3EdgeLengthM {#h3edgelengthm}
Calculates the average length of an
H3
hexagon edge in meters.
Syntax
sql
h3EdgeLengthM(resolution)
Parameter
resolution
— Index resolution.
UInt8
. Range:
[0, 15]
.
Returned values
The average edge length of an
H3
hexagon in meters.
Float64
.
Example
Query:
sql
SELECT h3EdgeLengthM(15) AS edgeLengthM;
Result:
text
┌─edgeLengthM─┐
│ 0.509713273 │
└─────────────┘
h3EdgeLengthKm {#h3edgelengthkm}
Calculates the average length of an
H3
hexagon edge in kilometers.
Syntax
sql
h3EdgeLengthKm(resolution)
Parameter
resolution
— Index resolution.
UInt8
. Range:
[0, 15]
.
Returned values
The average length of an
H3
hexagon edge in kilometers.
Float64
.
Example
Query:
sql
SELECT h3EdgeLengthKm(15) AS edgeLengthKm;
Result:
text
┌─edgeLengthKm─┐
│ 0.000509713 │
└──────────────┘
geoToH3 {#geotoh3}
Returns
H3
point index of the coordinates
(lat, lon)
at the specified resolution.
Syntax
sql
geoToH3(lat, lon, resolution)
Arguments
lat
— Latitude.
Float64
.
lon
— Longitude.
Float64
.
resolution
— Index resolution. Range:
[0, 15]
.
UInt8
.
Returned values
Hexagon index number.
UInt64
.
0 in case of error.
UInt64
.
Note: In ClickHouse v25.4 or older,
geoToH3()
takes values in order
(lon, lat)
. As of ClickHouse v25.5, the input values are in order
(lat, lon)
. The previous behaviour can be restored using setting
geotoh3_argument_order = 'lon_lat'
.
Example
Query:
sql
SELECT geoToH3(55.71290588, 37.79506683, 15) AS h3Index;
Result:
text
┌────────────h3Index─┐
│ 644325524701193974 │
└────────────────────┘
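As the note above explains, the pre-25.5 argument order can be restored per query. A sketch (assumes a server version that supports the setting):

```sql
-- Same point as above, but passed as (lon, lat).
SELECT geoToH3(37.79506683, 55.71290588, 15) AS h3Index
SETTINGS geotoh3_argument_order = 'lon_lat';
```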
h3ToGeo {#h3togeo}
Returns the centroid latitude and longitude corresponding to the provided
H3
index.
Syntax
sql
h3ToGeo(h3Index)
Arguments
h3Index
— H3 Index.
UInt64
.
Returned values
A tuple consisting of two values:
tuple(lat,lon)
.
lat
— Latitude.
Float64
.
lon
— Longitude.
Float64
.
Note: In ClickHouse v24.12 or older,
h3ToGeo()
returns values in order
(lon, lat)
. As of ClickHouse v25.1, the returned values are in order
(lat, lon)
. The previous behaviour can be restored using setting
h3togeo_lon_lat_result_order = true
.
Example
Query:
sql
SELECT h3ToGeo(644325524701193974) AS coordinates;
Result:
text
┌─coordinates───────────────────────────┐
│ (55.71290243145668,37.79506616830252) │
└───────────────────────────────────────┘
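Similarly, the pre-25.1 (lon, lat) result order can be restored per query. A sketch, assuming the setting is available on your server:

```sql
-- Returns the tuple as (lon, lat) instead of the default (lat, lon).
SELECT h3ToGeo(644325524701193974) AS coordinates
SETTINGS h3togeo_lon_lat_result_order = true;
```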
h3ToGeoBoundary {#h3togeoboundary}
Returns array of pairs
(lat, lon)
, which corresponds to the boundary of the provided H3 index.
Syntax
sql
h3ToGeoBoundary(h3Index)
Arguments
h3Index
— H3 Index.
UInt64
.
Returned values
Array of pairs '(lat, lon)'.
Array
(
Float64
,
Float64
).
Example
Query:
sql
SELECT h3ToGeoBoundary(644325524701193974) AS coordinates;
Result:
text
┌─h3ToGeoBoundary(644325524701193974)────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ [(37.2713558667319,-121.91508032705622),(37.353926450852256,-121.8622232890249),(37.42834118609435,-121.92354999630156),(37.42012867767779,-122.03773496427027),(37.33755608435299,-122.090428929044),(37.26319797461824,-122.02910130919001)] │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
h3kRing {#h3kring}
Lists all the
H3
hexagons within the radius of
k
from the given hexagon in random order.
Syntax
sql
h3kRing(h3index, k)
Arguments
h3index
— Hexagon index number.
UInt64
.
k
— Radius.
Integer.
Returned values
Array of H3 indexes.
Array
(
UInt64
).
Example
Query:
sql
SELECT arrayJoin(h3kRing(644325529233966508, 1)) AS h3index;
Result:
text
┌────────────h3index─┐
│ 644325529233966508 │
│ 644325529233966497 │
│ 644325529233966510 │
│ 644325529233966504 │
│ 644325529233966509 │
│ 644325529233966355 │
│ 644325529233966354 │
└────────────────────┘
h3PolygonToCells {#h3polygontocells}
Returns the hexagons (at specified resolution) contained by the provided geometry, either ring or (multi-)polygon.
Syntax
sql
h3PolygonToCells(geometry, resolution)
Arguments
geometry
can be one of the following
Geo Data Types
or their underlying primitive types:
Ring
Polygon
MultiPolygon
resolution
— Index resolution. Range:
[0, 15]
.
UInt8
.
Returned values
Array of the contained H3-indexes.
Array
(
UInt64
).
Example
Query:
sql
SELECT h3PolygonToCells([(-122.4089866999972145,37.813318999983238),(-122.3544736999993603,37.7198061999978478),(-122.4798767000009008,37.8151571999998453)], 7) AS h3index;
Result:
text
┌────────────h3index─┐
│ 608692970769612799 │
│ 608692971927240703 │
│ 608692970585063423 │
│ 608692970819944447 │
│ 608692970719281151 │
│ 608692970752835583 │
│ 608692972027903999 │
└────────────────────┘
h3GetBaseCell {#h3getbasecell}
Returns the base cell number of the
H3
index.
Syntax
sql
h3GetBaseCell(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
Hexagon base cell number.
UInt8
.
Example
Query:
sql
SELECT h3GetBaseCell(612916788725809151) AS basecell;
Result:
text
┌─basecell─┐
│ 12 │
└──────────┘
h3HexAreaM2 {#h3hexaream2}
Returns average hexagon area in square meters at the given resolution.
Syntax
sql
h3HexAreaM2(resolution)
Parameter
resolution
— Index resolution. Range:
[0, 15]
.
UInt8
.
Returned value
Area in square meters.
Float64
.
Example
Query:
sql
SELECT h3HexAreaM2(13) AS area;
Result:
text
┌─area─┐
│ 43.9 │
└──────┘
h3HexAreaKm2 {#h3hexareakm2}
Returns average hexagon area in square kilometers at the given resolution.
Syntax
sql
h3HexAreaKm2(resolution)
Parameter
resolution
— Index resolution. Range:
[0, 15]
.
UInt8
.
Returned value
Area in square kilometers.
Float64
.
Example
Query:
sql
SELECT h3HexAreaKm2(13) AS area;
Result:
text
┌──────area─┐
│ 0.0000439 │
└───────────┘
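The two area functions differ only in units, so dividing the square-meter figure by 10^6 should match the square-kilometer figure. A consistency sketch:

```sql
-- Per the examples above, both columns should show 0.0000439 at resolution 13.
SELECT h3HexAreaM2(13) / 1e6 AS km2_from_m2, h3HexAreaKm2(13) AS km2;
```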
h3IndexesAreNeighbors {#h3indexesareneighbors}
Returns whether or not the provided
H3
indexes are neighbors.
Syntax
sql
h3IndexesAreNeighbors(index1, index2)
Arguments
index1
— Hexagon index number.
UInt64
.
index2
— Hexagon index number.
UInt64
.
Returned value
1
— Indexes are neighbors.
UInt8
.
0
— Indexes are not neighbors.
UInt8
.
Example
Query:
sql
SELECT h3IndexesAreNeighbors(617420388351344639, 617420388352655359) AS n;
Result:
text
┌─n─┐
│ 1 │
└───┘
h3ToChildren {#h3tochildren}
Returns an array of child indexes for the given
H3
index.
Syntax
sql
h3ToChildren(index, resolution)
Arguments
index
— Hexagon index number.
UInt64
.
resolution
— Index resolution. Range:
[0, 15]
.
UInt8
.
Returned values
Array of the child H3-indexes.
Array
(
UInt64
).
Example
Query:
sql
SELECT h3ToChildren(599405990164561919, 6) AS children;
Result:
text
┌─children───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ [603909588852408319,603909588986626047,603909589120843775,603909589255061503,603909589389279231,603909589523496959,603909589657714687] │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
h3ToParent {#h3toparent}
Returns the parent (coarser) index containing the given
H3
index.
Syntax
sql
h3ToParent(index, resolution)
Arguments
index
— Hexagon index number.
UInt64
.
resolution
— Index resolution. Range:
[0, 15]
.
UInt8
.
Returned value
Parent H3 index.
UInt64
.
Example
Query:
sql
SELECT h3ToParent(599405990164561919, 3) AS parent;
Result:
text
┌─────────────parent─┐
│ 590398848891879423 │
└────────────────────┘
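h3ToChildren and h3ToParent are consistent with each other: every child at a finer resolution should map back to the original index. A sketch using the earlier resolution-5 index:

```sql
-- All resolution-6 children of the index should report it as their parent.
SELECT arrayAll(c -> h3ToParent(c, 5) = 599405990164561919,
                h3ToChildren(599405990164561919, 6)) AS all_match;
```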
h3ToString {#h3tostring}
Converts the
H3Index
representation of the index to the string representation.
sql
h3ToString(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
String representation of the H3 index.
String
.
Example
Query:
sql
SELECT h3ToString(617420388352917503) AS h3_string;
Result:
text
┌─h3_string───────┐
│ 89184926cdbffff │
└─────────────────┘
stringToH3 {#stringtoh3}
Converts the string representation to the
H3Index
(UInt64) representation.
Syntax
sql
stringToH3(index_str)
Parameter
index_str
— String representation of the H3 index.
String
.
Returned value
Hexagon index number. Returns 0 on error.
UInt64
.
Example
Query:
sql
SELECT stringToH3('89184926cc3ffff') AS index;
Result:
text
┌──────────────index─┐
│ 617420388351344639 │
└────────────────────┘
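The two conversions are plain hexadecimal formatting of the UInt64 index; a minimal sketch in Python (the 0-on-error behavior mirrors the description above):

```python
def h3_to_string(index: int) -> str:
    # Lowercase hex without leading zeros, as in the example output.
    return format(index, 'x')

def string_to_h3(s: str) -> int:
    try:
        return int(s, 16)
    except ValueError:
        return 0  # stringToH3 returns 0 on error

print(h3_to_string(617420388352917503))  # 89184926cdbffff
print(string_to_h3('89184926cc3ffff'))   # 617420388351344639
```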
h3GetResolution {#h3getresolution-1}
Returns the resolution of the
H3
index.
Syntax
sql
h3GetResolution(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
Index resolution. Range:
[0, 15]
.
UInt8
.
Example
Query:
sql
SELECT h3GetResolution(617420388352917503) AS res;
Result:
text
┌─res─┐
│ 9 │
└─────┘
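The resolution can be read straight out of the index's bits; a sketch, assuming the standard H3 layout (4-bit resolution field at bit offset 52):

```python
def h3_resolution(index: int) -> int:
    # Bits 52-55 hold the resolution in the H3 index layout (assumption).
    return (index >> 52) & 0xF

print(h3_resolution(617420388352917503))  # 9, matching h3GetResolution
print(h3_resolution(590398848891879423))  # 3, the h3ToParent result earlier
```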
h3IsResClassIII {#h3isresclassiii}
Returns whether
H3
index has a resolution with Class III orientation.
Syntax
sql
h3IsResClassIII(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
1
— Index has a resolution with Class III orientation.
UInt8
.
0
— Index doesn't have a resolution with Class III orientation.
UInt8
.
Example
Query:
sql
SELECT h3IsResClassIII(617420388352917503) AS res;
Result:
text
┌─res─┐
│ 1 │
└─────┘
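In the H3 grid system, Class III orientation corresponds to odd resolutions, so the check reduces to a parity test on the resolution bits (an assumption based on the H3 specification, not ClickHouse internals):

```python
def is_res_class_iii(index: int) -> bool:
    resolution = (index >> 52) & 0xF  # assumed H3 bit layout
    return resolution % 2 == 1

print(is_res_class_iii(617420388352917503))  # True: resolution 9 is odd
```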
h3IsPentagon {#h3ispentagon}
Returns whether this
H3
index represents a pentagonal cell.
Syntax
sql
h3IsPentagon(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
1
— Index represents a pentagonal cell.
UInt8
.
0
— Index doesn't represent a pentagonal cell.
UInt8
.
Example
Query:
sql
SELECT h3IsPentagon(644721767722457330) AS pentagon;
Result:
text
┌─pentagon─┐
│ 0 │
└──────────┘
h3GetFaces {#h3getfaces}
Returns icosahedron faces intersected by a given
H3
index.
Syntax
sql
h3GetFaces(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned values
Array containing icosahedron faces intersected by a given H3 index.
Array
(
UInt64
).
Example
Query:
sql
SELECT h3GetFaces(599686042433355775) AS faces;
Result:
text
┌─faces─┐
│ [7] │
└───────┘
h3CellAreaM2 {#h3cellaream2}
Returns the exact area of a specific cell in square meters corresponding to the given input H3 index.
Syntax
sql
h3CellAreaM2(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
Cell area in square meters.
Float64
.
Example
Query:
sql
SELECT h3CellAreaM2(579205133326352383) AS area;
Result:
text
┌───────────────area─┐
│ 4106166334463.9233 │
└────────────────────┘
h3CellAreaRads2 {#h3cellarearads2}
Returns the exact area of a specific cell in square radians corresponding to the given input H3 index.
Syntax
sql
h3CellAreaRads2(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
Cell area in square radians.
Float64
.
Example
Query:
sql
SELECT h3CellAreaRads2(579205133326352383) AS area;
Result:
text
┌────────────────area─┐
│ 0.10116268528089567 │
└─────────────────────┘
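The square-meter and square-radian areas differ only by a factor of R², where R is the mean Earth radius used by the H3 library (assumed here to be 6371.007180918475 km):

```python
R_M = 6371.007180918475 * 1000  # assumed H3 mean Earth radius, in meters

area_rads2 = 0.10116268528089567  # h3CellAreaRads2(579205133326352383)
area_m2 = area_rads2 * R_M ** 2

print(area_m2)  # ≈ 4106166334463.92, matching h3CellAreaM2 above
```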
h3ToCenterChild {#h3tocenterchild}
Returns the center child (finer)
H3
index contained by given
H3
at the given resolution.
Syntax
sql
h3ToCenterChild(index, resolution)
Parameter
index
— Hexagon index number.
UInt64
.
resolution
— Index resolution. Range:
[0, 15]
.
UInt8
.
Returned values
H3
index of the center child contained by given
H3
at the given resolution.
UInt64
.
Example
Query:
sql
SELECT h3ToCenterChild(577023702256844799,1) AS centerToChild;
Result:
text
┌──────centerToChild─┐
│ 581496515558637567 │
└────────────────────┘
h3ExactEdgeLengthM {#h3exactedgelengthm}
Returns the exact edge length of the unidirectional edge represented by the input h3 index in meters.
Syntax
sql
h3ExactEdgeLengthM(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
Exact edge length in meters.
Float64
.
Example
Query:
sql
SELECT h3ExactEdgeLengthM(1310277011704381439) AS exactEdgeLengthM;
Result:
text
┌───exactEdgeLengthM─┐
│ 195449.63163407316 │
└────────────────────┘
h3ExactEdgeLengthKm {#h3exactedgelengthkm}
Returns the exact edge length of the unidirectional edge represented by the input h3 index in kilometers.
Syntax
sql
h3ExactEdgeLengthKm(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
Exact edge length in kilometers.
Float64
.
Example
Query:
sql
SELECT h3ExactEdgeLengthKm(1310277011704381439) AS exactEdgeLengthKm;
Result:
text
┌──exactEdgeLengthKm─┐
│ 195.44963163407317 │
└────────────────────┘
h3ExactEdgeLengthRads {#h3exactedgelengthrads}
Returns the exact edge length of the unidirectional edge represented by the input h3 index in radians.
Syntax
sql
h3ExactEdgeLengthRads(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
Exact edge length in radians.
Float64
.
Example
Query:
sql
SELECT h3ExactEdgeLengthRads(1310277011704381439) AS exactEdgeLengthRads;
Result:
text
┌──exactEdgeLengthRads─┐
│ 0.030677980118976447 │
└──────────────────────┘
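The three edge-length units are related by the same radius constant: meters = kilometers × 1000 = radians × R. A sketch, under the assumption that R is H3's mean Earth radius (6371.007180918475 km):

```python
R_KM = 6371.007180918475  # assumed H3 mean Earth radius, in kilometers

rads = 0.030677980118976447  # h3ExactEdgeLengthRads(1310277011704381439)

print(rads * R_KM)         # ≈ 195.4496 km, matching h3ExactEdgeLengthKm
print(rads * R_KM * 1000)  # ≈ 195449.63 m, matching h3ExactEdgeLengthM
```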
h3NumHexagons {#h3numhexagons}
Returns the number of unique H3 indices at the given resolution.
Syntax
sql
h3NumHexagons(resolution)
Parameter
resolution
— Index resolution. Range:
[0, 15]
.
UInt8
.
Returned value
Number of H3 indices.
Int64
.
Example
Query:
sql
SELECT h3NumHexagons(3) AS numHexagons;
Result:
text
┌─numHexagons─┐
│ 41162 │
└─────────────┘
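The count has a simple closed form, consistent with the values above: 122 base cells at resolution 0 (110 hexagons plus 12 pentagons), and each finer resolution multiplies the total by 7 except that the 12 pentagons contribute one child fewer, giving 2 + 120·7^r cells in total:

```python
# Total cells at resolution r: 2 + 120 * 7^r (12 of them are pentagons).
def num_cells(resolution: int) -> int:
    return 2 + 120 * 7 ** resolution

print(num_cells(0))  # 122 base cells
print(num_cells(3))  # 41162, matching h3NumHexagons(3)
```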
h3PointDistM {#h3pointdistm}
Returns the "great circle" or "haversine" distance between two GeoCoord points (latitude/longitude) in meters.
Syntax
sql
h3PointDistM(lat1, lon1, lat2, lon2)
Arguments
lat1
,
lon1
— Latitude and Longitude of point1 in degrees.
Float64
.
lat2
,
lon2
— Latitude and Longitude of point2 in degrees.
Float64
.
Returned values
Haversine or great circle distance in meters.
Float64
.
Example
Query:
sql
SELECT h3PointDistM(-10.0 ,0.0, 10.0, 0.0) AS h3PointDistM;
Result:
text
┌──────h3PointDistM─┐
│ 2223901.039504589 │
└───────────────────┘
h3PointDistKm {#h3pointdistkm}
Returns the "great circle" or "haversine" distance between two GeoCoord points (latitude/longitude) in kilometers.
Syntax
sql
h3PointDistKm(lat1, lon1, lat2, lon2)
Arguments
lat1
,
lon1
— Latitude and Longitude of point1 in degrees.
Float64
.
lat2
,
lon2
— Latitude and Longitude of point2 in degrees.
Float64
.
Returned values
Haversine or great circle distance in kilometers.
Float64
.
Example
Query:
sql
SELECT h3PointDistKm(-10.0 ,0.0, 10.0, 0.0) AS h3PointDistKm;
Result:
text
┌─────h3PointDistKm─┐
│ 2223.901039504589 │
└───────────────────┘
h3PointDistRads {#h3pointdistrads}
Returns the "great circle" or "haversine" distance between two GeoCoord points (latitude/longitude) in radians.
Syntax
sql
h3PointDistRads(lat1, lon1, lat2, lon2)
Arguments
lat1
,
lon1
— Latitude and Longitude of point1 in degrees.
Float64
.
lat2
,
lon2
— Latitude and Longitude of point2 in degrees.
Float64
.
Returned values
Haversine or great circle distance in radians.
Float64
.
Example
Query:
sql
SELECT h3PointDistRads(-10.0 ,0.0, 10.0, 0.0) AS h3PointDistRads;
Result:
text
┌────h3PointDistRads─┐
│ 0.3490658503988659 │
└────────────────────┘
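All three h3PointDist* variants are the same haversine formula in different units; meters and kilometers scale the radian distance by the Earth radius. A sketch, assuming H3's mean Earth radius of 6371.007180918475 km:

```python
import math

R_M = 6371.007180918475 * 1000  # assumed H3 mean Earth radius, in meters

def point_dist_rads(lat1, lon1, lat2, lon2):
    """Haversine distance on the unit sphere, in radians."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * math.asin(math.sqrt(a))

rads = point_dist_rads(-10.0, 0.0, 10.0, 0.0)
print(rads)        # ≈ 0.3490658503988659, matching h3PointDistRads
print(rads * R_M)  # ≈ 2223901.04 m, matching h3PointDistM
```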
h3GetRes0Indexes {#h3getres0indexes}
Returns an array of all the resolution 0 H3 indexes.
Syntax
sql
h3GetRes0Indexes()
Returned values
Array of all the resolution 0 H3 indexes.
Array
(
UInt64
).
Example
Query:
sql
SELECT h3GetRes0Indexes() AS indexes;
Result:
text
┌─indexes─────────────────────────────────────┐
│ [576495936675512319,576531121047601151,....]│
└─────────────────────────────────────────────┘
h3GetPentagonIndexes {#h3getpentagonindexes}
Returns all the pentagon H3 indexes at the specified resolution.
Syntax
sql
h3GetPentagonIndexes(resolution)
Parameter
resolution
— Index resolution. Range:
[0, 15]
.
UInt8
.
Returned value
Array of all pentagon H3 indexes.
Array
(
UInt64
).
Example
Query:
sql
SELECT h3GetPentagonIndexes(3) AS indexes;
Result:
text
┌─indexes────────────────────────────────────────────────────────┐
│ [590112357393367039,590464201114255359,590816044835143679,...] │
└────────────────────────────────────────────────────────────────┘
h3Line {#h3line}
Returns the line of indices between the two indices that are provided.
Syntax
sql
h3Line(start,end)
Parameter
start
— Hexagon index number that represents a starting point.
UInt64
.
end
— Hexagon index number that represents an ending point.
UInt64
.
Returned value
Array of h3 indexes representing the line of indices between the two provided indices.
Array
(
UInt64
).
Example
Query:
sql
SELECT h3Line(590080540275638271,590103561300344831) AS indexes;
Result:
text
┌─indexes────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ [590080540275638271,590080471556161535,590080883873021951,590106516237844479,590104385934065663,590103630019821567,590103561300344831] │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
h3Distance {#h3distance}
Returns the distance in grid cells between the two indices that are provided.
Syntax
sql
h3Distance(start,end)
Parameter
start
— Hexagon index number that represents a starting point.
UInt64
.
end
— Hexagon index number that represents an ending point.
UInt64
.
Returned value
Number of grid cells.
Int64
.
Returns a negative number if finding the distance fails.
Example
Query:
sql
SELECT h3Distance(590080540275638271,590103561300344831) AS distance;
Result:
text
┌─distance─┐
│ 7 │
└──────────┘
h3HexRing {#h3hexring}
Returns the indexes of the hexagonal ring centered at the provided origin h3Index at distance k.
An error is reported if pentagonal distortion is encountered.
Syntax
sql
h3HexRing(index, k)
Parameter
index
— Hexagon index number that represents the origin.
UInt64
.
k
— Distance.
UInt64
.
Returned values
Array of H3 indexes.
Array
(
UInt64
).
Example
Query:
sql
SELECT h3HexRing(590080540275638271, toUInt16(1)) AS hexRing;
Result:
text
┌─hexRing─────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ [590080815153545215,590080471556161535,590080677714591743,590077585338138623,590077447899185151,590079509483487231] │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
h3GetUnidirectionalEdge {#h3getunidirectionaledge}
Returns a unidirectional edge H3 index based on the provided origin and destination and returns 0 on error.
Syntax
sql
h3GetUnidirectionalEdge(originIndex, destinationIndex)
Parameter
originIndex
— Origin Hexagon index number.
UInt64
.
destinationIndex
— Destination Hexagon index number.
UInt64
.
Returned value
Unidirectional Edge Hexagon Index number.
UInt64
.
Example
Query:
sql
SELECT h3GetUnidirectionalEdge(599686042433355775, 599686043507097599) AS edge;
Result:
text
┌────────────────edge─┐
│ 1248204388774707199 │
└─────────────────────┘
h3UnidirectionalEdgeIsValid {#h3unidirectionaledgeisvalid}
Determines if the provided H3Index is a valid unidirectional edge index. Returns 1 if it's a unidirectional edge and 0 otherwise.
Syntax
sql
h3UnidirectionalEdgeIsValid(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
1 — The H3 index is a valid unidirectional edge.
UInt8
.
0 — The H3 index is not a valid unidirectional edge.
UInt8
.
Example
Query:
sql
SELECT h3UnidirectionalEdgeIsValid(1248204388774707199) AS validOrNot;
Result:
text
┌─validOrNot─┐
│ 1 │
└────────────┘
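The validity test, and the origin recovery documented next, fall out of the H3 bit layout: bits 59-62 hold the index mode (1 = cell, 2 = unidirectional edge) and bits 56-58 the edge direction. This is an assumption about the H3 library's internal representation, sketched for illustration:

```python
# Assumed H3 bit layout: mode in bits 59-62, edge direction in bits 56-58.
def is_unidirectional_edge(index: int) -> bool:
    return ((index >> 59) & 0xF) == 2

def edge_origin(edge: int) -> int:
    # Clear mode + direction bits, then stamp the cell mode (1) back in.
    return (edge & ~(0x7F << 56)) | (1 << 59)

print(is_unidirectional_edge(1248204388774707199))  # True
print(edge_origin(1248204388774707199))             # 599686042433355775
```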
h3GetOriginIndexFromUnidirectionalEdge {#h3getoriginindexfromunidirectionaledge}
Returns the origin hexagon index from the unidirectional edge H3Index.
Syntax
sql
h3GetOriginIndexFromUnidirectionalEdge(edge)
Parameter
edge
— Hexagon index number that represents a unidirectional edge.
UInt64
.
Returned value
Origin Hexagon Index number.
UInt64
.
Example
Query:
sql
SELECT h3GetOriginIndexFromUnidirectionalEdge(1248204388774707197) AS origin;
Result:
text
┌─────────────origin─┐
│ 599686042433355773 │
└────────────────────┘
h3GetDestinationIndexFromUnidirectionalEdge {#h3getdestinationindexfromunidirectionaledge}
Returns the destination hexagon index from the unidirectional edge H3Index.
Syntax
sql
h3GetDestinationIndexFromUnidirectionalEdge(edge)
Parameter
edge
— Hexagon index number that represents a unidirectional edge.
UInt64
.
Returned value
Destination Hexagon Index number.
UInt64
.
Example
Query:
sql
SELECT h3GetDestinationIndexFromUnidirectionalEdge(1248204388774707197) AS destination;
Result:
text
┌────────destination─┐
│ 599686043507097597 │
└────────────────────┘
h3GetIndexesFromUnidirectionalEdge {#h3getindexesfromunidirectionaledge}
Returns the origin and destination hexagon indexes from the given unidirectional edge H3Index.
Syntax
sql
h3GetIndexesFromUnidirectionalEdge(edge)
Parameter
edge
— Hexagon index number that represents a unidirectional edge.
UInt64
.
Returned value
A tuple consisting of two values
tuple(origin,destination)
:
origin
— Origin Hexagon index number.
UInt64
.
destination
— Destination Hexagon index number.
UInt64
.
Returns
(0,0)
if the provided input is not valid.
Example
Query:
sql
SELECT h3GetIndexesFromUnidirectionalEdge(1248204388774707199) AS indexes;
Result:
text
┌─indexes─────────────────────────────────┐
│ (599686042433355775,599686043507097599) │
└─────────────────────────────────────────┘
h3GetUnidirectionalEdgesFromHexagon {#h3getunidirectionaledgesfromhexagon}
Provides all of the unidirectional edges from the provided H3Index.
Syntax
sql
h3GetUnidirectionalEdgesFromHexagon(index)
Parameter
index
— Hexagon index number.
UInt64
.
Returned value
Array of h3 indexes representing each unidirectional edge.
Array
(
UInt64
).
Example
Query:
sql
SELECT h3GetUnidirectionalEdgesFromHexagon(1248204388774707199) AS edges;
Result:
text
┌─edges─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ [1248204388774707199,1320261982812635135,1392319576850563071,1464377170888491007,1536434764926418943,1608492358964346879] │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
h3GetUnidirectionalEdgeBoundary {#h3getunidirectionaledgeboundary}
Returns the coordinates defining the unidirectional edge.
Syntax
sql
h3GetUnidirectionalEdgeBoundary(index)
Parameter
index
— Hexagon index number that represents a unidirectional edge.
UInt64
.
Returned value
Array of pairs '(lon, lat)'.
Array
(
Float64
,
Float64
).
Example
Query:
sql
SELECT h3GetUnidirectionalEdgeBoundary(1248204388774707199) AS boundary;
Result:
text
┌─boundary────────────────────────────────────────────────────────────────────────┐
│ [(37.42012867767779,-122.03773496427027),(37.33755608435299,-122.090428929044)] │
└─────────────────────────────────────────────────────────────────────────────────┘
description: 'Documentation for Index'
sidebar_label: 'Geo'
slug: /sql-reference/functions/geo/
title: 'Geo Functions'
doc_type: 'reference'
Functions for working with geometric objects, for example to calculate distances between points on a sphere, compute geohashes, and work with h3 indexes.
description: 'Documentation for Polygon'
sidebar_label: 'Polygons'
slug: /sql-reference/functions/geo/polygons
title: 'Functions for Working with Polygons'
doc_type: 'reference'
WKT {#wkt}
Returns a WKT (Well Known Text) geometric object from various
Geo Data Types
. Supported WKT objects are:
POINT
POLYGON
MULTIPOLYGON
LINESTRING
MULTILINESTRING
Syntax
sql
WKT(geo_data)
Parameters
geo_data
can be one of the following
Geo Data Types
or their underlying primitive types:
Point
Ring
Polygon
MultiPolygon
LineString
MultiLineString
Returned value
WKT geometric object
POINT
is returned for a Point.
WKT geometric object
POLYGON
is returned for a Polygon
WKT geometric object
MULTIPOLYGON
is returned for a MultiPolygon.
WKT geometric object
LINESTRING
is returned for a LineString.
WKT geometric object
MULTILINESTRING
is returned for a MultiLineString.
Examples
POINT from tuple:
sql
SELECT wkt((0., 0.));
response
POINT(0 0)
POLYGON from an array of tuples or an array of tuple arrays:
sql
SELECT wkt([(0., 0.), (10., 0.), (10., 10.), (0., 10.)]);
response
POLYGON((0 0,10 0,10 10,0 10))
MULTIPOLYGON from an array of multi-dimensional tuple arrays:
sql
SELECT wkt([[[(0., 0.), (10., 0.), (10., 10.), (0., 10.)], [(4., 4.), (5., 4.), (5., 5.), (4., 5.)]], [[(-10., -10.), (-10., -9.), (-9., 10.)]]]);
response
MULTIPOLYGON(((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))
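The formatting rule behind these outputs is mechanical: coordinates are joined with spaces, points with commas, and rings wrapped in parentheses, with numbers printed in their shortest form. A minimal sketch for points and polygons (function names are illustrative, not a ClickHouse API):

```python
def _coord(p):
    return f"{p[0]:g} {p[1]:g}"  # %g drops trailing zeros: 10.0 -> "10"

def wkt_point(p):
    return f"POINT({_coord(p)})"

def wkt_polygon(rings):
    body = ",".join("(" + ",".join(_coord(p) for p in ring) + ")"
                    for ring in rings)
    return f"POLYGON({body})"

print(wkt_point((0., 0.)))  # POINT(0 0)
print(wkt_polygon([[(0., 0.), (10., 0.), (10., 10.), (0., 10.)]]))
# POLYGON((0 0,10 0,10 10,0 10))
```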
readWKTMultiPolygon {#readwktmultipolygon}
Converts a WKT (Well Known Text) MultiPolygon into a MultiPolygon type.
Example {#example}
```sql
SELECT
toTypeName(readWKTMultiPolygon('MULTIPOLYGON(((2 0,10 0,10 10,0 10,2 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))')) AS type,
readWKTMultiPolygon('MULTIPOLYGON(((2 0,10 0,10 10,0 10,2 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))') AS output FORMAT Markdown
```
| type | output |
|:-|:-|
| MultiPolygon | [[[(2,0),(10,0),(10,10),(0,10),(2,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]],[[(-10,-10),(-10,-9),(-9,10),(-10,-10)]]] |
Input parameters {#input-parameters}
String starting with
MULTIPOLYGON
Returned value {#returned-value}
MultiPolygon
readWKTPolygon {#readwktpolygon}
Converts a WKT (Well Known Text) Polygon into a Polygon type.
Example {#example-1}
sql
SELECT
toTypeName(readWKTPolygon('POLYGON((2 0,10 0,10 10,0 10,2 0))')) AS type,
readWKTPolygon('POLYGON((2 0,10 0,10 10,0 10,2 0))') AS output
FORMAT Markdown
| type | output |
|:-|:-|
| Polygon | [[(2,0),(10,0),(10,10),(0,10),(2,0)]] |
Input parameters {#input-parameters-1}
String starting with
POLYGON
Returned value {#returned-value-1}
Polygon
readWKTPoint {#readwktpoint}
The
readWKTPoint
function in ClickHouse parses a Well-Known Text (WKT) representation of a Point geometry and returns a point in the internal ClickHouse format.
Syntax {#syntax}
sql
readWKTPoint(wkt_string) | {"source_file": "polygon.md"} | [
-0.09099327772855759,
0.025062311440706253,
-0.04535079747438431,
0.022887304425239563,
-0.0936400517821312,
-0.07719824463129044,
0.08735360950231552,
0.06592366099357605,
-0.01498428825289011,
-0.0268564336001873,
-0.06232553347945213,
-0.0759805366396904,
0.028482399880886078,
-0.059080... |
26f60f62-f3b9-4c6d-8289-96ba8b05d7d8 | Syntax {#syntax}
sql
readWKTPoint(wkt_string)
Arguments {#arguments}
wkt_string
: The input WKT string representing a Point geometry.
Returned value {#returned-value-2}
The function returns a ClickHouse internal representation of the Point geometry.
Example {#example-2}
sql
SELECT readWKTPoint('POINT (1.2 3.4)');
response
(1.2,3.4)
readWKTLineString {#readwktlinestring}
Parses a Well-Known Text (WKT) representation of a LineString geometry and returns it in the internal ClickHouse format.
Syntax {#syntax-1}
sql
readWKTLineString(wkt_string)
Arguments {#arguments-1}
wkt_string
: The input WKT string representing a LineString geometry.
Returned value {#returned-value-3}
The function returns a ClickHouse internal representation of the linestring geometry.
Example {#example-3}
sql
SELECT readWKTLineString('LINESTRING (1 1, 2 2, 3 3, 1 1)');
response
[(1,1),(2,2),(3,3),(1,1)]
readWKTMultiLineString {#readwktmultilinestring}
Parses a Well-Known Text (WKT) representation of a MultiLineString geometry and returns it in the internal ClickHouse format.
Syntax {#syntax-2}
sql
readWKTMultiLineString(wkt_string)
Arguments {#arguments-2}
wkt_string
: The input WKT string representing a MultiLineString geometry.
Returned value {#returned-value-4}
The function returns a ClickHouse internal representation of the multilinestring geometry.
Example {#example-4}
sql
SELECT readWKTMultiLineString('MULTILINESTRING ((1 1, 2 2, 3 3), (4 4, 5 5, 6 6))');
response
[[(1,1),(2,2),(3,3)],[(4,4),(5,5),(6,6)]]
readWKTRing {#readwktring}
Parses a Well-Known Text (WKT) representation of a Polygon geometry and returns a ring (closed linestring) in the internal ClickHouse format.
Syntax {#syntax-3}
sql
readWKTRing(wkt_string)
Arguments {#arguments-3}
wkt_string
: The input WKT string representing a Polygon geometry.
Returned value {#returned-value-5}
The function returns a ClickHouse internal representation of the ring (closed linestring) geometry.
Example {#example-5}
sql
SELECT readWKTRing('POLYGON ((1 1, 2 2, 3 3, 1 1))');
response
[(1,1),(2,2),(3,3),(1,1)]
polygonsWithinSpherical {#polygonswithinspherical}
Returns true or false depending on whether one polygon lies completely inside another polygon. Reference: https://www.boost.org/doc/libs/1_62_0/libs/geometry/doc/html/geometry/reference/algorithms/within/within_2.html
Example {#example-6}
sql
SELECT polygonsWithinSpherical([[[(4.3613577, 50.8651821), (4.349556, 50.8535879), (4.3602419, 50.8435626), (4.3830299, 50.8428851), (4.3904543, 50.8564867), (4.3613148, 50.8651279)]]], [[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]]);
response
0
readWKBMultiPolygon {#readwkbmultipolygon}
Converts a WKB (Well Known Binary) MultiPolygon into a MultiPolygon type.
Example {#example-7}
```sql
SELECT
toTypeName(readWKBMultiPolygon(unhex('0106000000020000000103000000020000000500000000000000000000400000000000000000000000000000244000000000000000000000000000002440000000000000244000000000000000000000000000002440000000000000004000000000000000000500000000000000000010400000000000001040000000000000144000000000000010400000000000001440000000000000144000000000000010400000000000001440000000000000104000000000000010400103000000010000000400000000000000000024c000000000000024c000000000000024c000000000000022c000000000000022c0000000000000244000000000000024c000000000000024c0'))) AS type,
readWKBMultiPolygon(unhex('0106000000020000000103000000020000000500000000000000000000400000000000000000000000000000244000000000000000000000000000002440000000000000244000000000000000000000000000002440000000000000004000000000000000000500000000000000000010400000000000001040000000000000144000000000000010400000000000001440000000000000144000000000000010400000000000001440000000000000104000000000000010400103000000010000000400000000000000000024c000000000000024c000000000000024c000000000000022c000000000000022c0000000000000244000000000000024c000000000000024c0')) AS output FORMAT Markdown
```
| type | output |
|:-|:-|
| MultiPolygon | [[[(2,0),(10,0),(10,10),(0,10),(2,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]],[[(-10,-10),(-10,-9),(-9,10),(-10,-10)]]] |
Input parameters {#input-parameters-2}
A binary string containing a WKB-encoded MultiPolygon
Returned value {#returned-value-6}
MultiPolygon
readWKBPolygon {#readwkbpolygon}
Converts a WKB (Well Known Binary) Polygon into a Polygon type.
Example {#example-8}
sql
SELECT
toTypeName(readWKBPolygon(unhex('010300000001000000050000000000000000000040000000000000000000000000000024400000000000000000000000000000244000000000000024400000000000000000000000000000244000000000000000400000000000000000'))) AS type,
readWKBPolygon(unhex('010300000001000000050000000000000000000040000000000000000000000000000024400000000000000000000000000000244000000000000024400000000000000000000000000000244000000000000000400000000000000000')) AS output
FORMAT Markdown
| type | output |
|:-|:-|
| Polygon | [[(2,0),(10,0),(10,10),(0,10),(2,0)]] |
Input parameters {#input-parameters-3}
A binary string containing a WKB-encoded Polygon
Returned value {#returned-value-7}
Polygon
readWKBPoint {#readwkbpoint}
The
readWKBPoint
function in ClickHouse parses a Well-Known Binary (WKB) representation of a Point geometry and returns a point in the internal ClickHouse format.
Syntax {#syntax-4}
sql
readWKBPoint(wkb_string)
Arguments {#arguments-4}
wkb_string
: The input WKB string representing a Point geometry.
Returned value {#returned-value-8}
The function returns a ClickHouse internal representation of the Point geometry.
Example {#example-9}
sql
SELECT readWKBPoint(unhex('0101000000333333333333f33f3333333333330b40'));
response
(1.2,3.4)
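The hex string in this example decodes as standard WKB: one byte-order byte (01 = little-endian), a UInt32 geometry type (1 = Point), then two little-endian Float64 coordinates. A sketch with Python's struct module:

```python
import struct

# WKB Point: byte order (1 = little-endian), geometry type (1 = Point), x, y.
wkb = struct.pack('<BIdd', 1, 1, 1.2, 3.4)
print(wkb.hex())  # 0101000000333333333333f33f3333333333330b40

byte_order, geom_type, x, y = struct.unpack('<BIdd', wkb)
print((x, y))  # (1.2, 3.4), what readWKBPoint recovers
```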
readWKBLineString {#readwkblinestring}
Parses a Well-Known Binary (WKB) representation of a LineString geometry and returns it in the internal ClickHouse format.
Syntax {#syntax-5}
sql
readWKBLineString(wkb_string)
Arguments {#arguments-5}
wkb_string
: The input WKB string representing a LineString geometry.
Returned value {#returned-value-9}
The function returns a ClickHouse internal representation of the linestring geometry.
Example {#example-10}
sql
SELECT readWKBLineString(unhex('010200000004000000000000000000f03f000000000000f03f0000000000000040000000000000004000000000000008400000000000000840000000000000f03f000000000000f03f'));
response
[(1,1),(2,2),(3,3),(1,1)]
readWKBMultiLineString {#readwkbmultilinestring}
Parses a Well-Known Binary (WKB) representation of a MultiLineString geometry and returns it in the internal ClickHouse format.
Syntax {#syntax-6}
sql
readWKBMultiLineString(wkb_string)
Arguments {#arguments-6}
wkb_string
: The input WKB string representing a MultiLineString geometry.
Returned value {#returned-value-10}
The function returns a ClickHouse internal representation of the multilinestring geometry.
Example {#example-11}
sql
SELECT readWKBMultiLineString(unhex('010500000002000000010200000003000000000000000000f03f000000000000f03f0000000000000040000000000000004000000000000008400000000000000840010200000003000000000000000000104000000000000010400000000000001440000000000000144000000000000018400000000000001840'));
response
[[(1,1),(2,2),(3,3)],[(4,4),(5,5),(6,6)]]
polygonsDistanceSpherical {#polygonsdistancespherical}
Calculates the minimal distance between two points, where one point belongs to the first polygon and the other to the second polygon. Spherical means that coordinates are interpreted as coordinates on a pure and ideal sphere, which is not true for the Earth. Using this type of coordinate system speeds up execution, but it is not precise.
Example {#example-12}
sql
SELECT polygonsDistanceSpherical([[[(0, 0), (0, 0.1), (0.1, 0.1), (0.1, 0)]]], [[[(10., 10.), (10., 40.), (40., 40.), (40., 10.), (10., 10.)]]])
response
0.24372872211133834
Input parameters {#input-parameters-5}
Two polygons
Returned value {#returned-value-12}
Float64
polygonsDistanceCartesian {#polygonsdistancecartesian}
Calculates the distance between two polygons.
Example {#example-13}
sql
SELECT polygonsDistanceCartesian([[[(0, 0), (0, 0.1), (0.1, 0.1), (0.1, 0)]]], [[[(10., 10.), (10., 40.), (40., 40.), (40., 10.), (10., 10.)]]])
response
14.000714267493642
Input parameters {#input-parameters-6}
Two polygons
Returned value {#returned-value-13}
Float64
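For the two axis-aligned squares in the example above, the minimum is attained between the closest corners, (0.1, 0.1) and (10, 10), so the value reduces to a Euclidean corner-to-corner distance:

```python
import math

# Closest corners of the two squares in the polygonsDistanceCartesian example.
d = math.hypot(10 - 0.1, 10 - 0.1)
print(d)  # ≈ 14.0007142675, matching the documented result
```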
polygonsEqualsCartesian {#polygonsequalscartesian}
Returns true if two polygons are equal.
Example {#example-14}
sql
SELECT polygonsEqualsCartesian([[[(1., 1.), (1., 4.), (4., 4.), (4., 1.)]]], [[[(1., 1.), (1., 4.), (4., 4.), (4., 1.), (1., 1.)]]])
response
1
Input parameters {#input-parameters-7}
Two polygons
Returned value {#returned-value-14}
UInt8, 0 for false, 1 for true
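Note that the two arguments in the example differ only in that the second ring is explicitly closed, yet they compare equal: ring equality ignores the starting vertex and the redundant closing point. A simplified Python sketch of that normalization (the idea only; the underlying geometry library compares geometries, not vertex lists):

```python
def normalize_ring(ring):
    """Canonical form of a ring: drop the duplicate closing point, then pick
    the lexicographically smallest rotation over both orientations."""
    pts = list(ring)
    if len(pts) > 1 and pts[0] == pts[-1]:
        pts = pts[:-1]
    candidates = []
    for seq in (pts, pts[::-1]):          # both orientations
        for i in range(len(seq)):          # all rotations
            candidates.append(tuple(seq[i:] + seq[:i]))
    return min(candidates)

def rings_equal(a, b):
    return normalize_ring(a) == normalize_ring(b)

open_ring = [(1., 1.), (1., 4.), (4., 4.), (4., 1.)]
closed_ring = [(1., 1.), (1., 4.), (4., 4.), (4., 1.), (1., 1.)]
print(rings_equal(open_ring, closed_ring))  # True
```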
polygonsSymDifferenceSpherical {#polygonssymdifferencespherical}
Calculates the spatial set-theoretic symmetric difference (XOR) between two polygons.
Example {#example-15}
sql
SELECT wkt(arraySort(polygonsSymDifferenceSpherical([[(50., 50.), (50., -50.), (-50., -50.), (-50., 50.), (50., 50.)], [(10., 10.), (10., 40.), (40., 40.), (40., 10.), (10., 10.)], [(-10., -10.), (-10., -40.), (-40., -40.), (-40., -10.), (-10., -10.)]], [[(-20., -20.), (-20., 20.), (20., 20.), (20., -20.), (-20., -20.)]])));
response
MULTIPOLYGON(((-20 -10.3067,-10 -10,-10 -20.8791,-20 -20,-20 -10.3067)),((10 20.8791,20 20,20 10.3067,10 10,10 20.8791)),((50 50,50 -50,-50 -50,-50 50,50 50),(20 10.3067,40 10,40 40,10 40,10 20.8791,-20 20,-20 -10.3067,-40 -10,-40 -40,-10 -40,-10 -20.8791,20 -20,20 10.3067)))
Input parameters {#input-parameters-8}
Polygons
Returned value {#returned-value-15}
MultiPolygon
polygonsSymDifferenceCartesian {#polygonssymdifferencecartesian}
The same as
polygonsSymDifferenceSpherical
, but the coordinates are in the Cartesian coordinate system, which is closer to the model of the real Earth.
Example {#example-16}
sql
SELECT wkt(polygonsSymDifferenceCartesian([[[(0, 0), (0, 3), (1, 2.9), (2, 2.6), (2.6, 2), (2.9, 1), (3, 0), (0, 0)]]], [[[(1., 1.), (1., 4.), (4., 4.), (4., 1.), (1., 1.)]]]))
response
MULTIPOLYGON(((1 2.9,1 1,2.9 1,3 0,0 0,0 3,1 2.9)),((1 2.9,1 4,4 4,4 1,2.9 1,2.6 2,2 2.6,1 2.9)))
Input parameters {#input-parameters-9}
Polygons
Returned value {#returned-value-16}
MultiPolygon
polygonsIntersectionSpherical {#polygonsintersectionspherical}
Calculates the intersection (AND) between polygons; coordinates are spherical.
Example {#example-17}
sql
SELECT wkt(arrayMap(a -> arrayMap(b -> arrayMap(c -> (round(c.1, 6), round(c.2, 6)), b), a), polygonsIntersectionSpherical([[[(4.3613577, 50.8651821), (4.349556, 50.8535879), (4.3602419, 50.8435626), (4.3830299, 50.8428851), (4.3904543, 50.8564867), (4.3613148, 50.8651279)]]], [[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]])))
response
MULTIPOLYGON(((4.3666 50.8434,4.36024 50.8436,4.34956 50.8536,4.35268 50.8567,4.36794 50.8525,4.3666 50.8434)))
Input parameters {#input-parameters-10}
Polygons
Returned value {#returned-value-17}
MultiPolygon
polygonsWithinCartesian {#polygonswithincartesian}
Returns true if the first polygon is completely inside the second polygon.
Example {#example-18}
sql
SELECT polygonsWithinCartesian([[[(2., 2.), (2., 3.), (3., 3.), (3., 2.)]]], [[[(1., 1.), (1., 4.), (4., 4.), (4., 1.), (1., 1.)]]])
response
1
Input parameters {#input-parameters-11}
Two polygons
Returned value {#returned-value-18}
UInt8, 0 for false, 1 for true
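In the example the first (smaller) polygon lies inside the second, and the function returns 1. When the outer ring is convex, the check reduces to verifying that every vertex of the inner ring is inside the outer one; a Python sketch using even-odd ray casting (sufficient for this convex case only — concave outer rings also require edge-intersection tests):

```python
def point_in_ring(pt, ring):
    """Even-odd ray casting: is pt strictly inside the ring?"""
    x, y = pt
    inside = False
    n = len(ring)
    for i in range(n):
        (x1, y1), (x2, y2) = ring[i], ring[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # the edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def ring_within(inner, outer):
    """Sufficient check when outer is convex: all vertices of inner lie inside outer."""
    return all(point_in_ring(p, outer) for p in inner)

inner = [(2., 2.), (2., 3.), (3., 3.), (3., 2.)]
outer = [(1., 1.), (1., 4.), (4., 4.), (4., 1.)]
print(ring_within(inner, outer))  # True, matching the example above
```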
polygonsIntersectCartesian {#polygonsintersectcartesian}
Returns true if the two polygons intersect (share any common area or boundary).
Example {#example-intersects-cartesian}
sql
SELECT polygonsIntersectCartesian([[[(2., 2.), (2., 3.), (3., 3.), (3., 2.)]]], [[[(1., 1.), (1., 4.), (4., 4.), (4., 1.), (1., 1.)]]])
response
1
Input parameters {#input-parameters-intersects-cartesian}
Two polygons
Returned value {#returned-value-intersects-cartesian}
UInt8, 0 for false, 1 for true
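Two rings intersect when their boundaries cross, or when one ring lies entirely inside the other; a Python sketch using the orientation-based segment test plus a single point-in-polygon probe (assumes simple, non-self-intersecting rings; not ClickHouse's implementation):

```python
def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p1, p2, q1, q2):
    """True if segments p1p2 and q1q2 properly intersect."""
    return (orient(p1, p2, q1) != orient(p1, p2, q2)
            and orient(q1, q2, p1) != orient(q1, q2, p2))

def point_in_ring(pt, ring):
    """Even-odd ray casting."""
    x, y = pt
    inside, n = False, len(ring)
    for i in range(n):
        (x1, y1), (x2, y2) = ring[i], ring[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def rings_intersect(a, b):
    ea = [(a[i], a[(i + 1) % len(a)]) for i in range(len(a))]
    eb = [(b[i], b[(i + 1) % len(b)]) for i in range(len(b))]
    if any(segments_cross(*s, *t) for s in ea for t in eb):
        return True
    # No boundary crossing: they still intersect if one ring lies inside the other.
    return point_in_ring(a[0], b) or point_in_ring(b[0], a)

a = [(2., 2.), (2., 3.), (3., 3.), (3., 2.)]
b = [(1., 1.), (1., 4.), (4., 4.), (4., 1.)]
print(rings_intersect(a, b))  # True: b contains a, so they share common area
```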
polygonsIntersectSpherical {#polygonsintersectspherical}
Returns true if the two polygons intersect (share any common area or boundary). Reference: https://www.boost.org/doc/libs/1_62_0/libs/geometry/doc/html/geometry/reference/algorithms/intersects.html
Example {#example-intersects-spherical}
sql
SELECT polygonsIntersectSpherical([[[(4.3613577, 50.8651821), (4.349556, 50.8535879), (4.3602419, 50.8435626), (4.3830299, 50.8428851), (4.3904543, 50.8564867), (4.3613148, 50.8651279)]]], [[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]]);
response
1
Input parameters {#input-parameters-intersects-spherical}
Two polygons
Returned value {#returned-value-intersects-spherical}
UInt8, 0 for false, 1 for true
polygonConvexHullCartesian {#polygonconvexhullcartesian}
Calculates a convex hull. Coordinates are in the Cartesian coordinate system.
Example {#example-19}
sql
SELECT wkt(polygonConvexHullCartesian([[[(0., 0.), (0., 5.), (5., 5.), (5., 0.), (2., 3.)]]]))
response
POLYGON((0 0,0 5,5 5,5 0,0 0))
Input parameters {#input-parameters-12}
MultiPolygon
Returned value {#returned-value-19}
Polygon
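A standard way to compute a 2-D convex hull is Andrew's monotone chain; the sketch below reproduces the example, dropping the interior point (2, 3). It is illustrative only, and the vertex order may differ from ClickHouse's output:

```python
def cross(o, a, b):
    """Cross product of vectors oa and ob."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

print(convex_hull([(0., 0.), (0., 5.), (5., 5.), (5., 0.), (2., 3.)]))
# [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0), (0.0, 5.0)] — the interior point (2, 3) is dropped
```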
polygonAreaSpherical {#polygonareaspherical}
Calculates the surface area of a polygon.
Example {#example-20}
sql
SELECT round(polygonAreaSpherical([[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]]), 14)
response
9.387704e-8
Input parameters {#input-parameters-13}
Polygon
Returned value {#returned-value-20}
Float
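The example result, about 9.39e-8, reads as an area on the unit sphere (steradians) — that unit is my interpretation of the output; multiply by the Earth's radius squared for a physical area. For a polygon this small, a planar shoelace on an equirectangular projection around the mean latitude gives nearly the same number:

```python
import math

def approx_spherical_area(ring):
    """Planar shoelace on an equirectangular projection around the ring's mean
    latitude; a good approximation for small polygons, in steradians (unit sphere)."""
    lat0 = math.radians(sum(lat for _, lat in ring) / len(ring))
    pts = [(math.radians(lon) * math.cos(lat0), math.radians(lat)) for lon, lat in ring]
    s = 0.0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

ring = [(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809),
        (4.344961, 50.833264), (4.338074, 50.848677)]
print(approx_spherical_area(ring))  # close to the example's 9.387704e-8
```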
polygonsUnionSpherical {#polygonsunionspherical}
Calculates a union (OR).
Example {#example-21}
sql
SELECT wkt(polygonsUnionSpherical([[[(4.3613577, 50.8651821), (4.349556, 50.8535879), (4.3602419, 50.8435626), (4.3830299, 50.8428851), (4.3904543, 50.8564867), (4.3613148, 50.8651279)]]], [[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]]))
response
MULTIPOLYGON(((4.36661 50.8434,4.36623 50.8408,4.34496 50.8333,4.33807 50.8487,4.34669 50.8583,4.35268 50.8567,4.36136 50.8652,4.36131 50.8651,4.39045 50.8565,4.38303 50.8429,4.36661 50.8434)))
Input parameters {#input-parameters-14}
Polygons
Returned value {#returned-value-21}
MultiPolygon
polygonPerimeterSpherical {#polygonperimeterspherical}
Calculates the perimeter of the polygon.
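Conceptually, the perimeter is the sum of the great-circle lengths of the ring's edges. A Python sketch using the haversine formula per edge (the result being in radians on the unit sphere is my assumption; multiply by the Earth's radius for a physical length):

```python
import math

def haversine_angle(p, q):
    """Central angle in radians between two (lon, lat) points given in degrees."""
    lon1, lat1, lon2, lat2 = map(math.radians, (*p, *q))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * math.asin(math.sqrt(h))

def spherical_perimeter(ring):
    """Sum of great-circle edge lengths around the ring, in radians on the unit sphere."""
    return sum(haversine_angle(ring[i], ring[(i + 1) % len(ring)])
               for i in range(len(ring)))

# An octant of the sphere: three right-angle arcs of length pi/2 each.
octant = [(0., 0.), (90., 0.), (0., 90.)]
print(spherical_perimeter(octant))  # 3 * pi / 2 ~ 4.712
```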
Example {#example-22}
Polygon representing Zimbabwe {#polygon-representing-zimbabwe}
This is the polygon representing Zimbabwe:
text
POLYGON((30.0107 -15.6462,30.0502 -15.6401,30.09 -15.6294,30.1301 -15.6237,30.1699 -15.6322,30.1956 -15.6491,30.2072 -15.6532,30.2231 -15.6497,30.231 -15.6447,30.2461 -15.6321,30.2549 -15.6289,30.2801 -15.6323,30.2962 -15.639,30.3281 -15.6524,30.3567 -15.6515,30.3963 -15.636,30.3977 -15.7168,30.3993 -15.812,30.4013 -15.9317,30.4026 -16.0012,30.5148 -16.0004,30.5866 -16,30.7497 -15.9989,30.8574 -15.9981,30.9019 -16.0071,30.9422 -16.0345,30.9583 -16.0511,30.9731 -16.062,30.9898 -16.0643,31.012 -16.0549,31.0237 -16.0452,31.0422 -16.0249,31.0569 -16.0176,31.0654 -16.0196,31.0733 -16.0255,31.0809 -16.0259,31.089 -16.0119,31.1141 -15.9969,31.1585 -16.0002,31.26 -16.0235,31.2789 -16.0303,31.2953 -16.0417,31.3096 -16.059,31.3284 -16.0928,31.3409 -16.1067,31.3603 -16.1169,31.3703 -16.1237,31.3746 -16.1329,31.3778 -16.1422,31.384 -16.1488,31.3877 -16.1496,31.3956 -16.1477,31.3996 -16.1473,31.4043 -16.1499,31.4041 -16.1545,31.4027 -16.1594,31.4046 -16.1623,31.4241 -16.1647,31.4457 -16.165,31.4657 -16.1677,31.4806 -16.178,31.5192 -16.1965,31.6861 -16.2072,31.7107 -16.2179,31.7382 -16.2398,31.7988 -16.3037,31.8181 -16.3196,31.8601 -16.3408,31.8719 -16.3504,31.8807 -16.368,31.8856 -16.4063,31.8944 -16.4215,31.9103 -16.4289,32.0141 -16.4449,32.2118 -16.4402,32.2905 -16.4518,32.3937 -16.4918,32.5521 -16.5534,32.6718 -16.5998,32.6831 -16.6099,32.6879 -16.6243,32.6886 -16.6473,32.6987 -16.6868,32.7252 -16.7064,32.7309 -16.7087,32.7313 -16.7088,32.7399 -16.7032,32.7538 -16.6979,32.7693 -16.6955,32.8007 -16.6973,32.862 -16.7105,32.8934 -16.7124,32.9096 -16.7081,32.9396 -16.6898,32.9562 -16.6831,32.9685 -16.6816,32.9616 -16.7103,32.9334 -16.8158,32.9162 -16.8479,32.9005 -16.8678,32.8288 -16.9351,32.8301 -16.9415,32.8868 -17.0382,32.9285 -17.1095,32.9541 -17.1672,32.9678 -17.2289,32.9691 -17.2661,32.9694 -17.2761,32.9732 -17.2979,32.9836 -17.3178,32.9924 -17.3247,33.0147 -17.3367,33.0216 -17.3456,33.0225 -17.3615,33.0163 -17.3772,33.0117
-17.384,32.9974 -17.405,32.9582 -17.4785,32.9517 -17.4862,32.943 -17.4916,32.9366 -17.4983,32.9367 -17.5094,32.9472 -17.5432,32.9517 -17.5514,32.9691 -17.5646,33.0066 -17.581,33.0204 -17.5986,33.0245 -17.6192,33.0206 -17.6385,33.0041 -17.6756,33.0002 -17.7139,33.0032 -17.7577,32.9991 -17.7943,32.9736 -17.8106,32.957 -17.818,32.9461 -17.8347,32.9397 -17.8555,32.9369 -17.875,32.9384 -17.8946,32.9503 -17.9226,32.9521 -17.9402,32.9481 -17.9533,32.9404 -17.96,32.9324 -17.9649,32.9274 -17.9729,32.929 -17.9823,32.9412 -17.9963,32.9403 -18.0048,32.9349 -18.0246,32.9371 -18.0471,32.9723 -18.1503,32.9755 -18.1833,32.9749 -18.1908,32.9659 -18.2122,32.9582 -18.2254,32.9523 -18.233,32.9505 -18.2413,32.955 -18.2563,32.9702 -18.2775,33.0169 -18.3137,33.035 -18.3329,33.0428 -18.352,33.0381 -18.3631,33.0092 -18.3839,32.9882 -18.4132,32.9854 -18.4125,32.9868 -18.4223,32.9995 -18.4367,33.003 -18.4469,32.9964 -18.4671,32.9786 -18.4801,32.9566 -18.4899,32.9371 -18.501,32.9193 -18.51,32.9003 -18.5153,32.8831 -18.5221,32.8707 -18.5358,32.8683
-18.5526,32.8717 -18.5732,32.8845 -18.609,32.9146 -18.6659,32.9223 -18.6932,32.9202 -18.7262,32.9133 -18.753,32.9025 -18.7745,32.8852 -18.7878,32.8589 -18.79,32.8179 -18.787,32.7876 -18.7913,32.6914 -18.8343,32.6899 -18.8432,32.6968 -18.8972,32.7032 -18.9119,32.7158 -18.9198,32.7051 -18.9275,32.6922 -18.9343,32.6825 -18.9427,32.6811 -18.955,32.6886 -18.9773,32.6903 -18.9882,32.6886 -19.001,32.6911 -19.0143,32.699 -19.0222,32.7103 -19.026,32.7239 -19.0266,32.786 -19.0177,32.8034 -19.0196,32.8142 -19.0238,32.82 -19.0283,32.823 -19.0352,32.8253 -19.0468,32.8302 -19.0591,32.8381 -19.0669,32.8475 -19.0739,32.8559 -19.0837,32.8623 -19.1181,32.8332 -19.242,32.8322 -19.2667,32.8287 -19.2846,32.8207 -19.3013,32.8061 -19.3234,32.7688 -19.3636,32.7665 -19.3734,32.7685 -19.4028,32.7622 -19.4434,32.7634 -19.464,32.7739 -19.4759,32.7931 -19.4767,32.8113 -19.4745,32.8254 -19.4792,32.8322 -19.5009,32.8325 -19.5193,32.8254 -19.5916,32.8257 -19.6008,32.8282 -19.6106,32.8296 -19.6237,32.8254 -19.6333,32.8195 -19.642,32.8163 -19.6521,32.8196 -19.6743,32.831 -19.6852,32.8491 -19.6891,32.8722 -19.6902,32.8947 -19.6843,32.9246 -19.6553,32.9432 -19.6493,32.961 -19.6588,32.9624 -19.6791,32.9541 -19.7178,32.9624 -19.7354,32.9791 -19.7514,33.0006 -19.7643,33.0228 -19.7731,33.0328 -19.7842,33.0296 -19.8034,33.0229 -19.8269,33.0213 -19.8681,33.002 -19.927,32.9984 -20.0009,33.0044 -20.0243,33.0073 -20.032,32.9537 -20.0302,32.9401 -20.0415,32.9343 -20.0721,32.9265 -20.0865,32.9107 -20.0911,32.8944 -20.094,32.8853 -20.103,32.8779 -20.1517,32.8729 -20.1672,32.8593 -20.1909,32.8571 -20.2006,32.8583 -20.2075,32.8651 -20.2209,32.8656 -20.2289,32.8584 -20.2595,32.853 -20.2739,32.8452 -20.2867,32.8008 -20.3386,32.7359 -20.4142,32.7044 -20.4718,32.6718 -20.5318,32.6465
-20.558,32.6037 -20.5648,32.5565 -20.5593,32.5131 -20.5646,32.4816 -20.603,32.4711 -20.6455,32.4691 -20.6868,32.4835 -20.7942,32.4972 -20.8981,32.491 -20.9363,32.4677 -20.9802,32.4171 -21.0409,32.3398 -21.1341,32.3453 -21.1428,32.3599 -21.1514,32.3689 -21.163,32.3734 -21.1636,32.3777 -21.1634,32.3806 -21.1655,32.3805 -21.1722,32.3769 -21.1785,32.373 -21.184,32.3717 -21.1879,32.4446 -21.3047,32.4458 -21.309,32.4472 -21.3137,32.4085 -21.2903,32.373 -21.3279,32.3245 -21.3782,32.2722 -21.4325,32.2197 -21.4869,32.1673 -21.5413,32.1148 -21.5956,32.0624 -21.65,32.01 -21.7045,31.9576 -21.7588,31.9052 -21.8132,31.8527 -21.8676,31.8003 -21.922,31.7478 -21.9764,31.6955 -22.0307,31.6431 -22.0852,31.5907 -22.1396,31.5382 -22.1939,31.4858 -22.2483,31.4338 -22.302,31.3687 -22.345,31.2889 -22.3973,31.2656 -22.3655,31.2556 -22.358,31.2457 -22.3575,31.2296 -22.364,31.2215 -22.3649,31.2135 -22.3619,31.1979 -22.3526,31.1907 -22.3506,31.1837 -22.3456,31.1633 -22.3226,31.1526 -22.3164,31.1377 -22.3185,31.1045 -22.3334,31.097 -22.3349,31.0876
-22.3369,31.0703 -22.3337,31.0361 -22.3196,30.9272 -22.2957,30.8671 -22.2896,30.8379 -22.2823,30.8053 -22.2945,30.6939 -22.3028,30.6743 -22.3086,30.6474 -22.3264,30.6324 -22.3307,30.6256 -22.3286,30.6103 -22.3187,30.6011 -22.3164,30.5722 -22.3166,30.5074 -22.3096,30.4885 -22.3102,30.4692 -22.3151,30.4317 -22.3312,30.4127 -22.3369,30.3721 -22.3435,30.335 -22.3447,30.3008 -22.337,30.2693 -22.3164,30.2553 -22.3047,30.2404 -22.2962,30.2217 -22.2909,30.197 -22.2891,30.1527 -22.2948,30.1351 -22.2936,30.1111 -22.2823,30.0826 -22.2629,30.0679 -22.2571,30.0381 -22.2538,30.0359 -22.2506,30.0345 -22.2461,30.0155 -22.227,30.0053 -22.2223,29.9838 -22.2177,29.974 -22.214,29.9467 -22.1983,29.9321 -22.1944,29.896 -22.1914,29.8715 -22.1793,29.8373 -22.1724,29.7792 -22.1364,29.7589 -22.1309,29.6914 -22.1341,29.6796 -22.1383,29.6614 -22.1265,29.6411 -22.1292,29.604 -22.1451,29.5702 -22.142,29.551 -22.146,29.5425 -22.1625,29.5318 -22.1724,29.5069 -22.1701,29.4569 -22.1588,29.4361 -22.1631,29.3995 -22.1822,29.378 -22.1929,29.3633 -22.1923,29.3569 -22.1909,29.3501 -22.1867,29.2736 -22.1251,29.2673 -22.1158,29.2596 -22.0961,29.2541 -22.0871,29.2444 -22.0757,29.2393 -22.0726,29.1449 -22.0753,29.108 -22.0692,29.0708 -22.051,29.0405 -22.0209,29.0216 -21.9828,29.0138 -21.9404,29.0179 -21.8981,29.0289 -21.8766,29.0454 -21.8526,29.0576 -21.8292,29.0553 -21.81,29.0387 -21.7979,28.9987 -21.786,28.9808 -21.7748,28.9519 -21.7683,28.891 -21.7649,28.8609 -21.7574,28.7142 -21.6935,28.6684 -21.68,28.6297 -21.6513,28.6157 -21.6471,28.5859 -21.6444,28.554 -21.6366,28.5429 -21.6383,28.5325 -21.6431,28.4973 -21.6515,28.4814 -21.6574,28.4646 -21.6603,28.4431 -21.6558,28.3618 -21.6163,28.3219 -21.6035,28.2849 -21.5969,28.1657 -21.5952,28.0908 -21.5813,28.0329 -21.5779,28.0166
-21.5729,28.0026 -21.5642,27.9904 -21.5519,27.9847 -21.5429,27.9757 -21.5226,27.9706 -21.5144,27.9637 -21.5105,27.9581 -21.5115,27.9532 -21.5105,27.9493 -21.5008,27.9544 -21.4878,27.9504 -21.482,27.9433 -21.4799,27.9399 -21.478,27.9419 -21.4685,27.9496 -21.4565,27.953 -21.4487,27.9502 -21.4383,27.9205 -21.3812,27.9042 -21.3647,27.8978 -21.3554,27.8962 -21.3479,27.8967 -21.3324,27.8944 -21.3243,27.885 -21.3102,27.8491 -21.2697,27.8236 -21.2317,27.7938 -21.1974,27.7244 -21.1497,27.7092 -21.1345,27.6748 -21.0901,27.6666 -21.0712,27.6668 -21.0538,27.679 -21.0007,27.6804 -20.9796,27.6727 -20.9235,27.6726 -20.9137,27.6751 -20.8913,27.6748 -20.8799,27.676 -20.8667,27.6818 -20.8576,27.689 -20.849,27.6944 -20.8377,27.7096 -20.7567,27.7073 -20.7167,27.6825 -20.6373,27.6904 -20.6015,27.7026 -20.5661,27.7056 -20.5267,27.6981 -20.5091,27.6838 -20.4961,27.666 -20.4891,27.6258 -20.4886,27.5909 -20.4733,27.5341 -20.483,27.4539 -20.4733,27.3407 -20.473,27.306 -20.4774,27.2684 -20.4958,27.284 -20.3515,27.266 -20.2342,27.2149 -20.1105,27.2018
-20.093,27.1837 -20.0823,27.1629 -20.0766,27.1419 -20.0733,27.1297 -20.0729,27.1198 -20.0739,27.1096 -20.0732,27.0973 -20.0689,27.0865 -20.0605,27.0692 -20.0374,27.0601 -20.0276,27.0267 -20.0101,26.9943 -20.0068,26.9611 -20.0072,26.9251 -20.0009,26.8119 -19.9464,26.7745 -19.9398,26.7508 -19.9396,26.731 -19.9359,26.7139 -19.9274,26.6986 -19.9125,26.6848 -19.8945,26.6772 -19.8868,26.6738 -19.8834,26.6594 -19.8757,26.6141 -19.8634,26.5956 -19.8556,26.5819 -19.8421,26.5748 -19.8195,26.5663 -19.8008,26.5493 -19.7841,26.5089 -19.7593,26.4897 -19.7519,26.4503 -19.7433,26.4319 -19.7365,26.4128 -19.7196,26.3852 -19.6791,26.3627 -19.6676,26.3323 -19.6624,26.3244 -19.6591,26.3122 -19.6514,26.3125 -19.6496,26.3191 -19.6463,26.3263 -19.6339,26.3335 -19.613,26.331 -19.605,26.3211 -19.592,26.3132 -19.5842,26.3035 -19.5773,26.2926 -19.5725,26.2391 -19.5715,26.1945 -19.5602,26.1555 -19.5372,26.1303 -19.5011,26.0344 -19.2437,26.0114 -19.1998,25.9811 -19.1618,25.9565 -19.1221,25.9486 -19.1033,25.9449 -19.0792,25.9481 -19.0587,25.9644 -19.0216,25.9678 -19.001,25.9674 -18.9999,25.9407 -18.9213,25.8153 -18.814,25.7795 -18.7388,25.7734 -18.6656,25.7619 -18.6303,25.7369 -18.6087,25.6983 -18.5902,25.6695 -18.566,25.6221 -18.5011,25.6084 -18.4877,25.5744 -18.4657,25.5085 -18.3991,25.4956 -18.3789,25.4905 -18.3655,25.4812 -18.3234,25.4732 -18.3034,25.4409 -18.2532,25.4088 -18.176,25.3875 -18.139,25.3574 -18.1158,25.3234 -18.0966,25.2964 -18.0686,25.255 -18.0011,25.2261 -17.9319,25.2194 -17.908,25.2194 -17.8798,25.2598 -17.7941,25.2667 -17.8009,25.2854 -17.8093,25.3159 -17.8321,25.3355 -17.8412,25.3453 -17.8426,25.3765 -17.8412,25.4095 -17.853,25.4203 -17.8549,25.4956 -17.8549,25.5007 -17.856,25.5102 -17.8612,25.5165 -17.8623,25.5221 -17.8601,25.5309
-17.851,25.5368 -17.8487,25.604 -17.8362,25.657 -17.8139,25.6814 -17.8115,25.6942 -17.8194,25.7064 -17.8299,25.7438 -17.8394,25.766 -17.8498,25.786 -17.8622,25.7947 -17.8727,25.8044 -17.8882,25.8497 -17.9067,25.8636 -17.9238,25.8475 -17.9294,25.8462 -17.9437,25.8535 -17.96,25.8636 -17.9716,25.9245 -17.999,25.967 -18.0005,25.9785 -17.999,26.0337 -17.9716,26.0406 -17.9785,26.0466 -17.9663,26.0625 -17.9629,26.0812 -17.9624,26.0952 -17.9585,26.0962 -17.9546,26.0942 -17.9419,26.0952 -17.9381,26.1012 -17.9358,26.1186 -17.9316,26.1354 -17.9226,26.1586 -17.9183,26.1675 -17.9136,26.203 -17.8872,26.2119 -17.8828,26.2211 -17.8863,26.2282 -17.8947,26.2339 -17.904,26.2392 -17.9102,26.2483 -17.9134,26.2943 -17.9185,26.3038 -17.9228,26.312 -17.9284,26.3183 -17.9344,26.3255 -17.936,26.3627 -17.9306,26.4086 -17.939,26.4855 -17.9793,26.5271 -17.992,26.5536 -17.9965,26.5702 -18.0029,26.5834 -18.0132,26.5989 -18.03,26.6127 -18.0412,26.6288 -18.0492,26.6857 -18.0668,26.7 -18.0692,26.7119 -18.0658,26.7406 -18.0405,26.7536 -18.033,26.7697 -18.029,26.794
-18.0262,26.8883 -17.9846,26.912 -17.992,26.9487 -17.9689,26.9592 -17.9647,27.0063 -17.9627,27.0213 -17.9585,27.0485 -17.9443,27.0782 -17.917,27.1154 -17.8822,27.149 -17.8425,27.1465 -17.8189,27.1453 -17.7941,27.147 -17.7839,27.1571 -17.7693,27.4221 -17.5048,27.5243 -17.4151,27.5773 -17.3631,27.6045 -17.3128,27.6249 -17.2333,27.6412 -17.1985,27.7773 -17.0012,27.8169 -16.9596,27.8686 -16.9297,28.023 -16.8654,28.1139 -16.8276,28.2125 -16.7486,28.2801 -16.7065,28.6433 -16.5688,28.6907 -16.5603,28.7188 -16.5603,28.7328 -16.5581,28.7414 -16.5507,28.7611 -16.5323,28.7693 -16.5152,28.8089 -16.4863,28.8225 -16.4708,28.8291 -16.4346,28.8331 -16.4264,28.8572 -16.3882,28.857 -16.3655,28.8405 -16.3236,28.8368 -16.3063,28.8403 -16.2847,28.8642 -16.2312,28.8471 -16.2027,28.8525 -16.1628,28.8654 -16.1212,28.871 -16.0872,28.8685 -16.0822,28.8638 -16.0766,28.8593 -16.0696,28.8572 -16.0605,28.8603 -16.0494,28.8741 -16.0289,28.8772 -16.022,28.8989 -15.9955,28.9324 -15.9637,28.9469 -15.9572,28.9513 -15.9553,28.9728 -15.9514,29.0181 -15.9506,29.0423 -15.9463,29.0551 -15.9344,29.0763 -15.8954,29.0862 -15.8846,29.1022 -15.8709,29.1217 -15.8593,29.1419 -15.8545,29.151 -15.8488,29.1863 -15.8128,29.407 -15.7142,29.4221 -15.711,29.5085 -15.7036,29.5262 -15.6928,29.5634 -15.6621,29.5872 -15.6557,29.6086 -15.6584,29.628 -15.6636,29.6485 -15.6666,29.6728 -15.6633,29.73 -15.6447,29.7733 -15.6381,29.8143 -15.6197,29.8373 -15.6148,29.8818 -15.6188,29.9675 -15.6415,30.0107 -15.6462))
Usage of polygonPerimeterSpherical function {#usage-of-polygon-perimeter-spherical}
sql
0.07582295686006546,
0.0011653322726488113,
-0.03202968090772629,
0.07204441726207733,
-0.10746068507432938,
0.006198782008141279,
0.1837887018918991,
0.028402604162693024,
-0.0444914735853672,
-0.0017817936604842544,
0.0661090537905693,
-0.0014285664074122906,
0.08542580902576447,
-0.0624... |
SELECT round(polygonPerimeterSpherical([(30.010654, -15.646227), (30.050238, -15.640129), (30.090029, -15.629381), (30.130129, -15.623696), (30.16992, -15.632171), (30.195552, -15.649121), (30.207231, -15.653152), (30.223147, -15.649741), (30.231002, -15.644677), (30.246091, -15.632068), (30.254876, -15.628864), (30.280094, -15.632275), (30.296196, -15.639042), (30.32805, -15.652428), (30.356679, -15.651498), (30.396263, -15.635995), (30.39771, -15.716817), (30.39926, -15.812005), (30.401327, -15.931688), (30.402568, -16.001244), (30.514809, -16.000418), (30.586587, -16.000004), (30.74973, -15.998867), (30.857424, -15.998144), (30.901865, -16.007136), (30.942173, -16.034524), (30.958296, -16.05106), (30.973075, -16.062016), (30.989767, -16.06429), (31.012039, -16.054885), (31.023718, -16.045169), (31.042218, -16.024912), (31.056895, -16.017574), (31.065421, -16.019641), (31.073328, -16.025532), (31.080872, -16.025946), (31.089037, -16.01189), (31.1141, -15.996904), (31.15849, -16.000211), (31.259983, -16.023465), (31.278897, -16.030287), (31.29533, -16.041655), (31.309592, -16.059019), (31.328351, -16.092815), (31.340908, -16.106664), (31.360339, -16.116896), (31.37026, -16.123718), (31.374601, -16.132916), (31.377754, -16.142218), (31.384006, -16.148832), (31.387727, -16.149556), (31.395582, -16.147695), (31.399613, -16.147282), (31.404315, -16.149866), (31.404057, -16.154517), (31.402713, -16.159374), (31.404574, -16.162268), (31.424107, -16.164749), (31.445708, -16.164955), (31.465655, -16.167746), (31.480641, -16.177978), (31.519192, -16.196478), (31.686107, -16.207227), (31.710705, -16.217872), (31.738197, -16.239783), (31.798761, -16.303655), (31.818088, -16.319571), (31.86005, -16.340759), (31.871935, -16.35037), (31.88072, -16.368044), (31.88563, -16.406284), (31.894363, -16.421477), (31.910279, -16.428919), (32.014149, -16.444938), (32.211759, -16.440184), (32.290463, -16.45176), (32.393661, -16.491757), (32.5521,
-16.553355), (32.671783, -16.599761), (32.6831, -16.609889), (32.687906, -16.624255), (32.68863, -16.647303), (32.698655, -16.686784), (32.725217, -16.706421), (32.73095, -16.708656), (32.731314, -16.708798), (32.739893, -16.703217), (32.753845, -16.697946), (32.769348, -16.695466), (32.800664, -16.697326), (32.862004, -16.710452), (32.893372, -16.712415), (32.909598, -16.708075), (32.93957, -16.689781), (32.95621, -16.683063), (32.968509, -16.681615999999998), (32.961585, -16.710348), (32.933369, -16.815768), (32.916213, -16.847911), (32.900503, -16.867755), (32.828776, -16.935141), (32.83012, -16.941549), (32.886757, -17.038184), (32.928512, -17.109497), (32.954143, -17.167168), (32.967786, -17.22887), (32.96909, -17.266115), (32.969439, -17.276102), (32.973212, -17.297909), (32.983599, -17.317753), (32.992384, -17.324678), (33.014656, -17.336667), (33.021633, -17.345555), (33.022459, -17.361471), (33.016258, -17.377181), (33.011651, -17.383991), (32.997448, -17.404983), (32.958174, -17.478467), (32.951663, -17.486218),
(32.942981, -17.491593), (32.936573, -17.498311), (32.936676, -17.509369), (32.947218, -17.543166), (32.951663, -17.551434), (32.969129, -17.56456), (33.006646, -17.580993), (33.020392, -17.598563), (33.024526, -17.619233), (33.020599, -17.638457), (33.004063, -17.675561), (33.000238, -17.713905), (33.003184, -17.757726), (32.999102, -17.794313), (32.973573, -17.810643), (32.957037, -17.817981), (32.946082, -17.834724), (32.939674, -17.855498), (32.936883, -17.875032), (32.938433, -17.894566), (32.950267, -17.922574), (32.952128, -17.940247), (32.948149, -17.95327), (32.940397, -17.959988), (32.932439, -17.964949), (32.927375, -17.972907), (32.928977, -17.982312), (32.941224, -17.996265), (32.940294, -18.004843), (32.934919, -18.024583), (32.93709, -18.047114), (32.972282, -18.150261), (32.975537, -18.183333), (32.974865, -18.190775), (32.965925, -18.212169), (32.958174, -18.225398), (32.952283, -18.233046), (32.950525999999996, -18.241314), (32.95497, -18.256301), (32.970163, -18.277488), (33.016878, -18.313661), (33.034965, -18.332885), (33.042768, -18.352005), (33.038066, -18.363064), (33.00923, -18.383941), (32.988198, -18.41319), (32.985356, -18.412467), (32.986803, -18.422285), (32.999515, -18.436651), (33.003029, -18.446883), (32.996414, -18.46714), (32.978586, -18.48006), (32.956624, -18.489878), (32.937142, -18.50104), (32.919313, -18.510032), (32.900296, -18.515303), (32.88314, -18.522124), (32.870737, -18.535767), (32.868257, -18.552613), (32.871668, -18.57318), (32.884483, -18.609044), (32.914559, -18.665888), (32.92231, -18.693173), (32.920243, -18.726246), (32.913267, -18.753014), (32.902518, -18.774512), (32.885207, -18.787844), (32.858852, -18.790015), (32.817924, -18.787018), (32.787642, -18.791255), (32.69142,
-18.83425), (32.68987, -18.843241), (32.696794, -18.897192), (32.703202, -18.911868), (32.71576, -18.919826), (32.705063, -18.927474), (32.692247, -18.934295), (32.682532, -18.942667), (32.681085, -18.954966), (32.68863, -18.97729), (32.690283, -18.988246), (32.68863, -19.000958), (32.691058, -19.01429), (32.698965, -19.022249), (32.710282, -19.025969), (32.723873, -19.026589), (32.785988, -19.017701), (32.803351, -19.019561), (32.814203, -19.023799), (32.819991, -19.028346), (32.822988, -19.035168), (32.825262, -19.046847), (32.830223, -19.059146), (32.83813, -19.066897), (32.847483, -19.073925), (32.855906, -19.083744), (32.862262, -19.118057), (32.83322, -19.241977), (32.832187, -19.266678), (32.828673, -19.284558), (32.820715, -19.301301), (32.806142, -19.323419), (32.768831, -19.363623), (32.766454, -19.373442), (32.768521, -19.402794), (32.762217, -19.443412), (32.763354, -19.463979), (32.773947, -19.475864), (32.793119, -19.476691), (32.811309, -19.474521), (32.825365, -19.479172), (32.832187, -19.500876),
(32.832497000000004, -19.519273), (32.825365, -19.59162), (32.825675, -19.600818), (32.828156, -19.610636), (32.829603, -19.623659), (32.825365, -19.633271), (32.819474, -19.641952), (32.81627, -19.652081), (32.819629, -19.674302), (32.83105, -19.685154), (32.849137, -19.689081), (32.872184, -19.690218), (32.894715, -19.684327), (32.924584, -19.655285), (32.943188, -19.64929), (32.960964, -19.658799), (32.962411, -19.679056), (32.954143, -19.717813), (32.962411, -19.735383), (32.979051, -19.751403), (33.0006, -19.764322), (33.022769, -19.773107), (33.032795, -19.784166), (33.029642, -19.80339), (33.022873, -19.826851), (33.021322, -19.868088), (33.001995, -19.927), (32.998378, -20.000897), (33.004373, -20.024255), (33.007266, -20.032006), (32.95373, -20.030249), (32.940087, -20.041515), (32.934299, -20.072107), (32.926548, -20.086473), (32.910683, -20.091124), (32.894405, -20.094018), (32.88531, -20.10301), (32.877869, -20.151689), (32.872908, -20.167192), (32.859265, -20.190859), (32.857095, -20.200575), (32.858335, -20.207499), (32.865053, -20.220935), (32.86557, -20.228893), (32.858438, -20.259486), (32.852961, -20.273852), (32.845209, -20.286668), (32.800767, -20.338551), (32.735862, -20.414205), (32.704443, -20.471773), (32.671783, -20.531821), (32.646462, -20.557969), (32.603674, -20.56479), (32.556545, -20.559312), (32.513136, -20.564583), (32.481614, -20.603031), (32.471072, -20.645509), (32.469108, -20.68685), (32.483474, -20.794233), (32.49722, -20.898103), (32.491019, -20.936344), (32.467661, -20.980165), (32.417122, -21.040937), (32.339814, -21.134058), (32.345343, -21.142843), (32.359864, -21.151421), (32.368856, -21.162997), (32.373352, -21.163617), (32.377744, -21.16341), (32.380638, -21.165477), (32.380535,
-21.172195), (32.376866, -21.178499), (32.37299, -21.183977), (32.37175, -21.187905), (32.444613, -21.304693), (32.445849, -21.308994), (32.447197, -21.313685), (32.408543, -21.290327), (32.37299, -21.327948), (32.324517, -21.378177), (32.272221, -21.432541), (32.219718, -21.486904), (32.167318, -21.541268), (32.114814, -21.595632), (32.062415, -21.649995), (32.010015, -21.704462), (31.957615, -21.758826), (31.905215, -21.813189), (31.852712, -21.867553), (31.800312, -21.92202), (31.747808, -21.976384), (31.695512, -22.030747), (31.643112, -22.085214), (31.590712, -22.139578), (31.538209, -22.193941), (31.485809, -22.248305), (31.433822, -22.302048), (31.36871, -22.345043), (31.288922, -22.39734), (31.265616, -22.365507), (31.255642, -22.357962), (31.24572, -22.357549), (31.229597, -22.363957), (31.221536, -22.364887), (31.213474, -22.36189), (31.197868, -22.352588), (31.190685, -22.350624), (31.183657, -22.34556), (31.163348, -22.322616), (31.152599, -22.316414), (31.137717, -22.318482), (31.10454, -22.333364), (31.097048, | {"source_file": "polygon.md"} | [
0.03424994647502899,
-0.02205260470509529,
-0.021139588207006454,
-0.013053659349679947,
-0.04067692533135414,
-0.08129923790693283,
-0.009491752833127975,
-0.028923293575644493,
-0.03769414499402046,
-0.02366645634174347,
0.02599494345486164,
-0.025169817730784416,
0.021409671753644943,
-... |
98f81200-a792-4ce3-b2cd-64025b096bc8 | -22.36189), (31.197868, -22.352588), (31.190685, -22.350624), (31.183657, -22.34556), (31.163348, -22.322616), (31.152599, -22.316414), (31.137717, -22.318482), (31.10454, -22.333364), (31.097048, -22.334922), (31.087642, -22.336878), (31.07033, -22.333674), (31.036121, -22.319618), (30.927187, -22.295744), (30.867087, -22.289646), (30.83789, -22.282308), (30.805282, -22.294504), (30.693919, -22.302772), (30.674282, -22.30856), (30.647410999999998, -22.32644), (30.632424, -22.330677), (30.625551, -22.32861), (30.610307, -22.318688), (30.601108, -22.316414), (30.57217, -22.316621), (30.507367, -22.309593), (30.488454, -22.310213), (30.46923, -22.315071), (30.431713, -22.331194), (30.412696, -22.336878), (30.372078, -22.343493), (30.334975, -22.344733), (30.300765, -22.336982), (30.269346, -22.316414), (30.25529, -22.304736), (30.240407, -22.296157), (30.2217, -22.290886), (30.196999, -22.289129), (30.15266, -22.294814), (30.13509, -22.293574), (30.111113, -22.282308), (30.082587, -22.262878), (30.067911, -22.25709), (30.038145, -22.253783), (30.035872, -22.250579), (30.034528, -22.246135), (30.015511, -22.227014), (30.005279, -22.22226), (29.983782, -22.217713), (29.973963, -22.213992), (29.946678, -22.198282), (29.932105, -22.194355), (29.896035, -22.191358), (29.871489, -22.179265), (29.837331, -22.172444), (29.779246, -22.136374), (29.758886, -22.130896), (29.691448, -22.1341), (29.679614, -22.138338), (29.661424, -22.126452), (29.641064, -22.129242), (29.60396, -22.145055), (29.570164, -22.141955), (29.551043, -22.145986), (29.542517, -22.162522), (29.53182, -22.172444), (29.506912, -22.170067), (29.456889, -22.158801), (29.436115, -22.163142), (29.399528, -22.182159), (29.378031, -22.192908), (29.363250999999998, -22.192288), (29.356947, -22.190944000000002), (29.350074, -22.186707), (29.273644, -22.125108), (29.26734, -22.115807), (29.259588, -22.096066), (29.254111, -22.087074), (29.244395, -22.075706), (29.239331, 
-22.072605), (29.144867, -22.075292), (29.10797, -22.069194), (29.070763, -22.051004), (29.040532, -22.020929), (29.021567, -21.982791), (29.013815, -21.940417), (29.017949, -21.898145), (29.028905, -21.876648), (29.045441, -21.852567), (29.057637, -21.829209), (29.05526, -21.809985), (29.038723, -21.797893), (28.998726, -21.786008), (28.980846, -21.774845), (28.951907, -21.768334), (28.891032, -21.764924), (28.860853, -21.757379), (28.714195, -21.693507), (28.66841, -21.679968), (28.629704, -21.651339), (28.6157, -21.647101), (28.585934, -21.644414), (28.553998, -21.636559), (28.542939, -21.638316), (28.532501, -21.643071), (28.497309, -21.651546), (28.481393, -21.657437), (28.464598, -21.660331), (28.443101, -21.655783), (28.361762, -21.616302), (28.321919, -21.603486), (28.284867, -21.596872), (28.165702, -21.595218), (28.090771, -21.581266), (28.032893, -21.577855), (28.016563, -21.572894), (28.002559, -21.564212), (27.990415, -21.551913), (27.984731, -21.542922), (27.975739, -21.522561), (27.970571, -21.514396), (27.963698, | {"source_file": "polygon.md"} | [
0.016660869121551514,
-0.019624676555395126,
-0.005853724665939808,
0.02765490673482418,
-0.04825924336910248,
-0.049470219761133194,
-0.03830277919769287,
-0.036479849368333817,
-0.06733241677284241,
-0.025108134374022484,
0.0639539435505867,
-0.010495263151824474,
0.042398691177368164,
-... |
10913372-4915-4992-ba66-0defcfa0b6c6 | -21.581266), (28.032893, -21.577855), (28.016563, -21.572894), (28.002559, -21.564212), (27.990415, -21.551913), (27.984731, -21.542922), (27.975739, -21.522561), (27.970571, -21.514396), (27.963698, -21.510469), (27.958066, -21.511502), (27.953208, -21.510469), (27.949281, -21.500754), (27.954448, -21.487835), (27.950418, -21.482047), (27.943338, -21.479876), (27.939876, -21.478016), (27.941943, -21.468508), (27.949642, -21.456519), (27.953001, -21.448664), (27.950211, -21.438329), (27.920549, -21.381174), (27.904219, -21.364741), (27.897811, -21.35544), (27.896157, -21.347895), (27.896674, -21.332392), (27.8944, -21.32433), (27.884995, -21.310171), (27.849132, -21.269657), (27.823604, -21.231726), (27.793838, -21.197413), (27.724385, -21.149664), (27.709192, -21.134471), (27.674775, -21.090133), (27.666611, -21.071219), (27.666817, -21.053753), (27.678961, -21.000733), (27.680356, -20.979649), (27.672657, -20.923528), (27.672605, -20.913709), (27.675085, -20.891282), (27.674775, -20.879913), (27.676016, -20.866684), (27.681803, -20.857589), (27.689038, -20.849011), (27.694412, -20.837744999999998), (27.709605, -20.756716), (27.707332, -20.716719), (27.682475, -20.637344), (27.690382, -20.60148), (27.702629, -20.566134), (27.705575, -20.526653), (27.698133, -20.509083), (27.683767, -20.49606), (27.66599, -20.489136), (27.625786, -20.488619), (27.590853, -20.473323), (27.534112, -20.483038), (27.45391, -20.473323), (27.340739, -20.473013), (27.306012, -20.477354), (27.268392, -20.49575), (27.283998, -20.35147), (27.266015, -20.234164), (27.214907, -20.110451), (27.201781, -20.092984), (27.183746, -20.082339), (27.16292, -20.076551), (27.141888, -20.073347), (27.129692, -20.072934), (27.119771, -20.073864), (27.109642, -20.073244), (27.097343, -20.068903), (27.086491, -20.060532), (27.069231, -20.03738), (27.060136, -20.027562), (27.02665, -20.010095), (26.9943, -20.006788), (26.961072, -20.007201), (26.925054, -20.000897), 
(26.811882, -19.94643), (26.774469, -19.939815), (26.750801, -19.939609), (26.730957, -19.935888), (26.713904, -19.927413), (26.698608, -19.91253), (26.684758, -19.894547), (26.67717, -19.886815), (26.673803, -19.883385), (26.659437, -19.875737), (26.614065, -19.863438), (26.595565, -19.855583), (26.581922, -19.842147), (26.574791, -19.819513), (26.566316, -19.800806), (26.549263, -19.784063), (26.508852, -19.759258), (26.489731, -19.75192), (26.450251, -19.743342), (26.431854, -19.73652), (26.412837, -19.71957), (26.385242, -19.679056), (26.362711, -19.667584), (26.332325, -19.662416), (26.324367, -19.659109), (26.312171, -19.651358), (26.312481, -19.649601), (26.319096, -19.646293), (26.326331, -19.633891), (26.333462, -19.613014), (26.330981, -19.604952), (26.32106, -19.592033), (26.313205, -19.584178), (26.30349, -19.577254), (26.292638, -19.572499), (26.239101, -19.571466), (26.194452, -19.560200000000002), (26.155488, -19.537153), (26.13027, -19.501082), (26.034359, -19.243734), (26.011414, -19.199809), (25.981132, | {"source_file": "polygon.md"} | [
0.030520278960466385,
-0.026089955121278763,
-0.017361924052238464,
-0.002468520076945424,
-0.04901422932744026,
-0.06768249720335007,
-0.001305827870965004,
-0.021635254845023155,
-0.04338207095861435,
-0.023778587579727173,
0.05654224753379822,
-0.02492043375968933,
0.023213140666484833,
... |
fb0a66a9-cbc1-4587-ad6c-0f5d3b5127d6 | (26.292638, -19.572499), (26.239101, -19.571466), (26.194452, -19.560200000000002), (26.155488, -19.537153), (26.13027, -19.501082), (26.034359, -19.243734), (26.011414, -19.199809), (25.981132, -19.161775), (25.956534, -19.122088), (25.948576, -19.103277), (25.944855, -19.079196), (25.948059, -19.058732), (25.964389, -19.021629), (25.9678, -19.000958), (25.967449, -18.999925), (25.940721, -18.921273), (25.815251, -18.813993), (25.779491, -18.738752), (25.773393, -18.665578), (25.761921, -18.630335), (25.736909, -18.608734), (25.698255, -18.590234), (25.669523, -18.566049), (25.622084, -18.501143), (25.608442, -18.487708), (25.574439, -18.465693), (25.508499, -18.399134), (25.49558, -18.378877), (25.490516, -18.365545), (25.481163, -18.323377), (25.473204, -18.303429), (25.440855, -18.2532), (25.408816, -18.175995), (25.387525, -18.138995), (25.357449, -18.115844), (25.323446, -18.09662), (25.296368, -18.068612), (25.255026, -18.001122), (25.226088, -17.931876), (25.21937, -17.908001), (25.21937, -17.879786), (25.259781, -17.794107), (25.266705, -17.800928), (25.285412, -17.809299), (25.315901, -17.83214), (25.335538, -17.841235), (25.345254, -17.842579), (25.376466, -17.841235), (25.409539, -17.853018), (25.420288, -17.854878), (25.49558, -17.854878), (25.500748, -17.856015), (25.510153, -17.861183), (25.516458, -17.862319), (25.522142, -17.860149), (25.530927, -17.850951), (25.536818, -17.848677), (25.603997, -17.836171), (25.657017, -17.81395), (25.681409, -17.81147), (25.694224, -17.819428), (25.70642, -17.829867), (25.743834, -17.839375), (25.765951, -17.849814), (25.786002, -17.862216), (25.794683, -17.872655), (25.804399, -17.888158), (25.849667, -17.906658), (25.86362, -17.923814), (25.847497, -17.929395), (25.846153, -17.943658), (25.853490999999998, -17.959988), (25.86362, -17.971563), (25.924495, -17.998952), (25.966973, -18.000502), (25.978548, -17.998952), (26.033739, -17.971563), (26.04056, -17.978488), 
(26.046554, -17.966292), (26.062471, -17.962882), (26.081178, -17.962365), (26.095234, -17.958541), (26.096164, -17.954614), (26.0942, -17.941901), (26.095234, -17.938077), (26.101228, -17.935803), (26.118591, -17.931566), (26.135438, -17.922574), (26.158589, -17.918337), (26.167477, -17.913582), (26.203031, -17.887227), (26.211919, -17.882783), (26.221117, -17.886297), (26.228249, -17.894669), (26.233933, -17.903971), (26.239204, -17.910172), (26.248299, -17.913376), (26.294291, -17.918543), (26.3038, -17.922781), (26.311965, -17.928362), (26.318269, -17.934356), (26.325504, -17.93601), (26.362711, -17.930636), (26.408599, -17.939007), (26.485494, -17.979315), (26.527145, -17.992027), (26.553604, -17.996471), (26.570243, -18.002879), (26.583369, -18.013215), (26.598872, -18.029958), (26.612721, -18.041223), (26.628844, -18.049181), (26.685689, -18.066751), (26.700003, -18.069232), (26.71194, -18.065821), (26.740569, -18.0405), (26.753591, -18.032955), (26.769714, -18.029028), (26.794002, -18.026237), (26.88826, -17.984586), | {"source_file": "polygon.md"} | [
0.02165043354034424,
-0.029520677402615547,
-0.000901044812053442,
-0.002558141713961959,
-0.05579742044210434,
-0.06271135807037354,
-0.004638553597033024,
-0.029323846101760864,
-0.03557113930583,
-0.03070431388914585,
0.040516216307878494,
-0.018787318840622902,
0.026232684031128883,
-0... |
efff2b4d-f22a-4616-ab3e-ca0f69c518d2 | (26.685689, -18.066751), (26.700003, -18.069232), (26.71194, -18.065821), (26.740569, -18.0405), (26.753591, -18.032955), (26.769714, -18.029028), (26.794002, -18.026237), (26.88826, -17.984586), (26.912031, -17.992027), (26.94867, -17.968876), (26.95916, -17.964742), (27.006289, -17.962675), (27.021275, -17.958541), (27.048457, -17.944278), (27.078171, -17.916993), (27.11543, -17.882163), (27.149019, -17.842476), (27.146539, -17.818911), (27.145299, -17.794107), (27.146952, -17.783875), (27.157081, -17.769302), (27.422078, -17.504822), (27.524294, -17.415112), (27.577314, -17.363125), (27.604495, -17.312792), (27.624856, -17.233314), (27.641186, -17.198484), (27.777301, -17.001183), (27.816886, -16.959636), (27.868562, -16.929663), (28.022993, -16.865393), (28.113922, -16.827551), (28.21252, -16.748589), (28.280113, -16.706524), (28.643295, -16.568755), (28.690734, -16.56028), (28.718794, -16.56028), (28.73285, -16.55811), (28.741377, -16.550668), (28.761117, -16.532271), (28.769282, -16.515218), (28.808866, -16.486279), (28.822509, -16.470776), (28.829124, -16.434603), (28.833051, -16.426438), (28.857236, -16.388198), (28.857029, -16.36546), (28.840492, -16.323602), (28.836772, -16.306342), (28.840286, -16.284741), (28.86416, -16.231205), (28.847107, -16.202679), (28.852481, -16.162785), (28.8654, -16.121237), (28.870981, -16.087234), (28.868501, -16.08217), (28.86385, -16.076589), (28.859303, -16.069561), (28.857236, -16.060466), (28.860336, -16.049407), (28.874082, -16.028943), (28.877183, -16.022018), (28.898887, -15.995457), (28.932373, -15.963727), (28.946862, -15.957235), (28.951287, -15.955252), (28.972784, -15.951428), (29.018053, -15.950602), (29.042341, -15.946261), (29.055053, -15.934375), (29.076344, -15.895411), (29.086162, -15.884559), (29.102182, -15.870916), (29.121716, -15.859341), (29.141869, -15.854483), (29.150964, -15.848799), (29.186311, -15.812832), (29.406969, -15.714233), (29.422059, 
-15.711030000000001), (29.508462, -15.703588), (29.526239, -15.692839), (29.563446, -15.662144), (29.587217, -15.655736), (29.608559, -15.658422999999999), (29.62799, -15.663591), (29.648505, -15.666588), (29.672793, -15.663281), (29.73005, -15.644677), (29.773252, -15.638062), (29.814283, -15.619666), (29.837331, -15.614808), (29.881773, -15.618839), (29.967504, -15.641473), (30.010654, -15.646227)]), 6) | {"source_file": "polygon.md"} | [
0.03598804399371147,
-0.018230551853775978,
-0.013466350734233856,
-0.00815508607774973,
-0.03622802346944809,
-0.06806095689535141,
-0.01235707476735115,
-0.053527284413576126,
-0.04206715524196625,
-0.01858857087790966,
0.04944879189133644,
-0.012877080589532852,
0.01754477433860302,
-0.... |
bb6fece6-0633-4f53-bce6-97d36e0ccb78 | response
0.45539
Input parameters {#input-parameters-15}
Returned value {#returned-value-22}
polygonsIntersectionCartesian {#polygonsintersectioncartesian}
Calculates the intersection of polygons.
Example {#example-23}
sql
SELECT wkt(polygonsIntersectionCartesian([[[(0., 0.), (0., 3.), (1., 2.9), (2., 2.6), (2.6, 2.), (2.9, 1.), (3., 0.), (0., 0.)]]], [[[(1., 1.), (1., 4.), (4., 4.), (4., 1.), (1., 1.)]]]))
response
MULTIPOLYGON(((1 2.9,2 2.6,2.6 2,2.9 1,1 1,1 2.9)))
Input parameters {#input-parameters-16}
Polygons
Returned value {#returned-value-23}
MultiPolygon
polygonAreaCartesian {#polygonareacartesian}
Calculates the area of a polygon.
Example {#example-24}
sql
SELECT polygonAreaCartesian([[[(0., 0.), (0., 5.), (5., 5.), (5., 0.)]]])
response
25
Input parameters {#input-parameters-17}
Polygon
Returned value {#returned-value-24}
Float64
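The Cartesian area above can be reproduced outside ClickHouse with the shoelace formula. A minimal Python sketch of the same computation (for illustration only, not ClickHouse's implementation):

```python
def shoelace_area(ring):
    """Area of a simple polygon given as (x, y) vertices (shoelace formula)."""
    total = 0.0
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]  # wrap around to close the ring
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# The square from the example above:
print(shoelace_area([(0., 0.), (0., 5.), (5., 5.), (5., 0.)]))  # 25.0
```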
polygonPerimeterCartesian {#polygonperimetercartesian}
Calculates the perimeter of a polygon.
Example {#example-25}
sql
SELECT polygonPerimeterCartesian([[[(0., 0.), (0., 5.), (5., 5.), (5., 0.)]]])
response
15
Input parameters {#input-parameters-18}
Polygon
Returned value {#returned-value-25}
Float64
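The documented result of 15 is the sum of the three edges between the listed points, each 5 units long. A small Python sketch of that edge-length sum, for illustration only:

```python
from math import hypot

def path_length(points):
    """Sum of Euclidean distances between consecutive points."""
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# The three listed edges of the example ring are each 5 units long:
print(path_length([(0., 0.), (0., 5.), (5., 5.), (5., 0.)]))  # 15.0
```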
polygonsUnionCartesian {#polygonsunioncartesian}
Calculates the union of polygons.
Example {#example-26}
sql
SELECT wkt(polygonsUnionCartesian([[[(0., 0.), (0., 3.), (1., 2.9), (2., 2.6), (2.6, 2.), (2.9, 1), (3., 0.), (0., 0.)]]], [[[(1., 1.), (1., 4.), (4., 4.), (4., 1.), (1., 1.)]]]))
response
MULTIPOLYGON(((1 2.9,1 4,4 4,4 1,2.9 1,3 0,0 0,0 3,1 2.9)))
Input parameters {#input-parameters-19}
Polygons
Returned value {#returned-value-26}
MultiPolygon
For more information on geometry systems, see this
presentation
about the Boost library, which is what ClickHouse uses. | {"source_file": "polygon.md"} | [
0.044928841292858124,
0.040320586413145065,
0.013348951004445553,
-0.015534206293523312,
-0.04382844269275665,
-0.07903055101633072,
0.08825507760047913,
0.021225910633802414,
-0.05782062187790871,
-0.03091439977288246,
-0.052728764712810516,
-0.12360049784183502,
0.003030979074537754,
-0.... |
17f57be2-4827-4265-831c-7d230e775da1 | title: 'Storage efficiency - Time-series'
sidebar_label: 'Storage efficiency'
description: 'Improving time-series storage efficiency'
slug: /use-cases/time-series/storage-efficiency
keywords: ['time-series', 'storage efficiency', 'compression', 'data retention', 'TTL', 'storage optimization', 'disk usage']
show_related_blogs: true
doc_type: 'guide'
Time-series storage efficiency
After exploring how to query our Wikipedia statistics dataset, let's focus on optimizing its storage efficiency in ClickHouse.
This section demonstrates practical techniques to reduce storage requirements while maintaining query performance.
Type optimization {#time-series-type-optimization}
The general approach to optimizing storage efficiency is using optimal data types.
Let's take the project and subproject columns. These columns are of type String, but have a relatively small number of unique values:
sql
SELECT
uniq(project),
uniq(subproject)
FROM wikistat;
text
┌─uniq(project)─┬─uniq(subproject)─┐
│ 1332 │ 130 │
└───────────────┴──────────────────┘
This means we can use the LowCardinality() data type, which uses dictionary-based encoding. This causes ClickHouse to store the internal value ID instead of the original string value, which in turn saves a lot of space:
sql
ALTER TABLE wikistat
MODIFY COLUMN `project` LowCardinality(String),
MODIFY COLUMN `subproject` LowCardinality(String)
We've also used a UInt64 type for the hits column, which takes 8 bytes, but has a relatively small max value:
sql
SELECT max(hits)
FROM wikistat;
text
┌─max(hits)─┐
│ 449017 │
└───────────┘
Given this value, we can use UInt32 instead, which takes only 4 bytes and allows a maximum value of roughly 4.3 billion:
sql
ALTER TABLE wikistat
MODIFY COLUMN `hits` UInt32;
This will reduce the size of this column in memory by at least a factor of two. Note that the size on disk will remain unchanged due to compression. But be careful to pick data types that are not too small!
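The trade-off can be sanity-checked against the unsigned integer ranges; a quick Python check:

```python
# Maximum value per unsigned integer width; hits fits comfortably in UInt32.
for bits in (8, 16, 32, 64):
    print(f"UInt{bits}: max {2**bits - 1:,}")

max_hits = 449_017            # max(hits) observed in the dataset above
assert max_hits <= 2**32 - 1  # UInt32 ceiling is 4,294,967,295 (~4.3 billion)
```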
Specialized codecs {#time-series-specialized-codecs}
When we deal with sequential data, like time-series, we can further improve storage efficiency by using special codecs.
The general idea is to store changes between values instead of absolute values themselves, which results in much less space needed when dealing with slowly changing data:
sql
ALTER TABLE wikistat
MODIFY COLUMN `time` CODEC(Delta, ZSTD);
We've used the Delta codec for the time column, which is a good fit for time-series data.
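A minimal Python sketch of the delta idea (the actual codec is implemented inside ClickHouse): the first value is kept, and each subsequent value is replaced by its difference from the previous one.

```python
def delta_encode(values):
    """Keep the first value, then store successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

# Hourly Unix timestamps: large absolute values but tiny, repetitive deltas,
# which is exactly the pattern that compresses well after Delta encoding.
ts = [1684000000, 1684003600, 1684007200, 1684010800]
encoded = delta_encode(ts)
print(encoded)  # [1684000000, 3600, 3600, 3600]
assert delta_decode(encoded) == ts
```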
The right ordering key can also save disk space. Since we usually want to filter by a path, we will add path to the sorting key. This requires recreation of the table. Below we can see the CREATE command for our initial table and the optimized table:
sql
CREATE TABLE wikistat
(
`time` DateTime,
`project` String,
`subproject` String,
`path` String,
`hits` UInt64
)
ENGINE = MergeTree
ORDER BY (time); | {"source_file": "04_storage-efficiency.md"} | [
-0.028078895062208176,
0.031392209231853485,
-0.055102743208408356,
0.07521367818117142,
-0.03876855969429016,
-0.052100274711847305,
0.027969855815172195,
0.040397465229034424,
0.027307962998747826,
0.027550596743822098,
0.008536580950021744,
0.05137012526392937,
0.06370676308870316,
0.01... |
a512d874-0e01-4718-8380-4ccb39570ef6 | sql
CREATE TABLE wikistat
(
`time` DateTime,
`project` String,
`subproject` String,
`path` String,
`hits` UInt64
)
ENGINE = MergeTree
ORDER BY (time);
sql
CREATE TABLE optimized_wikistat
(
`time` DateTime CODEC(Delta(4), ZSTD(1)),
`project` LowCardinality(String),
`subproject` LowCardinality(String),
`path` String,
`hits` UInt32
)
ENGINE = MergeTree
ORDER BY (path, time);
And let's have a look at the amount of space taken up by the data in each table:
sql
SELECT
table,
formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed,
formatReadableSize(sum(data_compressed_bytes)) AS compressed,
count() AS parts
FROM system.parts
WHERE table LIKE '%wikistat%'
GROUP BY ALL;
text
┌─table──────────────┬─uncompressed─┬─compressed─┬─parts─┐
│ wikistat │ 35.28 GiB │ 12.03 GiB │ 1 │
│ optimized_wikistat │ 30.31 GiB │ 2.84 GiB │ 1 │
└────────────────────┴──────────────┴────────────┴───────┘
In its compressed form, the optimized table takes up roughly a quarter of the space of the original (2.84 GiB vs. 12.03 GiB).
0.046419333666563034,
0.029359346255660057,
0.011576879769563675,
0.055768903344869614,
-0.019060716032981873,
-0.1172039806842804,
0.020294055342674255,
0.07292527705430984,
-0.0017734614666551352,
0.0843505933880806,
0.043376997113227844,
-0.01057207677513361,
0.04736284539103508,
-0.039... |
d000ac8a-2f8b-41cf-b7e1-293ab41437ea | title: 'Date and time data types - Time-series'
sidebar_label: 'Date and time data types'
description: 'Time-series data types in ClickHouse.'
slug: /use-cases/time-series/date-time-data-types
keywords: ['time-series', 'DateTime', 'DateTime64', 'Date', 'data types', 'temporal data', 'timestamp']
show_related_blogs: true
doc_type: 'reference'
Date and time data types
Having a comprehensive suite of date and time types is necessary for effective time series data management, and ClickHouse delivers exactly that.
From compact date representations to high-precision timestamps with nanosecond accuracy, these types are designed to balance storage efficiency with practical requirements for different time series applications.
Whether you're working with historical financial data, IoT sensor readings, or future-dated events, ClickHouse's date and time types provide the flexibility needed to handle various temporal data scenarios.
The range of supported types allows you to optimize both storage space and query performance while maintaining the precision your use case demands.
The Date type should be sufficient in most cases. This type requires 2 bytes to store a date and limits the range to [1970-01-01, 2149-06-06].
Date32 covers a wider range of dates. It requires 4 bytes to store a date and limits the range to [1900-01-01, 2299-12-31].
DateTime stores date time values with second precision and a range of [1970-01-01 00:00:00, 2106-02-07 06:28:15]. It requires 4 bytes per value.
For cases where more precision is required, DateTime64 can be used. This allows storing time with up to nanosecond precision, with a range of [1900-01-01 00:00:00, 2299-12-31 23:59:59.99999999]. It requires 8 bytes per value.
Let's create a table that stores various date types:
sql
CREATE TABLE dates
(
`date` Date,
`wider_date` Date32,
`datetime` DateTime,
`precise_datetime` DateTime64(3),
`very_precise_datetime` DateTime64(9)
)
ENGINE = MergeTree
ORDER BY tuple();
We can use the now() function to return the current time and now64() to get it in a specified precision via the first argument.
sql
INSERT INTO dates
SELECT now(),
now()::Date32 + toIntervalYear(100),
now(),
now64(3),
now64(9) + toIntervalYear(200);
This will populate our columns with time according to the column type:
sql
SELECT * FROM dates
FORMAT Vertical;
text
Row 1:
──────
date: 2025-03-12
wider_date: 2125-03-12
datetime: 2025-03-12 11:39:07
precise_datetime: 2025-03-12 11:39:07.196
very_precise_datetime: 2025-03-12 11:39:07.196724000
Timezones {#time-series-timezones}
Many use cases require having timezones stored as well. We can set the timezone as the last argument to the DateTime or DateTime64 types:
-0.04978110268712044,
-0.026950722560286522,
-0.004081618972122669,
0.011406326666474342,
-0.01766934059560299,
0.0011539761908352375,
-0.024706190451979637,
0.03634028509259224,
0.003464290639385581,
-0.038430143147706985,
-0.01745099201798439,
0.01605413481593132,
-0.04161440581083298,
0... |
ffc69b56-d90e-46b0-a490-86396f31dfaf | Timezones {#time-series-timezones}
Many use cases require having timezones stored as well. We can set the timezone as the last argument to the DateTime or DateTime64 types:
sql
CREATE TABLE dtz
(
`id` Int8,
`dt_1` DateTime('Europe/Berlin'),
`dt_2` DateTime,
`dt64_1` DateTime64(9, 'Europe/Berlin'),
`dt64_2` DateTime64(9)
)
ENGINE = MergeTree
ORDER BY id;
Having defined a timezone in our DDL, we can now insert times using different timezones:
sql
INSERT INTO dtz
SELECT 1,
toDateTime('2022-12-12 12:13:14', 'America/New_York'),
toDateTime('2022-12-12 12:13:14', 'America/New_York'),
toDateTime64('2022-12-12 12:13:14.123456789', 9, 'America/New_York'),
toDateTime64('2022-12-12 12:13:14.123456789', 9, 'America/New_York')
UNION ALL
SELECT 2,
toDateTime('2022-12-12 12:13:15'),
toDateTime('2022-12-12 12:13:15'),
toDateTime64('2022-12-12 12:13:15.123456789', 9),
toDateTime64('2022-12-12 12:13:15.123456789', 9);
And now let's have a look at what's in our table:
sql
SELECT dt_1, dt64_1, dt_2, dt64_2
FROM dtz
FORMAT Vertical;
```text
Row 1:
──────
dt_1: 2022-12-12 18:13:14
dt64_1: 2022-12-12 18:13:14.123456789
dt_2: 2022-12-12 17:13:14
dt64_2: 2022-12-12 17:13:14.123456789
Row 2:
──────
dt_1: 2022-12-12 13:13:15
dt64_1: 2022-12-12 13:13:15.123456789
dt_2: 2022-12-12 12:13:15
dt64_2: 2022-12-12 12:13:15.123456789
```
In the first row, we inserted all values using the America/New_York timezone.
* dt_1 and dt64_1 are automatically converted to Europe/Berlin at query time.
* dt_2 and dt64_2 didn't have a time zone specified, so they use the server's local time zone, which in this case is Europe/London.
In the second row, we inserted all the values without a timezone, so the server's local time zone was used. As in the first row, dt_1 and dt64_1 are converted to Europe/Berlin, while dt_2 and dt64_2 use the server's local time zone.
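The conversion in the first row can be reproduced with Python's zoneinfo module (an illustration only; ClickHouse itself stores a Unix timestamp internally and the time zone only affects how the value is displayed):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; may require the tzdata package

# 12:13:14 in New York (EST, UTC-5 in December) shown in Berlin (CET, UTC+1):
ny = datetime(2022, 12, 12, 12, 13, 14, tzinfo=ZoneInfo("America/New_York"))
berlin = ny.astimezone(ZoneInfo("Europe/Berlin"))
print(berlin.strftime("%Y-%m-%d %H:%M:%S"))  # 2022-12-12 18:13:14
```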
Date and time functions {#time-series-date-time-functions}
ClickHouse also comes with a set of functions that let us convert between the different data types.
For example, we can use toDate to convert a DateTime value to the Date type:
sql
SELECT
now() AS current_time,
toTypeName(current_time),
toDate(current_time) AS date_only,
toTypeName(date_only)
FORMAT Vertical;
text
Row 1:
──────
current_time: 2025-03-12 12:32:54
toTypeName(current_time): DateTime
date_only: 2025-03-12
toTypeName(date_only): Date
We can use toDateTime64 to convert DateTime to DateTime64:
sql
SELECT
now() AS current_time,
toTypeName(current_time),
toDateTime64(current_time, 3) AS date_only,
toTypeName(date_only)
FORMAT Vertical;
text
Row 1:
──────
current_time: 2025-03-12 12:35:01
toTypeName(current_time): DateTime
date_only: 2025-03-12 12:35:01.000
toTypeName(date_only): DateTime64(3) | {"source_file": "01_date-time-data-types.md"} | [
0.02686993218958378,
-0.06509380042552948,
0.0022881918121129274,
-0.009259947575628757,
-0.05269695445895195,
-0.04978668689727783,
-0.03839481249451637,
-0.009914146736264229,
-0.009956318885087967,
-0.021904634311795235,
-0.04760352522134781,
-0.049982067197561264,
-0.03941548243165016,
... |
69151c14-cf8e-425d-b84b-e223c21408e0 | text
Row 1:
──────
current_time: 2025-03-12 12:35:01
toTypeName(current_time): DateTime
date_only: 2025-03-12 12:35:01.000
toTypeName(date_only): DateTime64(3)
And we can use toDateTime to go from Date or DateTime64 back to DateTime:
sql
SELECT
now64() AS current_time,
toTypeName(current_time),
toDateTime(current_time) AS date_time1,
toTypeName(date_time1),
today() AS current_date,
toTypeName(current_date),
toDateTime(current_date) AS date_time2,
toTypeName(date_time2)
FORMAT Vertical;
text
Row 1:
──────
current_time: 2025-03-12 12:41:00.598
toTypeName(current_time): DateTime64(3)
date_time1: 2025-03-12 12:41:00
toTypeName(date_time1): DateTime
current_date: 2025-03-12
toTypeName(current_date): Date
date_time2: 2025-03-12 00:00:00
toTypeName(date_time2): DateTime | {"source_file": "01_date-time-data-types.md"} | [
0.006954452954232693,
0.012850632891058922,
0.005625366233289242,
0.04337872937321663,
-0.03641759231686592,
0.020917534828186035,
0.04647825285792351,
-0.01970580406486988,
-0.04880881682038307,
0.013971288688480854,
-0.010201890952885151,
-0.05814080312848091,
-0.036215439438819885,
-0.0... |
1d1595ea-b332-40f9-9858-be4eae8a4a12 | title: 'Query performance - Time-series'
sidebar_label: 'Query performance'
description: 'Improving time-series query performance'
slug: /use-cases/time-series/query-performance
keywords: ['time-series', 'query performance', 'optimization', 'indexing', 'partitioning', 'query tuning', 'performance']
show_related_blogs: true
doc_type: 'guide'
Time-series query performance
After optimizing storage, the next step is improving query performance.
This section explores two key techniques: optimizing ORDER BY keys and using materialized views.
We'll see how these approaches can reduce query times from seconds to milliseconds.
Optimize ORDER BY keys {#time-series-optimize-order-by}
Before attempting other optimizations, you should optimize ordering keys to ensure ClickHouse produces the fastest possible results.
Choosing the right key largely depends on the queries you're going to run. Suppose most of our queries filter by the project and subproject columns. In this case, it's a good idea to add them to the ordering key, along with the time column, since we query on time as well.
Let's create another version of the table that has the same column types as wikistat, but is ordered by (project, subproject, time).
sql
CREATE TABLE wikistat_project_subproject
(
`time` DateTime,
`project` String,
`subproject` String,
`path` String,
`hits` UInt64
)
ENGINE = MergeTree
ORDER BY (project, subproject, time);
Let's now compare multiple queries to get an idea of how essential our ordering key expression is to performance. Note that we haven't applied our previous data type and codec optimizations, so any query performance differences are only based on the sort order.
Each query below was run against the table ordered by `(time)` and against the table ordered by `(project, subproject, time)`:
```sql
SELECT project, sum(hits) AS h
FROM wikistat
GROUP BY project
ORDER BY h DESC
LIMIT 10;
```
`(time)`: 2.381 sec, `(project, subproject, time)`: 1.660 sec
```sql
SELECT subproject, sum(hits) AS h
FROM wikistat
WHERE project = 'it'
GROUP BY subproject
ORDER BY h DESC
LIMIT 10;
```
`(time)`: 2.148 sec, `(project, subproject, time)`: 0.058 sec
```sql
SELECT toStartOfMonth(time) AS m, sum(hits) AS h
FROM wikistat
WHERE (project = 'it') AND (subproject = 'zero')
GROUP BY m
ORDER BY m DESC
LIMIT 10;
```
`(time)`: 2.192 sec, `(project, subproject, time)`: 0.012 sec
```sql
SELECT path, sum(hits) AS h
FROM wikistat
WHERE (project = 'it') AND (subproject = 'zero')
GROUP BY path
ORDER BY h DESC
LIMIT 10;
```
`(time)`: 2.968 sec, `(project, subproject, time)`: 0.010 sec
Materialized views {#time-series-materialized-views}
Another option is to use materialized views to aggregate and store the results of popular queries. These results can be queried instead of the original table. Suppose the following query is executed quite often in our case:
sql
SELECT path, SUM(hits) AS v
FROM wikistat
WHERE toStartOfMonth(time) = '2015-05-01'
GROUP BY path
ORDER BY v DESC
LIMIT 10 | {"source_file": "05_query-performance.md"} | [
-0.04914078488945961,
0.010376201011240482,
-0.025199607014656067,
0.04527432844042778,
-0.04713025689125061,
-0.02048635296523571,
0.0022325371392071247,
0.0007252118084579706,
0.03546999394893646,
0.01239524595439434,
-0.03544650971889496,
0.05580984801054001,
0.027086492627859116,
-0.00... |
0609b69a-6839-468f-baf2-c3bb48ed3e72 | sql
SELECT path, SUM(hits) AS v
FROM wikistat
WHERE toStartOfMonth(time) = '2015-05-01'
GROUP BY path
ORDER BY v DESC
LIMIT 10
```text
┌─path──────────────────┬────────v─┐
│ - │ 89650862 │
│ Angelsberg │ 19165753 │
│ Ana_Sayfa │ 6368793 │
│ Academy_Awards │ 4901276 │
│ Accueil_(homonymie) │ 3805097 │
│ Adolf_Hitler │ 2549835 │
│ 2015_in_spaceflight │ 2077164 │
│ Albert_Einstein │ 1619320 │
│ 19_Kids_and_Counting │ 1430968 │
│ 2015_Nepal_earthquake │ 1406422 │
└───────────────────────┴──────────┘
10 rows in set. Elapsed: 2.285 sec. Processed 231.41 million rows, 9.22 GB (101.26 million rows/s., 4.03 GB/s.)
Peak memory usage: 1.50 GiB.
```
### Create materialized view {#time-series-create-materialized-view}
We can create the following materialized view:
```sql
CREATE TABLE wikistat_top
(
    `path` String,
    `month` Date,
    `hits` UInt64
)
ENGINE = SummingMergeTree
ORDER BY (month, path);
```

```sql
CREATE MATERIALIZED VIEW wikistat_top_mv
TO wikistat_top
AS
SELECT
    path,
    toStartOfMonth(time) AS month,
    sum(hits) AS hits
FROM wikistat
GROUP BY path, month;
```
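Note that `SummingMergeTree` collapses rows sharing the same sort key only during background merges, so at any moment the destination table may hold several partial rows per `(path, month)` group. You can either aggregate at query time (as the queries further down do) or force merging at read time with `FINAL`. A minimal sketch for an ad-hoc check, not required for the workflow itself:

```sql
-- FINAL merges pending parts at read time; convenient for spot checks,
-- but query-time aggregation (GROUP BY + sum) is generally cheaper.
SELECT path, month, hits
FROM wikistat_top FINAL
WHERE month = '2015-05-01'
ORDER BY hits DESC
LIMIT 10;
```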
### Backfilling destination table {#time-series-backfill-destination-table}
This destination table will only be populated when new records are inserted into the `wikistat` table, so we need to do some backfilling. The easiest way to do this is using an `INSERT INTO SELECT` statement to insert directly into the materialized view's target table, using the view's `SELECT` query (transformation):
```sql
INSERT INTO wikistat_top
SELECT
    path,
    toStartOfMonth(time) AS month,
    sum(hits) AS hits
FROM wikistat
GROUP BY path, month;
```
Depending on the cardinality of the raw data set (we have 1 billion rows!), this can be a memory-intensive approach. Alternatively, you can use a variant that requires minimal memory:
1. Creating a temporary table with a Null table engine
2. Connecting a copy of the normally used materialized view to that temporary table
3. Using an `INSERT INTO SELECT` query, copying all data from the raw data set into that temporary table
4. Dropping the temporary table and the temporary materialized view
With this approach, rows from the raw data set are copied block-wise into the temporary table (which doesn't store any of these rows), and for each block of rows, a partial state is calculated and written to the target table, where these states are incrementally merged in the background.
```sql
CREATE TABLE wikistat_backfill
(
    `time` DateTime,
    `project` String,
    `subproject` String,
    `path` String,
    `hits` UInt64
)
ENGINE = Null;
```

Next, we'll create a materialized view to read from `wikistat_backfill` and write into `wikistat_top`:
```sql
CREATE MATERIALIZED VIEW wikistat_backfill_top_mv
TO wikistat_top
AS
SELECT
    path,
    toStartOfMonth(time) AS month,
    sum(hits) AS hits
FROM wikistat_backfill
GROUP BY path, month;
```
And then finally, we'll populate `wikistat_backfill` from the initial `wikistat` table:
```sql
INSERT INTO wikistat_backfill
SELECT *
FROM wikistat;
```
Once that query's finished, we can delete the backfill table and materialized view:
```sql
DROP VIEW wikistat_backfill_top_mv;
DROP TABLE wikistat_backfill;
```
Now we can query the materialized view instead of the original table:
```sql
SELECT path, sum(hits) AS hits
FROM wikistat_top
WHERE month = '2015-05-01'
GROUP BY ALL
ORDER BY hits DESC
LIMIT 10;
```
```text
┌─path──────────────────┬─────hits─┐
│ - │ 89543168 │
│ Angelsberg │ 7047863 │
│ Ana_Sayfa │ 5923985 │
│ Academy_Awards │ 4497264 │
│ Accueil_(homonymie) │ 2522074 │
│ 2015_in_spaceflight │ 2050098 │
│ Adolf_Hitler │ 1559520 │
│ 19_Kids_and_Counting │ 813275 │
│ Andrzej_Duda │ 796156 │
│ 2015_Nepal_earthquake │ 726327 │
└───────────────────────┴──────────┘
10 rows in set. Elapsed: 0.004 sec.
```
Our performance improvement here is dramatic: where it previously took just over 2 seconds to compute the answer to this query, it now takes only 4 milliseconds.
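As a sanity check after backfilling, the monthly totals in the materialized view's target table should roughly agree with the raw table (small discrepancies can appear if inserts arrived during the backfill). A hedged sketch, assuming the two tables from above:

```sql
-- Both subqueries should report approximately the same total
-- if the backfill covered the full data set.
SELECT
    (SELECT sum(hits) FROM wikistat_top WHERE month = '2015-05-01') AS mv_total,
    (SELECT sum(hits) FROM wikistat WHERE toStartOfMonth(time) = '2015-05-01') AS raw_total;
```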
---
title: 'Analysis functions - Time-series'
sidebar_label: 'Analysis functions'
description: 'Functions for analyzing time-series data in ClickHouse.'
slug: /use-cases/time-series/analysis-functions
keywords: ['time-series', 'analysis functions', 'window functions', 'aggregation functions', 'moving averages', 'trend analysis']
show_related_blogs: true
doc_type: 'reference'
---
# Time-series analysis functions
Time series analysis in ClickHouse can be performed using standard SQL aggregation and window functions.
When working with time series data, you'll typically encounter three main types of metrics:
- **Counter** metrics that monotonically increase over time (like page views or total events)
- **Gauge** metrics that represent point-in-time measurements that can go up and down (like CPU usage or temperature)
- **Histograms** that sample observations and count them in buckets (like request durations or response sizes)
Common analysis patterns for these metrics include comparing values between periods, calculating cumulative totals, determining rates of change, and analyzing distributions.
These can all be achieved through combinations of aggregations, window functions like `sum() OVER`, and specialized functions like `histogram()`.
## Period-over-period changes {#time-series-period-over-period-changes}
When analyzing time series data, we often need to understand how values change between time periods.
This is essential for both gauge and counter metrics.
The `lagInFrame` window function lets us access the previous period's value to calculate these changes.
The following query demonstrates this by calculating day-over-day changes in views for "Weird Al" Yankovic's Wikipedia page.
The trend column shows whether traffic increased (positive values) or decreased (negative values) compared to the previous day, helping identify unusual spikes or drops in activity.
```sql
SELECT
    toDate(time) AS day,
    sum(hits) AS h,
    lagInFrame(h) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS p,
    h - p AS trend
FROM wikistat
WHERE path = '"Weird_Al"_Yankovic'
GROUP BY ALL
LIMIT 10;
```
```text
┌────────day─┬────h─┬────p─┬─trend─┐
│ 2015-05-01 │ 3934 │    0 │  3934 │
│ 2015-05-02 │ 3411 │ 3934 │  -523 │
│ 2015-05-03 │ 3195 │ 3411 │  -216 │
│ 2015-05-04 │ 3076 │ 3195 │  -119 │
│ 2015-05-05 │ 3450 │ 3076 │   374 │
│ 2015-05-06 │ 3053 │ 3450 │  -397 │
│ 2015-05-07 │ 2890 │ 3053 │  -163 │
│ 2015-05-08 │ 3898 │ 2890 │  1008 │
│ 2015-05-09 │ 3092 │ 3898 │  -806 │
│ 2015-05-10 │ 3508 │ 3092 │   416 │
└────────────┴──────┴──────┴───────┘
```
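The same `lagInFrame` pattern extends to relative changes. A sketch of a day-over-day percentage variant, assuming the same `wikistat` table (the `if` guard is an addition, since the first row has no previous value and `p` defaults to 0):

```sql
SELECT
    toDate(time) AS day,
    sum(hits) AS h,
    lagInFrame(h) OVER (ORDER BY day ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS p,
    -- Avoid dividing by zero on the first row, where no previous day exists
    if(p = 0, NULL, round(100 * (h - p) / p, 1)) AS pct_change
FROM wikistat
WHERE path = '"Weird_Al"_Yankovic'
GROUP BY ALL
ORDER BY day
LIMIT 10;
```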
## Cumulative values {#time-series-cumulative-values}
Counter metrics naturally accumulate over time.
To analyze this cumulative growth, we can calculate running totals using window functions.
The following query demonstrates this by using the `sum() OVER` clause, which creates a running total. The `bar()` function provides a visual representation of the growth.
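A minimal sketch of such a running-total query, assuming the same `wikistat` table (the bar bounds `0` to `50000` and the width of `25` are illustrative choices, not from the original):

```sql
SELECT
    toDate(time) AS day,
    sum(hits) AS h,
    -- Running total of daily hits in ascending day order
    sum(h) OVER (ORDER BY day) AS cumulative,
    -- ASCII bar scaled between the assumed bounds 0 and 50000
    bar(cumulative, 0, 50000, 25) AS visual
FROM wikistat
WHERE path = '"Weird_Al"_Yankovic'
GROUP BY day
ORDER BY day
LIMIT 10;
```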