---
description: 'quantileExact, quantileExactLow, quantileExactHigh, quantileExactExclusive, quantileExactInclusive functions'
sidebar_position: 173
slug: /sql-reference/aggregate-functions/reference/quantileexact
title: 'quantileExact Functions'
doc_type: 'reference'
---
# quantileExact Functions

## quantileExact {#quantileexact}

Exactly computes the quantile of a numeric data sequence.

To get the exact value, all the passed values are combined into an array, which is then partially sorted. Therefore, the function consumes O(n) memory, where n is the number of values that were passed. However, for a small number of values, the function is very effective.

When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the `quantiles` function.
**Syntax**

```sql
quantileExact(level)(expr)
```

Alias: `medianExact`.

**Arguments**

- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the median.
- `expr` — Expression over the column values resulting in numeric data types, `Date` or `DateTime`.
**Returned value**

- Quantile of the specified level.

Type:

- For numeric data types, the output type is the same as the input type. For example:

```sql
SELECT
    toTypeName(quantileExact(number)) AS quantile,
    toTypeName(quantileExact(number::Int32)) AS quantile_int32,
    toTypeName(quantileExact(number::Float32)) AS quantile_float32,
    toTypeName(quantileExact(number::Float64)) AS quantile_float64,
    toTypeName(quantileExact(number::Int64)) AS quantile_int64
FROM numbers(1)
```

```text
   ┌─quantile─┬─quantile_int32─┬─quantile_float32─┬─quantile_float64─┬─quantile_int64─┐
1. │ UInt64   │ Int32          │ Float32          │ Float64          │ Int64          │
   └──────────┴────────────────┴──────────────────┴──────────────────┴────────────────┘

1 row in set. Elapsed: 0.002 sec.
```

- `Date` if input values have the `Date` type.
- `DateTime` if input values have the `DateTime` type.
**Example**

Query:

```sql
SELECT quantileExact(number) FROM numbers(10)
```

Result:

```text
┌─quantileExact(number)─┐
│                     5 │
└───────────────────────┘
```
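For illustration, the selection rule can be sketched in Python. This is a hedged approximation of the behavior described above (the function name is ours, and ClickHouse uses a partial sort internally rather than a full sort):

```python
def quantile_exact(data, level=0.5):
    """Sketch: pick the element at index floor(level * n) of the sorted data."""
    xs = sorted(data)  # a full sort is fine for a sketch
    idx = min(int(level * len(xs)), len(xs) - 1)
    return xs[idx]

print(quantile_exact(range(10)))  # 5, matching the query above
```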
## quantileExactLow {#quantileexactlow}

Similar to `quantileExact`, this computes the exact quantile of a numeric data sequence.

To get the exact value, all the passed values are combined into an array, which is then fully sorted. The sorting algorithm's complexity is `O(N·log(N))`, where `N = std::distance(first, last)`.
The return value depends on the quantile level and the number of elements in the selection, i.e. if the level is 0.5, then the function returns the lower median value for an even number of elements and the middle median value for an odd number of elements. The median is calculated similarly to the `median_low` implementation which is used in Python.

For all other levels, the element at the index corresponding to the value of `level * size_of_array` is returned. For example:

```sql
SELECT quantileExactLow(0.1)(number) FROM numbers(10)
```

```text
┌─quantileExactLow(0.1)(number)─┐
│                             1 │
└───────────────────────────────┘
```
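As the text notes, level 0.5 matches Python's `statistics.median_low`. A hedged sketch of the whole selection rule (the function name is ours, for illustration only):

```python
import statistics

def quantile_exact_low(data, level=0.5):
    xs = sorted(data)
    if level == 0.5:
        # Lower median for an even count, middle element for an odd count
        return statistics.median_low(xs)
    # Other levels: element at index level * len(xs)
    return xs[min(int(level * len(xs)), len(xs) - 1)]

print(quantile_exact_low(range(10)))       # 4, the lower median of 0..9
print(quantile_exact_low(range(10), 0.1))  # 1, matching the query above
```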
When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the `quantiles` function.

**Syntax**

```sql
quantileExactLow(level)(expr)
```

Alias: `medianExactLow`.

**Arguments**

- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the median.
- `expr` — Expression over the column values resulting in numeric data types, `Date` or `DateTime`.
**Returned value**

- Quantile of the specified level.

Type:

- `Float64` for numeric data type input.
- `Date` if input values have the `Date` type.
- `DateTime` if input values have the `DateTime` type.

**Example**

Query:

```sql
SELECT quantileExactLow(number) FROM numbers(10)
```

Result:

```text
┌─quantileExactLow(number)─┐
│                        4 │
└──────────────────────────┘
```
## quantileExactHigh {#quantileexacthigh}

Similar to `quantileExact`, this computes the exact quantile of a numeric data sequence.

All the passed values are combined into an array, which is then fully sorted, to get the exact value. The sorting algorithm's complexity is `O(N·log(N))`, where `N = std::distance(first, last)`.

The return value depends on the quantile level and the number of elements in the selection, i.e. if the level is 0.5, then the function returns the higher median value for an even number of elements and the middle median value for an odd number of elements. The median is calculated similarly to the `median_high` implementation which is used in Python. For all other levels, the element at the index corresponding to the value of `level * size_of_array` is returned.

This implementation behaves exactly like the current `quantileExact` implementation.

When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the `quantiles` function.

**Syntax**

```sql
quantileExactHigh(level)(expr)
```

Alias: `medianExactHigh`.
**Arguments**

- `level` — Level of quantile. Optional parameter. Constant floating-point number from 0 to 1. We recommend using a `level` value in the range of `[0.01, 0.99]`. Default value: 0.5. At `level=0.5` the function calculates the median.
- `expr` — Expression over the column values resulting in numeric data types, `Date` or `DateTime`.

**Returned value**

- Quantile of the specified level.

Type:

- `Float64` for numeric data type input.
- `Date` if input values have the `Date` type.
- `DateTime` if input values have the `DateTime` type.
**Example**

Query:

```sql
SELECT quantileExactHigh(number) FROM numbers(10)
```

Result:

```text
┌─quantileExactHigh(number)─┐
│                         5 │
└───────────────────────────┘
```
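The level 0.5 behavior can be checked directly against Python's `statistics` module, which the text references (illustration only):

```python
import statistics

# quantileExactHigh at level 0.5 behaves like statistics.median_high:
# the higher of the two middle values for an even number of elements.
print(statistics.median_high(range(10)))  # 5, matching the query above
print(statistics.median_low(range(10)))   # 4, what quantileExactLow returns
```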
## quantileExactExclusive {#quantileexactexclusive}

Exactly computes the quantile of a numeric data sequence.

To get the exact value, all the passed values are combined into an array, which is then partially sorted. Therefore, the function consumes O(n) memory, where n is the number of values that were passed. However, for a small number of values, the function is very effective.

This function is equivalent to the Excel `PERCENTILE.EXC` function (type R6).

When using multiple `quantileExactExclusive` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the `quantilesExactExclusive` function.

**Syntax**

```sql
quantileExactExclusive(level)(expr)
```
**Arguments**

- `expr` — Expression over the column values resulting in numeric data types, `Date` or `DateTime`.

**Parameters**

- `level` — Level of quantile. Optional. Possible values: (0, 1) — bounds not included. Default value: 0.5. At `level=0.5` the function calculates the median. `Float`.

**Returned value**

- Quantile of the specified level.

Type:

- `Float64` for numeric data type input.
- `Date` if input values have the `Date` type.
- `DateTime` if input values have the `DateTime` type.
**Example**

Query:

```sql
CREATE TABLE num AS numbers(1000);

SELECT quantileExactExclusive(0.6)(x) FROM (SELECT number AS x FROM num);
```

Result:

```text
┌─quantileExactExclusive(0.6)(x)─┐
│                          599.6 │
└────────────────────────────────┘
```
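The result can be reproduced outside ClickHouse. A hedged Python sketch, assuming the standard R6 definition `h = (n + 1) * level` with linear interpolation between neighbours (the function name is ours):

```python
import math

def quantile_exclusive(data, level=0.5):
    # R6 / PERCENTILE.EXC: h = (n + 1) * level, 1-based linear interpolation
    xs = sorted(data)
    n = len(xs)
    h = (n + 1) * level
    if not 1 <= h <= n:
        raise ValueError("level out of the valid exclusive range")
    j = math.floor(h)
    g = h - j
    if j >= n:
        return float(xs[-1])
    return xs[j - 1] + g * (xs[j] - xs[j - 1])

print(round(quantile_exclusive(range(1000), 0.6), 2))  # 599.6, matching the query above
```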
## quantileExactInclusive {#quantileexactinclusive}

Exactly computes the quantile of a numeric data sequence.

To get the exact value, all the passed values are combined into an array, which is then partially sorted. Therefore, the function consumes O(n) memory, where n is the number of values that were passed. However, for a small number of values, the function is very effective.

This function is equivalent to the Excel `PERCENTILE.INC` function (type R7).
When using multiple `quantileExactInclusive` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the `quantilesExactInclusive` function.

**Syntax**

```sql
quantileExactInclusive(level)(expr)
```
**Arguments**

- `expr` — Expression over the column values resulting in numeric data types, `Date` or `DateTime`.

**Parameters**

- `level` — Level of quantile. Optional. Possible values: [0, 1] — bounds included. Default value: 0.5. At `level=0.5` the function calculates the median. `Float`.

**Returned value**

- Quantile of the specified level.

Type:

- `Float64` for numeric data type input.
- `Date` if input values have the `Date` type.
- `DateTime` if input values have the `DateTime` type.
**Example**

Query:

```sql
CREATE TABLE num AS numbers(1000);

SELECT quantileExactInclusive(0.6)(x) FROM (SELECT number AS x FROM num);
```

Result:

```text
┌─quantileExactInclusive(0.6)(x)─┐
│                          599.4 │
└────────────────────────────────┘
```
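The inclusive variant differs only in the index formula. A hedged Python sketch, assuming the standard R7 definition `h = (n - 1) * level` with linear interpolation (the function name is ours):

```python
import math

def quantile_inclusive(data, level=0.5):
    # R7 / PERCENTILE.INC: h = (n - 1) * level, 0-based linear interpolation
    xs = sorted(data)
    h = (len(xs) - 1) * level
    j = math.floor(h)
    g = h - j
    if j + 1 >= len(xs):
        return float(xs[-1])
    return xs[j] + g * (xs[j + 1] - xs[j])

print(round(quantile_inclusive(range(1000), 0.6), 2))  # 599.4, matching the query above
```

Note how the same data and level give 599.6 under R6 but 599.4 under R7: the exclusive rule stretches the index range to `n + 1` positions, the inclusive rule fits it into `n`.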
**See Also**

- median
- quantiles
---
description: 'Documentation for ALTER DATABASE ... MODIFY COMMENT statements which allow adding, modifying, or removing database comments.'
slug: /sql-reference/statements/alter/database-comment
sidebar_position: 51
sidebar_label: 'ALTER DATABASE ... MODIFY COMMENT'
title: 'ALTER DATABASE ... MODIFY COMMENT Statements'
keywords: ['ALTER DATABASE', 'MODIFY COMMENT']
doc_type: 'reference'
---
# ALTER DATABASE ... MODIFY COMMENT

Adds, modifies, or removes a database comment, regardless of whether it was set before or not. The comment change is reflected in both `system.databases` and the `SHOW CREATE DATABASE` query.

## Syntax {#syntax}

```sql
ALTER DATABASE [db].name [ON CLUSTER cluster] MODIFY COMMENT 'Comment'
```
## Examples {#examples}

To create a `DATABASE` with a comment:

```sql
CREATE DATABASE database_with_comment ENGINE = Memory COMMENT 'The temporary database';
```

To modify the comment:

```sql
ALTER DATABASE database_with_comment
MODIFY COMMENT 'new comment on a database';
```

To view the modified comment:

```sql
SELECT comment
FROM system.databases
WHERE name = 'database_with_comment';
```

```text
┌─comment───────────────────┐
│ new comment on a database │
└───────────────────────────┘
```

To remove the database comment:

```sql
ALTER DATABASE database_with_comment
MODIFY COMMENT '';
```
To verify that the comment was removed:

```sql title="Query"
SELECT comment
FROM system.databases
WHERE name = 'database_with_comment';
```

```text title="Response"
┌─comment─┐
│         │
└─────────┘
```

## Related content {#related-content}

- `COMMENT` clause
- ALTER TABLE ... MODIFY COMMENT
---
description: 'Documentation for Manipulations with Table TTL'
sidebar_label: 'TTL'
sidebar_position: 44
slug: /sql-reference/statements/alter/ttl
title: 'Manipulations with Table TTL'
doc_type: 'reference'
---
# Manipulations with Table TTL

:::note
If you are looking for details on using TTL for managing old data, check out the Manage Data with TTL user guide. The docs below demonstrate how to alter or remove an existing TTL rule.
:::

## MODIFY TTL {#modify-ttl}

You can change the table TTL with a request of the following form:

```sql
ALTER TABLE [db.]table_name [ON CLUSTER cluster] MODIFY TTL ttl_expression;
```

## REMOVE TTL {#remove-ttl}

The TTL property can be removed from a table with the following query:

```sql
ALTER TABLE [db.]table_name [ON CLUSTER cluster] REMOVE TTL
```
**Example**

Consider the table with table TTL:

```sql
CREATE TABLE table_with_ttl
(
    event_time DateTime,
    UserID UInt64,
    Comment String
)
ENGINE MergeTree()
ORDER BY tuple()
TTL event_time + INTERVAL 3 MONTH
SETTINGS min_bytes_for_wide_part = 0;

INSERT INTO table_with_ttl VALUES (now(), 1, 'username1');

INSERT INTO table_with_ttl VALUES (now() - INTERVAL 4 MONTH, 2, 'username2');
```
Run `OPTIMIZE` to force TTL cleanup:

```sql
OPTIMIZE TABLE table_with_ttl FINAL;
SELECT * FROM table_with_ttl FORMAT PrettyCompact;
```

The second row was deleted from the table.

```text
┌──────────event_time─┬─UserID─┬─Comment───┐
│ 2020-12-11 12:44:57 │      1 │ username1 │
└─────────────────────┴────────┴───────────┘
```
Now remove the table TTL with the following query:

```sql
ALTER TABLE table_with_ttl REMOVE TTL;
```

Re-insert the deleted row and force the TTL cleanup again with `OPTIMIZE`:

```sql
INSERT INTO table_with_ttl VALUES (now() - INTERVAL 4 MONTH, 2, 'username2');
OPTIMIZE TABLE table_with_ttl FINAL;
SELECT * FROM table_with_ttl FORMAT PrettyCompact;
```

The TTL is no longer there, so the second row is not deleted:

```text
┌──────────event_time─┬─UserID─┬─Comment───┐
│ 2020-12-11 12:44:57 │      1 │ username1 │
│ 2020-08-11 12:44:57 │      2 │ username2 │
└─────────────────────┴────────┴───────────┘
```
**See Also**

- More about the TTL-expression.
- Modify column with TTL.
---
description: 'Documentation for ALTER TABLE ... MODIFY QUERY Statement'
sidebar_label: 'VIEW'
sidebar_position: 50
slug: /sql-reference/statements/alter/view
title: 'ALTER TABLE ... MODIFY QUERY Statement'
doc_type: 'reference'
---
# ALTER TABLE ... MODIFY QUERY Statement

You can modify the `SELECT` query that was specified when a materialized view was created with the `ALTER TABLE ... MODIFY QUERY` statement without interrupting the ingestion process.

This command is intended to change a materialized view created with the `TO [db.]name` clause. It does not change the structure of the underlying storage table, and it does not change the column definitions of the materialized view. Because of this, the command is of very limited use for materialized views created without the `TO [db.]name` clause.

## Example with TO table
```sql
CREATE TABLE events (ts DateTime, event_type String)
ENGINE = MergeTree ORDER BY (event_type, ts);
CREATE TABLE events_by_day (ts DateTime, event_type String, events_cnt UInt64)
ENGINE = SummingMergeTree ORDER BY (event_type, ts);
CREATE MATERIALIZED VIEW mv TO events_by_day AS
SELECT toStartOfDay(ts) ts, event_type, count() events_cnt
FROM events
GROUP BY ts, event_type;
INSERT INTO events
SELECT DATE '2020-01-01' + interval number * 900 second,
['imp', 'click'][number%2+1]
FROM numbers(100);
SELECT ts, event_type, sum(events_cnt)
FROM events_by_day
GROUP BY ts, event_type
ORDER BY ts, event_type;
┌──────────────────ts─┬─event_type─┬─sum(events_cnt)─┐
│ 2020-01-01 00:00:00 │ click      │              48 │
│ 2020-01-01 00:00:00 │ imp        │              48 │
│ 2020-01-02 00:00:00 │ click      │               2 │
│ 2020-01-02 00:00:00 │ imp        │               2 │
└─────────────────────┴────────────┴─────────────────┘

-- Let's add the new measurement `cost`
-- and the new dimension `browser`.

ALTER TABLE events
    ADD COLUMN browser String,
    ADD COLUMN cost Float64;

-- Columns do not have to match in a materialized view and TO
-- (destination table), so the next alter does not break insertion.

ALTER TABLE events_by_day
    ADD COLUMN cost Float64,
    ADD COLUMN browser String AFTER event_type,
    MODIFY ORDER BY (event_type, ts, browser);

INSERT INTO events
SELECT Date '2020-01-02' + interval number * 900 second,
       ['imp', 'click'][number%2+1],
       ['firefox', 'safary', 'chrome'][number%3+1],
       10/(number+1)%33
FROM numbers(100);

-- New columns `browser` and `cost` are empty because we did not change the materialized view yet.

SELECT ts, event_type, browser, sum(events_cnt) events_cnt, round(sum(cost),2) cost
FROM events_by_day
GROUP BY ts, event_type, browser
ORDER BY ts, event_type;
┌──────────────────ts─┬─event_type─┬─browser─┬─events_cnt─┬─cost─┐
│ 2020-01-01 00:00:00 │ click      │         │         48 │    0 │
│ 2020-01-01 00:00:00 │ imp        │         │         48 │    0 │
│ 2020-01-02 00:00:00 │ click      │         │         50 │    0 │
│ 2020-01-02 00:00:00 │ imp        │         │         50 │    0 │
│ 2020-01-03 00:00:00 │ click      │         │          2 │    0 │
│ 2020-01-03 00:00:00 │ imp        │         │          2 │    0 │
└─────────────────────┴────────────┴─────────┴────────────┴──────┘
ALTER TABLE mv MODIFY QUERY
SELECT toStartOfDay(ts) ts, event_type, browser,
count() events_cnt,
sum(cost) cost
FROM events
GROUP BY ts, event_type, browser;
INSERT INTO events
SELECT Date '2020-01-03' + interval number * 900 second,
['imp', 'click'][number%2+1],
['firefox', 'safary', 'chrome'][number%3+1],
10/(number+1)%33
FROM numbers(100);
SELECT ts, event_type, browser, sum(events_cnt) events_cnt, round(sum(cost),2) cost
FROM events_by_day
GROUP BY ts, event_type, browser
ORDER BY ts, event_type;
┌──────────────────ts─┬─event_type─┬─browser─┬─events_cnt─┬──cost─┐
│ 2020-01-01 00:00:00 │ click      │         │         48 │     0 │
│ 2020-01-01 00:00:00 │ imp        │         │         48 │     0 │
│ 2020-01-02 00:00:00 │ click      │         │         50 │     0 │
│ 2020-01-02 00:00:00 │ imp        │         │         50 │     0 │
│ 2020-01-03 00:00:00 │ click      │ firefox │         16 │  6.84 │
│ 2020-01-03 00:00:00 │ click      │         │          2 │     0 │
│ 2020-01-03 00:00:00 │ click      │ safary  │         16 │  9.82 │
│ 2020-01-03 00:00:00 │ click      │ chrome  │         16 │  5.63 │
│ 2020-01-03 00:00:00 │ imp        │         │          2 │     0 │
│ 2020-01-03 00:00:00 │ imp        │ firefox │         16 │ 15.14 │
│ 2020-01-03 00:00:00 │ imp        │ safary  │         16 │  6.14 │
│ 2020-01-03 00:00:00 │ imp        │ chrome  │         16 │  7.89 │
│ 2020-01-04 00:00:00 │ click      │ safary  │          1 │   0.1 │
│ 2020-01-04 00:00:00 │ click      │ firefox │          1 │   0.1 │
│ 2020-01-04 00:00:00 │ imp        │ firefox │          1 │   0.1 │
│ 2020-01-04 00:00:00 │ imp        │ chrome  │          1 │   0.1 │
└─────────────────────┴────────────┴─────────┴────────────┴───────┘
-- !!! During `MODIFY ORDER BY`, a PRIMARY KEY was implicitly introduced.
SHOW CREATE TABLE events_by_day FORMAT TSVRaw

CREATE TABLE test.events_by_day
(
    `ts` DateTime,
    `event_type` String,
    `browser` String,
    `events_cnt` UInt64,
    `cost` Float64
)
ENGINE = SummingMergeTree
PRIMARY KEY (event_type, ts)
ORDER BY (event_type, ts, browser)
SETTINGS index_granularity = 8192
-- !!! The columns' definition is unchanged but it does not matter, we are not querying
-- MATERIALIZED VIEW, we are querying TO (storage) table.
-- The SELECT section is updated.

SHOW CREATE TABLE mv FORMAT TSVRaw;

CREATE MATERIALIZED VIEW test.mv TO test.events_by_day
(
    `ts` DateTime,
    `event_type` String,
    `events_cnt` UInt64
) AS
SELECT
    toStartOfDay(ts) AS ts,
    event_type,
    browser,
    count() AS events_cnt,
    sum(cost) AS cost
FROM test.events
GROUP BY
    ts,
    event_type,
    browser
```
## Example without TO table

The application is very limited because you can only change the `SELECT` section without adding new columns.

```sql
CREATE TABLE src_table (`a` UInt32) ENGINE = MergeTree ORDER BY a;
CREATE MATERIALIZED VIEW mv (`a` UInt32) ENGINE = MergeTree ORDER BY a AS SELECT a FROM src_table;
INSERT INTO src_table (a) VALUES (1), (2);
SELECT * FROM mv;
```

```text
┌─a─┐
│ 1 │
│ 2 │
└───┘
```

```sql
ALTER TABLE mv MODIFY QUERY SELECT a * 2 as a FROM src_table;
INSERT INTO src_table (a) VALUES (3), (4);
SELECT * FROM mv;
```

```text
┌─a─┐
│ 6 │
│ 8 │
└───┘
┌─a─┐
│ 1 │
│ 2 │
└───┘
```
## ALTER TABLE ... MODIFY REFRESH Statement {#alter-table--modify-refresh-statement}

The `ALTER TABLE ... MODIFY REFRESH` statement changes refresh parameters of a Refreshable Materialized View. See Changing Refresh Parameters.
---
description: 'Documentation for Column'
sidebar_label: 'COLUMN'
sidebar_position: 37
slug: /sql-reference/statements/alter/column
title: 'Column Manipulations'
doc_type: 'reference'
---

# Column Manipulations
A set of queries that allow changing the table structure.

Syntax:

```sql
ALTER [TEMPORARY] TABLE [db].name [ON CLUSTER cluster] ADD|DROP|RENAME|CLEAR|COMMENT|{MODIFY|ALTER}|MATERIALIZE COLUMN ...
```

In the query, specify a list of one or more comma-separated actions. Each action is an operation on a column.

The following actions are supported:

- `ADD COLUMN` — Adds a new column to the table.
- `DROP COLUMN` — Deletes the column.
- `RENAME COLUMN` — Renames an existing column.
- `CLEAR COLUMN` — Resets column values.
- `COMMENT COLUMN` — Adds a text comment to the column.
- `MODIFY COLUMN` — Changes the column's type, default expression, TTL, and column settings.
- `MODIFY COLUMN REMOVE` — Removes one of the column properties.
- `MODIFY COLUMN MODIFY SETTING` — Changes column settings.
- `MODIFY COLUMN RESET SETTING` — Resets column settings.
- `MATERIALIZE COLUMN` — Materializes the column in the parts where the column is missing.
These actions are described in detail below.
## ADD COLUMN {#add-column}

```sql
ADD COLUMN [IF NOT EXISTS] name [type] [default_expr] [codec] [AFTER name_after | FIRST]
```

Adds a new column to the table with the specified `name`, `type`, `codec` and `default_expr` (see the section Default expressions).

If the `IF NOT EXISTS` clause is included, the query won't return an error if the column already exists. If you specify `AFTER name_after` (the name of another column), the column is added after the specified one in the list of table columns. If you want to add a column to the beginning of the table, use the `FIRST` clause. Otherwise, the column is added to the end of the table. For a chain of actions, `name_after` can be the name of a column that is added in one of the previous actions.

Adding a column just changes the table structure, without performing any actions with data. The data does not appear on the disk after `ALTER`. If the data is missing for a column when reading from the table, it is filled in with default values (by performing the default expression if there is one, or using zeros or empty strings). The column appears on the disk after merging data parts (see MergeTree).

This approach allows us to complete the `ALTER` query instantly, without increasing the volume of old data.
Example:

```sql
ALTER TABLE alter_test ADD COLUMN Added1 UInt32 FIRST;
ALTER TABLE alter_test ADD COLUMN Added2 UInt32 AFTER NestedColumn;
ALTER TABLE alter_test ADD COLUMN Added3 UInt32 AFTER ToDrop;
DESC alter_test FORMAT TSV;
```

```text
Added1 UInt32
CounterID UInt32
StartDate Date
UserID UInt32
VisitID UInt32
NestedColumn.A Array(UInt8)
NestedColumn.S Array(String)
Added2 UInt32
ToDrop UInt32
Added3 UInt32
```
c90a65ea-7e4c-4f1f-a465-5568abaaf563 | DROP COLUMN {#drop-column}
sql
DROP COLUMN [IF EXISTS] name
Deletes the column with the name
name
. If the
IF EXISTS
clause is specified, the query won't return an error if the column does not exist.
Deletes data from the file system. Since this deletes entire files, the query is completed almost instantly.
:::tip
You can't delete a column if it is referenced by a materialized view. Otherwise, it returns an error.
:::

Example:

```sql
ALTER TABLE visits DROP COLUMN browser
```
## RENAME COLUMN {#rename-column}

```sql
RENAME COLUMN [IF EXISTS] name to new_name
```

Renames the column `name` to `new_name`. If the `IF EXISTS` clause is specified, the query won't return an error if the column does not exist. Since renaming does not involve the underlying data, the query is completed almost instantly.

**NOTE**: Columns specified in the key expression of the table (either with `ORDER BY` or `PRIMARY KEY`) cannot be renamed. Trying to change these columns will produce `SQL Error [524]`.

Example:

```sql
ALTER TABLE visits RENAME COLUMN webBrowser TO browser
```
## CLEAR COLUMN {#clear-column}

```sql
CLEAR COLUMN [IF EXISTS] name IN PARTITION partition_name
```

Resets all data in a column for a specified partition. Read more about setting the partition name in the section How to set the partition expression.

If the `IF EXISTS` clause is specified, the query won't return an error if the column does not exist.

Example:

```sql
ALTER TABLE visits CLEAR COLUMN browser IN PARTITION tuple()
```
## COMMENT COLUMN {#comment-column}

```sql
COMMENT COLUMN [IF EXISTS] name 'Text comment'
```

Adds a comment to the column. If the `IF EXISTS` clause is specified, the query won't return an error if the column does not exist.

Each column can have one comment. If a comment already exists for the column, a new comment overwrites the previous comment.

Comments are stored in the `comment_expression` column returned by the `DESCRIBE TABLE` query.

Example:

```sql
ALTER TABLE visits COMMENT COLUMN browser 'This column shows the browser used for accessing the site.'
```
## MODIFY COLUMN {#modify-column}

```sql
MODIFY COLUMN [IF EXISTS] name [type] [default_expr] [codec] [TTL] [settings] [AFTER name_after | FIRST]
ALTER COLUMN [IF EXISTS] name TYPE [type] [default_expr] [codec] [TTL] [settings] [AFTER name_after | FIRST]
```

This query changes the `name` column properties:

- Type
- Default expression
- Compression codec
- TTL
- Column-level settings

For examples of modifying column compression codecs, see Column Compression Codecs.

For examples of modifying column TTL, see Column TTL.

For examples of modifying column-level settings, see Column-level Settings.
If the `IF EXISTS` clause is specified, the query won't return an error if the column does not exist.
When changing the type, values are converted as if the toType functions were applied to them. If only the default expression is changed, the query does not do anything complex, and is completed almost instantly.

Example:

```sql
ALTER TABLE visits MODIFY COLUMN browser Array(String)
```

Changing the column type is the only complex action: it changes the contents of files with data. For large tables, this may take a long time.

The query also can change the order of the columns using the `FIRST | AFTER` clause, see the ADD COLUMN description, but the column type is mandatory in this case.
Example:

```sql
CREATE TABLE users (
    c1 Int16,
    c2 String
) ENGINE = MergeTree
ORDER BY c1;

DESCRIBE users;
┌─name─┬─type───┬
│ c1   │ Int16  │
│ c2   │ String │
└──────┴────────┴

ALTER TABLE users MODIFY COLUMN c2 String FIRST;
DESCRIBE users;
┌─name─┬─type───┬
│ c2   │ String │
│ c1   │ Int16  │
└──────┴────────┴

ALTER TABLE users ALTER COLUMN c2 TYPE String AFTER c1;
DESCRIBE users;
┌─name─┬─type───┬
│ c1   │ Int16  │
│ c2   │ String │
└──────┴────────┴
```
The `ALTER` query is atomic. For MergeTree tables it is also lock-free.

The `ALTER` query for changing columns is replicated. The instructions are saved in ZooKeeper, then each replica applies them. All `ALTER` queries are run in the same order. The query waits for the appropriate actions to be completed on the other replicas. However, a query to change columns in a replicated table can be interrupted, and all actions will be performed asynchronously.
:::note
Please be careful when changing a Nullable column to Non-Nullable. Make sure it doesn't have any NULL values, otherwise it will cause problems when reading from it. In that case, the workaround would be to Kill the mutation and revert the column back to Nullable type.
:::
## MODIFY COLUMN REMOVE {#modify-column-remove}

Removes one of the column properties: `DEFAULT`, `ALIAS`, `MATERIALIZED`, `CODEC`, `COMMENT`, `TTL`, `SETTINGS`.

Syntax:

```sql
ALTER TABLE table_name MODIFY COLUMN column_name REMOVE property;
```

**Example**

Remove TTL:

```sql
ALTER TABLE table_with_ttl MODIFY COLUMN column_ttl REMOVE TTL;
```

**See Also**

- REMOVE TTL.
## MODIFY COLUMN MODIFY SETTING {#modify-column-modify-setting}

Modify a column setting.

Syntax:

```sql
ALTER TABLE table_name MODIFY COLUMN column_name MODIFY SETTING name=value,...;
```

**Example**

Modify the column's `max_compress_block_size` to `1MB`:

```sql
ALTER TABLE table_name MODIFY COLUMN column_name MODIFY SETTING max_compress_block_size = 1048576;
```
## MODIFY COLUMN RESET SETTING {#modify-column-reset-setting}

Resets a column setting. This also removes the setting declaration in the column expression of the table's CREATE query.

Syntax:

```sql
ALTER TABLE table_name MODIFY COLUMN column_name RESET SETTING name,...;
```
**Example**

Reset the column setting `max_compress_block_size` to its default value:

```sql
ALTER TABLE table_name MODIFY COLUMN column_name RESET SETTING max_compress_block_size;
```
MATERIALIZE COLUMN {#materialize-column}
Materializes a column with a `DEFAULT` or `MATERIALIZED` value expression. When adding a materialized column using `ALTER TABLE table_name ADD COLUMN column_name MATERIALIZED`, existing rows without materialized values are not automatically filled. The `MATERIALIZE COLUMN` statement can be used to rewrite existing column data after a `DEFAULT` or `MATERIALIZED` expression has been added or updated (which only updates the metadata but does not change existing data). Note that materializing a column in the sort key is an invalid operation because it could break the sort order.
Implemented as a mutation.
For columns with a new or updated `MATERIALIZED` value expression, all existing rows are rewritten.
For columns with a new or updated `DEFAULT` value expression, the behavior depends on the ClickHouse version:
- In ClickHouse < v24.2, all existing rows are rewritten.
- ClickHouse >= v24.2 distinguishes whether a row value in a column with a `DEFAULT` value expression was explicitly specified when it was inserted, or not (i.e., calculated from the `DEFAULT` value expression). If the value was explicitly specified, ClickHouse keeps it as is. If the value was calculated, ClickHouse changes it to the new or updated `MATERIALIZED` value expression.
Syntax:
sql
ALTER TABLE [db.]table [ON CLUSTER cluster] MATERIALIZE COLUMN col [IN PARTITION partition | IN PARTITION ID 'partition_id'];
- If you specify a PARTITION, the column is materialized only within the specified partition.
Example
```sql
DROP TABLE IF EXISTS tmp;
SET mutations_sync = 2;
CREATE TABLE tmp (x Int64) ENGINE = MergeTree() ORDER BY tuple() PARTITION BY tuple();
INSERT INTO tmp SELECT * FROM system.numbers LIMIT 5;
ALTER TABLE tmp ADD COLUMN s String MATERIALIZED toString(x);
ALTER TABLE tmp MATERIALIZE COLUMN s;
SELECT groupArray(x), groupArray(s) FROM (select x,s from tmp order by x);
┌─groupArray(x)─┬─groupArray(s)─────────┐
│ [0,1,2,3,4]   │ ['0','1','2','3','4'] │
└───────────────┴───────────────────────┘
ALTER TABLE tmp MODIFY COLUMN s String MATERIALIZED toString(round(100/x));
INSERT INTO tmp SELECT * FROM system.numbers LIMIT 5,5;
SELECT groupArray(x), groupArray(s) FROM tmp;
┌─groupArray(x)─────────┬─groupArray(s)──────────────────────────────────┐
│ [0,1,2,3,4,5,6,7,8,9] │ ['0','1','2','3','4','20','17','14','12','11'] │
└───────────────────────┴────────────────────────────────────────────────┘
ALTER TABLE tmp MATERIALIZE COLUMN s;
SELECT groupArray(x), groupArray(s) FROM tmp;
┌─groupArray(x)─────────┬─groupArray(s)─────────────────────────────────────────┐
│ [0,1,2,3,4,5,6,7,8,9] │ ['inf','100','50','33','25','20','17','14','12','11'] │
└───────────────────────┴───────────────────────────────────────────────────────┘
```
See Also
MATERIALIZED.
Limitations {#limitations}
The `ALTER` query lets you create and delete separate elements (columns) in nested data structures, but not whole nested data structures. To add a nested data structure, you can add columns with a name like `name.nested_name` and the type `Array(T)`. A nested data structure is equivalent to multiple array columns with a name that has the same prefix before the dot.
There is no support for deleting columns in the primary key or the sampling key (columns that are used in the `ENGINE` expression). Changing the type for columns that are included in the primary key is only possible if this change does not cause the data to be modified (for example, you are allowed to add values to an Enum or to change a type from `DateTime` to `UInt32`).
If the `ALTER` query is not sufficient to make the table changes you need, you can create a new table, copy the data to it using the `INSERT SELECT` query, then switch the tables using the `RENAME` query and delete the old table.
The `ALTER` query blocks all reads and writes for the table. In other words, if a long `SELECT` is running at the time of the `ALTER` query, the `ALTER` query will wait for it to complete. At the same time, all new queries to the same table will wait while this `ALTER` is running.
For tables that do not store data themselves (such as `Merge` and `Distributed`), `ALTER` just changes the table structure and does not change the structure of subordinate tables. For example, when running `ALTER` for a `Distributed` table, you will also need to run `ALTER` for the tables on all remote servers.
description: 'Documentation for Manipulating SAMPLE BY expression'
sidebar_label: 'SAMPLE BY'
sidebar_position: 41
slug: /sql-reference/statements/alter/sample-by
title: 'Manipulating Sampling-Key Expressions'
doc_type: 'reference'
Manipulating SAMPLE BY expression
The following operations are available:
MODIFY {#modify}
sql
ALTER TABLE [db].name [ON CLUSTER cluster] MODIFY SAMPLE BY new_expression
The command changes the sampling key of the table to `new_expression` (an expression or a tuple of expressions). The primary key must contain the new sample key.
REMOVE {#remove}
sql
ALTER TABLE [db].name [ON CLUSTER cluster] REMOVE SAMPLE BY
The command removes the sampling key of the table.
The `MODIFY` and `REMOVE` commands are lightweight in the sense that they only change metadata or remove files.
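As an illustrative sketch of how these commands combine (table and columns are hypothetical; the sampling key must be part of the primary key):

```sql
CREATE TABLE hits
(
    `CounterID` UInt32,
    `EventDate` Date,
    `UserID` UInt64
)
ENGINE = MergeTree
ORDER BY (CounterID, EventDate, intHash32(UserID))
SAMPLE BY intHash32(UserID);

-- Remove the sampling key; SELECT ... SAMPLE will no longer work on this table:
ALTER TABLE hits REMOVE SAMPLE BY;

-- Re-add it later with MODIFY; valid because intHash32(UserID) is still in the primary key:
ALTER TABLE hits MODIFY SAMPLE BY intHash32(UserID);
```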
:::note
It only works for tables in the MergeTree family (including replicated tables).
:::
description: 'Documentation for Manipulating Key Expressions'
sidebar_label: 'ORDER BY'
sidebar_position: 41
slug: /sql-reference/statements/alter/order-by
title: 'Manipulating Key Expressions'
doc_type: 'reference'
Manipulating Key Expressions
sql
ALTER TABLE [db].name [ON CLUSTER cluster] MODIFY ORDER BY new_expression
The command changes the sorting key of the table to `new_expression` (an expression or a tuple of expressions). The primary key remains the same.
The command is lightweight in the sense that it only changes metadata. To keep the property that data part rows are ordered by the sorting-key expression, you cannot add expressions containing existing columns to the sorting key (only columns added by the `ADD COLUMN` command in the same `ALTER` query, without a default column value).
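To illustrate the restriction, a hedged sketch (table and column names are hypothetical): a column may be appended to the sorting key only in the same `ALTER` that adds it, and without a default value:

```sql
CREATE TABLE kv
(
    `id` UInt64,
    `ts` DateTime
)
ENGINE = MergeTree
ORDER BY id;

-- Allowed: the new column is added in the same ALTER and has no default value,
-- so existing parts stay correctly ordered (the column is effectively constant there).
ALTER TABLE kv
    ADD COLUMN category String,
    MODIFY ORDER BY (id, category);

-- Not allowed: ts already exists and holds data, so old parts could violate the new order.
-- ALTER TABLE kv MODIFY ORDER BY (id, ts);
```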
:::note
It only works for tables in the MergeTree family (including replicated tables).
:::
description: 'Documentation for Manipulating Column Statistics'
sidebar_label: 'STATISTICS'
sidebar_position: 45
slug: /sql-reference/statements/alter/statistics
title: 'Manipulating Column Statistics'
doc_type: 'reference'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
Manipulating Column Statistics
The following operations are available:
`ALTER TABLE [db].table ADD STATISTICS [IF NOT EXISTS] (column list) TYPE (type list)` - Adds a statistics description to the table's metadata.
`ALTER TABLE [db].table MODIFY STATISTICS (column list) TYPE (type list)` - Modifies the statistics description in the table's metadata.
`ALTER TABLE [db].table DROP STATISTICS [IF EXISTS] (column list)` - Removes statistics from the metadata of the specified columns and deletes all statistics objects in all parts for the specified columns.
`ALTER TABLE [db].table CLEAR STATISTICS [IF EXISTS] (column list)` - Deletes all statistics objects in all parts for the specified columns. Statistics objects can be rebuilt using `ALTER TABLE MATERIALIZE STATISTICS`.
`ALTER TABLE [db.]table MATERIALIZE STATISTICS (ALL | [IF EXISTS] (column list))` - Rebuilds the statistics for columns. Implemented as a mutation.
The first two commands are lightweight in the sense that they only change metadata or remove files. They are also replicated, syncing statistics metadata via ZooKeeper.
Example {#example}
Adding two statistics types to two columns:
sql
ALTER TABLE t1 MODIFY STATISTICS c, d TYPE TDigest, Uniq;
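A fuller sketch of the lifecycle (table and columns are hypothetical, and the `allow_experimental_statistics` setting is assumed to be required while the feature is experimental):

```sql
SET allow_experimental_statistics = 1;  -- assumption: needed while statistics are experimental

CREATE TABLE t1
(
    `a` Int64,
    `c` Float64,
    `d` UInt64
)
ENGINE = MergeTree
ORDER BY a;

ALTER TABLE t1 ADD STATISTICS IF NOT EXISTS c, d TYPE TDigest, Uniq;
ALTER TABLE t1 MATERIALIZE STATISTICS c, d;  -- rebuilds objects in existing parts (a mutation)
ALTER TABLE t1 DROP STATISTICS IF EXISTS c, d;
```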
:::note
Statistics are supported only for *MergeTree engine tables (including replicated variants).
:::
description: 'Documentation for Manipulating Projections'
sidebar_label: 'PROJECTION'
sidebar_position: 49
slug: /sql-reference/statements/alter/projection
title: 'Projections'
doc_type: 'reference'
Projections store data in a format that optimizes query execution. This feature is useful for:
- Running queries on a column that is not a part of the primary key
- Pre-aggregating columns, which reduces both computation and IO
You can define one or more projections for a table, and during query analysis ClickHouse will select the projection with the least data to scan, without modifying the query provided by the user.
:::note Disk usage
Projections internally create a new hidden table, which means that more IO and disk space are required.
For example, if the projection defines a different primary key, all the data from the original table will be duplicated.
:::
You can see more technical details about how projections work internally on this page.
Example filtering without using primary keys {#example-filtering-without-using-primary-keys}
Creating the table:
sql
CREATE TABLE visits_order
(
`user_id` UInt64,
`user_name` String,
`pages_visited` Nullable(Float64),
`user_agent` String
)
ENGINE = MergeTree()
PRIMARY KEY user_agent
Using `ALTER TABLE`, we could add the Projection to an existing table:
```sql
ALTER TABLE visits_order ADD PROJECTION user_name_projection (
SELECT
*
ORDER BY user_name
)
ALTER TABLE visits_order MATERIALIZE PROJECTION user_name_projection
```
Inserting the data:
```sql
INSERT INTO visits_order SELECT
number,
'test',
1.5 * (number / 2),
'Android'
FROM numbers(1, 100);
```
The Projection will allow us to filter by `user_name` quickly, even though `user_name` was not defined as a `PRIMARY KEY` in the original table. At query time ClickHouse determines that less data will be processed if the projection is used, as the data is ordered by `user_name`.
sql
SELECT
*
FROM visits_order
WHERE user_name='test'
LIMIT 2
To verify that a query is using the projection, we could review the `system.query_log` table. In the `projections` field we have the name of the projection used, or empty if none has been used:
sql
SELECT query, projections FROM system.query_log WHERE query_id='<query_id>'
Example pre-aggregation query {#example-pre-aggregation-query}
Creating the table with the Projection:
sql
CREATE TABLE visits
(
`user_id` UInt64,
`user_name` String,
`pages_visited` Nullable(Float64),
`user_agent` String,
PROJECTION projection_visits_by_user
(
SELECT
user_agent,
sum(pages_visited)
GROUP BY user_id, user_agent
)
)
ENGINE = MergeTree()
ORDER BY user_agent
Inserting the data:
sql
INSERT INTO visits SELECT
number,
'test',
1.5 * (number / 2),
'Android'
FROM numbers(1, 100);
sql
INSERT INTO visits SELECT
number,
'test',
1. * (number / 2),
'IOS'
FROM numbers(100, 500);
We will execute a first query using `GROUP BY` with the field `user_agent`; this query will not use the projection defined, as the pre-aggregation does not match.
sql
SELECT
user_agent,
count(DISTINCT user_id)
FROM visits
GROUP BY user_agent
To use the projection we could execute queries that select part of, or all of, the pre-aggregation and `GROUP BY` fields.
sql
SELECT
user_agent
FROM visits
WHERE user_id > 50 AND user_id < 150
GROUP BY user_agent
sql
SELECT
user_agent,
sum(pages_visited)
FROM visits
GROUP BY user_agent
As mentioned before, we could review the `system.query_log` table. In the `projections` field we have the name of the projection used, or empty if none has been used:
sql
SELECT query, projections FROM system.query_log WHERE query_id='<query_id>'
Normal projection with `_part_offset` field {#normal-projection-with-part-offset-field}
Creating a table with a normal projection that utilizes the `_part_offset` field:
sql
CREATE TABLE events
(
`event_time` DateTime,
`event_id` UInt64,
`user_id` UInt64,
`huge_string` String,
PROJECTION order_by_user_id
(
SELECT
_part_offset
ORDER BY user_id
)
)
ENGINE = MergeTree()
ORDER BY (event_id);
Inserting some sample data:
sql
INSERT INTO events SELECT * FROM generateRandom() LIMIT 100000;
Using `_part_offset` as a secondary index {#normal-projection-secondary-index}
The `_part_offset` field preserves its value through merges and mutations, making it valuable for secondary indexing. We can leverage this in queries:
sql
SELECT
count()
FROM events
WHERE _part_starting_offset + _part_offset IN (
SELECT _part_starting_offset + _part_offset
FROM events
WHERE user_id = 42
)
SETTINGS enable_shared_storage_snapshot_in_query = 1
Manipulating Projections
The following operations with projections are available:
ADD PROJECTION {#add-projection}
`ALTER TABLE [db.]name [ON CLUSTER cluster] ADD PROJECTION [IF NOT EXISTS] name ( SELECT <COLUMN LIST EXPR> [GROUP BY] [ORDER BY] )` - Adds a projection description to the table's metadata.
DROP PROJECTION {#drop-projection}
`ALTER TABLE [db.]name [ON CLUSTER cluster] DROP PROJECTION [IF EXISTS] name` - Removes the projection description from the table's metadata and deletes projection files from disk. Implemented as a mutation.
MATERIALIZE PROJECTION {#materialize-projection}
`ALTER TABLE [db.]table [ON CLUSTER cluster] MATERIALIZE PROJECTION [IF EXISTS] name [IN PARTITION partition_name]` - The query rebuilds the projection `name` in the partition `partition_name`. Implemented as a mutation.
CLEAR PROJECTION {#clear-projection}
`ALTER TABLE [db.]table [ON CLUSTER cluster] CLEAR PROJECTION [IF EXISTS] name [IN PARTITION partition_name]` - Deletes projection files from disk without removing the description. Implemented as a mutation.
The `ADD`, `DROP` and `CLEAR` commands are lightweight in the sense that they only change metadata or remove files. They are also replicated, syncing projection metadata via ClickHouse Keeper or ZooKeeper.
:::note
Projection manipulation is supported only for tables with the *MergeTree engine (including replicated variants).
:::
description: 'Documentation for Apply mask of deleted rows'
sidebar_label: 'APPLY DELETED MASK'
sidebar_position: 46
slug: /sql-reference/statements/alter/apply-deleted-mask
title: 'Apply mask of deleted rows'
doc_type: 'reference'
Apply mask of deleted rows
sql
ALTER TABLE [db].name [ON CLUSTER cluster] APPLY DELETED MASK [IN PARTITION partition_id]
The command applies the mask created by lightweight delete and forcefully removes rows marked as deleted from disk. This command is a heavyweight mutation, and it is semantically equal to the query `ALTER TABLE [db].name DELETE WHERE _row_exists = 0`.
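A hedged end-to-end sketch (table name and values are hypothetical):

```sql
SET mutations_sync = 2;  -- assumption: wait for mutations to finish, convenient for a demo

CREATE TABLE orders (id UInt64) ENGINE = MergeTree ORDER BY id;
INSERT INTO orders SELECT number FROM numbers(10);

-- Lightweight delete only marks rows via the _row_exists mask:
DELETE FROM orders WHERE id < 5;

-- Physically rewrite the parts, removing the masked rows from disk:
ALTER TABLE orders APPLY DELETED MASK;
```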
:::note
It only works for tables in the MergeTree family (including replicated tables).
:::
See also
Lightweight deletes
Heavyweight deletes
description: 'Documentation for ALTER NAMED COLLECTION'
sidebar_label: 'NAMED COLLECTION'
slug: /sql-reference/statements/alter/named-collection
title: 'ALTER NAMED COLLECTION'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
ALTER NAMED COLLECTION
This query modifies already existing named collections.
Syntax
sql
ALTER NAMED COLLECTION [IF EXISTS] name [ON CLUSTER cluster]
[ SET
key_name1 = 'some value' [[NOT] OVERRIDABLE],
key_name2 = 'some value' [[NOT] OVERRIDABLE],
key_name3 = 'some value' [[NOT] OVERRIDABLE],
... ] |
[ DELETE key_name4, key_name5, ... ]
Example
```sql
CREATE NAMED COLLECTION foobar AS a = '1' NOT OVERRIDABLE, b = '2';
ALTER NAMED COLLECTION foobar SET a = '2' OVERRIDABLE, c = '3';
ALTER NAMED COLLECTION foobar DELETE b;
```
description: 'Documentation for ALTER TABLE ... MODIFY COMMENT which allows adding, modifying, or removing table comments'
sidebar_label: 'ALTER TABLE ... MODIFY COMMENT'
sidebar_position: 51
slug: /sql-reference/statements/alter/comment
title: 'ALTER TABLE ... MODIFY COMMENT'
keywords: ['ALTER TABLE', 'MODIFY COMMENT']
doc_type: 'reference'
ALTER TABLE ... MODIFY COMMENT
Adds, modifies, or removes a table comment, regardless of whether it was set before or not. The comment change is reflected in both `system.tables` and the `SHOW CREATE TABLE` query.
Syntax {#syntax}
sql
ALTER TABLE [db].name [ON CLUSTER cluster] MODIFY COMMENT 'Comment'
Examples {#examples}
To create a table with a comment:
sql
CREATE TABLE table_with_comment
(
`k` UInt64,
`s` String
)
ENGINE = Memory()
COMMENT 'The temporary table';
To modify the table comment:
sql
ALTER TABLE table_with_comment
MODIFY COMMENT 'new comment on a table';
To view the modified comment:
sql title="Query"
SELECT comment
FROM system.tables
WHERE database = currentDatabase() AND name = 'table_with_comment';
text title="Response"
┌─comment────────────────┐
│ new comment on a table │
└────────────────────────┘
To remove the table comment:
sql
ALTER TABLE table_with_comment MODIFY COMMENT '';
To verify that the comment was removed:
sql title="Query"
SELECT comment
FROM system.tables
WHERE database = currentDatabase() AND name = 'table_with_comment';
text title="Response"
┌─comment─┐
│         │
└─────────┘
Caveats {#caveats}
For Replicated tables, the comment can be different on different replicas.
Modifying the comment applies to a single replica.
The feature is available since version 23.9. It does not work in previous
ClickHouse versions.
Related content {#related-content}
COMMENT clause
ALTER DATABASE ... MODIFY COMMENT
description: 'Documentation for Partition'
sidebar_label: 'PARTITION'
sidebar_position: 38
slug: /sql-reference/statements/alter/partition
title: 'Manipulating Partitions and Parts'
doc_type: 'reference'
The following operations with partitions are available:
DETACH PARTITION\|PART — Moves a partition or part to the `detached` directory and forgets it.
DROP PARTITION\|PART — Deletes a partition or part.
DROP DETACHED PARTITION\|PART — Deletes a part or all parts of a partition from `detached`.
FORGET PARTITION — Deletes partition metadata from ZooKeeper if the partition is empty.
ATTACH PARTITION\|PART — Adds a partition or part from the `detached` directory to the table.
ATTACH PARTITION FROM — Copies the data partition from one table to another and adds it.
REPLACE PARTITION — Copies the data partition from one table to another and replaces the existing partition.
MOVE PARTITION TO TABLE — Moves the data partition from one table to another.
CLEAR COLUMN IN PARTITION — Resets the value of a specified column in a partition.
CLEAR INDEX IN PARTITION — Resets the specified secondary index in a partition.
FREEZE PARTITION — Creates a backup of a partition.
UNFREEZE PARTITION — Removes a backup of a partition.
FETCH PARTITION\|PART — Downloads a part or partition from another server.
MOVE PARTITION\|PART — Moves a partition or data part to another disk or volume.
UPDATE IN PARTITION — Updates data inside the partition by condition.
DELETE IN PARTITION — Deletes data inside the partition by condition.
REWRITE PARTS — Rewrites parts in the table (or a specific partition) completely.
DETACH PARTITION\|PART {#detach-partitionpart}
sql
ALTER TABLE table_name [ON CLUSTER cluster] DETACH PARTITION|PART partition_expr
Moves all data for the specified partition to the `detached` directory. The server forgets about the detached data partition as if it does not exist. The server will not know about this data until you make the `ATTACH` query.
Example:
sql
ALTER TABLE mt DETACH PARTITION '2020-11-21';
ALTER TABLE mt DETACH PART 'all_2_2_0';
Read about setting the partition expression in the section How to set the partition expression.
After the query is executed, you can do whatever you want with the data in the `detached` directory — delete it from the file system, or just leave it.
This query is replicated — it moves the data to the `detached` directory on all replicas. Note that you can execute this query only on a leader replica. To find out if a replica is a leader, perform the `SELECT` query to the `system.replicas` table. Alternatively, it is easier to make a `DETACH` query on all replicas — all the replicas throw an exception, except the leader replicas (as multiple leaders are allowed).
DROP PARTITION\|PART {#drop-partitionpart}
sql
ALTER TABLE table_name [ON CLUSTER cluster] DROP PARTITION|PART partition_expr
Deletes the specified partition from the table. This query tags the partition as inactive and deletes data completely, approximately in 10 minutes.
Read about setting the partition expression in the section How to set the partition expression.
The query is replicated — it deletes data on all replicas.
Example:
sql
ALTER TABLE mt DROP PARTITION '2020-11-21';
ALTER TABLE mt DROP PART 'all_4_4_0';
DROP DETACHED PARTITION\|PART {#drop-detached-partitionpart}
sql
ALTER TABLE table_name [ON CLUSTER cluster] DROP DETACHED PARTITION|PART ALL|partition_expr
Removes the specified part or all parts of the specified partition from `detached`.
Read more about setting the partition expression in the section How to set the partition expression.
FORGET PARTITION {#forget-partition}
sql
ALTER TABLE table_name FORGET PARTITION partition_expr
Removes all metadata about an empty partition from ZooKeeper. The query fails if the partition is not empty or unknown. Make sure to execute this only for partitions that will never be used again.
Read about setting the partition expression in the section How to set the partition expression.
Example:
sql
ALTER TABLE mt FORGET PARTITION '20201121';
ATTACH PARTITION\|PART {#attach-partitionpart}
sql
ALTER TABLE table_name ATTACH PARTITION|PART partition_expr
Adds data to the table from the `detached` directory. It is possible to add data for an entire partition or for a separate part. Examples:
sql
ALTER TABLE visits ATTACH PARTITION 201901;
ALTER TABLE visits ATTACH PART 201901_2_2_0;
Read more about setting the partition expression in the section How to set the partition expression.
This query is replicated. The replica-initiator checks whether there is data in the `detached` directory. If data exists, the query checks its integrity. If everything is correct, the query adds the data to the table.
If the non-initiator replica, receiving the attach command, finds the part with the correct checksums in its own `detached` folder, it attaches the data without fetching it from other replicas. If there is no part with the correct checksums, the data is downloaded from any replica having the part.
You can put data in the `detached` directory on one replica and use the `ALTER ... ATTACH` query to add it to the table on all replicas.
ATTACH PARTITION FROM {#attach-partition-from}
sql
ALTER TABLE table2 [ON CLUSTER cluster] ATTACH PARTITION partition_expr FROM table1
This query copies the data partition from `table1` to `table2`.
Note that:
- Data will be deleted neither from `table1` nor from `table2`.
- `table1` may be a temporary table.
For the query to run successfully, the following conditions must be met:
- Both tables must have the same structure.
- Both tables must have the same partition key, the same order by key and the same primary key.
- Both tables must have the same storage policy.
- The destination table must include all indices and projections from the source table. If the `enforce_index_structure_match_on_partition_manipulation` setting is enabled in the destination table, the indices and projections must be identical. Otherwise, the destination table can have a superset of the source table's indices and projections.
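A minimal sketch, assuming two hypothetical tables that satisfy the conditions above:

```sql
CREATE TABLE t1 (p UInt8, v UInt64) ENGINE = MergeTree PARTITION BY p ORDER BY v;
CREATE TABLE t2 AS t1;  -- same structure, keys and storage policy

INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);

-- Copy partition 1 into t2; t1 keeps its data:
ALTER TABLE t2 ATTACH PARTITION 1 FROM t1;
```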
REPLACE PARTITION {#replace-partition}
sql
ALTER TABLE table2 [ON CLUSTER cluster] REPLACE PARTITION partition_expr FROM table1
This query copies the data partition from `table1` to `table2` and replaces the existing partition in `table2`. The operation is atomic.
Note that:
- Data won't be deleted from `table1`.
- `table1` may be a temporary table.
For the query to run successfully, the following conditions must be met:
- Both tables must have the same structure.
- Both tables must have the same partition key, the same order by key and the same primary key.
- Both tables must have the same storage policy.
- The destination table must include all indices and projections from the source table. If the `enforce_index_structure_match_on_partition_manipulation` setting is enabled in the destination table, the indices and projections must be identical. Otherwise, the destination table can have a superset of the source table's indices and projections.
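A minimal sketch (hypothetical tables) showing that the target partition in the destination is swapped atomically:

```sql
CREATE TABLE t1 (p UInt8, v UInt64) ENGINE = MergeTree PARTITION BY p ORDER BY v;
CREATE TABLE t2 AS t1;

INSERT INTO t1 VALUES (1, 100);
INSERT INTO t2 VALUES (1, 1), (1, 2);

-- Partition 1 in t2 now contains only the row copied from t1; t1 is unchanged:
ALTER TABLE t2 REPLACE PARTITION 1 FROM t1;
```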
MOVE PARTITION TO TABLE {#move-partition-to-table}
sql
ALTER TABLE table_source [ON CLUSTER cluster] MOVE PARTITION partition_expr TO TABLE table_dest
This query moves the data partition from `table_source` to `table_dest`, deleting the data from `table_source`.
For the query to run successfully, the following conditions must be met:
- Both tables must have the same structure.
- Both tables must have the same partition key, the same order by key and the same primary key.
- Both tables must have the same storage policy.
- Both tables must be in the same engine family (replicated or non-replicated).
- The destination table must include all indices and projections from the source table. If the `enforce_index_structure_match_on_partition_manipulation` setting is enabled in the destination table, the indices and projections must be identical. Otherwise, the destination table can have a superset of the source table's indices and projections.
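A minimal sketch (hypothetical tables); unlike `REPLACE PARTITION`, the source table loses the moved partition:

```sql
CREATE TABLE src (p UInt8, v UInt64) ENGINE = MergeTree PARTITION BY p ORDER BY v;
CREATE TABLE dst AS src;

INSERT INTO src VALUES (1, 10), (2, 20);

-- Partition 1 disappears from src and appears in dst:
ALTER TABLE src MOVE PARTITION 1 TO TABLE dst;
```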
CLEAR COLUMN IN PARTITION {#clear-column-in-partition}
sql
ALTER TABLE table_name [ON CLUSTER cluster] CLEAR COLUMN column_name IN PARTITION partition_expr
Resets all values in the specified column in a partition. If the `DEFAULT` clause was specified when creating the table, this query sets the column value to the specified default value.
Example:
sql
ALTER TABLE visits CLEAR COLUMN hour in PARTITION 201902
FREEZE PARTITION {#freeze-partition}
sql
ALTER TABLE table_name [ON CLUSTER cluster] FREEZE [PARTITION partition_expr] [WITH NAME 'backup_name']
This query creates a local backup of a specified partition. If the `PARTITION` clause is omitted, the query creates a backup of all partitions at once.
:::note
The entire backup process is performed without stopping the server.
:::
Note that for old-style tables you can specify the prefix of the partition name (for example, `2019`) — then the query creates the backup for all the corresponding partitions. Read about setting the partition expression in the section How to set the partition expression.
At the time of execution, for a data snapshot, the query creates hardlinks to the table data. Hardlinks are placed in the directory `/var/lib/clickhouse/shadow/N/...`, where:
- `/var/lib/clickhouse/` is the working ClickHouse directory specified in the config.
- `N` is the incremental number of the backup.
- If the `WITH NAME` parameter is specified, the value of the `'backup_name'` parameter is used instead of the incremental number.
:::note
If you use a set of disks for data storage in a table, the `shadow/N` directory appears on every disk, storing the data parts that are matched by the `PARTITION` expression.
:::
The same structure of directories is created inside the backup as inside
/var/lib/clickhouse/
. The query performs
chmod
for all files, forbidding writing into them.
After creating the backup, you can copy the data from
/var/lib/clickhouse/shadow/
to the remote server and then delete it from the local server. Note that the
ALTER t FREEZE PARTITION
query is not replicated. It creates a local backup only on the local server.
The query creates backup almost instantly (but first it waits for the current queries to the corresponding table to finish running).
ALTER TABLE t FREEZE PARTITION
copies only the data, not table metadata. To make a backup of table metadata, copy the file
/var/lib/clickhouse/metadata/database/table.sql
To restore data from a backup, do the following:

1. Create the table if it does not exist. To view the query, use the .sql file (replace `ATTACH` in it with `CREATE`).
2. Copy the data from the `data/database/table/` directory inside the backup to the `/var/lib/clickhouse/data/database/table/detached/` directory.
3. Run `ALTER TABLE t ATTACH PARTITION` queries to add the data to the table.

Restoring from a backup does not require stopping the server.
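As a minimal end-to-end sketch of the backup and restore steps above (the table name, partition, and backup name are illustrative):

```sql
-- Create a named local backup of one partition;
-- hardlinks appear under /var/lib/clickhouse/shadow/backup_201902/
ALTER TABLE visits FREEZE PARTITION 201902 WITH NAME 'backup_201902';

-- ...later, after copying the backed-up parts into
-- /var/lib/clickhouse/data/<db>/visits/detached/ on the target server...

-- Attach the restored partition
ALTER TABLE visits ATTACH PARTITION 201902;
```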
The query processes parts in parallel; the number of threads is regulated by the `max_threads` setting.

For more information about backups and restoring data, see the Data Backup section.
UNFREEZE PARTITION {#unfreeze-partition}

```sql
ALTER TABLE table_name [ON CLUSTER cluster] UNFREEZE [PARTITION 'part_expr'] WITH NAME 'backup_name'
```
Removes frozen partitions with the specified name from the disk. If the `PARTITION` clause is omitted, the query removes the backups of all partitions at once.
CLEAR INDEX IN PARTITION {#clear-index-in-partition}

```sql
ALTER TABLE table_name [ON CLUSTER cluster] CLEAR INDEX index_name IN PARTITION partition_expr
```

The query works similarly to `CLEAR COLUMN`, but it resets an index instead of column data.
FETCH PARTITION|PART {#fetch-partitionpart}

```sql
ALTER TABLE table_name [ON CLUSTER cluster] FETCH PARTITION|PART partition_expr FROM 'path-in-zookeeper'
```

Downloads a partition from another server. This query only works for replicated tables.

The query does the following:

1. Downloads the partition|part from the specified shard. In 'path-in-zookeeper' you must specify a path to the shard in ZooKeeper.
2. Puts the downloaded data into the `detached` directory of the `table_name` table. Use the `ATTACH PARTITION|PART` query to add the data to the table.
For example:

FETCH PARTITION:

```sql
ALTER TABLE users FETCH PARTITION 201902 FROM '/clickhouse/tables/01-01/visits';
ALTER TABLE users ATTACH PARTITION 201902;
```

FETCH PART:

```sql
ALTER TABLE users FETCH PART 201901_2_2_0 FROM '/clickhouse/tables/01-01/visits';
ALTER TABLE users ATTACH PART 201901_2_2_0;
```
Note that:

- The `ALTER ... FETCH PARTITION|PART` query isn't replicated. It places the part or partition in the `detached` directory only on the local server.
- The `ALTER TABLE ... ATTACH` query is replicated. It adds the data to all replicas. The data is added to one of the replicas from the `detached` directory, and to the others from neighboring replicas.

Before downloading, the system checks if the partition exists and the table structure matches. The most appropriate replica is selected automatically from the healthy replicas.

Although the query is called `ALTER TABLE`, it does not change the table structure and does not immediately change the data available in the table.
MOVE PARTITION|PART {#move-partitionpart}

Moves partitions or data parts to another volume or disk for `MergeTree`-engine tables. See Using Multiple Block Devices for Data Storage.

```sql
ALTER TABLE table_name [ON CLUSTER cluster] MOVE PARTITION|PART partition_expr TO DISK|VOLUME 'disk_name'
```
The `ALTER TABLE t MOVE` query:

- Is not replicated, because different replicas can have different storage policies.
- Returns an error if the specified disk or volume is not configured. The query also returns an error if the conditions for data moving specified in the storage policy cannot be applied.
- Can return an error when the data to be moved has already been moved by a background process, by a concurrent `ALTER TABLE t MOVE` query, or as a result of background data merging. The user does not need to perform any additional actions in this case.
Example:

```sql
ALTER TABLE hits MOVE PART '20190301_14343_16206_438' TO VOLUME 'slow'
ALTER TABLE hits MOVE PARTITION '2019-09-01' TO DISK 'fast_ssd'
```
UPDATE IN PARTITION {#update-in-partition}

Manipulates data in the specified partition matching the specified filtering expression. Implemented as a mutation.

Syntax:

```sql
ALTER TABLE [db.]table [ON CLUSTER cluster] UPDATE column1 = expr1 [, ...] [IN PARTITION partition_expr] WHERE filter_expr
```
Example {#example}
```sql
-- using partition name
ALTER TABLE mt UPDATE x = x + 1 IN PARTITION 2 WHERE p = 2;
-- using partition id
ALTER TABLE mt UPDATE x = x + 1 IN PARTITION ID '2' WHERE p = 2;
```
See Also {#see-also}

- UPDATE
DELETE IN PARTITION {#delete-in-partition}

Deletes data in the specified partition matching the specified filtering expression. Implemented as a mutation.

Syntax:

```sql
ALTER TABLE [db.]table [ON CLUSTER cluster] DELETE [IN PARTITION partition_expr] WHERE filter_expr
```
Example {#example-1}
```sql
-- using partition name
ALTER TABLE mt DELETE IN PARTITION 2 WHERE p = 2;
-- using partition id
ALTER TABLE mt DELETE IN PARTITION ID '2' WHERE p = 2;
```
REWRITE PARTS {#rewrite-parts}

Rewrites the parts from scratch, applying all current settings. This is useful because table-level settings like `use_const_adaptive_granularity` are applied only to newly written parts by default.

Example {#example-rewrite-parts}

```sql
ALTER TABLE mt REWRITE PARTS;
ALTER TABLE mt REWRITE PARTS IN PARTITION 2;
```
See Also {#see-also-1}

- DELETE
How to Set Partition Expression {#how-to-set-partition-expression}

You can specify the partition expression in `ALTER ... PARTITION` queries in different ways:

- As a value from the `partition` column of the `system.parts` table. For example, `ALTER TABLE visits DETACH PARTITION 201901`.
- Using the keyword `ALL`. It can be used only with DROP/DETACH/ATTACH/ATTACH FROM. For example, `ALTER TABLE visits ATTACH PARTITION ALL`.
- As a tuple of expressions or constants that matches (in types) the tuple of the table's partitioning keys. In the case of a single-element partitioning key, the expression should be wrapped in the `tuple (...)` function. For example, `ALTER TABLE visits DETACH PARTITION tuple(toYYYYMM(toDate('2019-01-25')))`.
- Using the partition ID. The partition ID is a string identifier of the partition (human-readable, if possible) that is used as the name of the partition in the file system and in ZooKeeper. The partition ID must be specified in the `PARTITION ID` clause, in single quotes. For example, `ALTER TABLE visits DETACH PARTITION ID '201901'`.
- In the `ALTER ATTACH PART` and `DROP DETACHED PART` queries, to specify the name of a part, use a string literal with a value from the `name` column of the `system.detached_parts` table. For example, `ALTER TABLE visits ATTACH PART '201901_1_1_0'`.

Usage of quotes when specifying the partition depends on the type of the partition expression. For example, for the `String` type, you have to specify its name in quotes (`'`). For the `Date` and `Int*` types, no quotes are needed.

All the rules above are also true for the `OPTIMIZE` query. If you need to specify the only partition when optimizing a non-partitioned table, set the expression `PARTITION tuple()`. For example:

```sql
OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL;
```

`IN PARTITION` specifies the partition to which the `UPDATE` or `DELETE` expressions are applied as a result of the `ALTER TABLE` query. New parts are created only from the specified partition. In this way, `IN PARTITION` helps to reduce the load when the table is divided into many partitions and you only need to update the data point-by-point.

Examples of `ALTER ... PARTITION` queries are demonstrated in the tests `00502_custom_partitioning_local` and `00502_custom_partitioning_replicated_zookeeper`.
description: 'Documentation for ALTER'
sidebar_label: 'ALTER'
sidebar_position: 35
slug: /sql-reference/statements/alter/
title: 'ALTER'
doc_type: 'reference'
ALTER
Most `ALTER TABLE` queries modify table settings or data:

| Modifier |
|----------------------|
| `COLUMN` |
| `PARTITION` |
| `DELETE` |
| `UPDATE` |
| `ORDER BY` |
| `INDEX` |
| `CONSTRAINT` |
| `TTL` |
| `STATISTICS` |
| `APPLY DELETED MASK` |

:::note
Most `ALTER TABLE` queries are supported only for `*MergeTree`, `Merge` and `Distributed` tables.
:::
These `ALTER` statements manipulate views:

| Statement | Description |
|-----------|-------------|
| `ALTER TABLE ... MODIFY QUERY` | Modifies a Materialized view structure. |
These `ALTER` statements modify entities related to role-based access control:

| Statement |
|--------------------|
| `USER` |
| `ROLE` |
| `QUOTA` |
| `ROW POLICY` |
| `SETTINGS PROFILE` |

| Statement | Description |
|-----------|-------------|
| `ALTER TABLE ... MODIFY COMMENT` | Adds, modifies, or removes comments to the table, regardless of whether they were set before or not. |
| `ALTER NAMED COLLECTION` | Modifies Named Collections. |
Mutations {#mutations}

`ALTER` queries that are intended to manipulate table data are implemented with a mechanism called "mutations", most notably `ALTER TABLE ... DELETE` and `ALTER TABLE ... UPDATE`. They are asynchronous background processes, similar to merges in `MergeTree` tables, that produce new "mutated" versions of parts.
For `*MergeTree` tables mutations execute by rewriting whole data parts. There is no atomicity: parts are substituted for mutated parts as soon as they are ready, and a `SELECT` query that started executing during a mutation will see data from parts that have already been mutated along with data from parts that have not been mutated yet.

Mutations are totally ordered by their creation order and are applied to each part in that order. Mutations are also partially ordered with `INSERT INTO` queries: data that was inserted into the table before the mutation was submitted will be mutated, and data that was inserted after that will not be mutated. Note that mutations do not block inserts in any way.
A mutation query returns immediately after the mutation entry is added (for replicated tables to ZooKeeper, for non-replicated tables to the filesystem). The mutation itself executes asynchronously using the system profile settings. To track the progress of mutations, you can use the `system.mutations` table. A mutation that was successfully submitted will continue to execute even if ClickHouse servers are restarted. There is no way to roll back a mutation once it is submitted, but if the mutation is stuck for some reason it can be cancelled with the `KILL MUTATION` query.

Entries for finished mutations are not deleted right away (the number of preserved entries is determined by the `finished_mutations_to_keep` storage engine parameter). Older mutation entries are deleted.
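For instance, progress can be inspected through `system.mutations` and a stuck mutation cancelled with `KILL MUTATION` (the database, table, and mutation ID below are illustrative):

```sql
-- List unfinished mutations for a table
SELECT mutation_id, command, parts_to_do, is_done
FROM system.mutations
WHERE database = 'default' AND table = 'mt' AND NOT is_done;

-- Cancel a stuck mutation by its ID
KILL MUTATION WHERE database = 'default' AND table = 'mt' AND mutation_id = 'mutation_3.txt';
```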
Synchronicity of ALTER Queries {#synchronicity-of-alter-queries}

For non-replicated tables, all `ALTER` queries are performed synchronously. For replicated tables, the query just adds instructions for the appropriate actions to ZooKeeper, and the actions themselves are performed as soon as possible. However, the query can wait for these actions to be completed on all the replicas.

For `ALTER` queries that create mutations (including, but not limited to, `UPDATE`, `DELETE`, `MATERIALIZE INDEX`, `MATERIALIZE PROJECTION`, `MATERIALIZE COLUMN`, `APPLY DELETED MASK`, `CLEAR STATISTIC`, `MATERIALIZE STATISTIC`), the synchronicity is defined by the `mutations_sync` setting.

For other `ALTER` queries which only modify the metadata, you can use the `alter_sync` setting to set up waiting.

You can specify how long (in seconds) to wait for inactive replicas to execute all `ALTER` queries with the `replication_wait_for_inactive_replica_timeout` setting.
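For example, a mutation-creating `ALTER` can be made to wait for completion before returning; a sketch (the table name is illustrative):

```sql
SET mutations_sync = 2;  -- 0: async (default), 1: wait for own replica, 2: wait for all replicas
ALTER TABLE mt DELETE WHERE id = 0;  -- returns only after the mutation finishes
```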
:::note
For all `ALTER` queries, if `alter_sync = 2` and some replicas are not active for more than the time specified in the `replication_wait_for_inactive_replica_timeout` setting, then an exception `UNFINISHED` is thrown.
:::
Related content {#related-content}

Blog: Handling Updates and Deletes in ClickHouse
description: 'Documentation for Settings Profile'
sidebar_label: 'SETTINGS PROFILE'
sidebar_position: 48
slug: /sql-reference/statements/alter/settings-profile
title: 'ALTER SETTINGS PROFILE'
doc_type: 'reference'
Changes settings profiles.

Syntax:

```sql
ALTER SETTINGS PROFILE [IF EXISTS] name1 [RENAME TO new_name |, name2 [,...]]
    [ON CLUSTER cluster_name]
    [DROP ALL PROFILES]
    [DROP ALL SETTINGS]
    [DROP SETTINGS variable [,...] ]
    [DROP PROFILES 'profile_name' [,...] ]
    [ADD|MODIFY SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | INHERIT 'profile_name'] [,...]
    [TO {{role1 | user1 [, role2 | user2 ...]} | NONE | ALL | ALL EXCEPT {role1 | user1 [, role2 | user2 ...]}}]
    [ADD PROFILES 'profile_name' [,...] ]
```
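A hedged example (the profile name, setting values, and role are illustrative):

```sql
ALTER SETTINGS PROFILE IF EXISTS web_profile
    MODIFY SETTINGS max_memory_usage = 10000000000 MIN 1000000 MAX 20000000000 READONLY
    TO ALL EXCEPT admin;
```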
description: 'Documentation for Manipulating Constraints'
sidebar_label: 'CONSTRAINT'
sidebar_position: 43
slug: /sql-reference/statements/alter/constraint
title: 'Manipulating Constraints'
doc_type: 'reference'
Manipulating Constraints

Constraints can be added or deleted using the following syntax:

```sql
ALTER TABLE [db].name [ON CLUSTER cluster] ADD CONSTRAINT [IF NOT EXISTS] constraint_name CHECK expression;
ALTER TABLE [db].name [ON CLUSTER cluster] DROP CONSTRAINT [IF EXISTS] constraint_name;
```

See more on constraints.

Queries will add or remove metadata about constraints from the table, so they are processed immediately.

:::tip
A constraint check will not be executed on existing data if it was added.
:::

All changes on replicated tables are broadcast to ZooKeeper and will be applied on other replicas as well.
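As a sketch (the table, constraint name, and expression are illustrative):

```sql
ALTER TABLE hits ADD CONSTRAINT IF NOT EXISTS non_negative_duration CHECK duration >= 0;
ALTER TABLE hits DROP CONSTRAINT IF EXISTS non_negative_duration;
```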
description: 'Documentation for ALTER TABLE ... DELETE Statement'
sidebar_label: 'DELETE'
sidebar_position: 39
slug: /sql-reference/statements/alter/delete
title: 'ALTER TABLE ... DELETE Statement'
doc_type: 'reference'
ALTER TABLE ... DELETE Statement

```sql
ALTER TABLE [db.]table [ON CLUSTER cluster] DELETE WHERE filter_expr
```

Deletes data matching the specified filtering expression. Implemented as a mutation.

:::note
The `ALTER TABLE` prefix makes this syntax different from most other systems supporting SQL. It is intended to signify that unlike similar queries in OLTP databases this is a heavy operation not designed for frequent use. `ALTER TABLE` is considered a heavyweight operation that requires the underlying data to be merged before it is deleted. For MergeTree tables, consider using the `DELETE FROM` query, which performs a lightweight delete and can be considerably faster.
:::

The `filter_expr` must be of type `UInt8`. The query deletes rows in the table for which this expression takes a non-zero value.

One query can contain several commands separated by commas.

The synchronicity of the query processing is defined by the `mutations_sync` setting. By default, it is asynchronous.
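For example, a mutation deleting all rows that match a `UInt8`-typed filter (the table and column names are illustrative):

```sql
ALTER TABLE visits DELETE WHERE duration = 0;
```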
See also

- Mutations
- Synchronicity of ALTER Queries
- mutations_sync setting

Related content {#related-content}

Blog: Handling Updates and Deletes in ClickHouse
description: 'Documentation for ALTER TABLE ... UPDATE Statements'
sidebar_label: 'UPDATE'
sidebar_position: 40
slug: /sql-reference/statements/alter/update
title: 'ALTER TABLE ... UPDATE Statements'
doc_type: 'reference'
ALTER TABLE ... UPDATE Statements

```sql
ALTER TABLE [db.]table [ON CLUSTER cluster] UPDATE column1 = expr1 [, ...] [IN PARTITION partition_id] WHERE filter_expr
```

Manipulates data matching the specified filtering expression. Implemented as a mutation.

:::note
The `ALTER TABLE` prefix makes this syntax different from most other systems supporting SQL. It is intended to signify that unlike similar queries in OLTP databases this is a heavy operation not designed for frequent use.
:::

The `filter_expr` must be of type `UInt8`. This query updates values of specified columns to the values of corresponding expressions in rows for which the `filter_expr` takes a non-zero value. Values are cast to the column type using the `CAST` operator. Updating columns that are used in the calculation of the primary or the partition key is not supported.

One query can contain several commands separated by commas.

The synchronicity of the query processing is defined by the `mutations_sync` setting. By default, it is asynchronous.
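A hedged example updating two columns in matching rows (the table and column names are illustrative):

```sql
ALTER TABLE visits UPDATE duration = 0, is_valid = 0 WHERE user_id = 42;
```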
See also

- Mutations
- Synchronicity of ALTER Queries
- mutations_sync setting

Related content {#related-content}

Blog: Handling Updates and Deletes in ClickHouse
description: 'Documentation for User'
sidebar_label: 'USER'
sidebar_position: 45
slug: /sql-reference/statements/alter/user
title: 'ALTER USER'
doc_type: 'reference'
Changes ClickHouse user accounts.

Syntax:

```sql
ALTER USER [IF EXISTS] name1 [RENAME TO new_name |, name2 [,...]]
    [ON CLUSTER cluster_name]
    [NOT IDENTIFIED | RESET AUTHENTICATION METHODS TO NEW | {IDENTIFIED | ADD IDENTIFIED} {[WITH {plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | WITH NO_PASSWORD | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'} | {WITH ssh_key BY KEY 'public_key' TYPE 'ssh-rsa|...'} | {WITH http SERVER 'server_name' [SCHEME 'Basic']} [VALID UNTIL datetime]
        [, {[{plaintext_password | sha256_password | sha256_hash | ...}] BY {'password' | 'hash'}} | {ldap SERVER 'server_name'} | {...} | ... [,...]]]
    [[ADD | DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
    [VALID UNTIL datetime]
    [DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] ]
    [GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]]
    [DROP ALL PROFILES]
    [DROP ALL SETTINGS]
    [DROP SETTINGS variable [,...] ]
    [DROP PROFILES 'profile_name' [,...] ]
    [ADD|MODIFY SETTINGS variable [=value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE|CONST|CHANGEABLE_IN_READONLY] [,...] ]
    [ADD PROFILES 'profile_name' [,...] ]
```
To use `ALTER USER` you must have the `ALTER USER` privilege.

GRANTEES Clause {#grantees-clause}

Specifies the users or roles which are allowed to receive privileges from this user, on the condition that this user also has all the required access granted with `GRANT OPTION`. Options of the `GRANTEES` clause:

- `user` – Specifies a user this user can grant privileges to.
- `role` – Specifies a role this user can grant privileges to.
- `ANY` – This user can grant privileges to anyone. It's the default setting.
- `NONE` – This user can grant privileges to no one.

You can exclude any user or role by using the `EXCEPT` expression. For example, `ALTER USER user1 GRANTEES ANY EXCEPT user2`. It means that if `user1` has some privileges granted with `GRANT OPTION`, it will be able to grant those privileges to anyone except `user2`.
Examples {#examples}

Set assigned roles as default:

```sql
ALTER USER user DEFAULT ROLE role1, role2
```

If roles aren't previously assigned to a user, ClickHouse throws an exception.

Set all the assigned roles as default:

```sql
ALTER USER user DEFAULT ROLE ALL
```

If a role is assigned to a user in the future, it will become default automatically.

Set all the assigned roles as default, except `role1` and `role2`:

```sql
ALTER USER user DEFAULT ROLE ALL EXCEPT role1, role2
```

Allow the user with the `john` account to grant his privileges to the user with the `jack` account:
```sql
ALTER USER john GRANTEES jack;
```
Adds new authentication methods to the user while keeping the existing ones:

```sql
ALTER USER user1 ADD IDENTIFIED WITH plaintext_password by '1', bcrypt_password by '2', plaintext_password by '3'
```
Notes:

1. Older versions of ClickHouse might not support the syntax of multiple authentication methods. Therefore, if a ClickHouse server contains such users and is downgraded to a version that does not support it, such users become unusable and some user-related operations break. In order to downgrade gracefully, all users must be set to contain a single authentication method prior to downgrading. Alternatively, if the server was downgraded without the proper procedure, the faulty users should be dropped.
2. `no_password` cannot co-exist with other authentication methods for security reasons. Because of that, it is not possible to `ADD` a `no_password` authentication method. The below query throws an error:

```sql
ALTER USER user1 ADD IDENTIFIED WITH no_password
```

If you want to drop authentication methods for a user and rely on `no_password`, you must use the replacing form shown below.

Reset authentication methods and add the ones specified in the query (effect of a leading IDENTIFIED without the ADD keyword):

```sql
ALTER USER user1 IDENTIFIED WITH plaintext_password by '1', bcrypt_password by '2', plaintext_password by '3'
```

Reset authentication methods and keep the most recently added one:

```sql
ALTER USER user1 RESET AUTHENTICATION METHODS TO NEW
```
VALID UNTIL Clause {#valid-until-clause}

Allows you to specify the expiration date and, optionally, the time for an authentication method. It accepts a string as a parameter. It is recommended to use the `YYYY-MM-DD [hh:mm:ss] [timezone]` format for datetime. By default, this parameter equals `'infinity'`.

The `VALID UNTIL` clause can only be specified along with an authentication method, except for the case where no authentication method has been specified in the query. In this scenario, the `VALID UNTIL` clause will be applied to all existing authentication methods.

Examples:

```sql
ALTER USER name1 VALID UNTIL '2025-01-01'
ALTER USER name1 VALID UNTIL '2025-01-01 12:00:00 UTC'
ALTER USER name1 VALID UNTIL 'infinity'
ALTER USER name1 IDENTIFIED WITH plaintext_password BY 'no_expiration', bcrypt_password BY 'expiration_set' VALID UNTIL '2025-01-01'
```
description: 'Documentation for Manipulating Data Skipping Indices'
sidebar_label: 'INDEX'
sidebar_position: 42
slug: /sql-reference/statements/alter/skipping-index
title: 'Manipulating Data Skipping Indices'
toc_hidden_folder: true
doc_type: 'reference'
Manipulating Data Skipping Indices

The following operations are available:

ADD INDEX {#add-index}

```sql
ALTER TABLE [db.]table_name [ON CLUSTER cluster] ADD INDEX [IF NOT EXISTS] name expression TYPE type [GRANULARITY value] [FIRST|AFTER name]
```

Adds the index description to the table's metadata.

DROP INDEX {#drop-index}

```sql
ALTER TABLE [db.]table_name [ON CLUSTER cluster] DROP INDEX [IF EXISTS] name
```

Removes the index description from the table's metadata and deletes the index files from disk. Implemented as a mutation.

MATERIALIZE INDEX {#materialize-index}

```sql
ALTER TABLE [db.]table_name [ON CLUSTER cluster] MATERIALIZE INDEX [IF EXISTS] name [IN PARTITION partition_name]
```

Rebuilds the secondary index `name` for the specified `partition_name`. Implemented as a mutation. If the `IN PARTITION` part is omitted, it rebuilds the index for the whole table data.

CLEAR INDEX {#clear-index}

```sql
ALTER TABLE [db.]table_name [ON CLUSTER cluster] CLEAR INDEX [IF EXISTS] name [IN PARTITION partition_name]
```

Deletes the secondary index files from disk without removing the description. Implemented as a mutation.

The commands `ADD`, `DROP`, and `CLEAR` are lightweight in the sense that they only change metadata or remove files. They are also replicated, syncing index metadata via ClickHouse Keeper or ZooKeeper.

:::note
Index manipulation is supported only for tables with the `*MergeTree` engine (including replicated variants).
:::
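A hedged end-to-end sketch of the index lifecycle (the table, index, and column names are illustrative):

```sql
-- Declare a bloom-filter index on the url column (metadata only)
ALTER TABLE hits ADD INDEX IF NOT EXISTS url_idx url TYPE bloom_filter GRANULARITY 4;
-- Build it for existing data (runs as a mutation)
ALTER TABLE hits MATERIALIZE INDEX url_idx;
-- Remove both the index files and the declaration
ALTER TABLE hits DROP INDEX IF EXISTS url_idx;
```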
description: 'Documentation for ALTER ROW POLICY'
sidebar_label: 'ROW POLICY'
sidebar_position: 47
slug: /sql-reference/statements/alter/row-policy
title: 'ALTER ROW POLICY'
doc_type: 'reference'
ALTER ROW POLICY

Changes row policy.

Syntax:

```sql
ALTER [ROW] POLICY [IF EXISTS] name1 [ON CLUSTER cluster_name1] ON [database1.]table1 [RENAME TO new_name1]
        [, name2 [ON CLUSTER cluster_name2] ON [database2.]table2 [RENAME TO new_name2] ...]
    [AS {PERMISSIVE | RESTRICTIVE}]
    [FOR SELECT]
    [USING {condition | NONE}][,...]
    [TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
```
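A hedged example (the policy, table, column, and role names are illustrative):

```sql
ALTER ROW POLICY IF EXISTS eu_only ON default.sales
    FOR SELECT USING region = 'EU'
    TO analysts;
```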
description: 'Documentation for Table Settings Manipulations'
sidebar_label: 'SETTING'
sidebar_position: 38
slug: /sql-reference/statements/alter/setting
title: 'Table Settings Manipulations'
doc_type: 'reference'
Table Settings Manipulations

There is a set of queries to change table settings. You can modify settings or reset them to default values. A single query can change several settings at once. If a setting with the specified name does not exist, the query raises an exception.

Syntax:

```sql
ALTER TABLE [db].name [ON CLUSTER cluster] MODIFY|RESET SETTING ...
```

:::note
These queries can be applied to `MergeTree` tables only.
:::

MODIFY SETTING {#modify-setting}

Changes table settings.

Syntax:

```sql
MODIFY SETTING setting_name=value [, ...]
```
Example
```sql
CREATE TABLE example_table (id UInt32, data String) ENGINE=MergeTree() ORDER BY id;
ALTER TABLE example_table MODIFY SETTING max_part_loading_threads=8, max_parts_in_total=50000;
```
RESET SETTING {#reset-setting}

Resets table settings to their default values. If a setting is in its default state, no action is taken.

Syntax:

```sql
RESET SETTING setting_name [, ...]
```
Example
```sql
CREATE TABLE example_table (id UInt32, data String) ENGINE=MergeTree() ORDER BY id
SETTINGS max_part_loading_threads=8;
ALTER TABLE example_table RESET SETTING max_part_loading_threads;
```
See Also

- MergeTree settings
description: 'Documentation for Role'
sidebar_label: 'ROLE'
sidebar_position: 46
slug: /sql-reference/statements/alter/role
title: 'ALTER ROLE'
doc_type: 'reference'
Changes roles.
Syntax:
sql
ALTER ROLE [IF EXISTS] name1 [RENAME TO new_name |, name2 [,...]]
[ON CLUSTER cluster_name]
[DROP ALL PROFILES]
[DROP ALL SETTINGS]
[DROP PROFILES 'profile_name' [,...] ]
[DROP SETTINGS variable [,...] ]
[ADD|MODIFY SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | PROFILE 'profile_name'] [,...]
    [ADD PROFILES 'profile_name' [,...] ]
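A brief sketch of the syntax above; the role names and the setting value are hypothetical:

```sql
-- Rename a role, then pin a session setting with bounds (illustrative values).
ALTER ROLE IF EXISTS accountant RENAME TO bookkeeper;
ALTER ROLE bookkeeper
    MODIFY SETTINGS max_memory_usage = 10000000000 MIN 0 MAX 20000000000 READONLY;
```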
description: 'Documentation for Quota'
sidebar_label: 'QUOTA'
sidebar_position: 46
slug: /sql-reference/statements/alter/quota
title: 'ALTER QUOTA'
doc_type: 'reference'
Changes quotas.
Syntax:
sql
ALTER QUOTA [IF EXISTS] name [ON CLUSTER cluster_name]
[RENAME TO new_name]
[KEYED BY {user_name | ip_address | client_key | client_key,user_name | client_key,ip_address} | NOT KEYED]
[FOR [RANDOMIZED] INTERVAL number {second | minute | hour | day | week | month | quarter | year}
{MAX { {queries | query_selects | query_inserts | errors | result_rows | result_bytes | read_rows | read_bytes | execution_time} = number } [,...] |
NO LIMITS | TRACKING ONLY} [,...]]
[TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
Keys
user_name
,
ip_address
,
client_key
,
client_key, user_name
and
client_key, ip_address
correspond to the fields in the
system.quotas
table.
Parameters
queries
,
query_selects
,
query_inserts
,
errors
,
result_rows
,
result_bytes
,
read_rows
,
read_bytes
,
execution_time
correspond to the fields in the
system.quotas_usage
table.
ON CLUSTER
clause allows creating quotas on a cluster, see
Distributed DDL
.
Examples
Limit the maximum number of queries for the current user with 123 queries in 15 months constraint:
sql
ALTER QUOTA IF EXISTS qA FOR INTERVAL 15 month MAX queries = 123 TO CURRENT_USER;
For the default user limit the maximum execution time with half a second in 30 minutes, and limit the maximum number of queries with 321 and the maximum number of errors with 10 in 5 quarters:
sql
ALTER QUOTA IF EXISTS qB FOR INTERVAL 30 minute MAX execution_time = 0.5, FOR INTERVAL 5 quarter MAX queries = 321, errors = 10 TO default;
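To inspect how a quota ends up defined after several alterations, its definition can be printed back. A sketch assuming the `qB` quota from the example above:

```sql
-- Prints the full CREATE QUOTA statement, including all intervals and limits.
SHOW CREATE QUOTA qB;
```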
description: 'Documentation for ARRAY JOIN Clause'
sidebar_label: 'ARRAY JOIN'
slug: /sql-reference/statements/select/array-join
title: 'ARRAY JOIN Clause'
doc_type: 'reference'
ARRAY JOIN Clause
It is a common operation for tables that contain an array column to produce a new table that has a row with each individual array element of that initial column, while values of other columns are duplicated. This is the basic case of what
ARRAY JOIN
clause does.
Its name comes from the fact that it can be looked at as executing
JOIN
with an array or nested data structure. The intent is similar to the
arrayJoin
function, but the clause functionality is broader.
Syntax:
sql
SELECT <expr_list>
FROM <left_subquery>
[LEFT] ARRAY JOIN <array>
[WHERE|PREWHERE <expr>]
...
Supported types of
ARRAY JOIN
are listed below:
ARRAY JOIN
- In base case, empty arrays are not included in the result of
JOIN
.
LEFT ARRAY JOIN
- The result of
JOIN
contains rows with empty arrays. The value for an empty array is set to the default value for the array element type (usually 0, empty string or NULL).
Basic ARRAY JOIN Examples {#basic-array-join-examples}
ARRAY JOIN and LEFT ARRAY JOIN {#array-join-left-array-join-examples}
The examples below demonstrate the usage of the
ARRAY JOIN
and
LEFT ARRAY JOIN
clauses. Let's create a table with an
Array
type column and insert values into it:
```sql
CREATE TABLE arrays_test
(
s String,
arr Array(UInt8)
) ENGINE = Memory;
INSERT INTO arrays_test
VALUES ('Hello', [1,2]), ('World', [3,4,5]), ('Goodbye', []);
```
response
ββsββββββββββββ¬βarrββββββ
β Hello β [1,2] β
β World β [3,4,5] β
β Goodbye β [] β
βββββββββββββββ΄ββββββββββ
The example below uses the
ARRAY JOIN
clause:
sql
SELECT s, arr
FROM arrays_test
ARRAY JOIN arr;
response
ββsββββββ¬βarrββ
β Hello β 1 β
β Hello β 2 β
β World β 3 β
β World β 4 β
β World β 5 β
βββββββββ΄ββββββ
The next example uses the
LEFT ARRAY JOIN
clause:
sql
SELECT s, arr
FROM arrays_test
LEFT ARRAY JOIN arr;
response
ββsββββββββββββ¬βarrββ
β Hello β 1 β
β Hello β 2 β
β World β 3 β
β World β 4 β
β World β 5 β
β Goodbye β 0 β
βββββββββββββββ΄ββββββ
ARRAY JOIN and arrayEnumerate function {#array-join-arrayEnumerate}
This function is normally used with
ARRAY JOIN
. It allows counting something just once for each array after applying
ARRAY JOIN
. Example:
sql
SELECT
count() AS Reaches,
countIf(num = 1) AS Hits
FROM test.hits
ARRAY JOIN
GoalsReached,
arrayEnumerate(GoalsReached) AS num
WHERE CounterID = 160656
LIMIT 10
text
ββReachesββ¬ββHitsββ
β 95606 β 31406 β
βββββββββββ΄ββββββββ
In this example, Reaches is the number of conversions (the strings received after applying
ARRAY JOIN
), and Hits is the number of pageviews (strings before
ARRAY JOIN
). In this particular case, you can get the same result in an easier way:
sql
SELECT
sum(length(GoalsReached)) AS Reaches,
count() AS Hits
FROM test.hits
WHERE (CounterID = 160656) AND notEmpty(GoalsReached)
text
ββReachesββ¬ββHitsββ
β 95606 β 31406 β
βββββββββββ΄ββββββββ
ARRAY JOIN and arrayEnumerateUniq {#array_join_arrayEnumerateUniq}
This function is useful when using
ARRAY JOIN
and aggregating array elements.
In this example, each goal ID has a calculation of the number of conversions (each element in the Goals nested data structure is a goal that was reached, which we refer to as a conversion) and the number of sessions. Without
ARRAY JOIN
, we would have counted the number of sessions as sum(Sign). But in this particular case, the rows were multiplied by the nested Goals structure, so in order to count each session one time after this, we apply a condition to the value of the
arrayEnumerateUniq(Goals.ID)
function.
sql
SELECT
Goals.ID AS GoalID,
sum(Sign) AS Reaches,
sumIf(Sign, num = 1) AS Visits
FROM test.visits
ARRAY JOIN
Goals,
arrayEnumerateUniq(Goals.ID) AS num
WHERE CounterID = 160656
GROUP BY GoalID
ORDER BY Reaches DESC
LIMIT 10
text
βββGoalIDββ¬βReachesββ¬βVisitsββ
β 53225 β 3214 β 1097 β
β 2825062 β 3188 β 1097 β
β 56600 β 2803 β 488 β
β 1989037 β 2401 β 365 β
β 2830064 β 2396 β 910 β
β 1113562 β 2372 β 373 β
β 3270895 β 2262 β 812 β
β 1084657 β 2262 β 345 β
β 56599 β 2260 β 799 β
β 3271094 β 2256 β 812 β
βββββββββββ΄ββββββββββ΄βββββββββ
Using Aliases {#using-aliases}
An alias can be specified for an array in the
ARRAY JOIN
clause. In this case, an array item can be accessed by this alias, but the array itself is accessed by the original name. Example:
sql
SELECT s, arr, a
FROM arrays_test
ARRAY JOIN arr AS a;
response
ββsββββββ¬βarrββββββ¬βaββ
β Hello β [1,2] β 1 β
β Hello β [1,2] β 2 β
β World β [3,4,5] β 3 β
β World β [3,4,5] β 4 β
β World β [3,4,5] β 5 β
βββββββββ΄ββββββββββ΄ββββ
Using aliases, you can perform
ARRAY JOIN
with an external array. For example:
sql
SELECT s, arr_external
FROM arrays_test
ARRAY JOIN [1, 2, 3] AS arr_external;
response
ββsββββββββββββ¬βarr_externalββ
β Hello β 1 β
β Hello β 2 β
β Hello β 3 β
β World β 1 β
β World β 2 β
β World β 3 β
β Goodbye β 1 β
β Goodbye β 2 β
β Goodbye β 3 β
βββββββββββββββ΄βββββββββββββββ
Multiple arrays can be comma-separated in the
ARRAY JOIN
clause. In this case,
JOIN
is performed with them simultaneously (the direct sum, not the Cartesian product). Note that all the arrays must have the same size by default. Example:
sql
SELECT s, arr, a, num, mapped
FROM arrays_test
ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num, arrayMap(x -> x + 1, arr) AS mapped;
response
ββsββββββ¬βarrββββββ¬βaββ¬βnumββ¬βmappedββ
β Hello β [1,2] β 1 β 1 β 2 β
β Hello β [1,2] β 2 β 2 β 3 β
β World β [3,4,5] β 3 β 1 β 4 β
β World β [3,4,5] β 4 β 2 β 5 β
β World β [3,4,5] β 5 β 3 β 6 β
βββββββββ΄ββββββββββ΄ββββ΄ββββββ΄βββββββββ
The example below uses the
arrayEnumerate
function:
sql
SELECT s, arr, a, num, arrayEnumerate(arr)
FROM arrays_test
ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num;
response
ββsββββββ¬βarrββββββ¬βaββ¬βnumββ¬βarrayEnumerate(arr)ββ
β Hello β [1,2] β 1 β 1 β [1,2] β
β Hello β [1,2] β 2 β 2 β [1,2] β
β World β [3,4,5] β 3 β 1 β [1,2,3] β
β World β [3,4,5] β 4 β 2 β [1,2,3] β
β World β [3,4,5] β 5 β 3 β [1,2,3] β
βββββββββ΄ββββββββββ΄ββββ΄ββββββ΄ββββββββββββββββββββββ
Multiple arrays with different sizes can be joined by using:
SETTINGS enable_unaligned_array_join = 1
. Example:
sql
SELECT s, arr, a, b
FROM arrays_test ARRAY JOIN arr AS a, [['a','b'],['c']] AS b
SETTINGS enable_unaligned_array_join = 1;
response
ββsββββββββ¬βarrββββββ¬βaββ¬βbββββββββββ
β Hello β [1,2] β 1 β ['a','b'] β
β Hello β [1,2] β 2 β ['c'] β
β World β [3,4,5] β 3 β ['a','b'] β
β World β [3,4,5] β 4 β ['c'] β
β World β [3,4,5] β 5 β [] β
β Goodbye β [] β 0 β ['a','b'] β
β Goodbye β [] β 0 β ['c'] β
βββββββββββ΄ββββββββββ΄ββββ΄ββββββββββββ
ARRAY JOIN with Nested Data Structure {#array-join-with-nested-data-structure}
ARRAY JOIN
also works with
nested data structures
:
```sql
CREATE TABLE nested_test
(
s String,
nest Nested(
x UInt8,
y UInt32)
) ENGINE = Memory;
INSERT INTO nested_test
VALUES ('Hello', [1,2], [10,20]), ('World', [3,4,5], [30,40,50]), ('Goodbye', [], []);
```
response
ββsββββββββ¬βnest.xβββ¬βnest.yββββββ
β Hello β [1,2] β [10,20] β
β World β [3,4,5] β [30,40,50] β
β Goodbye β [] β [] β
βββββββββββ΄ββββββββββ΄βββββββββββββ
sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN nest;
response
ββsββββββ¬βnest.xββ¬βnest.yββ
β Hello β 1 β 10 β
β Hello β 2 β 20 β
β World β 3 β 30 β
β World β 4 β 40 β
β World β 5 β 50 β
βββββββββ΄βββββββββ΄βββββββββ
When specifying names of nested data structures in
ARRAY JOIN
, the meaning is the same as
ARRAY JOIN
with all the array elements that it consists of. Examples are listed below:
sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN `nest.x`, `nest.y`;
response
ββsββββββ¬βnest.xββ¬βnest.yββ
β Hello β 1 β 10 β
β Hello β 2 β 20 β
β World β 3 β 30 β
β World β 4 β 40 β
β World β 5 β 50 β
βββββββββ΄βββββββββ΄βββββββββ
This variation also makes sense:
sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN `nest.x`;
response
ββsββββββ¬βnest.xββ¬βnest.yββββββ
β Hello β 1 β [10,20] β
β Hello β 2 β [10,20] β
β World β 3 β [30,40,50] β
β World β 4 β [30,40,50] β
β World β 5 β [30,40,50] β
βββββββββ΄βββββββββ΄βββββββββββββ
An alias may be used for a nested data structure, in order to select either the
JOIN
result or the source array. Example:
sql
SELECT s, `n.x`, `n.y`, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN nest AS n;
response
ββsββββββ¬βn.xββ¬βn.yββ¬βnest.xβββ¬βnest.yββββββ
β Hello β 1 β 10 β [1,2] β [10,20] β
β Hello β 2 β 20 β [1,2] β [10,20] β
β World β 3 β 30 β [3,4,5] β [30,40,50] β
β World β 4 β 40 β [3,4,5] β [30,40,50] β
β World β 5 β 50 β [3,4,5] β [30,40,50] β
βββββββββ΄ββββββ΄ββββββ΄ββββββββββ΄βββββββββββββ
Example of using the
arrayEnumerate
function:
sql
SELECT s, `n.x`, `n.y`, `nest.x`, `nest.y`, num
FROM nested_test
ARRAY JOIN nest AS n, arrayEnumerate(`nest.x`) AS num;
response
ββsββββββ¬βn.xββ¬βn.yββ¬βnest.xβββ¬βnest.yββββββ¬βnumββ
β Hello β 1 β 10 β [1,2] β [10,20] β 1 β
β Hello β 2 β 20 β [1,2] β [10,20] β 2 β
β World β 3 β 30 β [3,4,5] β [30,40,50] β 1 β
β World β 4 β 40 β [3,4,5] β [30,40,50] β 2 β
β World β 5 β 50 β [3,4,5] β [30,40,50] β 3 β
βββββββββ΄ββββββ΄ββββββ΄ββββββββββ΄βββββββββββββ΄ββββββ
Implementation Details {#implementation-details}
The query execution order is optimized when running
ARRAY JOIN
. Although
ARRAY JOIN
must always be specified before the
WHERE
/
PREWHERE
clause in a query, technically they can be performed in any order, unless the result of
ARRAY JOIN
is used for filtering. The processing order is controlled by the query optimizer.
Incompatibility with short-circuit function evaluation {#incompatibility-with-short-circuit-function-evaluation}
Short-circuit function evaluation
is a feature that optimizes the execution of complex expressions in specific functions such as
if
,
multiIf
,
and
, and
or
. It prevents potential exceptions, such as division by zero, from occurring during the execution of these functions.
arrayJoin
is always executed and is not supported for short-circuit function evaluation. That's because it's a unique function, processed separately from all other functions during query analysis and execution, and it requires additional logic that doesn't work with short-circuit function execution. The reason is that the number of rows in the result depends on the arrayJoin result, and it's too complex and expensive to implement lazy execution of
arrayJoin
.
Related content {#related-content}
Blog:
Working with time series data in ClickHouse
description: 'Documentation for SAMPLE Clause'
sidebar_label: 'SAMPLE'
slug: /sql-reference/statements/select/sample
title: 'SAMPLE Clause'
doc_type: 'reference'
SAMPLE Clause
The
SAMPLE
clause allows for approximated
SELECT
query processing.
When data sampling is enabled, the query is not performed on all the data, but only on a certain fraction of data (sample). For example, if you need to calculate statistics for all the visits, it is enough to execute the query on the 1/10 fraction of all the visits and then multiply the result by 10.
Approximated query processing can be useful in the following cases:
When you have strict latency requirements (like below 100ms) but you can't justify the cost of additional hardware resources to meet them.
When your raw data is not accurate, so approximation does not noticeably degrade the quality.
Business requirements target approximate results (for cost-effectiveness, or to market exact results to premium users).
:::note
You can only use sampling with the tables in the
MergeTree
family, and only if the sampling expression was specified during table creation (see
MergeTree engine
).
:::
The features of data sampling are listed below:
Data sampling is a deterministic mechanism. The result of the same
SELECT .. SAMPLE
query is always the same.
Sampling works consistently for different tables. For tables with a single sampling key, a sample with the same coefficient always selects the same subset of possible data. For example, a sample of user IDs takes rows with the same subset of all the possible user IDs from different tables. This means that you can use the sample in subqueries in the
IN
clause. Also, you can join samples using the
JOIN
clause.
Sampling allows reading less data from a disk. Note that you must specify the sampling key correctly. For more information, see
Creating a MergeTree Table
.
For the
SAMPLE
clause the following syntax is supported:
| SAMPLE Clause Syntax | Description |
|----------------------|-------------|
| `SAMPLE k` | Here `k` is the number from 0 to 1. The query is executed on `k` fraction of data. For example, `SAMPLE 0.1` runs the query on 10% of data. Read more |
| `SAMPLE n` | Here `n` is a sufficiently large integer. The query is executed on a sample of at least `n` rows (but not significantly more than this). For example, `SAMPLE 10000000` runs the query on a minimum of 10,000,000 rows. Read more |
| `SAMPLE k OFFSET m` | Here `k` and `m` are the numbers from 0 to 1. The query is executed on a sample of `k` fraction of the data. The data used for the sample is offset by `m` fraction. Read more |
SAMPLE K {#sample-k}
Here
k
is the number from 0 to 1 (both fractional and decimal notations are supported). For example,
SAMPLE 1/2
or
SAMPLE 0.5
.
In a
SAMPLE k
clause, the sample is taken from the
k
fraction of data. The example is shown below:
sql
SELECT
Title,
count() * 10 AS PageViews
FROM hits_distributed
SAMPLE 0.1
WHERE
CounterID = 34
GROUP BY Title
ORDER BY PageViews DESC LIMIT 1000
In this example, the query is executed on a sample from 0.1 (10%) of data. Values of aggregate functions are not corrected automatically, so to get an approximate result, the value
count()
is manually multiplied by 10.
SAMPLE N {#sample-n}
Here
n
is a sufficiently large integer. For example,
SAMPLE 10000000
.
In this case, the query is executed on a sample of at least
n
rows (but not significantly more than this). For example,
SAMPLE 10000000
runs the query on a minimum of 10,000,000 rows.
Since the minimum unit for data reading is one granule (its size is set by the
index_granularity
setting), it makes sense to set a sample that is much larger than the size of the granule.
When using the
SAMPLE n
clause, you do not know which relative percent of data was processed. So you do not know the coefficient the aggregate functions should be multiplied by. Use the
_sample_factor
virtual column to get the approximate result.
The
_sample_factor
column contains relative coefficients that are calculated dynamically. This column is created automatically when you
create
a table with the specified sampling key. The usage examples of the
_sample_factor
column are shown below.
Let's consider the table
visits
, which contains the statistics about site visits. The first example shows how to calculate the number of page views:
sql
SELECT sum(PageViews * _sample_factor)
FROM visits
SAMPLE 10000000
The next example shows how to calculate the total number of visits:
sql
SELECT sum(_sample_factor)
FROM visits
SAMPLE 10000000
The example below shows how to calculate the average session duration. Note that you do not need to use the relative coefficient to calculate the average values.
sql
SELECT avg(Duration)
FROM visits
SAMPLE 10000000
SAMPLE K OFFSET M {#sample-k-offset-m}
Here
k
and
m
are numbers from 0 to 1. Examples are shown below.
Example 1
sql
SAMPLE 1/10
In this example, the sample is 1/10th of all data:
[++------------]
Example 2
sql
SAMPLE 1/10 OFFSET 1/2
Here, a sample of 10% is taken from the second half of the data.
[------++------]
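Putting the offset notation into a full query, a sketch based on the visits examples above (the table and its sampling key are assumed to exist):

```sql
-- Approximate page views from a 10% sample taken from the second half of the data;
-- _sample_factor scales the aggregate back up.
SELECT sum(PageViews * _sample_factor) AS approx_page_views
FROM visits
SAMPLE 1/10 OFFSET 1/2
```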
description: 'Documentation for INTERSECT Clause'
sidebar_label: 'INTERSECT'
slug: /sql-reference/statements/select/intersect
title: 'INTERSECT Clause'
doc_type: 'reference'
INTERSECT Clause
The
INTERSECT
clause returns only those rows that result from both the first and the second queries. The queries must have the same number of columns, in the same order, and with matching data types. The result of
INTERSECT
can contain duplicate rows.
Multiple
INTERSECT
statements are executed left to right if parentheses are not specified. The
INTERSECT
operator has a higher priority than the
UNION
and
EXCEPT
clauses.
```sql
SELECT column1 [, column2 ]
FROM table1
[WHERE condition]
INTERSECT
SELECT column1 [, column2 ]
FROM table2
[WHERE condition]
```
The condition could be any expression based on your requirements.
Examples {#examples}
Here is a simple example that intersects the numbers 1 to 10 with the numbers 3 to 8:
sql
SELECT number FROM numbers(1, 10) INTERSECT SELECT number FROM numbers(3, 6);
Result:
response
ββnumberββ
β 3 β
β 4 β
β 5 β
β 6 β
β 7 β
β 8 β
ββββββββββ
INTERSECT
is useful if you have two tables that share a common column (or columns). You can intersect the results of two queries, as long as the results contain the same columns. For example, suppose we have a few million rows of historical cryptocurrency data that contains trade prices and volume:
```sql
CREATE TABLE crypto_prices
(
trade_date Date,
crypto_name String,
volume Float32,
price Float32,
market_cap Float32,
change_1_day Float32
)
ENGINE = MergeTree
PRIMARY KEY (crypto_name, trade_date);
INSERT INTO crypto_prices
SELECT *
FROM s3(
'https://learn-clickhouse.s3.us-east-2.amazonaws.com/crypto_prices.csv',
'CSVWithNames'
);
SELECT * FROM crypto_prices
WHERE crypto_name = 'Bitcoin'
ORDER BY trade_date DESC
LIMIT 10;
```
response
ββtrade_dateββ¬βcrypto_nameββ¬ββββββvolumeββ¬ββββpriceββ¬βββmarket_capββ¬ββchange_1_dayββ
β 2020-11-02 β Bitcoin β 30771456000 β 13550.49 β 251119860000 β -0.013585099 β
β 2020-11-01 β Bitcoin β 24453857000 β 13737.11 β 254569760000 β -0.0031840964 β
β 2020-10-31 β Bitcoin β 30306464000 β 13780.99 β 255372070000 β 0.017308505 β
β 2020-10-30 β Bitcoin β 30581486000 β 13546.52 β 251018150000 β 0.008084608 β
β 2020-10-29 β Bitcoin β 56499500000 β 13437.88 β 248995320000 β 0.012552661 β
β 2020-10-28 β Bitcoin β 35867320000 β 13271.29 β 245899820000 β -0.02804481 β
β 2020-10-27 β Bitcoin β 33749879000 β 13654.22 β 252985950000 β 0.04427984 β
β 2020-10-26 β Bitcoin β 29461459000 β 13075.25 β 242251000000 β 0.0033826586 β
β 2020-10-25 β Bitcoin β 24406921000 β 13031.17 β 241425220000 β -0.0058658565 β
β 2020-10-24 β Bitcoin β 24542319000 β 13108.06 β 242839880000 β 0.013650347 β
ββββββββββββββ΄ββββββββββββββ΄ββββββββββββββ΄βββββββββββ΄βββββββββββββββ΄ββββββββββββββββ
Now suppose we have a table named
holdings
that contains a list of cryptocurrencies that we own, along with the number of coins:
```sql
CREATE TABLE holdings
(
crypto_name String,
quantity UInt64
)
ENGINE = MergeTree
PRIMARY KEY (crypto_name);
INSERT INTO holdings VALUES
    ('Bitcoin', 1000),
    ('Bitcoin', 200),
    ('Ethereum', 250),
    ('Ethereum', 5000),
    ('DOGEFI', 10),
    ('Bitcoin Diamond', 5000);
```
We can use
INTERSECT
to answer questions like
"Which coins do we own that have traded at a price greater than $100?"
:
sql
SELECT crypto_name FROM holdings
INTERSECT
SELECT crypto_name FROM crypto_prices
WHERE price > 100
Result:
response
ββcrypto_nameββ
β Bitcoin β
β Bitcoin β
β Ethereum β
β Ethereum β
βββββββββββββββ
This means at some point in time, Bitcoin and Ethereum traded above $100, and DOGEFI and Bitcoin Diamond have never traded above $100 (at least using the data we have here in this example).
INTERSECT DISTINCT {#intersect-distinct}
Notice in the previous query we had multiple Bitcoin and Ethereum holdings that traded above $100. It might be nice to remove duplicate rows (since they only repeat what we already know). You can add
DISTINCT
to
INTERSECT
to eliminate duplicate rows from the result:
sql
SELECT crypto_name FROM holdings
INTERSECT DISTINCT
SELECT crypto_name FROM crypto_prices
WHERE price > 100;
Result:
response
ββcrypto_nameββ
β Bitcoin β
β Ethereum β
βββββββββββββββ
See Also
UNION
EXCEPT
description: 'Documentation for QUALIFY Clause'
sidebar_label: 'QUALIFY'
slug: /sql-reference/statements/select/qualify
title: 'QUALIFY Clause'
doc_type: 'reference'
QUALIFY Clause
Allows filtering window functions results. It is similar to the
WHERE
clause, but the difference is that
WHERE
is performed before window functions evaluation, while
QUALIFY
is performed after it.
It is possible to reference window functions results from
SELECT
clause in
QUALIFY
clause by their alias. Alternatively,
QUALIFY
clause can filter on results of additional window functions that are not returned in query results.
Limitations {#limitations}
QUALIFY
can't be used if there are no window functions to evaluate. Use
WHERE
instead.
Examples {#examples}
Example:
sql
SELECT number, COUNT() OVER (PARTITION BY number % 3) AS partition_count
FROM numbers(10)
QUALIFY partition_count = 4
ORDER BY number;
text
ββnumberββ¬βpartition_countββ
β 0 β 4 β
β 3 β 4 β
β 6 β 4 β
β 9 β 4 β
ββββββββββ΄ββββββββββββββββββ
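As described above, QUALIFY can also filter on a window function that is not part of the returned columns. A minimal sketch:

```sql
-- Keep the three largest numbers; the row_number() window function
-- is used only for filtering and is not returned.
SELECT number
FROM numbers(10)
QUALIFY row_number() OVER (ORDER BY number DESC) <= 3
ORDER BY number;
```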
description: 'Documentation for HAVING Clause'
sidebar_label: 'HAVING'
slug: /sql-reference/statements/select/having
title: 'HAVING Clause'
doc_type: 'reference'
HAVING Clause
Allows filtering the aggregation results produced by
GROUP BY
. It is similar to the
WHERE
clause, but the difference is that
WHERE
is performed before aggregation, while
HAVING
is performed after it.
It is possible to reference aggregation results from
SELECT
clause in
HAVING
clause by their alias. Alternatively,
HAVING
clause can filter on results of additional aggregates that are not returned in query results.
Example {#example}
If you have a
sales
table as follows:
sql
CREATE TABLE sales
(
region String,
salesperson String,
amount Float64
)
ORDER BY (region, salesperson);
You can query it like so:
sql
SELECT
region,
salesperson,
sum(amount) AS total_sales
FROM sales
GROUP BY
region,
salesperson
HAVING total_sales > 10000
ORDER BY total_sales DESC;
This will list sales people with greater than 10,000 in total sales in their region.
Limitations {#limitations}
HAVING
can't be used if aggregation is not performed. Use
WHERE
instead.
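As mentioned above, HAVING can also reference an aggregate that is not returned by the query. A sketch against the sales table defined earlier:

```sql
-- Regions whose total sales exceed 10,000; sum(amount) is filtered on
-- but not included in the result columns.
SELECT region
FROM sales
GROUP BY region
HAVING sum(amount) > 10000;
```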
description: 'Documentation for the EXCEPT clause which returns only those rows that result from the first query without the second.'
sidebar_label: 'EXCEPT'
slug: /sql-reference/statements/select/except
title: 'EXCEPT clause'
keywords: ['EXCEPT', 'clause']
doc_type: 'reference'
EXCEPT clause
The
EXCEPT
clause returns only those rows that result from the first query without the second.
Both queries must have the same number of columns, in the same order, and with matching data types.
The result of
EXCEPT
can contain duplicate rows. Use
EXCEPT DISTINCT
if this is not desirable.
Multiple
EXCEPT
statements are executed from left to right if parentheses are not specified.
The
EXCEPT
operator has the same priority as the
UNION
clause and lower priority than the
INTERSECT
clause.
Syntax {#syntax}
```sql
SELECT column1 [, column2 ]
FROM table1
[WHERE condition]
EXCEPT
SELECT column1 [, column2 ]
FROM table2
[WHERE condition]
```
The condition could be any expression based on your requirements.
Additionally,
EXCEPT()
can be used to exclude columns from a result in the same table, as is possible with BigQuery (Google Cloud), using the following syntax:
sql
SELECT column1 [, column2 ] EXCEPT (column3 [, column4])
FROM table1
[WHERE condition]
Examples {#examples}
The examples in this section demonstrate usage of the
EXCEPT
clause.
Filtering Numbers Using the
EXCEPT
Clause {#filtering-numbers-using-the-except-clause}
Here is a simple example that returns the numbers 1 to 10 that are
not
a part of the numbers 3 to 8:
sql title="Query"
SELECT number
FROM numbers(1, 10)
EXCEPT
SELECT number
FROM numbers(3, 6)
response title="Response"
ββnumberββ
β 1 β
β 2 β
β 9 β
β 10 β
ββββββββββ
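Since plain EXCEPT keeps duplicate rows, EXCEPT DISTINCT (mentioned above) can be used when a deduplicated result is wanted. A minimal sketch:

```sql
-- The duplicate 1 collapses to a single row in the result.
SELECT arrayJoin([1, 1, 2, 3]) AS x
EXCEPT DISTINCT
SELECT 3;
```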
Excluding Specific Columns Using
EXCEPT()
{#excluding-specific-columns-using-except}
EXCEPT()
can be used to quickly exclude columns from a result. For instance, if we want to select all columns from a table except for a few, as shown in the example below:
```sql title="Query"
SHOW COLUMNS IN system.settings
SELECT * EXCEPT (default, alias_for, readonly, description)
FROM system.settings
LIMIT 5
```
```response title="Response"
ββfieldββββββββ¬βtypeββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βnullββ¬βkeyββ¬βdefaultββ¬βextraββ
1. β alias_for β String β NO β β α΄Ία΅α΄Έα΄Έ β β
2. β changed β UInt8 β NO β β α΄Ία΅α΄Έα΄Έ β β
3. β default β String β NO β β α΄Ία΅α΄Έα΄Έ β β
4. β description β String β NO β β α΄Ία΅α΄Έα΄Έ β β
5. β is_obsolete β UInt8 β NO β β α΄Ία΅α΄Έα΄Έ β β
6. β max β Nullable(String) β YES β β α΄Ία΅α΄Έα΄Έ β β
7. β min β Nullable(String) β YES β β α΄Ία΅α΄Έα΄Έ β β
8. β name β String β NO β β α΄Ία΅α΄Έα΄Έ β β
9. β readonly β UInt8 β NO β β α΄Ία΅α΄Έα΄Έ β β
10. β tier β Enum8('Production' = 0, 'Obsolete' = 4, 'Experimental' = 8, 'Beta' = 12) β NO β β α΄Ία΅α΄Έα΄Έ β β
11. β type β String β NO β β α΄Ία΅α΄Έα΄Έ β β
12. β value β String β NO β β α΄Ία΅α΄Έα΄Έ β β
βββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββ΄ββββββ΄ββββββββββ΄ββββββββ
ββnameβββββββββββββββββββββ¬βvalueβββββββ¬βchangedββ¬βminβββ¬βmaxβββ¬βtypeβββββ¬βis_obsoleteββ¬βtierββββββββ
1. β dialect β clickhouse β 0 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β Dialect β 0 β Production β
2. β min_compress_block_size β 65536 β 0 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β UInt64 β 0 β Production β
3. β max_compress_block_size β 1048576 β 0 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β UInt64 β 0 β Production β
4. β max_block_size β 65409 β 0 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β UInt64 β 0 β Production β
5. β max_insert_block_size β 1048449 β 0 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β UInt64 β 0 β Production β
βββββββββββββββββββββββββββ΄βββββββββββββ΄ββββββββββ΄βββββββ΄βββββββ΄ββββββββββ΄ββββββββββββββ΄βββββββββββββ
```
Using
EXCEPT
and
INTERSECT
with Cryptocurrency Data {#using-except-and-intersect-with-cryptocurrency-data}
EXCEPT
and
INTERSECT
can often be used interchangeably with different Boolean logic, and they are both useful if you have two tables that share a common column (or columns).
For example, suppose we have a few million rows of historical cryptocurrency data that contains trade prices and volume:
```sql title="Query"
CREATE TABLE crypto_prices
(
trade_date Date,
crypto_name String,
volume Float32,
price Float32,
market_cap Float32,
change_1_day Float32
)
ENGINE = MergeTree
PRIMARY KEY (crypto_name, trade_date);
INSERT INTO crypto_prices
SELECT *
FROM s3(
'https://learn-clickhouse.s3.us-east-2.amazonaws.com/crypto_prices.csv',
'CSVWithNames'
);
SELECT * FROM crypto_prices
WHERE crypto_name = 'Bitcoin'
ORDER BY trade_date DESC
LIMIT 10;
```
```response title="Response"
ββtrade_dateββ¬βcrypto_nameββ¬ββββββvolumeββ¬ββββpriceββ¬βββmarket_capββ¬ββchange_1_dayββ
β 2020-11-02 β Bitcoin β 30771456000 β 13550.49 β 251119860000 β -0.013585099 β
β 2020-11-01 β Bitcoin β 24453857000 β 13737.11 β 254569760000 β -0.0031840964 β
β 2020-10-31 β Bitcoin β 30306464000 β 13780.99 β 255372070000 β 0.017308505 β
β 2020-10-30 β Bitcoin β 30581486000 β 13546.52 β 251018150000 β 0.008084608 β
β 2020-10-29 β Bitcoin β 56499500000 β 13437.88 β 248995320000 β 0.012552661 β
β 2020-10-28 β Bitcoin β 35867320000 β 13271.29 β 245899820000 β -0.02804481 β
β 2020-10-27 β Bitcoin β 33749879000 β 13654.22 β 252985950000 β 0.04427984 β
β 2020-10-26 β Bitcoin β 29461459000 β 13075.25 β 242251000000 β 0.0033826586 β
β 2020-10-25 β Bitcoin β 24406921000 β 13031.17 β 241425220000 β -0.0058658565 β
β 2020-10-24 β Bitcoin β 24542319000 β 13108.06 β 242839880000 β 0.013650347 β
ββββββββββββββ΄ββββββββββββββ΄ββββββββββββββ΄βββββββββββ΄βββββββββββββββ΄ββββββββββββββββ
```
Now suppose we have a table named
holdings
that contains a list of cryptocurrencies that we own, along with the number of coins:
```sql
CREATE TABLE holdings
(
crypto_name String,
quantity UInt64
)
ENGINE = MergeTree
PRIMARY KEY (crypto_name);
INSERT INTO holdings VALUES
('Bitcoin', 1000),
('Bitcoin', 200),
('Ethereum', 250),
('Ethereum', 5000),
('DOGEFI', 10),
('Bitcoin Diamond', 5000);
```
We can use
EXCEPT
to answer a question like
"Which coins do we own have never traded below $10?"
:
```sql title="Query"
SELECT crypto_name FROM holdings
EXCEPT
SELECT crypto_name FROM crypto_prices
WHERE price < 10;
```

```response title="Response"
ββcrypto_nameββ
β Bitcoin     β
β Bitcoin     β
βββββββββββββββ
```
This means that, of the four cryptocurrencies we own, only Bitcoin has never dropped below $10 (based on the limited data we have here in this example).
Using
EXCEPT DISTINCT
{#using-except-distinct}
Notice in the previous query we had multiple Bitcoin holdings in the result. You can add
DISTINCT
to
EXCEPT
to eliminate duplicate rows from the result:
```sql title="Query"
SELECT crypto_name FROM holdings
EXCEPT DISTINCT
SELECT crypto_name FROM crypto_prices
WHERE price < 10;
```

```response title="Response"
ββcrypto_nameββ
β Bitcoin     β
βββββββββββββββ
```
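The duplicate-handling difference between the two forms can be modeled in Python. The `traded_below_10` set is an assumption chosen to match the example above, not data from the table:

```python
holdings = ["Bitcoin", "Bitcoin", "Ethereum", "Ethereum", "DOGEFI", "Bitcoin Diamond"]
traded_below_10 = {"Ethereum", "DOGEFI", "Bitcoin Diamond"}  # assumed, to match the example

# EXCEPT keeps both Bitcoin rows; EXCEPT DISTINCT deduplicates the result.
except_all = [c for c in holdings if c not in traded_below_10]
except_distinct = list(dict.fromkeys(except_all))

print(except_all)       # ['Bitcoin', 'Bitcoin']
print(except_distinct)  # ['Bitcoin']
```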
See Also
UNION
INTERSECT
description: 'Documentation for PREWHERE Clause'
sidebar_label: 'PREWHERE'
slug: /sql-reference/statements/select/prewhere
title: 'PREWHERE Clause'
doc_type: 'reference'
PREWHERE Clause
Prewhere is an optimization that applies filtering more efficiently. It is enabled by default even if the
PREWHERE
clause is not specified explicitly. It works by automatically moving part of the
WHERE
condition to the prewhere stage. The role of the
PREWHERE
clause is only to control this optimization if you think that you know how to do it better than it happens by default.
With the prewhere optimization, at first only the columns necessary for evaluating the prewhere expression are read. Then the other columns that are needed for running the rest of the query are read, but only for those blocks where the prewhere expression is
true
for at least some rows. If there are a lot of blocks where the prewhere expression is
false
for all rows, and prewhere needs fewer columns than other parts of the query, this often allows reading much less data from disk for query execution.
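The two-stage read described above can be sketched as a toy Python model (block layout and granularity are assumptions for illustration, not ClickHouse internals):

```python
def prewhere_scan(blocks, prewhere_col, predicate, other_cols):
    # Stage 1: read only the PREWHERE column for each block and evaluate the
    # predicate. Stage 2: read the remaining columns only for blocks where the
    # predicate is true for at least one row.
    rows, blocks_fully_read = [], 0
    for block in blocks:
        mask = [predicate(v) for v in block[prewhere_col]]
        if not any(mask):
            continue  # whole block skipped: other columns never read
        blocks_fully_read += 1
        for i, keep in enumerate(mask):
            if keep:
                rows.append({col: block[col][i] for col in (prewhere_col, *other_cols)})
    return rows, blocks_fully_read

blocks = [
    {"B": [0, 0, 0], "C": ["1", "2", "3"]},  # no row matches: B is never read
    {"B": [0, 0, 0], "C": ["x", "x", "4"]},
]
rows, read = prewhere_scan(blocks, "C", lambda v: v == "x", ["B"])
print(len(rows), read)  # 2 matching rows, only 1 block needed column B
```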
Controlling Prewhere Manually {#controlling-prewhere-manually}
The clause has the same meaning as the
WHERE
clause. The difference is in which data is read from the table. When manually controlling
PREWHERE
, use it for filtration conditions that are used by a minority of the columns in the query, but that provide strong data filtration. This reduces the volume of data to read.
A query may simultaneously specify
PREWHERE
and
WHERE
. In this case,
PREWHERE
precedes
WHERE
.
If the
optimize_move_to_prewhere
setting is set to 0, heuristics to automatically move parts of expressions from
WHERE
to
PREWHERE
are disabled.
If query has
FINAL
modifier, the
PREWHERE
optimization is not always correct. It is enabled only if both settings
optimize_move_to_prewhere
and
optimize_move_to_prewhere_if_final
are turned on.
:::note
The
PREWHERE
section is executed before
FINAL
, so the results of
FROM ... FINAL
queries may be skewed when using
PREWHERE
with fields not in the
ORDER BY
section of a table.
:::
Limitations {#limitations}
PREWHERE
is only supported by tables from the
*MergeTree
family.
Example {#example}
```sql
CREATE TABLE mydata
(
    `A` Int64,
    `B` Int8,
    `C` String
)
ENGINE = MergeTree
ORDER BY A AS
SELECT
    number,
    0,
    if(number BETWEEN 1000 AND 2000, 'x', toString(number))
FROM numbers(10000000);

SELECT count()
FROM mydata
WHERE (B = 0) AND (C = 'x');

1 row in set. Elapsed: 0.074 sec. Processed 10.00 million rows, 168.89 MB (134.98 million rows/s., 2.28 GB/s.)

-- Let's enable tracing to see which predicates are moved to PREWHERE
SET send_logs_level = 'debug';

MergeTreeWhereOptimizer: condition "B = 0" moved to PREWHERE

-- ClickHouse automatically moves `B = 0` to PREWHERE, but that makes no sense because B is always 0.
-- Let's move the other predicate, `C = 'x'`, instead:

SELECT count()
FROM mydata
PREWHERE C = 'x'
WHERE B = 0;
1 row in set. Elapsed: 0.069 sec. Processed 10.00 million rows, 158.89 MB (144.90 million rows/s., 2.30 GB/s.)

-- This query with manual PREWHERE processes slightly less data: 158.89 MB vs 168.89 MB
```
description: 'Documentation describing the APPLY modifier which allows you to invoke some function for each row returned by an outer table expression of a query.'
sidebar_label: 'APPLY'
slug: /sql-reference/statements/select/apply-modifier
title: 'APPLY modifier'
keywords: ['APPLY', 'modifier']
doc_type: 'reference'
APPLY modifier {#apply}
Allows you to invoke some function for each row returned by an outer table expression of a query.
Syntax {#syntax}
```sql
SELECT <expr> APPLY( <func> ) FROM [db.]table_name
```
Example {#example}
```sql
CREATE TABLE columns_transformers (i Int64, j Int16, k Int64) ENGINE = MergeTree ORDER BY (i);
INSERT INTO columns_transformers VALUES (100, 10, 324), (120, 8, 23);
SELECT * APPLY(sum) FROM columns_transformers;
```

```response
ββsum(i)ββ¬βsum(j)ββ¬βsum(k)ββ
β    220 β     18 β    347 β
ββββββββββ΄βββββββββ΄βββββββββ
```
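The column-wise behavior of `APPLY` can be illustrated in Python: the named function is applied once per column of the result set.

```python
rows = [(100, 10, 324), (120, 8, 23)]

# SELECT * APPLY(sum): transpose to a column-wise view, then apply the
# function to every column of the result.
columns = list(zip(*rows))
applied = [sum(col) for col in columns]
print(applied)  # [220, 18, 347]
```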
description: 'Documentation for UNION Clause'
sidebar_label: 'UNION'
slug: /sql-reference/statements/select/union
title: 'UNION Clause'
doc_type: 'reference'
UNION Clause
You can use
UNION
with explicitly specifying
UNION ALL
or
UNION DISTINCT
.
If you don't specify
ALL
or
DISTINCT
, it will depend on the
union_default_mode
setting. The difference between
UNION ALL
and
UNION DISTINCT
is that
UNION DISTINCT
will do a distinct transform for union result, it is equivalent to
SELECT DISTINCT
from a subquery containing
UNION ALL
.
You can use
UNION
to combine any number of
SELECT
queries by extending their results. Example:
```sql
SELECT CounterID, 1 AS table, toInt64(count()) AS c
FROM test.hits
GROUP BY CounterID
UNION ALL
SELECT CounterID, 2 AS table, sum(Sign) AS c
FROM test.visits
GROUP BY CounterID
HAVING c > 0
```
Result columns are matched by their index (order inside
SELECT
). If column names do not match, names for the final result are taken from the first query.
Type casting is performed for unions. For example, if two queries being combined have the same field with non-
Nullable
and
Nullable
types from a compatible type, the resulting
UNION
has a
Nullable
type field.
Queries that are parts of
UNION
can be enclosed in round brackets.
ORDER BY
and
LIMIT
are applied to separate queries, not to the final result. If you need to apply a conversion to the final result, you can put all the queries with
UNION
in a subquery in the
FROM
clause.
If you use
UNION
without explicitly specifying
UNION ALL
or
UNION DISTINCT
, you can specify the union mode using the
union_default_mode
setting. The setting values can be
ALL
,
DISTINCT
or an empty string. However, if you use
UNION
with
union_default_mode
setting set to an empty string, it will throw an exception. The following examples demonstrate the results of queries with different values of this setting.
Query:

```sql
SET union_default_mode = 'DISTINCT';
SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 2;
```

Result:

```text
ββ1ββ
β 1 β
βββββ
ββ1ββ
β 2 β
βββββ
ββ1ββ
β 3 β
βββββ
```

Query:

```sql
SET union_default_mode = 'ALL';
SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 2;
```

Result:

```text
ββ1ββ
β 1 β
βββββ
ββ1ββ
β 2 β
βββββ
ββ1ββ
β 2 β
βββββ
ββ1ββ
β 3 β
βββββ
```
Queries that are parts of
UNION/UNION ALL/UNION DISTINCT
can be run simultaneously, and their results can be mixed together.
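The relationship stated earlier — `UNION DISTINCT` is equivalent to `SELECT DISTINCT` over a `UNION ALL` subquery — can be modeled in Python:

```python
def union_all(*selects):
    # UNION ALL: concatenate all result sets, keeping duplicates.
    return [row for select in selects for row in select]

def union_distinct(*selects):
    # UNION DISTINCT: distinct transform over the UNION ALL result
    # (dict.fromkeys preserves first-seen order while deduplicating).
    return list(dict.fromkeys(union_all(*selects)))

print(union_all([1], [2], [3], [2]))       # [1, 2, 3, 2]
print(union_distinct([1], [2], [3], [2]))  # [1, 2, 3]
```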
See Also
insert_null_as_default
setting.
union_default_mode
setting.
description: 'Documentation describing the REPLACE modifier, which allows you to replace columns in the result of a query with computed expressions.'
sidebar_label: 'REPLACE'
slug: /sql-reference/statements/select/replace-modifier
title: 'Replace modifier'
keywords: ['REPLACE', 'modifier']
doc_type: 'reference'
Replace modifier {#replace}
Allows you to specify one or more
expression aliases
.
Each alias must match a column name from the
SELECT *
statement. In the output column list, the column that matches
the alias is replaced by the expression in that
REPLACE
.
This modifier does not change the names or order of columns. However, it can change the value and the value type.
Syntax:
```sql
SELECT <expr> REPLACE( <expr> AS col_name) FROM [db.]table_name
```
Example:
```sql
SELECT * REPLACE(i + 1 AS i) FROM columns_transformers;
```

```response
ββββiββ¬ββjββ¬βββkββ
β 101 β 10 β 324 β
β 121 β  8 β  23 β
βββββββ΄βββββ΄ββββββ
```
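A small Python sketch of the `REPLACE` semantics: column names and order stay the same, and only the value of the matched column is recomputed.

```python
rows = [{"i": 100, "j": 10, "k": 324}, {"i": 120, "j": 8, "k": 23}]

# SELECT * REPLACE(i + 1 AS i): same columns, same order; i is recomputed.
replaced = [{**row, "i": row["i"] + 1} for row in rows]

print([row["i"] for row in replaced])  # [101, 121]
```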
description: 'Documentation for ORDER BY Clause'
sidebar_label: 'ORDER BY'
slug: /sql-reference/statements/select/order-by
title: 'ORDER BY Clause'
doc_type: 'reference'
ORDER BY Clause
The
ORDER BY
clause contains
a list of expressions, e.g.
ORDER BY visits, search_phrase
,
a list of numbers referring to columns in the
SELECT
clause, e.g.
ORDER BY 2, 1
, or
ALL
which means all columns of the
SELECT
clause, e.g.
ORDER BY ALL
.
To disable sorting by column numbers, set setting
enable_positional_arguments
= 0.
To disable sorting by
ALL
, set setting
enable_order_by_all
= 0.
The
ORDER BY
clause can be attributed by a
DESC
(descending) or
ASC
(ascending) modifier which determines the sorting direction.
Unless an explicit sort order is specified,
ASC
is used by default.
The sorting direction applies to a single expression, not to the entire list, e.g.
ORDER BY Visits DESC, SearchPhrase
.
Also, sorting is performed case-sensitively.
Rows with identical values for the sort expressions are returned in an arbitrary and non-deterministic order.
If the
ORDER BY
clause is omitted in a
SELECT
statement, the row order is also arbitrary and non-deterministic.
Sorting of Special Values {#sorting-of-special-values}
There are two approaches to
NaN
and
NULL
sorting order:
By default or with the
NULLS LAST
modifier: first the values, then
NaN
, then
NULL
.
With the
NULLS FIRST
modifier: first
NULL
, then
NaN
, then other values.
Example {#example}
For the table
text
ββxββ¬ββββyββ
β 1 β α΄Ία΅α΄Έα΄Έ β
β 2 β 2 β
β 1 β nan β
β 2 β 2 β
β 3 β 4 β
β 5 β 6 β
β 6 β nan β
β 7 β α΄Ία΅α΄Έα΄Έ β
β 6 β 7 β
β 8 β 9 β
βββββ΄βββββββ
Run the query
SELECT * FROM t_null_nan ORDER BY y NULLS FIRST
to get:
text
ββxββ¬ββββyββ
β 1 β α΄Ία΅α΄Έα΄Έ β
β 7 β α΄Ία΅α΄Έα΄Έ β
β 1 β nan β
β 6 β nan β
β 2 β 2 β
β 2 β 2 β
β 3 β 4 β
β 5 β 6 β
β 6 β 7 β
β 8 β 9 β
βββββ΄βββββββ
When floating point numbers are sorted, NaNs are separate from the other values. Regardless of the sorting order, NaNs come at the end. In other words, for ascending sorting they are placed as if they are larger than all the other numbers, while for descending sorting they are placed as if they are smaller than the rest.
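The default (`NULLS LAST`) order can be expressed as a Python sort key (a rough model of the rule, not ClickHouse code): plain values first, then `NaN`, then `NULL` — swap the group indices for `NULLS FIRST`.

```python
import math

def nulls_last_key(v):
    # Default / NULLS LAST order: plain values, then NaN, then NULL (None).
    if v is None:
        return (2, 0.0)
    if isinstance(v, float) and math.isnan(v):
        return (1, 0.0)
    return (0, v)

data = [None, 2.0, float("nan"), 4.0, 6.0, None]
ordered = sorted(data, key=nulls_last_key)
print(ordered)  # [2.0, 4.0, 6.0, nan, None, None]
```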
Collation Support {#collation-support}
For sorting by
String
values, you can specify collation (comparison). Example:
ORDER BY SearchPhrase COLLATE 'tr'
- for sorting by keyword in ascending order, using the Turkish alphabet, case insensitive, assuming that strings are UTF-8 encoded.
COLLATE
can be specified or not for each expression in ORDER BY independently. If
ASC
or
DESC
is specified,
COLLATE
is specified after it. When using
COLLATE
, sorting is always case-insensitive.
Collate is supported in
LowCardinality
,
Nullable
,
Array
and
Tuple
.
We only recommend using
COLLATE
for final sorting of a small number of rows, since sorting with
COLLATE
is less efficient than normal sorting by bytes.
Collation Examples {#collation-examples}
Example only with
String
values:
Input table:
text
ββxββ¬βsβββββ
β 1 β bca β
β 2 β ABC β
β 3 β 123a β
β 4 β abc β
β 5 β BCA β
βββββ΄βββββββ
Query:
sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
Result:
text
ββxββ¬βsβββββ
β 3 β 123a β
β 4 β abc β
β 2 β ABC β
β 1 β bca β
β 5 β BCA β
βββββ΄βββββββ
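A very rough Python stand-in for the case-insensitive comparison above uses `str.casefold` as the sort key. Note this is only an approximation: a real ICU collation also defines how ties between case variants are broken, which this sketch does not reproduce, so the relative order of `abc`/`ABC` may differ from ClickHouse's output.

```python
words = ["bca", "ABC", "123a", "abc", "BCA"]

# Approximate COLLATE 'en' with a case-insensitive key; tie-breaking between
# case variants is left to Python's stable sort, unlike a true collation.
ordered = sorted(words, key=str.casefold)
print(ordered)
```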
Example with
Nullable
:
Input table:
text
ββxββ¬βsβββββ
β 1 β bca β
β 2 β α΄Ία΅α΄Έα΄Έ β
β 3 β ABC β
β 4 β 123a β
β 5 β abc β
β 6 β α΄Ία΅α΄Έα΄Έ β
β 7 β BCA β
βββββ΄βββββββ
Query:
sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
Result:
text
ββxββ¬βsβββββ
β 4 β 123a β
β 5 β abc β
β 3 β ABC β
β 1 β bca β
β 7 β BCA β
β 6 β α΄Ία΅α΄Έα΄Έ β
β 2 β α΄Ία΅α΄Έα΄Έ β
βββββ΄βββββββ
Example with
Array
:
Input table:
text
ββxββ¬βsββββββββββββββ
β 1 β ['Z'] β
β 2 β ['z'] β
β 3 β ['a'] β
β 4 β ['A'] β
β 5 β ['z','a'] β
β 6 β ['z','a','a'] β
β 7 β [''] β
βββββ΄ββββββββββββββββ
Query:
sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
Result:
text
ββxββ¬βsββββββββββββββ
β 7 β [''] β
β 3 β ['a'] β
β 4 β ['A'] β
β 2 β ['z'] β
β 5 β ['z','a'] β
β 6 β ['z','a','a'] β
β 1 β ['Z'] β
βββββ΄ββββββββββββββββ
Example with
LowCardinality
string:
Input table:
response
ββxββ¬βsββββ
β 1 β Z β
β 2 β z β
β 3 β a β
β 4 β A β
β 5 β za β
β 6 β zaa β
β 7 β β
βββββ΄ββββββ
Query:
sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
Result:
response
ββxββ¬βsββββ
β 7 β β
β 3 β a β
β 4 β A β
β 2 β z β
β 1 β Z β
β 5 β za β
β 6 β zaa β
βββββ΄ββββββ
Example with
Tuple
:
response
ββxββ¬βsββββββββ
β 1 β (1,'Z') β
β 2 β (1,'z') β
β 3 β (1,'a') β
β 4 β (2,'z') β
β 5 β (1,'A') β
β 6 β (2,'Z') β
β 7 β (2,'A') β
βββββ΄ββββββββββ
Query:
sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
Result:
response
ββxββ¬βsββββββββ
β 3 β (1,'a') β
β 5 β (1,'A') β
β 2 β (1,'z') β
β 1 β (1,'Z') β
β 7 β (2,'A') β
β 4 β (2,'z') β
β 6 β (2,'Z') β
βββββ΄ββββββββββ
Implementation Details {#implementation-details}
Less RAM is used if a small enough
LIMIT
is specified in addition to
ORDER BY
. Otherwise, the amount of memory spent is proportional to the volume of data for sorting. For distributed query processing, if
GROUP BY
is omitted, sorting is partially done on remote servers, and the results are merged on the requestor server. This means that for distributed sorting, the volume of data to sort can be greater than the amount of memory on a single server.
If there is not enough RAM, it is possible to perform sorting in external memory (creating temporary files on a disk). Use the setting
max_bytes_before_external_sort
for this purpose. If it is set to 0 (the default), external sorting is disabled. If it is enabled, when the volume of data to sort reaches the specified number of bytes, the collected data is sorted and dumped into a temporary file. After all data is read, all the sorted files are merged and the results are output. Files are written to the
/var/lib/clickhouse/tmp/
directory in the config (by default, but you can use the
tmp_path
parameter to change this setting). You can also enable spilling to disk only when a query approaches its memory limit, i.e.
max_bytes_ratio_before_external_sort=0.6
will enable spilling to disk only once the query hits
60%
of the memory limit (user/server).
Running a query may use more memory than
max_bytes_before_external_sort
. For this reason, this setting must have a value significantly smaller than
max_memory_usage
. As an example, if your server has 128 GB of RAM and you need to run a single query, set
max_memory_usage
to 100 GB, and
max_bytes_before_external_sort
to 80 GB.
External sorting works much less effectively than sorting in RAM.
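The external-sort idea above — sort fixed-size chunks, spill them, then merge the sorted runs — can be sketched in Python, with in-memory lists standing in for the temporary files on disk:

```python
import heapq

def external_sort(values, run_size=4):
    # Each sorted "run" stands in for a sorted temporary file on disk;
    # heapq.merge plays the role of the final k-way merge over all runs.
    runs = [sorted(values[i:i + run_size]) for i in range(0, len(values), run_size)]
    return list(heapq.merge(*runs))

print(external_sort([9, 1, 7, 3, 8, 2, 6, 5, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```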
Optimization of Data Reading {#optimization-of-data-reading}
If
ORDER BY
expression has a prefix that coincides with the table sorting key, you can optimize the query by using the
optimize_read_in_order
setting.
When the
optimize_read_in_order
setting is enabled, the ClickHouse server uses the table index and reads the data in order of the
ORDER BY
key. This makes it possible to avoid reading all of the data when a
LIMIT
is specified, so queries on big data with a small limit are processed faster.
Optimization works with both
ASC
and
DESC
and does not work together with
GROUP BY
clause and
FINAL
modifier.
When the
optimize_read_in_order
setting is disabled, the ClickHouse server does not use the table index while processing
SELECT
queries.
Consider disabling
optimize_read_in_order
manually, when running queries that have
ORDER BY
clause, large
LIMIT
and
WHERE
condition that requires reading a huge amount of records before the queried data is found.
Optimization is supported in the following table engines:
MergeTree
(including
materialized views
),
Merge
,
Buffer
In
MaterializedView
-engine tables the optimization works with views like
SELECT ... FROM merge_tree_table ORDER BY pk
. But it is not supported in the queries like
SELECT ... FROM view ORDER BY pk
if the view query does not have the
ORDER BY
clause.
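Why reading in index order helps with a small `LIMIT` can be sketched in Python: already-sorted parts are merged lazily, and reading stops after `limit` rows instead of sorting everything (a simplified model, not ClickHouse internals):

```python
import heapq

def read_in_order(sorted_parts, limit):
    # Merge parts that are already sorted by the table's sorting key and
    # stop as soon as `limit` rows have been produced.
    out = []
    for row in heapq.merge(*sorted_parts):
        out.append(row)
        if len(out) == limit:
            break
    return out

print(read_in_order([[1, 4, 9], [2, 3, 10]], limit=4))  # [1, 2, 3, 4]
```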
ORDER BY Expr WITH FILL Modifier {#order-by-expr-with-fill-modifier}
This modifier also can be combined with
LIMIT ... WITH TIES modifier
.
WITH FILL
modifier can be set after
ORDER BY expr
with optional
FROM expr
,
TO expr
and
STEP expr
parameters.
All missing values of the
expr
column will be filled sequentially, and other columns will be filled with defaults.
To fill multiple columns, add
WITH FILL
modifier with optional parameters after each field name in
ORDER BY
section.
sql
ORDER BY expr [WITH FILL] [FROM const_expr] [TO const_expr] [STEP const_numeric_expr] [STALENESS const_numeric_expr], ... exprN [WITH FILL] [FROM expr] [TO expr] [STEP numeric_expr] [STALENESS numeric_expr]
[INTERPOLATE [(col [AS expr], ... colN [AS exprN])]]
WITH FILL
can be applied to fields with Numeric (all kinds of float, decimal, int) or Date/DateTime types. When applied to
String
fields, missing values are filled with empty strings.
When
FROM const_expr
is not defined, the sequence of filling uses the minimal
expr
field value from
ORDER BY
.
When
TO const_expr
is not defined, the sequence of filling uses the maximum
expr
field value from
ORDER BY
.
When
STEP const_numeric_expr
is defined, then
const_numeric_expr
is interpreted
as is
for numeric types, as
days
for Date type, as
seconds
for DateTime type. It also supports
INTERVAL
data type representing time and date intervals.
When
STEP const_numeric_expr
is omitted, then the sequence of filling uses
1.0
for numeric type,
1 day
for Date type and
1 second
for DateTime type.
When
STALENESS const_numeric_expr
is defined, the query will generate rows until the difference from the previous row in the original data exceeds
const_numeric_expr
.
INTERPOLATE
can be applied to columns not participating in
ORDER BY WITH FILL
. Such columns are filled based on the previous fields' values by applying
expr
. If
expr
is not present, the previous value is repeated. An omitted list results in including all allowed columns.
Example of a query without
WITH FILL
:
sql
SELECT n, source FROM (
SELECT toFloat32(number % 10) AS n, 'original' AS source
FROM numbers(10) WHERE number % 3 = 1
) ORDER BY n;
Result:
text
ββnββ¬βsourceββββ
β 1 β original β
β 4 β original β
β 7 β original β
βββββ΄βββββββββββ
Same query after applying
WITH FILL
modifier:
sql
SELECT n, source FROM (
SELECT toFloat32(number % 10) AS n, 'original' AS source
FROM numbers(10) WHERE number % 3 = 1
) ORDER BY n WITH FILL FROM 0 TO 5.51 STEP 0.5;
Result:
text
ββββnββ¬βsourceββββ
β 0 β β
β 0.5 β β
β 1 β original β
β 1.5 β β
β 2 β β
β 2.5 β β
β 3 β β
β 3.5 β β
β 4 β original β
β 4.5 β β
β 5 β β
β 5.5 β β
β 7 β original β
βββββββ΄βββββββββββ
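The gap-filling above can be modeled in Python. This is a simplified sketch: it assumes original values either sit exactly on the generated grid or lie beyond the `TO` bound, which holds for the example.

```python
def with_fill(rows, start, stop, step):
    # rows: sorted (n, source) pairs. Emit grid points from `start` up to
    # (but not including) `stop`; grid points with no original row get a
    # default payload, and originals beyond TO are still returned.
    out, idx, n = [], 0, start
    while n < stop:
        if idx < len(rows) and rows[idx][0] == n:
            out.append(rows[idx])
            idx += 1
        else:
            out.append((n, ""))
        n += step
    out.extend(rows[idx:])
    return out

filled = with_fill([(1.0, "original"), (4.0, "original"), (7.0, "original")], 0.0, 5.51, 0.5)
print(len(filled))  # 13 rows, matching the example above
```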
For the case with multiple fields
ORDER BY field2 WITH FILL, field1 WITH FILL
order of filling will follow the order of fields in the
ORDER BY
clause.
Example:
sql
SELECT
toDate((number * 10) * 86400) AS d1,
toDate(number * 86400) AS d2,
'original' AS source
FROM numbers(10)
WHERE (number % 3) = 1
ORDER BY
d2 WITH FILL,
d1 WITH FILL STEP 5;
Result:
text
ββββd1ββββββββ¬βββd2ββββββββ¬βsourceββββ
β 1970-01-11 β 1970-01-02 β original β
β 1970-01-01 β 1970-01-03 β β
β 1970-01-01 β 1970-01-04 β β
β 1970-02-10 β 1970-01-05 β original β
β 1970-01-01 β 1970-01-06 β β
β 1970-01-01 β 1970-01-07 β β
β 1970-03-12 β 1970-01-08 β original β
ββββββββββββββ΄βββββββββββββ΄βββββββββββ
Field
d1
is not filled and uses the default value, because we do not have repeated values for the
d2
value, and the sequence for
d1
can't be properly calculated.
The following query with the changed field in
ORDER BY
:
sql
SELECT
toDate((number * 10) * 86400) AS d1,
toDate(number * 86400) AS d2,
'original' AS source
FROM numbers(10)
WHERE (number % 3) = 1
ORDER BY
d1 WITH FILL STEP 5,
d2 WITH FILL;
Result:
text
ββββd1ββββββββ¬βββd2ββββββββ¬βsourceββββ
β 1970-01-11 β 1970-01-02 β original β
β 1970-01-16 β 1970-01-01 β β
β 1970-01-21 β 1970-01-01 β β
β 1970-01-26 β 1970-01-01 β β
β 1970-01-31 β 1970-01-01 β β
β 1970-02-05 β 1970-01-01 β β
β 1970-02-10 β 1970-01-05 β original β
β 1970-02-15 β 1970-01-01 β β
β 1970-02-20 β 1970-01-01 β β
β 1970-02-25 β 1970-01-01 β β
β 1970-03-02 β 1970-01-01 β β
β 1970-03-07 β 1970-01-01 β β
β 1970-03-12 β 1970-01-08 β original β
ββββββββββββββ΄βββββββββββββ΄βββββββββββ
The following query uses the
INTERVAL
data type of 1 day for each data filled on column
d1
:
sql
SELECT
toDate((number * 10) * 86400) AS d1,
toDate(number * 86400) AS d2,
'original' AS source
FROM numbers(10)
WHERE (number % 3) = 1
ORDER BY
d1 WITH FILL STEP INTERVAL 1 DAY,
d2 WITH FILL;
Result:
response
ββββββββββd1ββ¬βββββββββd2ββ¬βsourceββββ
β 1970-01-11 β 1970-01-02 β original β
β 1970-01-12 β 1970-01-01 β β
β 1970-01-13 β 1970-01-01 β β
β 1970-01-14 β 1970-01-01 β β
β 1970-01-15 β 1970-01-01 β β
β 1970-01-16 β 1970-01-01 β β
β 1970-01-17 β 1970-01-01 β β
β 1970-01-18 β 1970-01-01 β β
β 1970-01-19 β 1970-01-01 β β
β 1970-01-20 β 1970-01-01 β β
β 1970-01-21 β 1970-01-01 β β
β 1970-01-22 β 1970-01-01 β β
β 1970-01-23 β 1970-01-01 β β
β 1970-01-24 β 1970-01-01 β β
β 1970-01-25 β 1970-01-01 β β
β 1970-01-26 β 1970-01-01 β β
β 1970-01-27 β 1970-01-01 β β
β 1970-01-28 β 1970-01-01 β β
β 1970-01-29 β 1970-01-01 β β
β 1970-01-30 β 1970-01-01 β β
β 1970-01-31 β 1970-01-01 β β
β 1970-02-01 β 1970-01-01 β β
β 1970-02-02 β 1970-01-01 β β
β 1970-02-03 β 1970-01-01 β β
β 1970-02-04 β 1970-01-01 β β
β 1970-02-05 β 1970-01-01 β β
β 1970-02-06 β 1970-01-01 β β
β 1970-02-07 β 1970-01-01 β β
β 1970-02-08 β 1970-01-01 β β
β 1970-02-09 β 1970-01-01 β β
β 1970-02-10 β 1970-01-05 β original β
β 1970-02-11 β 1970-01-01 β β
β 1970-02-12 β 1970-01-01 β β
β 1970-02-13 β 1970-01-01 β β
β 1970-02-14 β 1970-01-01 β β
β 1970-02-15 β 1970-01-01 β β
β 1970-02-16 β 1970-01-01 β β
β 1970-02-17 β 1970-01-01 β β
β 1970-02-18 β 1970-01-01 β β
β 1970-02-19 β 1970-01-01 β β
β 1970-02-20 β 1970-01-01 β β
β 1970-02-21 β 1970-01-01 β β
β 1970-02-22 β 1970-01-01 β β
β 1970-02-23 β 1970-01-01 β β
β 1970-02-24 β 1970-01-01 β β
β 1970-02-25 β 1970-01-01 β β
β 1970-02-26 β 1970-01-01 β β
β 1970-02-27 β 1970-01-01 β β
β 1970-02-28 β 1970-01-01 β β
β 1970-03-01 β 1970-01-01 β β
β 1970-03-02 β 1970-01-01 β β
β 1970-03-03 β 1970-01-01 β β
β 1970-03-04 β 1970-01-01 β β
β 1970-03-05 β 1970-01-01 β β
β 1970-03-06 β 1970-01-01 β β
β 1970-03-07 β 1970-01-01 β β
β 1970-03-08 β 1970-01-01 β β
β 1970-03-09 β 1970-01-01 β β
β 1970-03-10 β 1970-01-01 β β
β 1970-03-11 β 1970-01-01 β β
β 1970-03-12 β 1970-01-08 β original β
ββββββββββββββ΄βββββββββββββ΄βββββββββββ
Example of a query without
STALENESS
:
sql
SELECT number AS key, 5 * number value, 'original' AS source
FROM numbers(16) WHERE key % 5 == 0
ORDER BY key WITH FILL;
Result:
text
ββkeyββ¬βvalueββ¬βsourceββββ
1. β 0 β 0 β original β
2. β 1 β 0 β β
3. β 2 β 0 β β
4. β 3 β 0 β β
5. β 4 β 0 β β
6. β 5 β 25 β original β
7. β 6 β 0 β β
8. β 7 β 0 β β
9. β 8 β 0 β β
10. β 9 β 0 β β
11. β 10 β 50 β original β
12. β 11 β 0 β β
13. β 12 β 0 β β
14. β 13 β 0 β β
15. β 14 β 0 β β
16. β 15 β 75 β original β
βββββββ΄ββββββββ΄βββββββββββ
Same query after applying
STALENESS 3
:
sql
SELECT number AS key, 5 * number value, 'original' AS source
FROM numbers(16) WHERE key % 5 == 0
ORDER BY key WITH FILL STALENESS 3;
Result:
text
ββkeyββ¬βvalueββ¬βsourceββββ
1. β 0 β 0 β original β
2. β 1 β 0 β β
3. β 2 β 0 β β
4. β 5 β 25 β original β
5. β 6 β 0 β β
6. β 7 β 0 β β
7. β 10 β 50 β original β
8. β 11 β 0 β β
9. β 12 β 0 β β
10. β 15 β 75 β original β
11. β 16 β 0 β β
12. β 17 β 0 β β
βββββββ΄ββββββββ΄βββββββββββ
Example of a query without
INTERPOLATE
:
sql
SELECT n, source, inter FROM (
SELECT toFloat32(number % 10) AS n, 'original' AS source, number AS inter
FROM numbers(10) WHERE number % 3 = 1
) ORDER BY n WITH FILL FROM 0 TO 5.51 STEP 0.5;
Result:
text
ββββnββ¬βsourceββββ¬βinterββ
β 0 β β 0 β
β 0.5 β β 0 β
β 1 β original β 1 β
β 1.5 β β 0 β
β 2 β β 0 β
β 2.5 β β 0 β
β 3 β β 0 β
β 3.5 β β 0 β
β 4 β original β 4 β
β 4.5 β β 0 β
β 5 β β 0 β
β 5.5 β β 0 β
β 7 β original β 7 β
βββββββ΄βββββββββββ΄ββββββββ
Same query after applying
INTERPOLATE
:
sql
SELECT n, source, inter FROM (
SELECT toFloat32(number % 10) AS n, 'original' AS source, number AS inter
FROM numbers(10) WHERE number % 3 = 1
) ORDER BY n WITH FILL FROM 0 TO 5.51 STEP 0.5 INTERPOLATE (inter AS inter + 1);
Result:
text
ββββnββ¬βsourceββββ¬βinterββ
β 0 β β 0 β
β 0.5 β β 0 β
β 1 β original β 1 β
β 1.5 β β 2 β
β 2 β β 3 β
β 2.5 β β 4 β
β 3 β β 5 β
β 3.5 β β 6 β
β 4 β original β 4 β
β 4.5 β β 5 β
β 5 β β 6 β
β 5.5 β β 7 β
β 7 β original β 7 β
βββββββ΄βββββββββββ΄ββββββββ
Filling grouped by sorting prefix {#filling-grouped-by-sorting-prefix}
It can be useful to fill rows independently for groups that share the same values in particular columns; a typical example is filling missing values in time series.
Assume there is the following time series table:
sql
CREATE TABLE timeseries
(
    sensor_id UInt64,
    timestamp DateTime64(3, 'UTC'),
    value Float64
)
ENGINE = Memory;
SELECT * FROM timeseries;
ββsensor_idββ¬βββββββββββββββtimestampββ¬βvalueββ
β 234 β 2021-12-01 00:00:03.000 β 3 β
β 432 β 2021-12-01 00:00:01.000 β 1 β
β 234 β 2021-12-01 00:00:07.000 β 7 β
β 432 β 2021-12-01 00:00:05.000 β 5 β
βββββββββββββ΄ββββββββββββββββββββββββββ΄ββββββββ
Suppose we'd like to fill missing values for each sensor independently with a 1-second interval.
This can be achieved by using the `sensor_id` column as a sorting prefix for the filling column `timestamp`:
sql
SELECT *
FROM timeseries
ORDER BY
sensor_id,
timestamp WITH FILL
INTERPOLATE ( value AS 9999 )
ββsensor_idββ¬βββββββββββββββtimestampββ¬βvalueββ
β 234 β 2021-12-01 00:00:03.000 β 3 β
β 234 β 2021-12-01 00:00:04.000 β 9999 β
β 234 β 2021-12-01 00:00:05.000 β 9999 β
β 234 β 2021-12-01 00:00:06.000 β 9999 β
β 234 β 2021-12-01 00:00:07.000 β 7 β
β 432 β 2021-12-01 00:00:01.000 β 1 β
β 432 β 2021-12-01 00:00:02.000 β 9999 β
β 432 β 2021-12-01 00:00:03.000 β 9999 β
β 432 β 2021-12-01 00:00:04.000 β 9999 β
β 432 β 2021-12-01 00:00:05.000 β 5 β
βββββββββββββ΄ββββββββββββββββββββββββββ΄ββββββββ
Here, the
value
column was interpolated with
9999
just to make filled rows more noticeable.
This behavior is controlled by setting
use_with_fill_by_sorting_prefix (enabled by default).
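For comparison, disabling the setting makes WITH FILL treat the whole result as a single range instead of restarting for each sorting-prefix value; a sketch of the same query with the setting turned off:

```sql
-- With the sorting-prefix behavior disabled, filling no longer
-- restarts for each sensor_id group
SELECT *
FROM timeseries
ORDER BY
    sensor_id,
    timestamp WITH FILL
INTERPOLATE ( value AS 9999 )
SETTINGS use_with_fill_by_sorting_prefix = 0;
```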
Related content {#related-content}
Blog:
Working with time series data in ClickHouse
description: 'Documentation for FROM Clause'
sidebar_label: 'FROM'
slug: /sql-reference/statements/select/from
title: 'FROM Clause'
doc_type: 'reference'
FROM Clause
The
FROM
clause specifies the source to read data from:
Table
Subquery
Table function
JOIN
and
ARRAY JOIN
clauses may also be used to extend the functionality of the
FROM
clause.
Subquery is another
SELECT
query that may be specified in parenthesis inside
FROM
clause.
The
FROM
can contain multiple data sources, separated by commas, which is equivalent of performing
CROSS JOIN
on them.
FROM
can optionally appear before a
SELECT
clause. This is a ClickHouse-specific extension of standard SQL which makes
SELECT
statements easier to read. Example:
sql
FROM table
SELECT *
FINAL Modifier {#final-modifier}
When
FINAL
is specified, ClickHouse fully merges the data before returning the result. This also performs all data transformations that happen during merges for the given table engine.
It is applicable when selecting data from tables using the following table engines:
-
ReplacingMergeTree
-
SummingMergeTree
-
AggregatingMergeTree
-
CollapsingMergeTree
-
VersionedCollapsingMergeTree
SELECT
queries with
FINAL
are executed in parallel. The
max_final_threads
setting limits the number of threads used.
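The setting can also be applied per query; a sketch (mytable stands in for any table using one of the engines above):

```sql
-- Limit FINAL processing to at most 4 threads for this query
SELECT x, y
FROM mytable
FINAL
SETTINGS max_final_threads = 4;
```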
Drawbacks {#drawbacks}
Queries that use
FINAL
execute slightly slower than similar queries that do not use
FINAL
because:
Data is merged during query execution.
Queries with
FINAL
may read primary key columns in addition to the columns specified in the query.
FINAL
requires additional compute and memory resources because the processing that normally would occur at merge time must occur in memory at the time of the query. However, using FINAL is sometimes necessary in order to produce accurate results (as data may not yet be fully merged). It is less expensive than running
OPTIMIZE
to force a merge.
As an alternative to using
FINAL
, it is sometimes possible to use different queries that assume the background processes of the
MergeTree
engine have not yet occurred and deal with it by applying an aggregation (for example, to discard duplicates). If you need to use
FINAL
in your queries in order to get the required results, it is okay to do so but be aware of the additional processing required.
FINAL
can be applied automatically using
FINAL
setting to all tables in a query using a session or a user profile.
Example Usage {#example-usage}
Using the
FINAL
keyword
sql
SELECT x, y FROM mytable FINAL WHERE x > 1;
Using
FINAL
as a query-level setting
sql
SELECT x, y FROM mytable WHERE x > 1 SETTINGS final = 1;
Using
FINAL
as a session-level setting
sql
SET final = 1;
SELECT x, y FROM mytable WHERE x > 1;
Implementation Details {#implementation-details}
If the
FROM
clause is omitted, data will be read from the
system.one
table.
The
system.one
table contains exactly one row (this table fulfills the same purpose as the DUAL table found in other DBMSs).
To execute a query, all the columns listed in the query are extracted from the appropriate table. Any columns not needed for the external query are thrown out of the subqueries.
If a query does not list any columns (for example,
SELECT count() FROM t
), some column is extracted from the table anyway (the smallest one is preferred), in order to calculate the number of rows.
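For example, these two queries are equivalent; both read the single row of system.one:

```sql
-- With no FROM clause, ClickHouse reads from system.one implicitly
SELECT 1;
SELECT 1 FROM system.one;
```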
description: 'Documentation for LIMIT BY Clause'
sidebar_label: 'LIMIT BY'
slug: /sql-reference/statements/select/limit-by
title: 'LIMIT BY Clause'
doc_type: 'reference'
LIMIT BY Clause
A query with the
LIMIT n BY expressions
clause selects the first
n
rows for each distinct value of
expressions
. The key for
LIMIT BY
can contain any number of
expressions
.
ClickHouse supports the following syntax variants:
LIMIT [offset_value, ]n BY expressions
LIMIT n OFFSET offset_value BY expressions
During query processing, ClickHouse selects data ordered by sorting key. The sorting key is set explicitly using an
ORDER BY
clause or implicitly as a property of the table engine (row order is only guaranteed when using
ORDER BY
, otherwise the row blocks will not be ordered due to multi-threading). Then ClickHouse applies
LIMIT n BY expressions
and returns the first
n
rows for each distinct combination of
expressions
. If
OFFSET
is specified, then for each data block that belongs to a distinct combination of
expressions
, ClickHouse skips
offset_value
number of rows from the beginning of the block and returns a maximum of
n
rows as a result. If
offset_value
is bigger than the number of rows in the data block, ClickHouse returns zero rows from the block.
:::note
LIMIT BY
is not related to
LIMIT
. They can both be used in the same query.
:::
If you want to use column numbers instead of column names in the
LIMIT BY
clause, enable the setting
enable_positional_arguments
.
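For example, with the sample table from the next section, the following sketch refers to the first selected column (id) by its position:

```sql
-- "1" means the first column of the SELECT list, i.e. id
SELECT * FROM limit_by ORDER BY id, val
LIMIT 2 BY 1
SETTINGS enable_positional_arguments = 1;
```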
Examples {#examples}
Sample table:
sql
CREATE TABLE limit_by(id Int, val Int) ENGINE = Memory;
INSERT INTO limit_by VALUES (1, 10), (1, 11), (1, 12), (2, 20), (2, 21);
Queries:
sql
SELECT * FROM limit_by ORDER BY id, val LIMIT 2 BY id
text
ββidββ¬βvalββ
β 1 β 10 β
β 1 β 11 β
β 2 β 20 β
β 2 β 21 β
ββββββ΄ββββββ
sql
SELECT * FROM limit_by ORDER BY id, val LIMIT 1, 2 BY id
text
ββidββ¬βvalββ
β 1 β 11 β
β 1 β 12 β
β 2 β 21 β
ββββββ΄ββββββ
The
SELECT * FROM limit_by ORDER BY id, val LIMIT 2 OFFSET 1 BY id
query returns the same result.
The following query returns the top 5 referrers for each
domain, device_type
pair with a maximum of 100 rows in total (
LIMIT n BY + LIMIT
).
sql
SELECT
domainWithoutWWW(URL) AS domain,
domainWithoutWWW(REFERRER_URL) AS referrer,
device_type,
count() cnt
FROM hits
GROUP BY domain, referrer, device_type
ORDER BY cnt DESC
LIMIT 5 BY domain, device_type
LIMIT 100
LIMIT BY ALL {#limit-by-all}
LIMIT BY ALL
is equivalent to listing all the SELECT-ed expressions that are not aggregate functions.
For example:
sql
SELECT col1, col2, col3 FROM table LIMIT 2 BY ALL
is the same as
sql
SELECT col1, col2, col3 FROM table LIMIT 2 BY col1, col2, col3
As a special case, if a function has both aggregate functions and other fields as its arguments, the
LIMIT BY
keys will contain the maximum set of non-aggregate fields that can be extracted from it.
For example:
sql
SELECT substring(a, 4, 2), substring(substring(a, 1, 2), 1, count(b)) FROM t LIMIT 2 BY ALL
is the same as
sql
SELECT substring(a, 4, 2), substring(substring(a, 1, 2), 1, count(b)) FROM t LIMIT 2 BY substring(a, 4, 2), substring(a, 1, 2)
Examples {#examples-limit-by-all}
Using
LIMIT BY ALL
:
sql
SELECT id, val FROM limit_by ORDER BY id, val LIMIT 2 BY ALL
This is equivalent to:
sql
SELECT id, val FROM limit_by ORDER BY id, val LIMIT 2 BY id, val
description: 'Documentation for GROUP BY Clause'
sidebar_label: 'GROUP BY'
slug: /sql-reference/statements/select/group-by
title: 'GROUP BY Clause'
doc_type: 'reference'
GROUP BY Clause
GROUP BY
clause switches the
SELECT
query into an aggregation mode, which works as follows:
GROUP BY
clause contains a list of expressions (or a single expression, which is considered to be the list of length one). This list acts as a "grouping key", while each individual expression will be referred to as a "key expression".
All the expressions in the
SELECT
,
HAVING
, and
ORDER BY
clauses
must
be calculated based on key expressions
or
on
aggregate functions
over non-key expressions (including plain columns). In other words, each column selected from the table must be used either in a key expression or inside an aggregate function, but not both.
The result of an aggregating
SELECT
query contains as many rows as there were unique values of the "grouping key" in the source table. Usually, this significantly reduces the row count, often by orders of magnitude, but not necessarily: the row count stays the same if all "grouping key" values were distinct.
When you want to group data in the table by column numbers instead of column names, enable the setting
enable_positional_arguments
.
:::note
There's an additional way to run aggregation over a table. If a query contains table columns only inside aggregate functions, the
GROUP BY clause
can be omitted, and aggregation by an empty set of keys is assumed. Such queries always return exactly one row.
:::
NULL Processing {#null-processing}
For grouping, ClickHouse interprets
NULL
as a value, and
NULL==NULL
. It differs from
NULL
processing in most other contexts.
Here's an example to show what this means.
Assume you have this table:
text
┌─x─┬────y─┐
│ 1 │    2 │
│ 2 │ ᴺᵁᴸᴸ │
│ 3 │    2 │
│ 3 │    3 │
│ 3 │ ᴺᵁᴸᴸ │
└───┴──────┘
The query
SELECT sum(x), y FROM t_null_big GROUP BY y
results in:
text
┌─sum(x)─┬────y─┐
│      4 │    2 │
│      3 │    3 │
│      5 │ ᴺᵁᴸᴸ │
└────────┴──────┘
You can see that
GROUP BY
for
y = NULL
summed up
x
, as if
NULL
is this value.
If you pass several keys to
GROUP BY
, the result will give you all the combinations of the selection, as if
NULL
were a specific value.
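For example, grouping the same table by both columns produces one group per distinct (x, y) pair, with y = NULL acting as an ordinary key value (a sketch; output order is not guaranteed):

```sql
-- Each distinct (x, y) combination, including those where y is NULL,
-- forms its own group
SELECT x, y, count() FROM t_null_big GROUP BY x, y;
```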
ROLLUP Modifier {#rollup-modifier}
ROLLUP
modifier is used to calculate subtotals for the key expressions, based on their order in the
GROUP BY
list. The subtotals rows are added after the result table.
The subtotals are calculated in the reverse order: at first subtotals are calculated for the last key expression in the list, then for the previous one, and so on up to the first key expression.
In the subtotals rows the values of already "grouped" key expressions are set to
0
or empty line.
:::note
Mind that
HAVING
clause can affect the subtotals results.
:::
Example
Consider the table t:
text
ββyearββ¬βmonthββ¬βdayββ
β 2019 β 1 β 5 β
β 2019 β 1 β 15 β
β 2020 β 1 β 5 β
β 2020 β 1 β 15 β
β 2020 β 10 β 5 β
β 2020 β 10 β 15 β
ββββββββ΄ββββββββ΄ββββββ
Query:
sql
SELECT year, month, day, count(*) FROM t GROUP BY ROLLUP(year, month, day);
As
GROUP BY
section has three key expressions, the result contains four tables with subtotals "rolled up" from right to left:
GROUP BY year, month, day
;
GROUP BY year, month
(and
day
column is filled with zeros);
GROUP BY year
(now
month, day
columns are both filled with zeros);
and totals (and all three key expression columns are zeros).
text
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 2020 β 10 β 15 β 1 β
β 2020 β 1 β 5 β 1 β
β 2019 β 1 β 5 β 1 β
β 2020 β 1 β 15 β 1 β
β 2019 β 1 β 15 β 1 β
β 2020 β 10 β 5 β 1 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 2019 β 1 β 0 β 2 β
β 2020 β 1 β 0 β 2 β
β 2020 β 10 β 0 β 2 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 2019 β 0 β 0 β 2 β
β 2020 β 0 β 0 β 4 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 0 β 0 β 0 β 6 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
The same query also can be written using
WITH
keyword.
sql
SELECT year, month, day, count(*) FROM t GROUP BY year, month, day WITH ROLLUP;
See also
group_by_use_nulls
setting for SQL standard compatibility.
CUBE Modifier {#cube-modifier}
CUBE
modifier is used to calculate subtotals for every combination of the key expressions in the
GROUP BY
list. The subtotals rows are added after the result table.
In the subtotals rows the values of all "grouped" key expressions are set to
0
or empty line.
:::note
Mind that
HAVING
clause can affect the subtotals results.
:::
Example
Consider the table t:
text
ββyearββ¬βmonthββ¬βdayββ
β 2019 β 1 β 5 β
β 2019 β 1 β 15 β
β 2020 β 1 β 5 β
β 2020 β 1 β 15 β
β 2020 β 10 β 5 β
β 2020 β 10 β 15 β
ββββββββ΄ββββββββ΄ββββββ
Query:
sql
SELECT year, month, day, count(*) FROM t GROUP BY CUBE(year, month, day);
As
GROUP BY
section has three key expressions, the result contains eight tables with subtotals for all key expression combinations:
GROUP BY year, month, day
GROUP BY year, month
GROUP BY year, day
GROUP BY year
GROUP BY month, day
GROUP BY month
GROUP BY day
and totals.
Columns, excluded from
GROUP BY
, are filled with zeros.
text
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 2020 β 10 β 15 β 1 β
β 2020 β 1 β 5 β 1 β
β 2019 β 1 β 5 β 1 β
β 2020 β 1 β 15 β 1 β
β 2019 β 1 β 15 β 1 β
β 2020 β 10 β 5 β 1 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 2019 β 1 β 0 β 2 β
β 2020 β 1 β 0 β 2 β
β 2020 β 10 β 0 β 2 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 2020 β 0 β 5 β 2 β
β 2019 β 0 β 5 β 1 β
β 2020 β 0 β 15 β 2 β
β 2019 β 0 β 15 β 1 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 2019 β 0 β 0 β 2 β
β 2020 β 0 β 0 β 4 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 0 β 1 β 5 β 2 β
β 0 β 10 β 15 β 1 β
β 0 β 10 β 5 β 1 β
β 0 β 1 β 15 β 2 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 0 β 1 β 0 β 4 β
β 0 β 10 β 0 β 2 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 0 β 0 β 5 β 3 β
β 0 β 0 β 15 β 3 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
ββyearββ¬βmonthββ¬βdayββ¬βcount()ββ
β 0 β 0 β 0 β 6 β
ββββββββ΄ββββββββ΄ββββββ΄ββββββββββ
The same query also can be written using
WITH
keyword.
sql
SELECT year, month, day, count(*) FROM t GROUP BY year, month, day WITH CUBE;
See also
group_by_use_nulls
setting for SQL standard compatibility.
WITH TOTALS Modifier {#with-totals-modifier}
If the
WITH TOTALS
modifier is specified, another row will be calculated. This row will have key columns containing default values (zeros or empty lines), and columns of aggregate functions with the values calculated across all the rows (the "total" values).
This extra row is only produced in
JSON*
,
TabSeparated*
, and
Pretty*
formats, separately from the other rows:
In
XML
and
JSON*
formats, this row is output as a separate 'totals' field.
In
TabSeparated*
,
CSV*
and
Vertical
formats, the row comes after the main result, preceded by an empty row (after the other data).
In
Pretty*
formats, the row is output as a separate table after the main result.
In
Template
format, the row is output according to specified template.
In the other formats it is not available.
:::note
totals is output in the results of
SELECT
queries, and is not output in
INSERT INTO ... SELECT
.
:::
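For example, using the t_null_big table from the NULL-processing section above, a sketch of a query producing a totals row:

```sql
-- The totals row is emitted separately, e.g. as the 'totals'
-- field when a JSON format is used
SELECT y, sum(x)
FROM t_null_big
GROUP BY y
WITH TOTALS
FORMAT JSON;
```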
WITH TOTALS
can be run in different ways when
HAVING
is present. The behavior depends on the
totals_mode
setting.
Configuring Totals Processing {#configuring-totals-processing}
By default,
totals_mode = 'before_having'
. In this case, 'totals' is calculated across all rows, including the ones that do not pass through HAVING and
max_rows_to_group_by
.
The other alternatives include only the rows that pass through HAVING in 'totals', and behave differently with the setting
max_rows_to_group_by
and
group_by_overflow_mode = 'any'
.
after_having_exclusive
β Don't include rows that didn't pass through
max_rows_to_group_by
. In other words, 'totals' will have less than or the same number of rows as it would if
max_rows_to_group_by
were omitted.
after_having_inclusive
β Include all the rows that didn't pass through 'max_rows_to_group_by' in 'totals'. In other words, 'totals' will have more than or the same number of rows as it would if
max_rows_to_group_by
were omitted.
after_having_auto
β Count the number of rows that passed through HAVING. If it is more than a certain amount (by default, 50%), include all the rows that didn't pass through 'max_rows_to_group_by' in 'totals'. Otherwise, do not include them.
totals_auto_threshold
β By default, 0.5. The coefficient for
after_having_auto
.
If
max_rows_to_group_by
and
group_by_overflow_mode = 'any'
are not used, all variations of
after_having
are the same, and you can use any of them (for example,
after_having_auto
).
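For example, a sketch of a query that computes the totals only over rows that pass HAVING:

```sql
-- With an after_having_* mode, rows filtered out by HAVING
-- do not contribute to the totals row
SELECT y, sum(x)
FROM t_null_big
GROUP BY y
WITH TOTALS
HAVING sum(x) > 3
SETTINGS totals_mode = 'after_having_inclusive';
```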
You can use
WITH TOTALS
in subqueries, including subqueries in the
JOIN
clause (in this case, the respective total values are combined).
GROUP BY ALL {#group-by-all}
GROUP BY ALL
is equivalent to listing all the SELECT-ed expressions that are not aggregate functions.
For example:
sql
SELECT
a * 2,
b,
count(c),
FROM t
GROUP BY ALL
is the same as
sql
SELECT
a * 2,
b,
count(c),
FROM t
GROUP BY a * 2, b
As a special case, if a function has both aggregate functions and other fields as its arguments, the
GROUP BY
keys will contain the maximum set of non-aggregate fields that can be extracted from it.
For example:
sql
SELECT
substring(a, 4, 2),
substring(substring(a, 1, 2), 1, count(b))
FROM t
GROUP BY ALL
is the same as
sql
SELECT
substring(a, 4, 2),
substring(substring(a, 1, 2), 1, count(b))
FROM t
GROUP BY substring(a, 4, 2), substring(a, 1, 2)
Examples {#examples}
Example:
sql
SELECT
count(),
median(FetchTiming > 60 ? 60 : FetchTiming),
count() - sum(Refresh)
FROM hits
Unlike MySQL (and in conformance with standard SQL), you cannot get a value of a column that is not in a key or an aggregate function (except for constant expressions). To work around this, you can use the 'any' aggregate function (which returns the first encountered value) or 'min'/'max'.
Example:
sql
SELECT
domainWithoutWWW(URL) AS domain,
count(),
any(Title) AS title -- getting the first occurred page header for each domain.
FROM hits
GROUP BY domain
For every different key value encountered,
GROUP BY
calculates a set of aggregate function values.
GROUPING SETS modifier {#grouping-sets-modifier}
This is the most general modifier.
It allows manually specifying several aggregation key sets (grouping sets).
Aggregation is performed separately for each grouping set, and after that, all results are combined.
If a column is not present in a grouping set, it is filled with a default value.
In other words, modifiers described above can be represented via
GROUPING SETS
.
Despite the fact that queries with
ROLLUP
,
CUBE
and
GROUPING SETS
modifiers are syntactically equal, they may perform differently.
While
GROUPING SETS
tries to execute everything in parallel,
ROLLUP
and
CUBE
execute the final merging of the aggregates in a single thread.
When source columns contain default values, it might be hard to distinguish whether a row is part of an aggregation that uses those columns as keys.
To solve this problem, use the
GROUPING
function.
Example
The following two queries are equivalent.
```sql
-- Query 1
SELECT year, month, day, count(*) FROM t GROUP BY year, month, day WITH ROLLUP;
-- Query 2
SELECT year, month, day, count(*) FROM t GROUP BY
GROUPING SETS
(
(year, month, day),
(year, month),
(year),
()
);
```
See also
group_by_use_nulls
setting for SQL standard compatibility.
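The GROUPING function mentioned above can be used like this (a sketch; it returns a bitmask showing which arguments are keys of the current grouping set, with the exact bit semantics depending on the force_grouping_standard_compatibility setting):

```sql
-- g distinguishes subtotal rows from rows whose key columns
-- genuinely contain default values
SELECT
    year,
    month,
    grouping(year, month) AS g,
    count(*)
FROM t
GROUP BY GROUPING SETS ((year, month), (year), ());
```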
Implementation Details {#implementation-details}
Aggregation is one of the most important features of a column-oriented DBMS, and thus its implementation is one of the most heavily optimized parts of ClickHouse. By default, aggregation is done in memory using a hash table. It has 40+ specializations that are chosen automatically depending on "grouping key" data types.
GROUP BY Optimization Depending on Table Sorting Key {#group-by-optimization-depending-on-table-sorting-key}
Aggregation can be performed more effectively if a table is sorted by some key and the
GROUP BY
expression contains at least a prefix of the sorting key or injective functions of it. In this case, when a new key is read from the table, the intermediate aggregation result can be finalized and sent to the client. This behaviour is enabled by the
optimize_aggregation_in_order
setting. Such optimization reduces memory usage during aggregation, but in some cases may slow down query execution.
GROUP BY in External Memory {#group-by-in-external-memory}
You can enable dumping temporary data to the disk to restrict memory usage during
GROUP BY
.
The
max_bytes_before_external_group_by
setting determines the threshold RAM consumption for dumping
GROUP BY
temporary data to the file system. If set to 0 (the default), it is disabled.
Alternatively, you can set
max_bytes_ratio_before_external_group_by
, which enables
GROUP BY
in external memory only once the query reaches a certain threshold of used memory.
When using
max_bytes_before_external_group_by
, we recommend that you set
max_memory_usage
about twice as high (or
max_bytes_ratio_before_external_group_by=0.5
). This is necessary because there are two stages to aggregation: reading the data and forming intermediate data (1) and merging the intermediate data (2). Dumping data to the file system can only occur during stage 1. If the temporary data wasn't dumped, then stage 2 might require up to the same amount of memory as in stage 1.
For example, if
max_memory_usage
was set to 10000000000 and you want to use external aggregation, it makes sense to set
max_bytes_before_external_group_by
to 10000000000, and
max_memory_usage
to 20000000000. When external aggregation is triggered (if there was at least one dump of temporary data), maximum consumption of RAM is only slightly more than
max_bytes_before_external_group_by
.
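The configuration from this example can also be applied per query (a sketch; big_table is a placeholder name):

```sql
SELECT key, count()
FROM big_table
GROUP BY key
SETTINGS
    max_bytes_before_external_group_by = 10000000000,
    max_memory_usage = 20000000000;
```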
With distributed query processing, external aggregation is performed on remote servers. In order for the requester server to use only a small amount of RAM, set
distributed_aggregation_memory_efficient
to 1.
Merging data flushed to disk, as well as merging results from remote servers when the
distributed_aggregation_memory_efficient
setting is enabled, consumes up to
1/256 * the_number_of_threads
of the total amount of RAM.
When external aggregation is enabled, if there was less than
max_bytes_before_external_group_by
of data (i.e. data was not flushed), the query runs just as fast as without external aggregation. If any temporary data was flushed, the run time will be several times longer (approximately three times).
If you have an
ORDER BY
with a
LIMIT
after
GROUP BY
, then the amount of used RAM depends on the amount of data in
LIMIT
, not in the whole table. But if the
ORDER BY
does not have
LIMIT
, do not forget to enable external sorting (
max_bytes_before_external_sort
).
description: 'Documentation for SELECT Query'
sidebar_label: 'SELECT'
sidebar_position: 32
slug: /sql-reference/statements/select/
title: 'SELECT Query'
doc_type: 'reference'
SELECT Query
SELECT
queries perform data retrieval. By default, the requested data is returned to the client, while in conjunction with
INSERT INTO
it can be forwarded to a different table.
Syntax {#syntax}
sql
[WITH expr_list(subquery)]
SELECT [DISTINCT [ON (column1, column2, ...)]] expr_list
[FROM [db.]table | (subquery) | table_function] [FINAL]
[SAMPLE sample_coeff]
[ARRAY JOIN ...]
[GLOBAL] [ANY|ALL|ASOF] [INNER|LEFT|RIGHT|FULL|CROSS] [OUTER|SEMI|ANTI] JOIN (subquery)|table [(alias1 [, alias2 ...])] (ON <expr_list>)|(USING <column_list>)
[PREWHERE expr]
[WHERE expr]
[GROUP BY expr_list] [WITH ROLLUP|WITH CUBE] [WITH TOTALS]
[HAVING expr]
[WINDOW window_expr_list]
[QUALIFY expr]
[ORDER BY expr_list] [WITH FILL] [FROM expr] [TO expr] [STEP expr] [INTERPOLATE [(expr_list)]]
[LIMIT [offset_value, ]n BY columns]
[LIMIT [n, ]m] [WITH TIES]
[SETTINGS ...]
[UNION ...]
[INTO OUTFILE filename [COMPRESSION type [LEVEL level]] ]
[FORMAT format]
All clauses are optional, except for the required list of expressions immediately after
SELECT
which is covered in more detail
below
.
Specifics of each optional clause are covered in separate sections, which are listed in the same order as they are executed:
WITH clause
SELECT clause
DISTINCT clause
FROM clause
SAMPLE clause
JOIN clause
PREWHERE clause
WHERE clause
WINDOW clause
GROUP BY clause
LIMIT BY clause
HAVING clause
QUALIFY clause
LIMIT clause
OFFSET clause
UNION clause
INTERSECT clause
EXCEPT clause
INTO OUTFILE clause
FORMAT clause
SELECT Clause {#select-clause}
Expressions
specified in the
SELECT
clause are calculated after all the operations in the clauses described above are finished. These expressions work as if they apply to separate rows in the result. If expressions in the
SELECT
clause contain aggregate functions, then ClickHouse processes aggregate functions and expressions used as their arguments during the
GROUP BY
aggregation.
If you want to include all columns in the result, use the asterisk (
*
) symbol. For example,
SELECT * FROM ...
.
Dynamic column selection {#dynamic-column-selection}
Dynamic column selection (also known as a COLUMNS expression) allows you to match some columns in a result with a
re2
regular expression.
sql
COLUMNS('regexp')
For example, consider the table:
sql
CREATE TABLE default.col_names (aa Int8, ab Int8, bc Int8) ENGINE = TinyLog
The following query selects data from all the columns containing the
a
symbol in their name.
sql
SELECT COLUMNS('a') FROM col_names
text
ββaaββ¬βabββ
β 1 β 1 β
ββββββ΄βββββ
The selected columns are not returned in alphabetical order.
You can use multiple
COLUMNS
expressions in a query and apply functions to them.
For example: | {"source_file": "index.md"} | [
-0.0006823174771852791,
0.03882407397031784,
0.023248160257935524,
0.04061656445264816,
-0.021455654874444008,
-0.00801974069327116,
0.05335298180580139,
0.08604602515697479,
-0.04033266007900238,
-0.002691259840503335,
0.05576296150684357,
-0.03814414516091347,
0.10589510202407837,
-0.143... |
010446ea-ec99-4bcf-bdf5-b7169be53dce | The selected columns are not returned in alphabetical order.
You can use multiple
COLUMNS
expressions in a query and apply functions to them.
For example:
sql
SELECT COLUMNS('a'), COLUMNS('c'), toTypeName(COLUMNS('c')) FROM col_names
text
ββaaββ¬βabββ¬βbcββ¬βtoTypeName(bc)ββ
β 1 β 1 β 1 β Int8 β
ββββββ΄βββββ΄βββββ΄βββββββββββββββββ
Each column returned by the
COLUMNS
expression is passed to the function as a separate argument. You can also pass other arguments to the function if it supports them. Be careful when using functions. If a function does not support the number of arguments you have passed to it, ClickHouse throws an exception.
For example:
sql
SELECT COLUMNS('a') + COLUMNS('c') FROM col_names
text
Received exception from server (version 19.14.1):
Code: 42. DB::Exception: Received from localhost:9000. DB::Exception: Number of arguments for function plus does not match: passed 3, should be 2.
In this example,
COLUMNS('a')
returns two columns:
aa
and
ab
.
COLUMNS('c')
returns the
bc
column. The
+
operator can't be applied to 3 arguments, so ClickHouse throws an exception with the relevant message.
Columns that matched the
COLUMNS
expression can have different data types. If
COLUMNS
does not match any columns and is the only expression in
SELECT
, ClickHouse throws an exception.
Asterisk {#asterisk}
You can put an asterisk in any part of a query instead of an expression. When the query is analyzed, the asterisk is expanded to a list of all table columns (excluding the
MATERIALIZED
and
ALIAS
columns). There are only a few cases when using an asterisk is justified:
When creating a table dump.
For tables containing just a few columns, such as system tables.
For getting information about what columns are in a table. In this case, set
LIMIT 1
. But it is better to use the
DESC TABLE
query.
When there is strong filtration on a small number of columns using
PREWHERE
.
In subqueries (since columns that aren't needed for the external query are excluded from subqueries).
In all other cases, we do not recommend using the asterisk, since it only gives you the drawbacks of a columnar DBMS instead of the advantages.
Extreme Values {#extreme-values}
In addition to results, you can also get minimum and maximum values for the results columns. To do this, set the
extremes
setting to 1. Minimums and maximums are calculated for numeric types, dates, and dates with times. For other columns, the default values are output.
Two extra rows are calculated: the minimums and maximums, respectively. These extra rows are output in
XML
,
JSON*
,
TabSeparated*
,
CSV*
,
Vertical
,
Template
and
Pretty*
formats
, separate from the other rows. They are not output for other formats. | {"source_file": "index.md"} | [
0.023774337023496628,
-0.05705711245536804,
-0.06198832020163536,
0.04962846636772156,
-0.07225904613733292,
-0.023325402289628983,
0.0017639697762206197,
-0.0216839462518692,
-0.023439982905983925,
0.047654278576374054,
0.0488254576921463,
-0.023728925734758377,
0.03407745435833931,
-0.10... |
c1970df9-f283-4e2d-875b-32809749e6d3 | In
JSON*
and
XML
formats, the extreme values are output in a separate 'extremes' field. In
TabSeparated*
,
CSV*
and
Vertical
formats, the row comes after the main result, and after 'totals' if present. It is preceded by an empty row (after the other data). In
Pretty*
formats, the row is output as a separate table after the main result, and after
totals
if present. In
Template
format the extreme values are output according to the specified template.
Extreme values are calculated for rows before
LIMIT
, but after
LIMIT BY
. However, when using
LIMIT offset, size
, the rows before
offset
are included in
extremes
. In stream requests, the result may also include a small number of rows that passed through
LIMIT
.
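For instance, extremes can be enabled for a single query (a sketch; numbers(10) is a built-in table function):

```sql
SELECT number
FROM numbers(10)
LIMIT 5
SETTINGS extremes = 1
FORMAT Pretty
```

In Pretty formats the minimums and maximums then appear as a separate two-row table after the main result.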
Notes {#notes}
You can use synonyms (
AS
aliases) in any part of a query.
The
GROUP BY
,
ORDER BY
, and
LIMIT BY
clauses can support positional arguments. To enable this, switch on the
enable_positional_arguments
setting. Then, for example,
ORDER BY 1,2
will sort rows in the table by the first and then the second column.
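A short sketch of positional arguments (the users table is hypothetical):

```sql
SET enable_positional_arguments = 1;

SELECT name, city
FROM users
ORDER BY 2, 1;  -- sorts by the second column (city), then the first (name)
```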
Implementation Details {#implementation-details}
If the query omits the
DISTINCT
,
GROUP BY
and
ORDER BY
clauses and the
IN
and
JOIN
subqueries, the query will be completely stream processed, using O(1) amount of RAM. Otherwise, the query might consume a lot of RAM if the appropriate restrictions are not specified:
max_memory_usage
max_rows_to_group_by
max_rows_to_sort
max_rows_in_distinct
max_bytes_in_distinct
max_rows_in_set
max_bytes_in_set
max_rows_in_join
max_bytes_in_join
max_bytes_before_external_sort
max_bytes_ratio_before_external_sort
max_bytes_before_external_group_by
max_bytes_ratio_before_external_group_by
For more information, see the section "Settings". It is possible to use external sorting (saving temporary tables to a disk) and external aggregation.
SELECT modifiers {#select-modifiers}
You can use the following modifiers in
SELECT
queries. | {"source_file": "index.md"} | [
-0.030989451333880424,
0.054886505007743835,
-0.02403460070490837,
-0.03994227573275566,
-0.03268027305603027,
-0.02459915354847908,
-0.07064348459243774,
0.10113737732172012,
0.010013410821557045,
-0.019318370148539543,
0.033183593302965164,
0.014741189777851105,
0.022922508418560028,
0.0... |
1992af06-5fe2-4826-8050-e282222c8b06 | SELECT modifiers {#select-modifiers}
You can use the following modifiers in
SELECT
queries.
| Modifier | Description |
|----------|-------------|
| APPLY | Allows you to invoke some function for each row returned by an outer table expression of a query. |
| EXCEPT | Specifies the names of one or more columns to exclude from the result. All matching column names are omitted from the output. |
| REPLACE | Specifies one or more expression aliases. Each alias must match a column name from the SELECT * statement. In the output column list, the column that matches the alias is replaced by the expression in that REPLACE. This modifier does not change the names or order of columns. However, it can change the value and the value type. |
Modifier Combinations {#modifier-combinations}
You can use each modifier separately or combine them.
Examples:
Using the same modifier multiple times.
sql
SELECT COLUMNS('[jk]') APPLY(toString) APPLY(length) APPLY(max) FROM columns_transformers;
response
ββmax(length(toString(j)))ββ¬βmax(length(toString(k)))ββ
β 2 β 3 β
ββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ
Using multiple modifiers in a single query.
sql
SELECT * REPLACE(i + 1 AS i) EXCEPT (j) APPLY(sum) from columns_transformers;
response
ββsum(plus(i, 1))ββ¬βsum(k)ββ
β 222 β 347 β
βββββββββββββββββββ΄βββββββββ
SETTINGS in SELECT Query {#settings-in-select-query}
You can specify the necessary settings right in the
SELECT
query. The setting value is applied only to this query and is reset to default or previous value after the query is executed. | {"source_file": "index.md"} | [
0.04998774826526642,
0.02536751516163349,
0.03665660321712494,
0.07416611909866333,
-0.005894958507269621,
-0.029735572636127472,
0.05307158827781677,
-0.002783537609502673,
0.031131871044635773,
-0.014711595140397549,
0.0192811768501997,
-0.11420664936304092,
0.04384550824761391,
-0.08291... |
2a38db11-dacc-4e16-80b2-9e9c40293a13 | You can specify the necessary settings right in the
SELECT
query. The setting value is applied only to this query and is reset to default or previous value after the query is executed.
For other ways to apply settings, see
here
.
For boolean settings set to true, you can use a shorthand syntax by omitting the value assignment. When only the setting name is specified, it is automatically set to
1
(true).
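For instance, assuming some hypothetical table some_table, the following two queries are equivalent:

```sql
SELECT * FROM some_table SETTINGS optimize_read_in_order, cast_keep_nullable;
SELECT * FROM some_table SETTINGS optimize_read_in_order = 1, cast_keep_nullable = 1;
```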
Example
sql
SELECT * FROM some_table SETTINGS optimize_read_in_order=1, cast_keep_nullable=1; | {"source_file": "index.md"} | [
0.04918712005019188,
-0.01372440718114376,
-0.019023189321160316,
0.09641806036233902,
-0.12185704708099365,
0.04649181663990021,
0.05579233914613724,
-0.029121462255716324,
-0.05636867880821228,
0.03814224898815155,
0.015108577907085419,
0.02028118073940277,
0.035999927669763565,
-0.07243... |
0ebb10b3-6fa6-4179-8365-a5c1db181236 | description: 'Documentation for WITH Clause'
sidebar_label: 'WITH'
slug: /sql-reference/statements/select/with
title: 'WITH Clause'
doc_type: 'reference'
WITH Clause
ClickHouse supports Common Table Expressions (
CTE
), Common Scalar Expressions and Recursive Queries.
Common Table Expressions {#common-table-expressions}
Common Table Expressions represent named subqueries.
They can be referenced by name anywhere in a
SELECT
query where a table expression is allowed.
Named subqueries can be referenced by name in the scope of the current query or in the scopes of child subqueries.
Every reference to a Common Table Expression in
SELECT
queries is always replaced by the subquery from its definition.
Recursion is prevented by hiding the current CTE from the identifier resolution process.
Please note that CTEs do not guarantee the same results in all places they are called, because the subquery is re-executed for each use.
Syntax {#common-table-expressions-syntax}
sql
WITH <identifier> AS <subquery expression>
Example {#common-table-expressions-example}
An example of when a subquery is re-executed:
sql
WITH cte_numbers AS
(
SELECT
num
FROM generateRandom('num UInt64', NULL)
LIMIT 1000000
)
SELECT
count()
FROM cte_numbers
WHERE num IN (SELECT num FROM cte_numbers)
If CTEs passed along their exact results, and not just a piece of code, you would always see
1000000
. However, because we refer to
cte_numbers
twice, random numbers are generated on each use, and accordingly we see different random results, such as
280501, 392454, 261636, 196227
and so on.
Common Scalar Expressions {#common-scalar-expressions}
ClickHouse allows you to declare aliases to arbitrary scalar expressions in the
WITH
clause.
Common scalar expressions can be referenced in any place in the query.
:::note
If a common scalar expression references something other than a constant literal, the expression may lead to the presence of
free variables
.
ClickHouse resolves any identifier in the closest scope possible, meaning that free variables can reference unexpected entities in case of name clashes or may lead to a correlated subquery.
It is recommended to define a CSE as a
lambda function
(possible only with the
analyzer
enabled), binding all the used identifiers, to achieve more predictable identifier resolution.
:::
Syntax {#common-scalar-expressions-syntax}
sql
WITH <expression> AS <identifier>
Examples {#common-scalar-expressions-examples}
Example 1:
Using constant expression as "variable"
sql
WITH '2019-08-01 15:23:00' AS ts_upper_bound
SELECT *
FROM hits
WHERE
EventDate = toDate(ts_upper_bound) AND
EventTime <= ts_upper_bound;
Example 2:
Using higher-order functions to bound the identifiers
sql
WITH
'.txt' as extension,
(id, extension) -> concat(lower(id), extension) AS gen_name
SELECT gen_name('test', '.sql') as file_name; | {"source_file": "with.md"} | [
-0.0703301951289177,
-0.009428526274859905,
0.05678858980536461,
0.06133691966533661,
-0.05399675667285919,
0.009161893278360367,
0.05964988097548485,
0.04096049815416336,
-0.0018975057173520327,
-0.0010005871299654245,
-0.01811133325099945,
-0.03845112398266792,
0.06959160417318344,
-0.06... |
5a225203-5285-4ae5-a876-08526066dd98 | sql
WITH
'.txt' as extension,
(id, extension) -> concat(lower(id), extension) AS gen_name
SELECT gen_name('test', '.sql') as file_name;
response
ββfile_nameββ
1. β test.sql β
βββββββββββββ
Example 3:
Using higher-order functions with free variables
The following example queries show that unbound identifiers resolve into an entity in the closest scope.
Here,
extension
is not bound in the
gen_name
lambda function body.
Although
extension
is defined as
'.txt'
as a common scalar expression in the scope of
generated_names
definition and usage, it is resolved into a column of the table
extension_list
, because it is available in the
generated_names
subquery.
```sql
CREATE TABLE extension_list
(
extension String
)
ORDER BY extension
AS SELECT '.sql';
WITH
'.txt' as extension,
generated_names as (
WITH
(id) -> concat(lower(id), extension) AS gen_name
SELECT gen_name('test') as file_name FROM extension_list
)
SELECT file_name FROM generated_names;
```
response
ββfile_nameββ
1. β test.sql β
βββββββββββββ
Example 4:
Evicting a sum(bytes) expression result from the SELECT clause column list
sql
WITH sum(bytes) AS s
SELECT
formatReadableSize(s),
table
FROM system.parts
GROUP BY table
ORDER BY s;
Example 5:
Using results of a scalar subquery
sql
/* This example returns the top 10 largest tables */
WITH
(
SELECT sum(bytes)
FROM system.parts
WHERE active
) AS total_disk_usage
SELECT
(sum(bytes) / total_disk_usage) * 100 AS table_disk_usage,
table
FROM system.parts
GROUP BY table
ORDER BY table_disk_usage DESC
LIMIT 10;
Example 6:
Reusing expression in a subquery
sql
WITH test1 AS (SELECT i + 1, j + 1 FROM test1)
SELECT * FROM test1;
Recursive Queries {#recursive-queries}
The optional
RECURSIVE
modifier allows for a WITH query to refer to its own output.
Example:
Sum integers from 1 through 100
sql
WITH RECURSIVE test_table AS (
SELECT 1 AS number
UNION ALL
SELECT number + 1 FROM test_table WHERE number < 100
)
SELECT sum(number) FROM test_table;
text
ββsum(number)ββ
β 5050 β
βββββββββββββββ
:::note
Recursive CTEs rely on the
new query analyzer
introduced in version
24.3
. If you're using version
24.3+
and encounter a
(UNKNOWN_TABLE)
or
(UNSUPPORTED_METHOD)
exception, it suggests that the new analyzer is disabled on your instance, role, or profile. To activate the analyzer, enable the setting
allow_experimental_analyzer
or update the
compatibility
setting to a more recent version.
Starting from version
24.8
the new analyzer has been fully promoted to production, and the setting
allow_experimental_analyzer
has been renamed to
enable_analyzer
.
::: | {"source_file": "with.md"} | [
-0.07241728156805038,
0.02607712522149086,
-0.03964051231741905,
0.08743271976709366,
-0.002063826657831669,
-0.051376085728406906,
0.070426806807518,
0.08153535425662994,
-0.007605068851262331,
0.012795746326446533,
0.10426267981529236,
0.006143351551145315,
0.05036396533250809,
-0.069745... |
2bf2ad83-98e7-4d22-8a4d-86f966a1c60a | The general form of a recursive
WITH
query is always a non-recursive term, then
UNION ALL
, then a recursive term, where only the recursive term can contain a reference to the query's own output. A recursive CTE query is executed as follows:
Evaluate the non-recursive term. Place the result of the non-recursive term query in a temporary working table.
As long as the working table is not empty, repeat these steps:
Evaluate the recursive term, substituting the current contents of the working table for the recursive self-reference. Place the result of the recursive term query in a temporary intermediate table.
Replace the contents of the working table with the contents of the intermediate table, then empty the intermediate table.
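The evaluation steps above can be traced with a small sequence generator (an illustrative example, not taken from the original text):

```sql
WITH RECURSIVE powers AS (
    SELECT 1 AS n, 1 AS value   -- non-recursive term: seeds the working table
    UNION ALL
    SELECT n + 1, value * 2     -- recursive term: reads the current working table
    FROM powers
    WHERE n < 5                 -- recursion ends once the working table is empty
)
SELECT n, value FROM powers ORDER BY n;
```

Each iteration replaces the working table with the rows produced by the recursive term, so the query returns the values 1, 2, 4, 8, 16.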
Recursive queries are typically used to work with hierarchical or tree-structured data. For example, we can write a query that performs tree traversal:
Example:
Tree traversal
First let's create a tree table:
```sql
DROP TABLE IF EXISTS tree;
CREATE TABLE tree
(
id UInt64,
parent_id Nullable(UInt64),
data String
) ENGINE = MergeTree ORDER BY id;
INSERT INTO tree VALUES (0, NULL, 'ROOT'), (1, 0, 'Child_1'), (2, 0, 'Child_2'), (3, 1, 'Child_1_1');
```
We can traverse this tree with the following query:
Example:
Tree traversal
sql
WITH RECURSIVE search_tree AS (
SELECT id, parent_id, data
FROM tree t
WHERE t.id = 0
UNION ALL
SELECT t.id, t.parent_id, t.data
FROM tree t, search_tree st
WHERE t.parent_id = st.id
)
SELECT * FROM search_tree;
text
ββidββ¬βparent_idββ¬βdataβββββββ
β 0 β α΄Ία΅α΄Έα΄Έ β ROOT β
β 1 β 0 β Child_1 β
β 2 β 0 β Child_2 β
β 3 β 1 β Child_1_1 β
ββββββ΄ββββββββββββ΄ββββββββββββ
Search order {#search-order}
To create a depth-first order, we compute for each result row an array of rows that we have already visited:
Example:
Tree traversal depth-first order
sql
WITH RECURSIVE search_tree AS (
SELECT id, parent_id, data, [t.id] AS path
FROM tree t
WHERE t.id = 0
UNION ALL
SELECT t.id, t.parent_id, t.data, arrayConcat(path, [t.id])
FROM tree t, search_tree st
WHERE t.parent_id = st.id
)
SELECT * FROM search_tree ORDER BY path;
text
ββidββ¬βparent_idββ¬βdataβββββββ¬βpathβββββ
β 0 β α΄Ία΅α΄Έα΄Έ β ROOT β [0] β
β 1 β 0 β Child_1 β [0,1] β
β 3 β 1 β Child_1_1 β [0,1,3] β
β 2 β 0 β Child_2 β [0,2] β
ββββββ΄ββββββββββββ΄ββββββββββββ΄ββββββββββ
To create a breadth-first order, the standard approach is to add a column that tracks the depth of the search:
Example:
Tree traversal breadth-first order
sql
WITH RECURSIVE search_tree AS (
SELECT id, parent_id, data, [t.id] AS path, toUInt64(0) AS depth
FROM tree t
WHERE t.id = 0
UNION ALL
SELECT t.id, t.parent_id, t.data, arrayConcat(path, [t.id]), depth + 1
FROM tree t, search_tree st
WHERE t.parent_id = st.id
)
SELECT * FROM search_tree ORDER BY depth; | {"source_file": "with.md"} | [
-0.0000507932563778013,
-0.04008783772587776,
0.029336925595998764,
0.0251372791826725,
-0.10016057640314102,
-0.08395110815763474,
-0.009278432466089725,
0.02939934842288494,
-0.0033317289780825377,
0.06003495678305626,
-0.016369814053177834,
-0.05631294846534729,
0.03691994771361351,
-0.... |
cd0cb9da-b9ec-488e-92c3-5ba55d985eed | text
ββidββ¬βparent_idββ¬βdataβββββββ¬βpathβββββ¬βdepthββ
β 0 β α΄Ία΅α΄Έα΄Έ β ROOT β [0] β 0 β
β 1 β 0 β Child_1 β [0,1] β 1 β
β 2 β 0 β Child_2 β [0,2] β 1 β
β 3 β 1 β Child_1_1 β [0,1,3] β 2 β
ββββββ΄βββββββ΄ββββββββββββ΄ββββββββββ΄ββββββββ
Cycle detection {#cycle-detection}
First let's create a graph table:
```sql
DROP TABLE IF EXISTS graph;
CREATE TABLE graph
(
from UInt64,
to UInt64,
label String
) ENGINE = MergeTree ORDER BY (from, to);
INSERT INTO graph VALUES (1, 2, '1 -> 2'), (1, 3, '1 -> 3'), (2, 3, '2 -> 3'), (1, 4, '1 -> 4'), (4, 5, '4 -> 5');
```
We can traverse that graph with the following query:
Example:
Graph traversal without cycle detection
sql
WITH RECURSIVE search_graph AS (
SELECT from, to, label FROM graph g
UNION ALL
SELECT g.from, g.to, g.label
FROM graph g, search_graph sg
WHERE g.from = sg.to
)
SELECT DISTINCT * FROM search_graph ORDER BY from;
text
ββfromββ¬βtoββ¬βlabelβββ
β 1 β 4 β 1 -> 4 β
β 1 β 2 β 1 -> 2 β
β 1 β 3 β 1 -> 3 β
β 2 β 3 β 2 -> 3 β
β 4 β 5 β 4 -> 5 β
ββββββββ΄βββββ΄βββββββββ
But if we add a cycle to that graph, the previous query will fail with a
Maximum recursive CTE evaluation depth
error:
```sql
INSERT INTO graph VALUES (5, 1, '5 -> 1');
WITH RECURSIVE search_graph AS (
SELECT from, to, label FROM graph g
UNION ALL
SELECT g.from, g.to, g.label
FROM graph g, search_graph sg
WHERE g.from = sg.to
)
SELECT DISTINCT * FROM search_graph ORDER BY from;
```
text
Code: 306. DB::Exception: Received from localhost:9000. DB::Exception: Maximum recursive CTE evaluation depth (1000) exceeded, during evaluation of search_graph AS (SELECT from, to, label FROM graph AS g UNION ALL SELECT g.from, g.to, g.label FROM graph AS g, search_graph AS sg WHERE g.from = sg.to). Consider raising max_recursive_cte_evaluation_depth setting.: While executing RecursiveCTESource. (TOO_DEEP_RECURSION)
The standard method for handling cycles is to compute an array of the already visited nodes:
Example:
Graph traversal with cycle detection
sql
WITH RECURSIVE search_graph AS (
SELECT from, to, label, false AS is_cycle, [tuple(g.from, g.to)] AS path FROM graph g
UNION ALL
SELECT g.from, g.to, g.label, has(path, tuple(g.from, g.to)), arrayConcat(sg.path, [tuple(g.from, g.to)])
FROM graph g, search_graph sg
WHERE g.from = sg.to AND NOT is_cycle
)
SELECT * FROM search_graph WHERE is_cycle ORDER BY from;
text
ββfromββ¬βtoββ¬βlabelβββ¬βis_cycleββ¬βpathβββββββββββββββββββββββ
β 1 β 4 β 1 -> 4 β true β [(1,4),(4,5),(5,1),(1,4)] β
β 4 β 5 β 4 -> 5 β true β [(4,5),(5,1),(1,4),(4,5)] β
β 5 β 1 β 5 -> 1 β true β [(5,1),(1,4),(4,5),(5,1)] β
ββββββββ΄βββββ΄βββββββββ΄βββββββββββ΄ββββββββββββββββββββββββββββ
Infinite queries {#infinite-queries}
It is also possible to use infinite recursive CTE queries if
LIMIT
is used in the outer query:
Example:
Infinite recursive CTE query | {"source_file": "with.md"} | [
-0.00003533618655637838,
0.008397642523050308,
0.016595283523201942,
-0.008717167191207409,
-0.01868511736392975,
-0.03833123669028282,
-0.012155305594205856,
-0.022005796432495117,
-0.04767388105392456,
0.049527931958436966,
0.06194266676902771,
-0.0057809907011687756,
0.05546236038208008,
... |
e26b994b-3674-4f2b-b859-64df9a633827 | Infinite queries {#infinite-queries}
It is also possible to use infinite recursive CTE queries if
LIMIT
is used in the outer query:
Example:
Infinite recursive CTE query
sql
WITH RECURSIVE test_table AS (
SELECT 1 AS number
UNION ALL
SELECT number + 1 FROM test_table
)
SELECT sum(number) FROM (SELECT number FROM test_table LIMIT 100);
text
ββsum(number)ββ
β 5050 β
βββββββββββββββ | {"source_file": "with.md"} | [
-0.06438058614730835,
-0.020557653158903122,
-0.007103267125785351,
0.04821081832051277,
-0.11134037375450134,
-0.04580514878034592,
0.05298866331577301,
0.018600521609187126,
0.007303787861019373,
0.038661666214466095,
-0.012771619483828545,
-0.018142318353056908,
0.027309922501444817,
-0... |
54233668-7f6c-42cd-9c3e-30d9120379f0 | description: 'Documentation for DISTINCT Clause'
sidebar_label: 'DISTINCT'
slug: /sql-reference/statements/select/distinct
title: 'DISTINCT Clause'
doc_type: 'reference'
DISTINCT Clause
If
SELECT DISTINCT
is specified, only unique rows will remain in a query result. Thus, only a single row will remain out of all the sets of fully matching rows in the result.
You can specify the list of columns that must have unique values:
SELECT DISTINCT ON (column1, column2,...)
. If the columns are not specified, all of them are taken into consideration.
Consider the table:
text
ββaββ¬βbββ¬βcββ
β 1 β 1 β 1 β
β 1 β 1 β 1 β
β 2 β 2 β 2 β
β 2 β 2 β 2 β
β 1 β 1 β 2 β
β 1 β 2 β 2 β
βββββ΄ββββ΄ββββ
Using
DISTINCT
without specifying columns:
sql
SELECT DISTINCT * FROM t1;
text
ββaββ¬βbββ¬βcββ
β 1 β 1 β 1 β
β 2 β 2 β 2 β
β 1 β 1 β 2 β
β 1 β 2 β 2 β
βββββ΄ββββ΄ββββ
Using
DISTINCT
with specified columns:
sql
SELECT DISTINCT ON (a,b) * FROM t1;
text
ββaββ¬βbββ¬βcββ
β 1 β 1 β 1 β
β 2 β 2 β 2 β
β 1 β 2 β 2 β
βββββ΄ββββ΄ββββ
DISTINCT and ORDER BY {#distinct-and-order-by}
ClickHouse supports using the
DISTINCT
and
ORDER BY
clauses for different columns in one query. The
DISTINCT
clause is executed before the
ORDER BY
clause.
Consider the table:
text
ββaββ¬βbββ
β 2 β 1 β
β 1 β 2 β
β 3 β 3 β
β 2 β 4 β
βββββ΄ββββ
Selecting data:
sql
SELECT DISTINCT a FROM t1 ORDER BY b ASC;
text
ββaββ
β 2 β
β 1 β
β 3 β
βββββ
Selecting data with a different sorting direction:
sql
SELECT DISTINCT a FROM t1 ORDER BY b DESC;
text
ββaββ
β 3 β
β 1 β
β 2 β
βββββ
Row
2, 4
was cut before sorting.
Take this implementation detail into account when writing queries.
Null Processing {#null-processing}
DISTINCT
works with
NULL
as if
NULL
were a specific value, and
NULL==NULL
. In other words, in the
DISTINCT
results, different combinations with
NULL
occur only once. It differs from
NULL
processing in most other contexts.
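A minimal illustration using the values table function (the column definition here is an assumption for the example):

```sql
SELECT DISTINCT x
FROM values('x Nullable(Int8)', 1, NULL, 1, NULL);
```

The two NULL rows compare as equal, so the result contains one row with 1 and one row with NULL.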
Alternatives {#alternatives}
It is possible to obtain the same result by applying
GROUP BY
across the same set of values as specified in the
SELECT
clause, without using any aggregate functions. But there are a few differences from the
GROUP BY
approach:
DISTINCT
can be applied together with
GROUP BY
.
When
ORDER BY
is omitted and
LIMIT
is defined, the query stops running immediately after the required number of different rows has been read.
Data blocks are output as they are processed, without waiting for the entire query to finish running. | {"source_file": "distinct.md"} | [
-0.007900556549429893,
-0.039519939571619034,
0.022971119731664658,
-0.019225461408495903,
0.007450864650309086,
0.04440763220191002,
0.03096441738307476,
-0.01806783117353916,
0.0017892669420689344,
-0.010778422467410564,
0.030782533809542656,
0.039356209337711334,
0.04556339979171753,
-0... |
ca6e7769-eb86-4903-90e8-a0534f10283d | description: 'Documentation for FORMAT Clause'
sidebar_label: 'FORMAT'
slug: /sql-reference/statements/select/format
title: 'FORMAT Clause'
doc_type: 'reference'
FORMAT Clause
ClickHouse supports a wide range of
serialization formats
that can be used on query results among other things. There are multiple ways to choose a format for
SELECT
output, one of them is to specify
FORMAT format
at the end of the query to get the resulting data in a specific format.
A specific format might be used for convenience, for integration with other systems, or for performance gains.
Default Format {#default-format}
If the
FORMAT
clause is omitted, the default format is used, which depends on both the settings and the interface used for accessing the ClickHouse server. For the
HTTP interface
and the
command-line client
in batch mode, the default format is
TabSeparated
. For the command-line client in interactive mode, the default format is
PrettyCompact
(it produces compact human-readable tables).
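For example, the same trivial query can be rendered in different formats:

```sql
SELECT 1 AS x FORMAT JSONEachRow;
-- {"x":1}

SELECT 1 AS x FORMAT CSV;
-- 1
```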
Implementation Details {#implementation-details}
When using the command-line client, data is always passed over the network in an internal efficient format (
Native
). The client independently interprets the
FORMAT
clause of the query and formats the data itself (thus relieving the network and the server from the extra load). | {"source_file": "format.md"} | [
0.002154534449800849,
0.005123646929860115,
-0.06962674856185913,
0.043412819504737854,
-0.0934966579079628,
0.019086642190814018,
0.0032803870271891356,
0.0623021125793457,
-0.0007655380177311599,
0.002431947272270918,
-0.009935087524354458,
0.006138952448964119,
0.052303288131952286,
-0.... |
311821f7-7978-4fc0-97d7-4cd4cf256804 | description: 'Documentation describing the EXCEPT modifier which specifies the names of one or more columns to exclude from the result. All matching column names are omitted from the output.'
sidebar_label: 'EXCEPT'
slug: /sql-reference/statements/select/except-modifier
title: 'EXCEPT modifier'
keywords: ['EXCEPT', 'modifier']
doc_type: 'reference'
EXCEPT modifier {#except}
Specifies the names of one or more columns to exclude from the result. All matching column names are omitted from the output.
Syntax {#syntax}
sql
SELECT <expr> EXCEPT ( col_name1 [, col_name2, col_name3, ...] ) FROM [db.]table_name
Examples {#examples}
sql title="Query"
SELECT * EXCEPT (i) from columns_transformers;
response title="Response"
βββjββ¬βββkββ
β 10 β 324 β
β 8 β 23 β
ββββββ΄ββββββ | {"source_file": "except_modifier.md"} | [
-0.040786921977996826,
0.06473354995250702,
0.008680199272930622,
0.048852089792490005,
0.050383973866701126,
-0.06639272719621658,
0.0173154566437006,
0.02317049168050289,
-0.03567570075392723,
-0.011253316886723042,
0.05055858567357063,
-0.012399095110595226,
0.09680022299289703,
-0.1765... |
09c131cb-a5cc-4c14-a6af-c93079a3fe13 | description: 'Documentation for JOIN Clause'
sidebar_label: 'JOIN'
slug: /sql-reference/statements/select/join
title: 'JOIN Clause'
keywords: ['INNER JOIN', 'LEFT JOIN', 'LEFT OUTER JOIN', 'RIGHT JOIN', 'RIGHT OUTER JOIN', 'FULL OUTER JOIN', 'CROSS JOIN', 'LEFT SEMI JOIN', 'RIGHT SEMI JOIN', 'LEFT ANTI JOIN', 'RIGHT ANTI JOIN', 'LEFT ANY JOIN', 'RIGHT ANY JOIN', 'INNER ANY JOIN', 'ASOF JOIN', 'LEFT ASOF JOIN', 'PASTE JOIN']
doc_type: 'reference'
JOIN clause
The
JOIN
clause produces a new table by combining columns from one or multiple tables by using values common to each. It is a common operation in databases with SQL support, which corresponds to
relational algebra
join. The special case of one table join is often referred to as a "self-join".
Syntax
sql
SELECT <expr_list>
FROM <left_table>
[GLOBAL] [INNER|LEFT|RIGHT|FULL|CROSS] [OUTER|SEMI|ANTI|ANY|ALL|ASOF] JOIN <right_table>
(ON <expr_list>)|(USING <column_list>) ...
Expressions from the
ON
clause and columns from the
USING
clause are called "join keys". Unless otherwise stated, a
JOIN
produces a
Cartesian product
from rows with matching "join keys", which might produce results with many more rows than the source tables.
Supported types of JOIN {#supported-types-of-join}
All standard
SQL JOIN
types are supported:
| Type | Description |
|------|-------------|
| INNER JOIN | Only matching rows are returned. |
| LEFT OUTER JOIN | Non-matching rows from the left table are returned in addition to matching rows. |
| RIGHT OUTER JOIN | Non-matching rows from the right table are returned in addition to matching rows. |
| FULL OUTER JOIN | Non-matching rows from both tables are returned in addition to matching rows. |
| CROSS JOIN | Produces the Cartesian product of whole tables; "join keys" are not specified. |
JOIN
without a type specified implies
INNER
.
The keyword
OUTER
can be safely omitted.
An alternative syntax for
CROSS JOIN
is specifying multiple tables in the
FROM
clause
separated by commas.
Additional join types available in ClickHouse are:
| Type                                                | Description                                                                                                                                              |
|-----------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
| `LEFT SEMI JOIN`, `RIGHT SEMI JOIN`                 | An allowlist on "join keys", without producing a Cartesian product.                                                                                      |
| `LEFT ANTI JOIN`, `RIGHT ANTI JOIN`                 | A denylist on "join keys", without producing a Cartesian product.                                                                                        |
| `LEFT ANY JOIN`, `RIGHT ANY JOIN`, `INNER ANY JOIN` | Partially (for the opposite side of `LEFT` and `RIGHT`) or completely (for `INNER` and `FULL`) disables the Cartesian product for standard `JOIN` types. |
| `ASOF JOIN`, `LEFT ASOF JOIN`                       | Joins sequences with a non-exact match. `ASOF JOIN` usage is described below.                                                                            |
| `PASTE JOIN`                                        | Performs a horizontal concatenation of two tables.                                                                                                       |
:::note
When `join_algorithm` is set to `partial_merge`, `RIGHT JOIN` and `FULL JOIN` are supported only with `ALL` strictness (`SEMI`, `ANTI`, `ANY`, and `ASOF` are not supported).
:::
## Settings {#settings}

The default join type can be overridden using the `join_default_strictness` setting. The behavior of the ClickHouse server for `ANY JOIN` operations depends on the `any_join_distinct_right_table_keys` setting.

**See also**

- `join_algorithm`
- `join_any_take_last_row`
- `join_use_nulls`
- `partial_merge_join_rows_in_right_blocks`
- `join_on_disk_max_files_to_merge`
- `any_join_distinct_right_table_keys`

Use the `cross_to_inner_join_rewrite` setting to define the behavior when ClickHouse fails to rewrite a `CROSS JOIN` as an `INNER JOIN`. The default value is `1`, which allows the join to continue but makes it slower. Set `cross_to_inner_join_rewrite` to `0` if you want an error to be thrown, and set it to `2` to not run the cross joins but instead force a rewrite of all comma/cross joins. If the rewriting fails when the value is `2`, you will receive an error message stating "Please, try to simplify `WHERE` section".
## ON section conditions {#on-section-conditions}

An `ON` section can contain several conditions combined using the `AND` and `OR` operators. Conditions specifying join keys must:
- reference both left and right tables
- use the equality operator
Other conditions may use other logical operators, but they must reference either the left or the right table of a query.

Rows are joined if the whole complex condition is met. If the conditions are not met, rows may still be included in the result depending on the `JOIN` type. Note that if the same conditions are placed in a `WHERE` section and they are not met, then rows are always filtered out from the result.

The `OR` operator inside the `ON` clause works using the hash join algorithm: for each `OR` argument with join keys for `JOIN`, a separate hash table is created, so memory consumption and query execution time grow linearly with the number of `OR` expressions in the `ON` clause.

:::note
If a condition references columns from different tables, then only the equality operator (`=`) is supported so far.
:::
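The per-disjunct hash tables described above can be sketched in Python, using the `t1`/`t2` data from the `OR` example further down in this section. This is an illustrative model only; ClickHouse's actual hash join is implemented in C++ and is far more involved:

```python
# Illustrative model of INNER JOIN ... ON t1.a = t2.key OR t1.b = t2.key:
# one hash table is built per OR disjunct, so memory grows linearly with
# the number of disjuncts. Not ClickHouse internals, just the idea.

t1 = [(0, 0), (1, -1), (2, -2), (3, -3), (4, -4)]  # rows (a, b)
t2 = [(0, 0), (-1, 1), (2, 2), (-3, 3), (4, 4)]    # rows (key, val)

# Each disjunct pairs a left-side probe key with a right-side build key.
disjuncts = [
    (lambda row: row[0], lambda row: row[0]),  # t1.a = t2.key
    (lambda row: row[1], lambda row: row[0]),  # t1.b = t2.key
]

# Build one hash table per disjunct over the right table.
tables = []
for _, build_key in disjuncts:
    table = {}
    for row in t2:
        table.setdefault(build_key(row), []).append(row)
    tables.append(table)

# Probe every disjunct's table; emit each matched pair of rows only once.
seen, result = set(), []
for row in t1:
    for (probe_key, _), table in zip(disjuncts, tables):
        for _, val in table.get(probe_key(row), []):
            if (row, val) not in seen:
                seen.add((row, val))
                result.append((row[0], row[1], val))

print(result)  # [(0, 0, 0), (1, -1, 1), (2, -2, 2), (3, -3, 3), (4, -4, 4)]
```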
**Example**

Consider `table_1` and `table_2`:

```response
ββIdββ¬βnameββ   ββIdββ¬βtextββββββββββββ¬βscoresββ
β  1 β A    β   β  1 β Text A         β     10 β
β  2 β B    β   β  1 β Another text A β     12 β
β  3 β C    β   β  2 β Text B         β     15 β
ββββββ΄βββββββ   ββββββ΄βββββββββββββββββ΄βββββββββ
```
Query with one join key condition and an additional condition for `table_2`:

```sql
SELECT name, text FROM table_1 LEFT OUTER JOIN table_2
    ON table_1.Id = table_2.Id AND startsWith(table_2.text, 'Text');
```

Note that the result contains the row with the name `C` and the empty text column. It is included in the result because an `OUTER` type of join is used.

```response
ββnameββ¬βtextββββ
β A    β Text A β
β B    β Text B β
β C    β        β
ββββββββ΄βββββββββ
```
Query with `INNER` type of join and multiple conditions:

```sql
SELECT name, text, scores FROM table_1 INNER JOIN table_2
    ON table_1.Id = table_2.Id AND table_2.scores > 10 AND startsWith(table_2.text, 'Text');
```

Result:

```response
ββnameββ¬βtextββββ¬βscoresββ
β B    β Text B β     15 β
ββββββββ΄βββββββββ΄βββββββββ
```
Query with `INNER` type of join and condition with `OR`:

```sql
CREATE TABLE t1 (`a` Int64, `b` Int64) ENGINE = MergeTree() ORDER BY a;

CREATE TABLE t2 (`key` Int32, `val` Int64) ENGINE = MergeTree() ORDER BY key;

INSERT INTO t1 SELECT number as a, -a as b from numbers(5);
INSERT INTO t2 SELECT if(number % 2 == 0, toInt64(number), -number) as key, number as val from numbers(5);

SELECT a, b, val FROM t1 INNER JOIN t2 ON t1.a = t2.key OR t1.b = t2.key;
```
Result:

```response
ββaββ¬ββbββ¬βvalββ
β 0 β  0 β   0 β
β 1 β -1 β   1 β
β 2 β -2 β   2 β
β 3 β -3 β   3 β
β 4 β -4 β   4 β
βββββ΄βββββ΄ββββββ
```
Query with `INNER` type of join and conditions with `OR` and `AND`:
:::note
By default, non-equal conditions are supported as long as they use columns from the same table. For example, `t1.a = t2.key AND t1.b > 0 AND t2.b > t2.c`, because `t1.b > 0` uses columns only from `t1`, and `t2.b > t2.c` uses columns only from `t2`. However, you can try experimental support for conditions like `t1.a = t2.key AND t1.b > t2.key`; check out the section below for more details.
:::
```sql
SELECT a, b, val FROM t1 INNER JOIN t2 ON t1.a = t2.key OR t1.b = t2.key AND t2.val > 3;
```

Result:

```response
ββaββ¬ββbββ¬βvalββ
β 0 β  0 β   0 β
β 2 β -2 β   2 β
β 4 β -4 β   4 β
βββββ΄βββββ΄ββββββ
```
## JOIN with inequality conditions for columns from different tables {#join-with-inequality-conditions-for-columns-from-different-tables}

ClickHouse currently supports `ALL/ANY/SEMI/ANTI INNER/LEFT/RIGHT/FULL JOIN` with inequality conditions in addition to equality conditions. The inequality conditions are supported only for the `hash` and `grace_hash` join algorithms. The inequality conditions are not supported with `join_use_nulls`.
**Example**

Table `t1`:

```response
ββkeyβββ¬βattrββ¬βaββ¬βbββ¬βcββ
β key1 β a    β 1 β 1 β 2 β
β key1 β b    β 2 β 3 β 2 β
β key1 β c    β 3 β 2 β 1 β
β key1 β d    β 4 β 7 β 2 β
β key1 β e    β 5 β 5 β 5 β
β key2 β a2   β 1 β 1 β 1 β
β key4 β f    β 2 β 3 β 4 β
ββββββββ΄βββββββ΄ββββ΄ββββ΄ββββ
```

Table `t2`:

```response
ββkeyβββ¬βattrββ¬βaββ¬βbββ¬βcββ
β key1 β A    β 1 β 2 β 1 β
β key1 β B    β 2 β 1 β 2 β
β key1 β C    β 3 β 4 β 5 β
β key1 β D    β 4 β 1 β 6 β
β key3 β a3   β 1 β 1 β 1 β
β key4 β F    β 1 β 1 β 1 β
ββββββββ΄βββββββ΄ββββ΄ββββ΄ββββ
```
```sql
SELECT t1.*, t2.* FROM t1 LEFT JOIN t2 ON t1.key = t2.key AND (t1.a < t2.a) ORDER BY (t1.key, t1.attr, t2.key, t2.attr);
```

```response
key1 a 1 1 2 key1 B 2 1 2
key1 a 1 1 2 key1 C 3 4 5
key1 a 1 1 2 key1 D 4 1 6
key1 b 2 3 2 key1 C 3 4 5
key1 b 2 3 2 key1 D 4 1 6
key1 c 3 2 1 key1 D 4 1 6
key1 d 4 7 2 0 0 \N
key1 e 5 5 5 0 0 \N
key2 a2 1 1 1 0 0 \N
key4 f 2 3 4 0 0 \N
```
## NULL values in JOIN keys {#null-values-in-join-keys}

`NULL` is not equal to any value, including itself. This means that if a `JOIN` key has a `NULL` value in one table, it won't match a `NULL` value in the other table.
**Example**

Table `A`:

```response
ββββidββ¬βnameβββββ
β    1 β Alice   β
β    2 β Bob     β
β α΄Ία΅α΄Έα΄Έ β Charlie β
ββββββββ΄ββββββββββ
```

Table `B`:

```response
ββββidββ¬βscoreββ
β    1 β    90 β
β    3 β    85 β
β α΄Ία΅α΄Έα΄Έ β    88 β
ββββββββ΄ββββββββ
```
```sql
SELECT A.name, B.score FROM A LEFT JOIN B ON A.id = B.id
```

```response
ββnameβββββ¬βscoreββ
β Alice   β    90 β
β Bob     β     0 β
β Charlie β     0 β
βββββββββββ΄ββββββββ
```
Notice that the row with `Charlie` from table `A` and the row with score 88 from table `B` are not in the result because of the `NULL` value in the `JOIN` key.

In case you want to match `NULL` values, use the `isNotDistinctFrom` function to compare the `JOIN` keys.

```sql
SELECT A.name, B.score FROM A LEFT JOIN B ON isNotDistinctFrom(A.id, B.id)
```

```response
ββnameβββββ¬βscoreββ
β Alice   β    90 β
β Bob     β     0 β
β Charlie β    88 β
βββββββββββ΄ββββββββ
```
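The difference between plain equality and `isNotDistinctFrom` can be modeled in Python with `None` standing in for `NULL`. This is a hypothetical sketch, not ClickHouse code; the default score `0` mirrors the empty-value behavior shown in the results above:

```python
# None stands in for SQL NULL. Plain equality never matches NULL,
# while "is not distinct from" treats NULL = NULL as a match.

A = [(1, "Alice"), (2, "Bob"), (None, "Charlie")]  # (id, name)
B = [(1, 90), (3, 85), (None, 88)]                 # (id, score)

def sql_eq(x, y):
    # SQL equality: any comparison involving NULL is not a match.
    return x is not None and y is not None and x == y

def is_not_distinct_from(x, y):
    # NULL-safe equality: NULL matches NULL.
    return x == y or (x is None and y is None)

def left_join(left, right, match):
    out = []
    for lid, name in left:
        # First matching right row, or the default value 0 for non-matches.
        score = next((s for rid, s in right if match(lid, rid)), 0)
        out.append((name, score))
    return out

print(left_join(A, B, sql_eq))                # [('Alice', 90), ('Bob', 0), ('Charlie', 0)]
print(left_join(A, B, is_not_distinct_from))  # [('Alice', 90), ('Bob', 0), ('Charlie', 88)]
```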
## ASOF JOIN usage {#asof-join-usage}

`ASOF JOIN` is useful when you need to join records that have no exact match.

This JOIN algorithm requires a special column in tables. This column:

- Must contain an ordered sequence.
- Can be one of the following types: `Int`, `UInt`, `Float`, `Date`, `DateTime`, `Decimal`.
- For the `hash` join algorithm, it can't be the only column in the `JOIN` clause.
Syntax `ASOF JOIN ... ON`:

```sql
SELECT expressions_list
FROM table_1
ASOF LEFT JOIN table_2
ON equi_cond AND closest_match_cond
```

You can use any number of equality conditions and exactly one closest match condition. For example, `SELECT count() FROM table_1 ASOF LEFT JOIN table_2 ON table_1.a == table_2.b AND table_2.t <= table_1.t`.

Conditions supported for the closest match: `>`, `>=`, `<`, `<=`.
Syntax `ASOF JOIN ... USING`:

```sql
SELECT expressions_list
FROM table_1
ASOF JOIN table_2
USING (equi_column1, ... equi_columnN, asof_column)
```

`ASOF JOIN` uses `equi_columnX` for joining on equality and `asof_column` for joining on the closest match with the `table_1.asof_column >= table_2.asof_column` condition. The `asof_column` column is always the last one in the `USING` clause.
For example, consider the following tables:

```text
     table_1                           table_2
  event   | ev_time | user_id       event   | ev_time | user_id
----------|---------|----------   ----------|---------|----------
              ...                               ...
event_1_1 |  12:00  |  42         event_2_1 |  11:59  |   42
              ...                 event_2_2 |  12:30  |   42
event_1_2 |  13:00  |  42         event_2_3 |  13:00  |   42
              ...                               ...
```

`ASOF JOIN` can take the timestamp of a user event from `table_1` and find an event in `table_2` where the timestamp is closest to the timestamp of the event from `table_1`, corresponding to the closest match condition. Equal timestamp values are the closest, if available. Here, the `user_id` column can be used for joining on equality, and the `ev_time` column can be used for joining on the closest match. In our example, `event_1_1` can be joined with `event_2_1` and `event_1_2` can be joined with `event_2_3`, but `event_2_2` can't be joined.
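The closest-match rule (`table_1.asof_column >= table_2.asof_column`, with the nearest value winning) can be sketched in Python. Minutes since midnight stand in for the timestamps above; this is an illustration, not ClickHouse's implementation:

```python
import bisect

# ASOF LEFT JOIN sketch: for each row of table_1, find the table_2 row with
# the same user_id whose ev_time is the latest one <= the left ev_time.
# Rows are (event, ev_time in minutes since midnight, user_id).

table_1 = [("event_1_1", 12 * 60, 42), ("event_1_2", 13 * 60, 42)]
table_2 = [("event_2_1", 11 * 60 + 59, 42), ("event_2_2", 12 * 60 + 30, 42),
           ("event_2_3", 13 * 60, 42)]

# Group the right table by the equality key and sort by the asof column.
right = {}
for name, t, uid in table_2:
    right.setdefault(uid, []).append((t, name))
for rows in right.values():
    rows.sort()

def asof_left_join(left):
    out = []
    for name, t, uid in left:
        rows = right.get(uid, [])
        times = [rt for rt, _ in rows]
        i = bisect.bisect_right(times, t)  # number of right times <= t
        # The closest match is the last right time <= t, or no match at all.
        out.append((name, rows[i - 1][1] if i else None))
    return out

print(asof_left_join(table_1))  # [('event_1_1', 'event_2_1'), ('event_1_2', 'event_2_3')]
```

As in the text, `event_2_2` (12:30) is never emitted: it is not the closest earlier event for either left row.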
:::note
`ASOF JOIN` is supported only by the `hash` and `full_sorting_merge` join algorithms. It is **not** supported in the `Join` table engine.
:::
## PASTE JOIN usage {#paste-join-usage}

The result of `PASTE JOIN` is a table that contains all columns from the left subquery followed by all columns from the right subquery. The rows are matched based on their positions in the original tables (the order of rows should be defined). If the subqueries return a different number of rows, extra rows will be cut.

Example:

```sql
SELECT *
FROM
(
    SELECT number AS a
    FROM numbers(2)
) AS t1
PASTE JOIN
(
    SELECT number AS a
    FROM numbers(2)
    ORDER BY a DESC
) AS t2
```

```response
ββaββ¬βt2.aββ
β 0 β    1 β
β 1 β    0 β
βββββ΄βββββββ
```
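Positional matching with truncation to the shorter side behaves much like Python's `zip`; a hypothetical sketch (single-threaded case only, not ClickHouse code):

```python
# PASTE JOIN sketch: concatenate rows by position; extra rows on the
# longer side are cut, just as zip stops at the shorter input.

t1 = [(0,), (1,)]  # SELECT number AS a FROM numbers(2)
t2 = [(1,), (0,)]  # the same, ORDER BY a DESC

def paste_join(left, right):
    return [l + r for l, r in zip(left, right)]

print(paste_join(t1, t2))                  # [(0, 1), (1, 0)]
print(paste_join(t1, [(9,), (8,), (7,)]))  # extra right row is cut: [(0, 9), (1, 8)]
```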
Note: in this case, the result can be nondeterministic if the reading is parallel. Example:

```sql
SELECT *
FROM
(
    SELECT number AS a
    FROM numbers_mt(5)
) AS t1
PASTE JOIN
(
    SELECT number AS a
    FROM numbers(10)
    ORDER BY a DESC
) AS t2
SETTINGS max_block_size = 2;
```

```response
ββaββ¬βt2.aββ
β 2 β    9 β
β 3 β    8 β
βββββ΄βββββββ
ββaββ¬βt2.aββ
β 0 β    7 β
β 1 β    6 β
βββββ΄βββββββ
ββaββ¬βt2.aββ
β 4 β    5 β
βββββ΄βββββββ
```
## Distributed JOIN {#distributed-join}

There are two ways to execute a JOIN involving distributed tables:

- When using a normal `JOIN`, the query is sent to remote servers. Subqueries are run on each of them in order to make the right table, and the join is performed with this table. In other words, the right table is formed on each server separately.
- When using `GLOBAL ... JOIN`, first the requestor server runs a subquery to calculate the right table. This temporary table is passed to each remote server, and queries are run on them using the temporary data that was transmitted.

Be careful when using `GLOBAL`. For more information, see the Distributed subqueries section.
## Implicit type conversion {#implicit-type-conversion}

`INNER JOIN`, `LEFT JOIN`, `RIGHT JOIN`, and `FULL JOIN` queries support implicit type conversion for "join keys". However, the query cannot be executed if the join keys from the left and the right tables cannot be converted to a single type (for example, there is no data type that can hold all values from both `UInt64` and `Int64`, or `String` and `Int32`).

**Example**

Consider the table `t_1`:
```response
ββaββ¬βbββ¬βtoTypeName(a)ββ¬βtoTypeName(b)ββ
β 1 β 1 β UInt16        β UInt8         β
β 2 β 2 β UInt16        β UInt8         β
βββββ΄ββββ΄ββββββββββββββββ΄ββββββββββββββββ
```

and the table `t_2`:

```response
βββaββ¬ββββbββ¬βtoTypeName(a)ββ¬βtoTypeName(b)ββββ
β -1 β    1 β Int16         β Nullable(Int64) β
β  1 β   -1 β Int16         β Nullable(Int64) β
β  1 β    1 β Int16         β Nullable(Int64) β
ββββββ΄βββββββ΄ββββββββββββββββ΄ββββββββββββββββββ
```

The query

```sql
SELECT a, b, toTypeName(a), toTypeName(b) FROM t_1 FULL JOIN t_2 USING (a, b);
```
returns the set: