```sql
SET use_variant_as_common_type = 1;
SELECT map('a', range(number), 'b', number, 'c', 'str_' || toString(number)) as map_of_variants FROM numbers(3);
```

```text
┌─map_of_variants───────────────┐
│ {'a':[],'b':0,'c':'str_0'}    │
│ {'a':[0],'b':1,'c':'str_1'}   │
│ {'a':[0,1],'b':2,'c':'str_2'} │
└───────────────────────────────┘
```
## Reading Variant nested types as subcolumns {#reading-variant-nested-types-as-subcolumns}
The `Variant` type supports reading a single nested type from a `Variant` column using the type name as a subcolumn. So, if you have a column `variant Variant(T1, T2, T3)`, you can read a subcolumn of type `T2` using the syntax `variant.T2`. This subcolumn will have type `Nullable(T2)` if `T2` can be inside `Nullable`, and `T2` otherwise. This subcolumn will be the same size as the original `Variant` column and will contain `NULL` values (or empty values if `T2` cannot be inside `Nullable`) in all rows in which the original `Variant` column doesn't have type `T2`.

Variant subcolumns can also be read using the function `variantElement(variant_column, type_name)`.

Examples:
```sql
CREATE TABLE test (v Variant(UInt64, String, Array(UInt64))) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('Hello, World!'), ([1, 2, 3]);
SELECT v, v.String, v.UInt64, v.`Array(UInt64)` FROM test;
```

```text
┌─v─────────────┬─v.String──────┬─v.UInt64─┬─v.Array(UInt64)─┐
│ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ          │     ᴺᵁᴸᴸ │ []              │
│ 42            │ ᴺᵁᴸᴸ          │       42 │ []              │
│ Hello, World! │ Hello, World! │     ᴺᵁᴸᴸ │ []              │
│ [1,2,3]       │ ᴺᵁᴸᴸ          │     ᴺᵁᴸᴸ │ [1,2,3]         │
└───────────────┴───────────────┴──────────┴─────────────────┘
```
```sql
SELECT toTypeName(v.String), toTypeName(v.UInt64), toTypeName(v.`Array(UInt64)`) FROM test LIMIT 1;
```

```text
┌─toTypeName(v.String)─┬─toTypeName(v.UInt64)─┬─toTypeName(v.Array(UInt64))─┐
│ Nullable(String)     │ Nullable(UInt64)     │ Array(UInt64)               │
└──────────────────────┴──────────────────────┴─────────────────────────────┘
```
```sql
SELECT v, variantElement(v, 'String'), variantElement(v, 'UInt64'), variantElement(v, 'Array(UInt64)') FROM test;
```

```text
┌─v─────────────┬─variantElement(v, 'String')─┬─variantElement(v, 'UInt64')─┬─variantElement(v, 'Array(UInt64)')─┐
│ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ                        │                        ᴺᵁᴸᴸ │ []                                 │
│ 42            │ ᴺᵁᴸᴸ                        │                          42 │ []                                 │
│ Hello, World! │ Hello, World!               │                        ᴺᵁᴸᴸ │ []                                 │
│ [1,2,3]       │ ᴺᵁᴸᴸ                        │                        ᴺᵁᴸᴸ │ [1,2,3]                            │
└───────────────┴─────────────────────────────┴─────────────────────────────┴────────────────────────────────────┘
```
To know which variant is stored in each row, the function `variantType(variant_column)` can be used. It returns an `Enum` with the variant type name for each row (or `'None'` if the row is `NULL`).

Example:
```sql
CREATE TABLE test (v Variant(UInt64, String, Array(UInt64))) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('Hello, World!'), ([1, 2, 3]);
SELECT variantType(v) FROM test;
```

```text
┌─variantType(v)─┐
│ None           │
│ UInt64         │
│ String         │
│ Array(UInt64)  │
└────────────────┘
```
```sql
SELECT toTypeName(variantType(v)) FROM test LIMIT 1;
```

```text
┌─toTypeName(variantType(v))──────────────────────────────────────────┐
│ Enum8('None' = -1, 'Array(UInt64)' = 0, 'String' = 1, 'UInt64' = 2) │
└─────────────────────────────────────────────────────────────────────┘
```
## Conversion between a Variant column and other columns {#conversion-between-a-variant-column-and-other-columns}
There are 4 possible conversions that can be performed with a column of type `Variant`.
### Converting a String column to a Variant column {#converting-a-string-column-to-a-variant-column}
Conversion from `String` to `Variant` is performed by parsing a value of the `Variant` type from the string value:

```sql
SELECT '42'::Variant(String, UInt64) AS variant, variantType(variant) AS variant_type
```

```text
┌─variant─┬─variant_type─┐
│ 42      │ UInt64       │
└─────────┴──────────────┘
```
```sql
SELECT '[1, 2, 3]'::Variant(String, Array(UInt64)) as variant, variantType(variant) as variant_type
```

```text
┌─variant─┬─variant_type──┐
│ [1,2,3] │ Array(UInt64) │
└─────────┴───────────────┘
```
```sql
SELECT CAST(map('key1', '42', 'key2', 'true', 'key3', '2020-01-01'), 'Map(String, Variant(UInt64, Bool, Date))') AS map_of_variants, mapApply((k, v) -> (k, variantType(v)), map_of_variants) AS map_of_variant_types
```

```text
┌─map_of_variants─────────────────────────────┬─map_of_variant_types──────────────────────────┐
│ {'key1':42,'key2':true,'key3':'2020-01-01'} │ {'key1':'UInt64','key2':'Bool','key3':'Date'} │
└─────────────────────────────────────────────┴───────────────────────────────────────────────┘
```
To disable parsing during conversion from `String` to `Variant`, you can disable the setting `cast_string_to_variant_use_inference`:

```sql
SET cast_string_to_variant_use_inference = 0;
SELECT '[1, 2, 3]'::Variant(String, Array(UInt64)) as variant, variantType(variant) as variant_type
```

```text
┌─variant───┬─variant_type─┐
│ [1, 2, 3] │ String       │
└───────────┴──────────────┘
```
### Converting an ordinary column to a Variant column {#converting-an-ordinary-column-to-a-variant-column}
It is possible to convert an ordinary column with type `T` to a `Variant` column containing this type:

```sql
SELECT toTypeName(variant) AS type_name, [1,2,3]::Array(UInt64)::Variant(UInt64, String, Array(UInt64)) as variant, variantType(variant) as variant_name
```

```text
┌─type_name──────────────────────────────┬─variant─┬─variant_name──┐
│ Variant(Array(UInt64), String, UInt64) │ [1,2,3] │ Array(UInt64) │
└────────────────────────────────────────┴─────────┴───────────────┘
```
Note: converting from the `String` type is always performed through parsing. If you need to convert a `String` column to the `String` variant of a `Variant` without parsing, you can do the following:

```sql
SELECT '[1, 2, 3]'::Variant(String)::Variant(String, Array(UInt64), UInt64) as variant, variantType(variant) as variant_type
```

```text
┌─variant───┬─variant_type─┐
│ [1, 2, 3] │ String       │
└───────────┴──────────────┘
```
### Converting a Variant column to an ordinary column {#converting-a-variant-column-to-an-ordinary-column}
It is possible to convert a `Variant` column to an ordinary column. In this case all nested variants will be converted to the destination type:

```sql
CREATE TABLE test (v Variant(UInt64, String)) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('42.42');
SELECT v::Nullable(Float64) FROM test;
```

```text
┌─CAST(v, 'Nullable(Float64)')─┐
│                         ᴺᵁᴸᴸ │
│                           42 │
│                        42.42 │
└──────────────────────────────┘
```
### Converting a Variant to another Variant {#converting-a-variant-to-another-variant}
It is possible to convert a `Variant` column to another `Variant` column, but only if the destination `Variant` column contains all nested types from the original `Variant`:

```sql
CREATE TABLE test (v Variant(UInt64, String)) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('String');
SELECT v::Variant(UInt64, String, Array(UInt64)) FROM test;
```

```text
┌─CAST(v, 'Variant(UInt64, String, Array(UInt64))')─┐
│ ᴺᵁᴸᴸ                                              │
│ 42                                                │
│ String                                            │
└───────────────────────────────────────────────────┘
```
## Reading Variant type from the data {#reading-variant-type-from-the-data}
All text formats (TSV, CSV, CustomSeparated, Values, JSONEachRow, etc.) support reading the `Variant` type. During data parsing, ClickHouse tries to insert the value into the most appropriate variant type.

Example:

```sql
SELECT
    v,
    variantElement(v, 'String') AS str,
    variantElement(v, 'UInt64') AS num,
    variantElement(v, 'Float64') AS float,
    variantElement(v, 'DateTime') AS date,
    variantElement(v, 'Array(UInt64)') AS arr
FROM format(JSONEachRow, 'v Variant(String, UInt64, Float64, DateTime, Array(UInt64))', $$
{"v" : "Hello, World!"},
{"v" : 42},
{"v" : 42.42},
{"v" : "2020-01-01 00:00:00"},
{"v" : [1, 2, 3]}
$$)
```
```text
┌─v───────────────────┬─str───────────┬──num─┬─float─┬────────────────date─┬─arr─────┐
│ Hello, World!       │ Hello, World! │ ᴺᵁᴸᴸ │  ᴺᵁᴸᴸ │                ᴺᵁᴸᴸ │ []      │
│ 42                  │ ᴺᵁᴸᴸ          │   42 │  ᴺᵁᴸᴸ │                ᴺᵁᴸᴸ │ []      │
│ 42.42               │ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ │ 42.42 │                ᴺᵁᴸᴸ │ []      │
│ 2020-01-01 00:00:00 │ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ │  ᴺᵁᴸᴸ │ 2020-01-01 00:00:00 │ []      │
│ [1,2,3]             │ ᴺᵁᴸᴸ          │ ᴺᵁᴸᴸ │  ᴺᵁᴸᴸ │                ᴺᵁᴸᴸ │ [1,2,3] │
└─────────────────────┴───────────────┴──────┴───────┴─────────────────────┴─────────┘
```
## Comparing values of Variant type {#comparing-values-of-variant-data}
Values of a `Variant` type can be compared only with values of the same `Variant` type.

The result of the operator `<` for values `v1` with underlying type `T1` and `v2` with underlying type `T2` of a type `Variant(..., T1, ..., T2, ...)` is defined as follows:
- If `T1 = T2 = T`, the result will be `v1.T < v2.T` (the underlying values will be compared).
- If `T1 != T2`, the result will be `T1 < T2` (the type names will be compared).

Examples:
```sql
CREATE TABLE test (v1 Variant(String, UInt64, Array(UInt32)), v2 Variant(String, UInt64, Array(UInt32))) ENGINE=Memory;
INSERT INTO test VALUES (42, 42), (42, 43), (42, 'abc'), (42, [1, 2, 3]), (42, []), (42, NULL);
```

```sql
SELECT v2, variantType(v2) AS v2_type FROM test ORDER BY v2;
```

```text
┌─v2──────┬─v2_type───────┐
│ []      │ Array(UInt32) │
│ [1,2,3] │ Array(UInt32) │
│ abc     │ String        │
│ 42      │ UInt64        │
│ 43      │ UInt64        │
│ ᴺᵁᴸᴸ    │ None          │
└─────────┴───────────────┘
```
```sql
SELECT v1, variantType(v1) AS v1_type, v2, variantType(v2) AS v2_type, v1 = v2, v1 < v2, v1 > v2 FROM test;
```

```text
┌─v1─┬─v1_type─┬─v2──────┬─v2_type───────┬─equals(v1, v2)─┬─less(v1, v2)─┬─greater(v1, v2)─┐
│ 42 │ UInt64  │ 42      │ UInt64        │              1 │            0 │               0 │
│ 42 │ UInt64  │ 43      │ UInt64        │              0 │            1 │               0 │
│ 42 │ UInt64  │ abc     │ String        │              0 │            0 │               1 │
│ 42 │ UInt64  │ [1,2,3] │ Array(UInt32) │              0 │            0 │               1 │
│ 42 │ UInt64  │ []      │ Array(UInt32) │              0 │            0 │               1 │
│ 42 │ UInt64  │ ᴺᵁᴸᴸ    │ None          │              0 │            1 │               0 │
└────┴─────────┴─────────┴───────────────┴────────────────┴──────────────┴─────────────────┘
```
If you need to find the row with a specific `Variant` value, you can do one of the following:

- Cast the value to the corresponding `Variant` type:

```sql
SELECT * FROM test WHERE v2 == [1,2,3]::Array(UInt32)::Variant(String, UInt64, Array(UInt32));
```

```text
┌─v1─┬─v2──────┐
│ 42 │ [1,2,3] │
└────┴─────────┘
```
- Compare the `Variant` subcolumn with the required type:

```sql
SELECT * FROM test WHERE v2.`Array(UInt32)` == [1,2,3] -- or using variantElement(v2, 'Array(UInt32)')
```

```text
┌─v1─┬─v2──────┐
│ 42 │ [1,2,3] │
└────┴─────────┘
```
Sometimes it can be useful to make an additional check on the variant type, as subcolumns with complex types like `Array`/`Map`/`Tuple` cannot be inside `Nullable` and will have default values instead of `NULL` in rows with different types:

```sql
SELECT v2, v2.`Array(UInt32)`, variantType(v2) FROM test WHERE v2.`Array(UInt32)` == [];
```
```text
┌─v2───┬─v2.Array(UInt32)─┬─variantType(v2)─┐
│ 42   │ []               │ UInt64          │
│ 43   │ []               │ UInt64          │
│ abc  │ []               │ String          │
│ []   │ []               │ Array(UInt32)   │
│ ᴺᵁᴸᴸ │ []               │ None            │
└──────┴──────────────────┴─────────────────┘
```
```sql
SELECT v2, v2.`Array(UInt32)`, variantType(v2) FROM test WHERE variantType(v2) == 'Array(UInt32)' AND v2.`Array(UInt32)` == [];
```

```text
┌─v2─┬─v2.Array(UInt32)─┬─variantType(v2)─┐
│ [] │ []               │ Array(UInt32)   │
└────┴──────────────────┴─────────────────┘
```
Note: values of variants with different numeric types are considered different variants and are not compared with each other; their type names are compared instead.

Example:
```sql
SET allow_suspicious_variant_types = 1;
CREATE TABLE test (v Variant(UInt32, Int64)) ENGINE=Memory;
INSERT INTO test VALUES (1::UInt32), (1::Int64), (100::UInt32), (100::Int64);
SELECT v, variantType(v) FROM test ORDER BY v;
```

```text
┌─v───┬─variantType(v)─┐
│ 1   │ Int64          │
│ 100 │ Int64          │
│ 1   │ UInt32         │
│ 100 │ UInt32         │
└─────┴────────────────┘
```
Note: by default the `Variant` type is not allowed in `GROUP BY`/`ORDER BY` keys. If you want to use it, consider its special comparison rule and enable the `allow_suspicious_types_in_group_by`/`allow_suspicious_types_in_order_by` settings.
## JSONExtract functions with Variant {#jsonextract-functions-with-variant}
All `JSONExtract*` functions support the `Variant` type:

```sql
SELECT JSONExtract('{"a" : [1, 2, 3]}', 'a', 'Variant(UInt32, String, Array(UInt32))') AS variant, variantType(variant) AS variant_type;
```

```text
┌─variant─┬─variant_type──┐
│ [1,2,3] │ Array(UInt32) │
└─────────┴───────────────┘
```
```sql
SELECT JSONExtract('{"obj" : {"a" : 42, "b" : "Hello", "c" : [1,2,3]}}', 'obj', 'Map(String, Variant(UInt32, String, Array(UInt32)))') AS map_of_variants, mapApply((k, v) -> (k, variantType(v)), map_of_variants) AS map_of_variant_types
```

```text
┌─map_of_variants──────────────────┬─map_of_variant_types────────────────────────────┐
│ {'a':42,'b':'Hello','c':[1,2,3]} │ {'a':'UInt32','b':'String','c':'Array(UInt32)'} │
└──────────────────────────────────┴─────────────────────────────────────────────────┘
```
```sql
SELECT JSONExtractKeysAndValues('{"a" : 42, "b" : "Hello", "c" : [1,2,3]}', 'Variant(UInt32, String, Array(UInt32))') AS variants, arrayMap(x -> (x.1, variantType(x.2)), variants) AS variant_types
```

```text
┌─variants───────────────────────────────┬─variant_types─────────────────────────────────────────┐
│ [('a',42),('b','Hello'),('c',[1,2,3])] │ [('a','UInt32'),('b','String'),('c','Array(UInt32)')] │
└────────────────────────────────────────┴───────────────────────────────────────────────────────┘
```
---
description: 'Documentation for the UUID data type in ClickHouse'
sidebar_label: 'UUID'
sidebar_position: 24
slug: /sql-reference/data-types/uuid
title: 'UUID'
doc_type: 'reference'
---

# UUID
A Universally Unique Identifier (UUID) is a 16-byte value used to identify records. For detailed information about UUIDs, see Wikipedia.

While different UUID variants exist, ClickHouse does not validate that inserted UUIDs conform to a particular variant.

UUIDs are internally treated as a sequence of 16 random bytes with an 8-4-4-4-12 representation at the SQL level.
Example UUID value:

```text
61f0c404-5cb3-11e7-907b-a6006ad3dba0
```

The default UUID is all-zero. It is used, for example, when a new record is inserted but no value for a UUID column is specified:

```text
00000000-0000-0000-0000-000000000000
```
Due to historical reasons, UUIDs are sorted by their second half.
UUIDs should therefore not be used directly in a primary key, sorting key, or partition key of a table.
Example:
```sql
CREATE TABLE tab (uuid UUID) ENGINE = Memory;
INSERT INTO tab SELECT generateUUIDv4() FROM numbers(50);
SELECT * FROM tab ORDER BY uuid;
```

Result:

```text
┌─uuid─────────────────────────────────┐
│ 36a0b67c-b74a-4640-803b-e44bb4547e3c │
│ 3a00aeb8-2605-4eec-8215-08c0ecb51112 │
│ 3fda7c49-282e-421a-85ab-c5684ef1d350 │
│ 16ab55a7-45f6-44a8-873c-7a0b44346b3e │
│ e3776711-6359-4f22-878d-bf290d052c85 │
│ [...]                                │
│ 9eceda2f-6946-40e3-b725-16f2709ca41a │
│ 03644f74-47ba-4020-b865-be5fd4c8c7ff │
│ ce3bc93d-ab19-4c74-b8cc-737cb9212099 │
│ b7ad6c91-23d6-4b5e-b8e4-a52297490b56 │
│ 06892f64-cc2d-45f3-bf86-f5c5af5768a9 │
└──────────────────────────────────────┘
```
As a workaround, the UUID can be converted to a type with an intuitive sort order.
Example using conversion to UInt128:
```sql
CREATE TABLE tab (uuid UUID) ENGINE = Memory;
INSERT INTO tab SELECT generateUUIDv4() FROM numbers(50);
SELECT * FROM tab ORDER BY toUInt128(uuid);
```

Result:

```text
┌─uuid─────────────────────────────────┐
│ 018b81cd-aca1-4e9c-9e56-a84a074dc1a8 │
│ 02380033-c96a-438e-913f-a2c67e341def │
│ 057cf435-7044-456a-893b-9183a4475cea │
│ 0a3c1d4c-f57d-44cc-8567-60cb0c46f76e │
│ 0c15bf1c-8633-4414-a084-7017eead9e41 │
│ [...]                                │
│ f808cf05-ea57-4e81-8add-29a195bde63d │
│ f859fb5d-764b-4a33-81e6-9e4239dae083 │
│ fb1b7e37-ab7b-421a-910b-80e60e2bf9eb │
│ fc3174ff-517b-49b5-bfe2-9b369a5c506d │
│ fece9bf6-3832-449a-b058-cd1d70a02c8b │
└──────────────────────────────────────┘
```
## Generating UUIDs {#generating-uuids}
ClickHouse provides the `generateUUIDv4` function to generate random UUID version 4 values.
## Usage Example {#usage-example}
Example 1
This example demonstrates the creation of a table with a UUID column and the insertion of a value into the table.
```sql
CREATE TABLE t_uuid (x UUID, y String) ENGINE=TinyLog
INSERT INTO t_uuid SELECT generateUUIDv4(), 'Example 1'
SELECT * FROM t_uuid
```
Result:
```text
┌────────────────────────────────────x─┬─y─────────┐
│ 417ddc5d-e556-4d27-95dd-a34d84e46a50 │ Example 1 │
└──────────────────────────────────────┴───────────┘
```
Example 2
In this example, no UUID column value is specified when the record is inserted, i.e. the default UUID value is inserted:
```sql
INSERT INTO t_uuid (y) VALUES ('Example 2')
SELECT * FROM t_uuid
```
```text
┌────────────────────────────────────x─┬─y─────────┐
│ 417ddc5d-e556-4d27-95dd-a34d84e46a50 │ Example 1 │
│ 00000000-0000-0000-0000-000000000000 │ Example 2 │
└──────────────────────────────────────┴───────────┘
```
## Restrictions {#restrictions}
The UUID data type only supports functions which the `String` data type also supports (for example, `min`, `max`, and `count`).

The UUID data type is not supported by arithmetic operations (for example, `abs`) or aggregate functions, such as `sum` and `avg`.
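As a minimal sketch of the supported functions, assuming the `t_uuid` table from the usage examples above:

```sql
-- min, max, and count are supported because they also work on String
SELECT min(x), max(x), count(x) FROM t_uuid;

-- sum(x) or avg(x) would raise an exception: UUID is not an arithmetic type
```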
---
description: 'Documentation for deprecated Object data type in ClickHouse'
keywords: ['object', 'data type']
sidebar_label: 'Object Data Type'
sidebar_position: 26
slug: /sql-reference/data-types/object-data-type
title: 'Object Data Type'
doc_type: 'reference'
---

import DeprecatedBadge from '@theme/badges/DeprecatedBadge';

# Object data type
This feature is not production-ready and is deprecated. If you need to work with JSON documents, consider using this guide instead. A new implementation to support JSON objects is in beta; further details here.
Stores JavaScript Object Notation (JSON) documents in a single column.

`JSON` can be used as an alias to `Object('json')` when the setting `use_json_alias_for_old_object_type` is enabled.
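A hedged sketch of using the alias (depending on your ClickHouse version, creating Object columns may also require enabling the experimental type):

```sql
SET use_json_alias_for_old_object_type = 1;
-- may also require: SET allow_experimental_object_type = 1;

-- here JSON is resolved as an alias of Object('json')
CREATE TABLE json_alias (o JSON) ENGINE = Memory;
```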
## Example {#example}
Example 1

Creating a table with a `JSON` column and inserting data into it:

```sql
CREATE TABLE json
(
    o JSON
)
ENGINE = Memory
```

```sql
INSERT INTO json VALUES ('{"a": 1, "b": { "c": 2, "d": [1, 2, 3] }}')
```

```sql
SELECT o.a, o.b.c, o.b.d[3] FROM json
```

```text
┌─o.a─┬─o.b.c─┬─arrayElement(o.b.d, 3)─┐
│   1 │     2 │                      3 │
└─────┴───────┴────────────────────────┘
```
Example 2

To be able to create an ordered `MergeTree` family table, the sorting key has to be extracted into its own column. For example, to insert a file of compressed HTTP access logs in JSON format:

```sql
CREATE TABLE logs
(
    timestamp DateTime,
    message JSON
)
ENGINE = MergeTree
ORDER BY timestamp
```

```sql
INSERT INTO logs
SELECT parseDateTimeBestEffort(JSONExtractString(json, 'timestamp')), json
FROM file('access.json.gz', JSONAsString)
```
## Displaying JSON columns {#displaying-json-columns}
When displaying a `JSON` column, ClickHouse only shows the field values by default (because internally it is represented as a tuple). You can also display the field names by setting `output_format_json_named_tuples_as_objects = 1`:

```sql
SET output_format_json_named_tuples_as_objects = 1

SELECT * FROM json FORMAT JSONEachRow
```

```text
{"o":{"a":1,"b":{"c":2,"d":[1,2,3]}}}
```
## Related Content {#related-content}

- Using JSON in ClickHouse
- Getting Data Into ClickHouse - Part 2 - A JSON detour
---
description: 'Documentation for the Date data type in ClickHouse'
sidebar_label: 'Date'
sidebar_position: 12
slug: /sql-reference/data-types/date
title: 'Date'
doc_type: 'reference'
---

# Date
A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch to the upper threshold defined by a constant at the compilation stage (currently, this is until the year 2149, but the final fully-supported year is 2148).
Supported range of values: [1970-01-01, 2149-06-06].
The date value is stored without the time zone.
**Example**

Creating a table with a `Date`-type column and inserting data into it:

```sql
CREATE TABLE dt
(
    `timestamp` Date,
    `event_id` UInt8
)
ENGINE = TinyLog;
```
```sql
-- Parse Date
-- - from string,
-- - from 'small' integer interpreted as number of days since 1970-01-01, and
-- - from 'big' integer interpreted as number of seconds since 1970-01-01.
INSERT INTO dt VALUES ('2019-01-01', 1), (17897, 2), (1546300800, 3);
SELECT * FROM dt;
```
```text
┌──timestamp─┬─event_id─┐
│ 2019-01-01 │        1 │
│ 2019-01-01 │        2 │
│ 2019-01-01 │        3 │
└────────────┴──────────┘
```
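The days-since-epoch representation described above can be observed directly, as a small sketch using the `dt` table from this example:

```sql
-- A Date is stored as an unsigned 16-bit day count since 1970-01-01
SELECT timestamp, toUInt16(timestamp) AS days_since_epoch FROM dt LIMIT 1;
-- 2019-01-01 corresponds to 17897 days since 1970-01-01,
-- which is why inserting the integer 17897 above produced 2019-01-01
```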
**See Also**

- Functions for working with dates and times
- Operators for working with dates and times
- `DateTime` data type
---
description: 'Documentation for the LowCardinality optimization for string columns'
sidebar_label: 'LowCardinality(T)'
sidebar_position: 42
slug: /sql-reference/data-types/lowcardinality
title: 'LowCardinality(T)'
doc_type: 'reference'
---

# LowCardinality(T)
Changes the internal representation of other data types to be dictionary-encoded.
## Syntax {#syntax}

```sql
LowCardinality(data_type)
```
**Parameters**

- `data_type` — `String`, `FixedString`, `Date`, `DateTime`, and numbers excepting `Decimal`. `LowCardinality` is not efficient for some data types; see the `allow_suspicious_low_cardinality_types` setting description.
## Description {#description}

`LowCardinality` is a superstructure that changes a data storage method and rules of data processing. ClickHouse applies dictionary coding to `LowCardinality` columns. Operating with dictionary-encoded data significantly increases the performance of `SELECT` queries for many applications.

The efficiency of using the `LowCardinality` data type depends on data diversity. If a dictionary contains less than 10,000 distinct values, then ClickHouse mostly shows higher efficiency of data reading and storing. If a dictionary contains more than 100,000 distinct values, then ClickHouse can perform worse in comparison with using ordinary data types.

Consider using `LowCardinality` instead of `Enum` when working with strings. `LowCardinality` provides more flexibility in use and often reveals the same or higher efficiency.
## Example {#example}

Create a table with a `LowCardinality` column:

```sql
CREATE TABLE lc_t
(
    `id` UInt16,
    `strings` LowCardinality(String)
)
ENGINE = MergeTree()
ORDER BY id
```
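As a hedged illustration of populating and querying such a column (assuming the `lc_t` table above; `toLowCardinality` is the conversion function listed below):

```sql
-- Values with few distinct strings are dictionary-encoded transparently
INSERT INTO lc_t SELECT number, toLowCardinality(toString(number % 10)) FROM numbers(1000);

-- Queries use the column exactly like an ordinary String column
SELECT strings, count() FROM lc_t GROUP BY strings ORDER BY strings;
```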
## Related Settings and Functions {#related-settings-and-functions}

Settings:

- low_cardinality_max_dictionary_size
- low_cardinality_use_single_dictionary_for_part
- low_cardinality_allow_in_native_format
- allow_suspicious_low_cardinality_types
- output_format_arrow_low_cardinality_as_dictionary

Functions:

- toLowCardinality

## Related content {#related-content}

- Blog: Optimizing ClickHouse with Schemas and Codecs
- Blog: Working with time series data in ClickHouse
- String Optimization (video presentation in Russian). Slides in English.
---
description: 'Documentation for the AggregateFunction data type in ClickHouse, which stores intermediate states of aggregate functions'
keywords: ['AggregateFunction', 'Type']
sidebar_label: 'AggregateFunction'
sidebar_position: 46
slug: /sql-reference/data-types/aggregatefunction
title: 'AggregateFunction Type'
doc_type: 'reference'
---

# AggregateFunction Type
## Description {#description}

All aggregate functions in ClickHouse have an implementation-specific intermediate state that can be serialized to an `AggregateFunction` data type and stored in a table. This is usually done by means of a materialized view.

There are two aggregate function combinators commonly used with the `AggregateFunction` type:

- The `-State` aggregate function combinator, which, when appended to an aggregate function name, produces `AggregateFunction` intermediate states.
- The `-Merge` aggregate function combinator, which is used to get the final result of an aggregation from the intermediate states.
## Syntax {#syntax}

```sql
AggregateFunction(aggregate_function_name, types_of_arguments...)
```

**Parameters**

- `aggregate_function_name` - The name of an aggregate function. If the function is parametric, then its parameters should be specified too.
- `types_of_arguments` - The types of the aggregate function arguments.

For example:

```sql
CREATE TABLE t
(
    column1 AggregateFunction(uniq, UInt64),
    column2 AggregateFunction(anyIf, String, UInt8),
    column3 AggregateFunction(quantiles(0.5, 0.9), UInt64)
) ENGINE = ...
```
## Usage {#usage}

### Data Insertion {#data-insertion}

To insert data into a table with columns of type `AggregateFunction`, you can use `INSERT SELECT` with aggregate functions and the `-State` aggregate function combinator.

For example, to insert into columns of type `AggregateFunction(uniq, UInt64)` and `AggregateFunction(quantiles(0.5, 0.9), UInt64)`, you would use the following aggregate functions with combinators:

```sql
uniqState(UserID)
quantilesState(0.5, 0.9)(SendTiming)
```
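Putting these together, a hedged sketch of a complete `INSERT SELECT` into the table `t` defined above (the source table `visits` and its columns `UserID`, `City`, and `SendTiming` are hypothetical names for illustration):

```sql
-- each -State function produces an AggregateFunction value
-- matching the corresponding column type of t
INSERT INTO t SELECT
    uniqState(UserID),
    anyIfState(City, SendTiming > 100),
    quantilesState(0.5, 0.9)(SendTiming)
FROM visits
```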
In contrast to the functions `uniq` and `quantiles`, `uniqState` and `quantilesState` (with the `-State` combinator appended) return the state, rather than the final value. In other words, they return a value of `AggregateFunction` type.

In the results of a `SELECT` query, values of type `AggregateFunction` have implementation-specific binary representations for all of the ClickHouse output formats. If you dump data into, for example, the `TabSeparated` format with a `SELECT` query, then this dump can be loaded back using the `INSERT` query.
### Data Selection {#data-selection}

When selecting data from an `AggregatingMergeTree` table, use the `GROUP BY` clause and the same aggregate functions as when you inserted the data, but use the `-Merge` combinator.

An aggregate function with the `-Merge` combinator appended to it takes a set of states, combines them, and returns the result of the complete data aggregation.

For example, the following two queries return the same result:

```sql
SELECT uniq(UserID) FROM table

SELECT uniqMerge(state) FROM (SELECT uniqState(UserID) AS state FROM table GROUP BY RegionID)
```
## Usage Example {#usage-example}

See the AggregatingMergeTree engine description.

## Related Content {#related-content}

- Blog: Using Aggregate Combinators in ClickHouse
- MergeState combinator.
- State combinator.
---
description: 'Documentation for the Enum data type in ClickHouse, which represents a set of named constant values'
sidebar_label: 'Enum'
sidebar_position: 20
slug: /sql-reference/data-types/enum
title: 'Enum'
doc_type: 'reference'
---

# Enum
Enumerated type consisting of named values.

Named values can be declared as `'string' = integer` pairs or `'string'` names. ClickHouse stores only numbers, but supports operations with the values through their names.

ClickHouse supports:

- 8-bit `Enum`. It can contain up to 256 values enumerated in the `[-128, 127]` range.
- 16-bit `Enum`. It can contain up to 65536 values enumerated in the `[-32768, 32767]` range.

ClickHouse automatically chooses the type of `Enum` when data is inserted. You can also use `Enum8` or `Enum16` types to be sure of the size of storage.
## Usage Examples {#usage-examples}
Here we create a table with an `Enum8('hello' = 1, 'world' = 2)` type column:

```sql
CREATE TABLE t_enum
(
    x Enum('hello' = 1, 'world' = 2)
)
ENGINE = TinyLog
```

Similarly, you could omit numbers. ClickHouse will assign consecutive numbers automatically. Numbers are assigned starting from 1 by default.

```sql
CREATE TABLE t_enum
(
    x Enum('hello', 'world')
)
ENGINE = TinyLog
```

You can also specify a legal starting number for the first name.

```sql
CREATE TABLE t_enum
(
    x Enum('hello' = 1, 'world')
)
ENGINE = TinyLog
```

```sql
CREATE TABLE t_enum
(
    x Enum8('hello' = -129, 'world')
)
ENGINE = TinyLog
```

```text
Exception on server:
Code: 69. DB::Exception: Value -129 for element 'hello' exceeds range of Enum8.
```
Column
x
can only store values that are listed in the type definition:
'hello'
or
'world'
. If you try to save any other value, ClickHouse will raise an exception. 8-bit size for this
Enum
is chosen automatically.
sql
INSERT INTO t_enum VALUES ('hello'), ('world'), ('hello')
text
Ok.
sql
INSERT INTO t_enum VALUES('a')
text
Exception on client:
Code: 49. DB::Exception: Unknown element 'a' for type Enum('hello' = 1, 'world' = 2)
When you query data from the table, ClickHouse outputs the string values from
Enum
.
sql
SELECT * FROM t_enum
text
ββxββββββ
β hello β
β world β
β hello β
βββββββββ
If you need to see the numeric equivalents of the rows, you must cast the
Enum
value to integer type.
sql
SELECT CAST(x, 'Int8') FROM t_enum
text
ββCAST(x, 'Int8')ββ
β 1 β
β 2 β
β 1 β
βββββββββββββββββββ
To create an Enum value in a query, you also need to use
CAST
.
sql
SELECT toTypeName(CAST('a', 'Enum(\'a\' = 1, \'b\' = 2)'))
text
ββtoTypeName(CAST('a', 'Enum(\'a\' = 1, \'b\' = 2)'))ββ
β Enum8('a' = 1, 'b' = 2) β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββ
General Rules and Usage {#general-rules-and-usage} | {"source_file": "enum.md"} | [
General Rules and Usage {#general-rules-and-usage}

Each of the values is assigned a number in the range `-128 ... 127` for `Enum8` or in the range `-32768 ... 32767` for `Enum16`. All the strings and numbers must be different. An empty string is allowed. If this type is specified (in a table definition), numbers can be in an arbitrary order; the order does not matter.

Neither the string nor the numeric value in an `Enum` can be `NULL`.

An `Enum` can be contained in a `Nullable` type. So if you create a table using the query

```sql
CREATE TABLE t_enum_nullable
(
    x Nullable( Enum8('hello' = 1, 'world' = 2) )
)
ENGINE = TinyLog
```

it can store not only `'hello'` and `'world'`, but `NULL` as well.

```sql
INSERT INTO t_enum_nullable VALUES('hello'),('world'),(NULL)
```

In RAM, an `Enum` column is stored in the same way as `Int8` or `Int16` of the corresponding numeric values.

When reading in text form, ClickHouse parses the value as a string and looks up the corresponding numeric value in the set of Enum values. If it is not found, an exception is thrown.

When writing in text form, it writes the value as the corresponding string. If the column data contains garbage (numbers that are not from the valid set), an exception is thrown. When reading and writing in binary form, it works the same way as for the Int8 and Int16 data types.

The implicit default value is the value with the lowest number.

During `ORDER BY`, `GROUP BY`, `IN`, `DISTINCT` and so on, Enums behave the same way as the corresponding numbers. For example, ORDER BY sorts them numerically. Equality and comparison operators work the same way on Enums as they do on the underlying numeric values.

Enum values cannot be compared with numbers. Enums can be compared to a constant string. If the string compared to is not a valid value for the Enum, an exception is thrown. The IN operator is supported with the Enum on the left-hand side and a set of strings on the right-hand side. The strings are the values of the corresponding Enum.

Most numeric and string operations are not defined for Enum values, e.g. adding a number to an Enum or concatenating a string to an Enum. However, the Enum has a natural `toString` function that returns its string value.

Enum values are also convertible to numeric types using the `toT` function, where T is a numeric type. When T corresponds to the enum's underlying numeric type, this conversion is zero-cost.

The Enum type can be changed without cost using ALTER, if only the set of values is changed. It is possible to both add and remove members of the Enum using ALTER (removing is safe only if the removed value has never been used in the table). As a safeguard, changing the numeric value of a previously defined Enum member will throw an exception.
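As a small illustration of the rules above (assuming the `t_enum` table from the usage examples), Enums can be filtered with `IN` over strings, sorted by their numeric values, and converted with `toString` and `CAST`:

```sql
SELECT x, toString(x), CAST(x, 'Int8')
FROM t_enum
WHERE x IN ('hello')   -- IN with a set of strings
ORDER BY x;            -- sorts by the underlying numbers
```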
Using ALTER, it is possible to change an Enum8 to an Enum16 or vice versa, just like changing an Int8 to an Int16.
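For example, a hypothetical sketch against the `t_enum` table from above: widening the type and adding a member can be done in one ALTER, since neither change touches an existing numeric value.

```sql
ALTER TABLE t_enum
    MODIFY COLUMN x Enum16('hello' = 1, 'world' = 2, 'test' = 3);
```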
---
description: 'Documentation for the String data type in ClickHouse'
sidebar_label: 'String'
sidebar_position: 8
slug: /sql-reference/data-types/string
title: 'String'
doc_type: 'reference'
---

String

Strings of an arbitrary length. The length is not limited. The value can contain an arbitrary set of bytes, including null bytes. The String type replaces the types VARCHAR, BLOB, CLOB, and others from other DBMSs.

When creating tables, numeric parameters for string fields can be set (e.g. `VARCHAR(255)`), but ClickHouse ignores them.

Aliases for `String`: `LONGTEXT`, `MEDIUMTEXT`, `TINYTEXT`, `TEXT`, `LONGBLOB`, `MEDIUMBLOB`, `TINYBLOB`, `BLOB`, `VARCHAR`, `CHAR`, `CHAR LARGE OBJECT`, `CHAR VARYING`, `CHARACTER LARGE OBJECT`, `CHARACTER VARYING`, `NCHAR LARGE OBJECT`, `NCHAR VARYING`, `NATIONAL CHARACTER LARGE OBJECT`, `NATIONAL CHARACTER VARYING`, `NATIONAL CHAR VARYING`, `NATIONAL CHARACTER`, `NATIONAL CHAR`, `BINARY LARGE OBJECT`, `BINARY VARYING`.

Encodings {#encodings}

ClickHouse does not have the concept of encodings. Strings can contain an arbitrary set of bytes, which are stored and output as-is. If you need to store texts, we recommend using UTF-8 encoding. At the very least, if your terminal uses UTF-8 (as recommended), you can read and write your values without making conversions.

Similarly, certain functions for working with strings have separate variations that work under the assumption that the string contains a set of bytes representing UTF-8 encoded text. For example, the `length` function calculates the string length in bytes, while the `lengthUTF8` function calculates the string length in Unicode code points, assuming that the value is UTF-8 encoded.
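For instance, a multi-byte UTF-8 character makes the two functions diverge ('Γ©' occupies two bytes in UTF-8):

```sql
SELECT length('cafΓ©') AS bytes, lengthUTF8('cafΓ©') AS code_points;
-- bytes = 5, code_points = 4
```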
---
description: 'Documentation for the FixedString data type in ClickHouse'
sidebar_label: 'FixedString(N)'
sidebar_position: 10
slug: /sql-reference/data-types/fixedstring
title: 'FixedString(N)'
doc_type: 'reference'
---

FixedString(N)

A fixed-length string of `N` bytes (neither characters nor code points).

To declare a column of `FixedString` type, use the following syntax:

```sql
<column_name> FixedString(N)
```

Where `N` is a natural number.

The `FixedString` type is efficient when data has the length of precisely `N` bytes. In all other cases, it is likely to reduce efficiency.

Examples of the values that can be efficiently stored in `FixedString`-typed columns:

- The binary representation of IP addresses (`FixedString(16)` for IPv6).
- Language codes (ru_RU, en_US, ...).
- Currency codes (USD, RUB, ...).
- Binary representation of hashes (`FixedString(16)` for MD5, `FixedString(32)` for SHA256).

To store UUID values, use the UUID data type.

When inserting the data, ClickHouse:

- Complements a string with null bytes if the string contains fewer than `N` bytes.
- Throws the `Too large value for FixedString(N)` exception if the string contains more than `N` bytes.

Let's consider the following table with a single `FixedString(2)` column:

```sql
INSERT INTO FixedStringTable VALUES ('a'), ('ab'), ('');
```

```sql
SELECT
    name,
    toTypeName(name),
    length(name),
    empty(name)
FROM FixedStringTable;
```

```text
ββnameββ¬βtoTypeName(name)ββ¬βlength(name)ββ¬βempty(name)ββ
β a    β FixedString(2)   β            2 β           0 β
β ab   β FixedString(2)   β            2 β           0 β
β      β FixedString(2)   β            2 β           1 β
ββββββββ΄βββββββββββββββββββ΄βββββββββββββββ΄ββββββββββββββ
```

Note that the length of a `FixedString(N)` value is constant. The `length` function returns `N` even if the `FixedString(N)` value is filled only with null bytes, but the `empty` function returns `1` in this case.

Selecting data with a `WHERE` clause returns different results depending on how the condition is specified:

- If the equality operator `=` or `==` or the `equals` function is used, ClickHouse *doesn't* take the `\0` characters into consideration, i.e. the queries `SELECT * FROM FixedStringTable WHERE name = 'a';` and `SELECT * FROM FixedStringTable WHERE name = 'a\0';` return the same result.
- If a `LIKE` clause is used, ClickHouse *does* take the `\0` characters into consideration, so one may need to explicitly specify `\0` in the filter condition.

```sql
SELECT name
FROM FixedStringTable
WHERE name = 'a'
FORMAT JSONStringsEachRow

{"name":"a\u0000"}

SELECT name
FROM FixedStringTable
WHERE name = 'a\0'
FORMAT JSONStringsEachRow

{"name":"a\u0000"}

SELECT name
FROM FixedStringTable
WHERE name LIKE 'a'
FORMAT JSONStringsEachRow

0 rows in set.

SELECT name
FROM FixedStringTable
WHERE name LIKE 'a\0'
FORMAT JSONStringsEachRow

{"name":"a\u0000"}
```
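The null-byte padding can also be made visible with `hex`; here `toFixedString` pads `'ab'` to four bytes:

```sql
SELECT toFixedString('ab', 4) AS s, length(s) AS len, hex(s) AS bytes;
-- len = 4, bytes = '61620000'
```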
---
description: 'Documentation for signed and unsigned integer data types in ClickHouse, ranging from 8-bit to 256-bit'
sidebar_label: 'Int | UInt'
sidebar_position: 2
slug: /sql-reference/data-types/int-uint
title: 'Int | UInt Types'
doc_type: 'reference'
---

ClickHouse offers a number of fixed-length integers, with a sign (`Int`) or without a sign (unsigned, `UInt`), ranging from one byte to 32 bytes.

When creating tables, numeric parameters for integer numbers can be set (e.g. `TINYINT(8)`, `SMALLINT(16)`, `INT(32)`, `BIGINT(64)`), but ClickHouse ignores them.

Integer Ranges {#integer-ranges}

Signed integer types have the following ranges:

| Type | Range |
|------|-------|
| `Int8` | [-128 : 127] |
| `Int16` | [-32768 : 32767] |
| `Int32` | [-2147483648 : 2147483647] |
| `Int64` | [-9223372036854775808 : 9223372036854775807] |
| `Int128` | [-170141183460469231731687303715884105728 : 170141183460469231731687303715884105727] |
| `Int256` | [-57896044618658097711785492504343953926634992332820282019728792003956564819968 : 57896044618658097711785492504343953926634992332820282019728792003956564819967] |

Unsigned integer types have the following ranges:

| Type | Range |
|------|-------|
| `UInt8` | [0 : 255] |
| `UInt16` | [0 : 65535] |
| `UInt32` | [0 : 4294967295] |
| `UInt64` | [0 : 18446744073709551615] |
| `UInt128` | [0 : 340282366920938463463374607431768211455] |
| `UInt256` | [0 : 115792089237316195423570985008687907853269984665640564039457584007913129639935] |
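ClickHouse also applies these ranges when inferring a type for an integer literal, picking the narrowest type that fits the value:

```sql
SELECT toTypeName(255), toTypeName(256), toTypeName(-1);
-- UInt8, UInt16, Int8
```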
Integer Aliases {#integer-aliases}

Integer types have the following aliases:

| Type | Alias |
|------|-------|
| `Int8` | `TINYINT`, `INT1`, `BYTE`, `TINYINT SIGNED`, `INT1 SIGNED` |
| `Int16` | `SMALLINT`, `SMALLINT SIGNED` |
| `Int32` | `INT`, `INTEGER`, `MEDIUMINT`, `MEDIUMINT SIGNED`, `INT SIGNED`, `INTEGER SIGNED` |
| `Int64` | `BIGINT`, `SIGNED`, `BIGINT SIGNED`, `TIME` |

Unsigned integer types have the following aliases:

| Type | Alias |
|------|-------|
| `UInt8` | `TINYINT UNSIGNED`, `INT1 UNSIGNED` |
| `UInt16` | `SMALLINT UNSIGNED` |
| `UInt32` | `MEDIUMINT UNSIGNED`, `INT UNSIGNED`, `INTEGER UNSIGNED` |
| `UInt64` | `UNSIGNED`, `BIGINT UNSIGNED`, `BIT`, `SET` |
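The aliases can be used directly in DDL; the column is stored as the corresponding native type (the table name here is illustrative):

```sql
CREATE TABLE t_alias (a TINYINT, b BIGINT UNSIGNED) ENGINE = Memory;
-- DESCRIBE TABLE t_alias shows `a Int8` and `b UInt64`
```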
---
description: 'Documentation for the DateTime64 data type in ClickHouse, which stores timestamps with sub-second precision'
sidebar_label: 'DateTime64'
sidebar_position: 18
slug: /sql-reference/data-types/datetime64
title: 'DateTime64'
doc_type: 'reference'
---

DateTime64

Allows storing an instant in time that can be expressed as a calendar date and a time of day, with defined sub-second precision.

Tick size (precision): 10^(-precision) seconds. Valid range: [ 0 : 9 ]. Typically used values are 3 (milliseconds), 6 (microseconds), and 9 (nanoseconds).

Syntax:

```sql
DateTime64(precision, [timezone])
```

Internally, stores data as the number of 'ticks' since epoch start (1970-01-01 00:00:00 UTC) as Int64. The tick resolution is determined by the precision parameter. Additionally, the `DateTime64` type can store a time zone that is the same for the entire column, which affects how the values of the `DateTime64` type are displayed in text format and how the values specified as strings are parsed ('2020-01-01 05:00:01.000'). The time zone is not stored in the rows of the table (or in the resultset), but in the column metadata. See details in `DateTime`.

Supported range of values: [1900-01-01 00:00:00, 2299-12-31 23:59:59.999999999]

The number of digits after the decimal point depends on the precision parameter.

Note: The precision of the maximum value is 8. If the maximum precision of 9 digits (nanoseconds) is used, the maximum supported value is `2262-04-11 23:47:16` in UTC.

Examples {#examples}

Creating a table with a `DateTime64`-type column and inserting data into it:

```sql
CREATE TABLE dt64
(
    `timestamp` DateTime64(3, 'Asia/Istanbul'),
    `event_id` UInt8
)
ENGINE = TinyLog;
```

```sql
-- Parse DateTime
-- - from integer interpreted as number of milliseconds (because of precision 3) since 1970-01-01,
-- - from decimal interpreted as number of seconds before the decimal point, with the fractional part interpreted according to the precision,
-- - from string.
INSERT INTO dt64 VALUES (1546300800123, 1), (1546300800.123, 2), ('2019-01-01 00:00:00', 3);

SELECT * FROM dt64;
```

```text
ββββββββββββββββtimestampββ¬βevent_idββ
β 2019-01-01 03:00:00.123 β        1 β
β 2019-01-01 03:00:00.123 β        2 β
β 2019-01-01 00:00:00.000 β        3 β
βββββββββββββββββββββββββββ΄βββββββββββ
```

When inserting a datetime as an integer, it is treated as an appropriately scaled Unix Timestamp (UTC). `1546300800000` (with precision 3) represents `'2019-01-01 00:00:00'` UTC. However, as the `timestamp` column has the `Asia/Istanbul` (UTC+3) timezone specified, when outputting as a string the value will be shown as `'2019-01-01 03:00:00'`. Inserting a datetime as a decimal treats it similarly to an integer, except the value before the decimal point is the Unix Timestamp up to and including the seconds, and the part after the decimal point is interpreted according to the precision.
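To see the effect of the precision parameter directly, the same string can be parsed at different precisions (timezone fixed to UTC so the output does not depend on server settings); the millisecond variant keeps 3 fractional digits, the microsecond variant 6:

```sql
SELECT
    toDateTime64('2019-01-01 00:00:00.123456789', 3, 'UTC') AS ms,
    toDateTime64('2019-01-01 00:00:00.123456789', 6, 'UTC') AS us;
```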
When inserting a string value as datetime, it is treated as being in the column timezone. `'2019-01-01 00:00:00'` will be treated as being in the `Asia/Istanbul` timezone and stored as `1546290000000`.

Filtering on `DateTime64` values:

```sql
SELECT * FROM dt64 WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Asia/Istanbul');
```

```text
ββββββββββββββββtimestampββ¬βevent_idββ
β 2019-01-01 00:00:00.000 β        3 β
βββββββββββββββββββββββββββ΄βββββββββββ
```

Unlike `DateTime`, `DateTime64` values are not converted from `String` automatically.

```sql
SELECT * FROM dt64 WHERE timestamp = toDateTime64(1546300800.123, 3);
```

```text
ββββββββββββββββtimestampββ¬βevent_idββ
β 2019-01-01 03:00:00.123 β        1 β
β 2019-01-01 03:00:00.123 β        2 β
βββββββββββββββββββββββββββ΄βββββββββββ
```

Contrary to inserting, the `toDateTime64` function treats all values as the decimal variant, so the precision needs to be given after the decimal point.

Getting a time zone for a `DateTime64`-type value:

```sql
SELECT toDateTime64(now(), 3, 'Asia/Istanbul') AS column, toTypeName(column) AS x;
```

```text
βββββββββββββββββββcolumnββ¬βxβββββββββββββββββββββββββββββββ
β 2023-06-05 00:09:52.000 β DateTime64(3, 'Asia/Istanbul') β
βββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββ
```

Timezone conversion:

```sql
SELECT
    toDateTime64(timestamp, 3, 'Europe/London') AS lon_time,
    toDateTime64(timestamp, 3, 'Asia/Istanbul') AS istanbul_time
FROM dt64;
```

```text
βββββββββββββββββlon_timeββ¬βββββββββββistanbul_timeββ
β 2019-01-01 00:00:00.123 β 2019-01-01 03:00:00.123 β
β 2019-01-01 00:00:00.123 β 2019-01-01 03:00:00.123 β
β 2018-12-31 21:00:00.000 β 2019-01-01 00:00:00.000 β
βββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββ
```

See Also

- Type conversion functions
- Functions for working with dates and times
- The `date_time_input_format` setting
- The `date_time_output_format` setting
- The `timezone` server configuration parameter
- The `session_timezone` setting
- Operators for working with dates and times
- `Date` data type
- `DateTime` data type
---
description: 'Documentation for the IPv6 data type in ClickHouse, which stores IPv6 addresses as 16-byte values'
sidebar_label: 'IPv6'
sidebar_position: 30
slug: /sql-reference/data-types/ipv6
title: 'IPv6'
doc_type: 'reference'
---

IPv6 {#ipv6}

IPv6 addresses. Stored in 16 bytes as UInt128 big-endian.

Basic Usage {#basic-usage}

```sql
CREATE TABLE hits (url String, from IPv6) ENGINE = MergeTree() ORDER BY url;

DESCRIBE TABLE hits;
```

```text
ββnameββ¬βtypeββββ¬βdefault_typeββ¬βdefault_expressionββ¬βcommentββ¬βcodec_expressionββ
β url  β String β              β                    β         β                  β
β from β IPv6   β              β                    β         β                  β
ββββββββ΄βββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββ
```

Or you can use the `IPv6` domain as a key:

```sql
CREATE TABLE hits (url String, from IPv6) ENGINE = MergeTree() ORDER BY from;
```

The `IPv6` domain supports custom input as IPv6-strings:

```sql
INSERT INTO hits (url, from) VALUES ('https://wikipedia.org', '2a02:aa08:e000:3100::2')('https://clickhouse.com', '2001:44c8:129:2632:33:0:252:2')('https://clickhouse.com/docs/en/', '2a02:e980:1e::1');

SELECT * FROM hits;
```

```text
ββurlβββββββββββββββββββββββββββββββββ¬βfromβββββββββββββββββββββββββββ
β https://clickhouse.com              β 2001:44c8:129:2632:33:0:252:2 β
β https://clickhouse.com/docs/en/     β 2a02:e980:1e::1               β
β https://wikipedia.org               β 2a02:aa08:e000:3100::2        β
ββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββ
```

Values are stored in compact binary form:

```sql
SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
```

```text
ββtoTypeName(from)ββ¬βhex(from)βββββββββββββββββββββββββ
β IPv6             β 200144C8012926320033000002520002 β
ββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββ
```

IPv6 addresses can be directly compared to IPv4 addresses:

```sql
SELECT toIPv4('127.0.0.1') = toIPv6('::ffff:127.0.0.1');
```

```text
ββequals(toIPv4('127.0.0.1'), toIPv6('::ffff:127.0.0.1'))ββ
β                                                        1 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```

See Also

- Functions for Working with IPv4 and IPv6 Addresses
---
description: 'Documentation for the Array data type in ClickHouse'
sidebar_label: 'Array(T)'
sidebar_position: 32
slug: /sql-reference/data-types/array
title: 'Array(T)'
doc_type: 'reference'
---

Array(T)

An array of `T`-type items, with the starting array index as 1. `T` can be any data type, including an array.

Creating an Array {#creating-an-array}

You can use a function to create an array:

```sql
array(T)
```

You can also use square brackets:

```sql
[]
```

Example of creating an array:

```sql
SELECT array(1, 2) AS x, toTypeName(x)
```

```text
ββxββββββ¬βtoTypeName(array(1, 2))ββ
β [1,2] β Array(UInt8)            β
βββββββββ΄ββββββββββββββββββββββββββ
```

```sql
SELECT [1, 2] AS x, toTypeName(x)
```

```text
ββxββββββ¬βtoTypeName([1, 2])ββ
β [1,2] β Array(UInt8)       β
βββββββββ΄βββββββββββββββββββββ
```

Working with Data Types {#working-with-data-types}

When creating an array on the fly, ClickHouse automatically defines the argument type as the narrowest data type that can store all the listed arguments. If there are any `Nullable` or literal `NULL` values, the type of an array element also becomes `Nullable`.

If ClickHouse couldn't determine the data type, it generates an exception. For instance, this happens when trying to create an array with strings and numbers simultaneously (`SELECT array(1, 'a')`).

Examples of automatic data type detection:

```sql
SELECT array(1, 2, NULL) AS x, toTypeName(x)
```

```text
ββxβββββββββββ¬βtoTypeName(array(1, 2, NULL))ββ
β [1,2,NULL] β Array(Nullable(UInt8))        β
ββββββββββββββ΄ββββββββββββββββββββββββββββββββ
```

If you try to create an array of incompatible data types, ClickHouse throws an exception:

```sql
SELECT array(1, 'a')
```

```text
Received exception from server (version 1.1.54388):
Code: 386. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: There is no supertype for types UInt8, String because some of them are String/FixedString and some of them are not.
```

Array Size {#array-size}

It is possible to find the size of an array by using the `size0` subcolumn without reading the whole column. For multi-dimensional arrays, you can use `sizeN-1`, where `N` is the wanted dimension.

Example

Query:

```sql
CREATE TABLE t_arr (`arr` Array(Array(Array(UInt32)))) ENGINE = MergeTree ORDER BY tuple();

INSERT INTO t_arr VALUES ([[[12, 13, 0, 1],[12]]]);

SELECT arr.size0, arr.size1, arr.size2 FROM t_arr;
```

Result:

```text
ββarr.size0ββ¬βarr.size1ββ¬βarr.size2ββ
β         1 β [2]       β [[4,1]]   β
βββββββββββββ΄ββββββββββββ΄ββββββββββββ
```

Reading nested subcolumns from Array {#reading-nested-subcolumns-from-array}

If nested type `T` inside `Array` has subcolumns (for example, if it's a named tuple), you can read its subcolumns from an `Array(T)` type with the same subcolumn names. The type of a subcolumn will be `Array` of the type of the original subcolumn.
Example

```sql
CREATE TABLE t_arr (arr Array(Tuple(field1 UInt32, field2 String))) ENGINE = MergeTree ORDER BY tuple();

INSERT INTO t_arr VALUES ([(1, 'Hello'), (2, 'World')]), ([(3, 'This'), (4, 'is'), (5, 'subcolumn')]);

SELECT arr.field1, toTypeName(arr.field1), arr.field2, toTypeName(arr.field2) FROM t_arr;
```

```text
ββarr.field1ββ¬βtoTypeName(arr.field1)ββ¬βarr.field2βββββββββββββββββ¬βtoTypeName(arr.field2)ββ
β [1,2]      β Array(UInt32)          β ['Hello','World']         β Array(String)          β
β [3,4,5]    β Array(UInt32)          β ['This','is','subcolumn'] β Array(String)          β
ββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ
```
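Subcolumn reads compose with the size subcolumns described earlier; for the same `t_arr` table, both can be read in one query without touching the full column:

```sql
SELECT arr.size0, arr.field1 FROM t_arr;
-- arr.size0 is the element count per row; arr.field1 is Array(UInt32)
```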
---
description: 'Documentation for the DateTime data type in ClickHouse, which stores timestamps with second precision'
sidebar_label: 'DateTime'
sidebar_position: 16
slug: /sql-reference/data-types/datetime
title: 'DateTime'
doc_type: 'reference'
---

DateTime

Allows storing an instant in time that can be expressed as a calendar date and a time of day.

Syntax:

```sql
DateTime([timezone])
```

Supported range of values: [1970-01-01 00:00:00, 2106-02-07 06:28:15].

Resolution: 1 second.

Speed {#speed}

The `Date` data type is faster than `DateTime` under most conditions.

The `Date` type requires 2 bytes of storage, while `DateTime` requires 4. However, during compression, the size difference between Date and DateTime becomes more significant. This amplification is due to the minutes and seconds in `DateTime` being less compressible. Filtering and aggregating `Date` instead of `DateTime` is also faster.

Usage Remarks {#usage-remarks}

The point in time is saved as a Unix timestamp, regardless of the time zone or daylight saving time. The time zone affects how the values of the `DateTime` type are displayed in text format and how the values specified as strings are parsed ('2020-01-01 05:00:01').

A timezone-agnostic Unix timestamp is stored in tables, and the timezone is used to transform it to text format or back during data import/export, or to make calendar calculations on the values (example: `toDate`, `toHour` functions, etc.). The time zone is not stored in the rows of the table (or in the resultset), but in the column metadata.

A list of supported time zones can be found in the IANA Time Zone Database and can also be queried with `SELECT * FROM system.time_zones`. The list is also available at Wikipedia.

You can explicitly set a time zone for `DateTime`-type columns when creating a table. Example: `DateTime('UTC')`. If the time zone isn't set, ClickHouse uses the value of the `timezone` parameter in the server settings or the operating system settings at the moment of the ClickHouse server start.

The `clickhouse-client` applies the server time zone by default if a time zone isn't explicitly set when initializing the data type. To use the client time zone, run `clickhouse-client` with the `--use_client_time_zone` parameter.

ClickHouse outputs values depending on the value of the `date_time_output_format` setting: `YYYY-MM-DD hh:mm:ss` text format by default. Additionally, you can change the output with the `formatDateTime` function.

When inserting data into ClickHouse, you can use different formats of date and time strings, depending on the value of the `date_time_input_format` setting.

Examples {#examples}
2af03550-66e0-4c8f-ba7a-2b31a170aab9 | 1.
Creating a table with a
DateTime
-type column and inserting data into it:
sql
CREATE TABLE dt
(
`timestamp` DateTime('Asia/Istanbul'),
`event_id` UInt8
)
ENGINE = TinyLog;
```sql
-- Parse DateTime
-- - from string,
-- - from integer interpreted as number of seconds since 1970-01-01.
INSERT INTO dt VALUES ('2019-01-01 00:00:00', 1), (1546300800, 2);
SELECT * FROM dt;
```
text
ββββββββββββtimestampββ¬βevent_idββ
β 2019-01-01 00:00:00 β 1 β
β 2019-01-01 03:00:00 β 2 β
βββββββββββββββββββββββ΄βββββββββββ
When inserting datetime as an integer, it is treated as Unix Timestamp (UTC).
1546300800
represents
'2019-01-01 00:00:00'
UTC. However, as
timestamp
column has
Asia/Istanbul
(UTC+3) timezone specified, when outputting as string the value will be shown as
'2019-01-01 03:00:00'
When inserting string value as datetime, it is treated as being in column timezone.
'2019-01-01 00:00:00'
will be treated as being in
Asia/Istanbul
timezone and saved as
1546290000
.
2.
Filtering on
DateTime
values
sql
SELECT * FROM dt WHERE timestamp = toDateTime('2019-01-01 00:00:00', 'Asia/Istanbul')
text
ββββββββββββtimestampββ¬βevent_idββ
β 2019-01-01 00:00:00 β 1 β
βββββββββββββββββββββββ΄βββββββββββ
DateTime
column values can be filtered using a string value in
WHERE
predicate. It will be converted to
DateTime
automatically:
sql
SELECT * FROM dt WHERE timestamp = '2019-01-01 00:00:00'
text
ββββββββββββtimestampββ¬βevent_idββ
β 2019-01-01 00:00:00 β 1 β
βββββββββββββββββββββββ΄βββββββββββ
3.
Getting a time zone for a
DateTime
-type column:
sql
SELECT toDateTime(now(), 'Asia/Istanbul') AS column, toTypeName(column) AS x
text
βββββββββββββββcolumnββ¬βxββββββββββββββββββββββββββ
β 2019-10-16 04:12:04 β DateTime('Asia/Istanbul') β
βββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ
4.
Timezone conversion
sql
SELECT
toDateTime(timestamp, 'Europe/London') AS lon_time,
toDateTime(timestamp, 'Asia/Istanbul') AS mos_time
FROM dt
text
ββββββββββββlon_timeβββ¬ββββββββββββmos_timeββ
β 2019-01-01 00:00:00 β 2019-01-01 03:00:00 β
β 2018-12-31 21:00:00 β 2019-01-01 00:00:00 β
βββββββββββββββββββββββ΄ββββββββββββββββββββββ
As timezone conversion only changes the metadata, the operation has no computation cost.
Limitations on time zones support {#limitations-on-time-zones-support}
Some time zones may not be supported completely. There are a few cases:
If the offset from UTC is not a multiple of 15 minutes, the calculation of hours and minutes can be incorrect. For example, the time zone in Monrovia, Liberia had offset UTC -0:44:30 before 7 Jan 1972. If you are doing calculations on historical time in the Monrovia timezone, the time processing functions may give incorrect results. The results after 7 Jan 1972 will be correct nevertheless. | {"source_file": "datetime.md"}
If the time transition (due to daylight saving time or for other reasons) was performed at a point of time that is not a multiple of 15 minutes, you can also get incorrect results at this specific day.
Non-monotonic calendar dates. For example, in Happy Valley - Goose Bay, the time was transitioned one hour backwards at 00:01:00 7 Nov 2010 (one minute after midnight). So after 6th Nov has ended, people observed a whole one minute of 7th Nov, then time was changed back to 23:01 6th Nov and after another 59 minutes the 7th Nov started again. ClickHouse does not (yet) support this kind of fun. During these days the results of time processing functions may be slightly incorrect.
A similar issue exists for the Casey Antarctic station in the year 2010. They changed time three hours back at 5 Mar, 02:00. If you are working at an Antarctic station, please don't be afraid to use ClickHouse. Just make sure you set the timezone to UTC or be aware of inaccuracies.
Time shifts for multiple days. Some Pacific islands changed their timezone offset from UTC+14 to UTC-12. That's alright, but some inaccuracies may be present if you do calculations with their timezone for historical time points at the days of conversion.
Handling daylight saving time (DST) {#handling-daylight-saving-time-dst}
ClickHouse's DateTime type with time zones can exhibit unexpected behavior during Daylight Saving Time (DST) transitions, particularly when:
date_time_output_format
is set to
simple
.
Clocks move backward ("Fall Back"), causing a one-hour overlap.
Clocks move forward ("Spring Forward"), causing a one-hour gap.
By default, ClickHouse always picks the earlier occurrence of an overlapping time and may interpret nonexistent times during forward shifts.
For example, consider the following transition from Daylight Saving Time (DST) to Standard Time.
On October 29, 2023, at 02:00:00, clocks move backward to 01:00:00 (BST → GMT).
The hour 01:00:00 to 01:59:59 appears twice (once in BST and once in GMT).
ClickHouse always picks the first occurrence (BST), causing unexpected results when adding time intervals.
```sql
SELECT '2023-10-29 01:30:00'::DateTime('Europe/London') AS time, time + toIntervalHour(1) AS one_hour_later
βββββββββββββββββtimeββ¬ββββββone_hour_laterββ
β 2023-10-29 01:30:00 β 2023-10-29 01:30:00 β
βββββββββββββββββββββββ΄ββββββββββββββββββββββ
```
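The overlapped hour is visible in any timezone library. A Python sketch using `zoneinfo`, where the `fold` attribute selects which of the two occurrences is meant:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

london = ZoneInfo("Europe/London")
# 01:30 local time on 2023-10-29 occurs twice; `fold` picks the occurrence.
first = datetime(2023, 10, 29, 1, 30, tzinfo=london)           # fold=0 -> BST
second = datetime(2023, 10, 29, 1, 30, fold=1, tzinfo=london)  # fold=1 -> GMT
print(first.utcoffset(), second.utcoffset())   # 1:00:00 0:00:00
print(second.timestamp() - first.timestamp())  # 3600.0 (one real hour apart)
```

Like ClickHouse's default, `fold=0` means the earlier (BST) occurrence.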
Similarly, during the transition from Standard Time to Daylight Saving Time, an hour can appear to be skipped.
For example:
On March 26, 2023, at 00:59:59, clocks jump forward to 02:00:00 (GMT → BST).
The hour 01:00:00 to 01:59:59 does not exist.
```sql
SELECT '2023-03-26 01:30:00'::DateTime('Europe/London') AS time, time + toIntervalHour(1) AS one_hour_later
βββββββββββββββββtimeββ¬ββββββone_hour_laterββ
β 2023-03-26 00:30:00 β 2023-03-26 02:30:00 β
βββββββββββββββββββββββ΄ββββββββββββββββββββββ
```
In this case, ClickHouse shifts the non-existent time
2023-03-26 01:30:00
back to
2023-03-26 00:30:00
.
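The skipped hour can be demonstrated with Python's `zoneinfo` as well. Note the resolution direction differs: Python resolves the gap forward, while ClickHouse (as described above) shifts it backward — the point is that the wall-clock hour simply does not exist:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

london = ZoneInfo("Europe/London")
# 01:30 local time on 2023-03-26 never happens on London clocks.
missing = datetime(2023, 3, 26, 1, 30, tzinfo=london)
# Round-tripping through the epoch lands on a time that does exist;
# Python resolves the gap forward to 02:30 BST.
roundtrip = datetime.fromtimestamp(missing.timestamp(), tz=london)
print(roundtrip.strftime("%H:%M"))  # 02:30
```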
See Also {#see-also}
Type conversion functions
Functions for working with dates and times
Functions for working with arrays
The
date_time_input_format
setting
The
date_time_output_format
setting
The
timezone
server configuration parameter
The
session_timezone
setting
Operators for working with dates and times
The
Date
data type | {"source_file": "datetime.md"} | [
-0.012223445810377598,
-0.0009416568791493773,
0.021198967471718788,
0.020734740421175957,
0.022150050848722458,
-0.07405070215463638,
0.007953489199280739,
-0.043272387236356735,
-0.002679497003555298,
-0.03298656642436981,
0.00385014689527452,
-0.03913729265332222,
-0.033497121185064316,
... |
335d1b82-b646-49bb-96e0-7e5ae39f637a | description: 'Documentation for Data Types in ClickHouse'
sidebar_label: 'List of data types'
sidebar_position: 1
slug: /sql-reference/data-types/
title: 'Data Types in ClickHouse'
doc_type: 'reference'
Data Types in ClickHouse
This section describes the data types supported by ClickHouse, for example
integers
,
floats
and
strings
.
System table
system.data_type_families
provides an
overview of all available data types.
It also shows whether a data type is an alias to another data type, and whether its name is case-sensitive (e.g.
bool
vs.
BOOL
). | {"source_file": "index.md"}
description: 'Documentation for the Nullable data type modifier in ClickHouse'
sidebar_label: 'Nullable(T)'
sidebar_position: 44
slug: /sql-reference/data-types/nullable
title: 'Nullable(T)'
doc_type: 'reference'
Nullable(T)
Allows storing a special marker (
NULL
) that denotes "missing value" alongside normal values allowed by
T
. For example, a
Nullable(Int8)
type column can store
Int8
type values, and the rows that do not have a value will store
NULL
.
T
can't be any of the composite data types
Array
,
Map
and
Tuple
but composite data types can contain
Nullable
type values, e.g.
Array(Nullable(Int8))
.
A
Nullable
type field can't be included in table indexes.
NULL
is the default value for any
Nullable
type, unless specified otherwise in the ClickHouse server configuration.
Storage Features {#storage-features}
To store
Nullable
type values in a table column, ClickHouse uses a separate file with
NULL
masks in addition to normal file with values. Entries in masks file allow ClickHouse to distinguish between
NULL
and a default value of corresponding data type for each table row. Because of an additional file,
Nullable
column consumes additional storage space compared to a similar normal one.
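Conceptually, the mask file works like a parallel array alongside the values. A minimal Python sketch (not ClickHouse's actual on-disk format):

```python
# Sketch only -- not ClickHouse's actual on-disk layout.
values = [1, 0, 2, 0]     # NULL rows hold the type's default value (0 here)
null_mask = [0, 1, 0, 1]  # 1 marks a NULL row

decoded = [None if m else v for v, m in zip(values, null_mask)]
print(decoded)  # [1, None, 2, None]
```

Reading only `null_mask` is exactly what the `null` subcolumn described below makes possible.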
:::note
Using
Nullable
almost always negatively affects performance, keep this in mind when designing your databases.
:::
Finding NULL {#finding-null}
It is possible to find
NULL
values in a column by using
null
subcolumn without reading the whole column. It returns
1
if the corresponding value is
NULL
and
0
otherwise.
Example
Query:
```sql
CREATE TABLE nullable (`n` Nullable(UInt32)) ENGINE = MergeTree ORDER BY tuple();
INSERT INTO nullable VALUES (1) (NULL) (2) (NULL);
SELECT n.null FROM nullable;
```
Result:
text
ββn.nullββ
β 0 β
β 1 β
β 0 β
β 1 β
ββββββββββ
Usage Example {#usage-example}
sql
CREATE TABLE t_null(x Int8, y Nullable(Int8)) ENGINE TinyLog
sql
INSERT INTO t_null VALUES (1, NULL), (2, 3)
sql
SELECT x + y FROM t_null
text
ββplus(x, y)ββ
β α΄Ία΅α΄Έα΄Έ β
β 5 β
ββββββββββββββ | {"source_file": "nullable.md"}
description: 'Documentation for the IPv4 data type in ClickHouse'
sidebar_label: 'IPv4'
sidebar_position: 28
slug: /sql-reference/data-types/ipv4
title: 'IPv4'
doc_type: 'reference'
IPv4 {#ipv4}
IPv4 addresses. Stored in 4 bytes as UInt32.
Basic Usage {#basic-usage}
```sql
CREATE TABLE hits (url String, from IPv4) ENGINE = MergeTree() ORDER BY url;
DESCRIBE TABLE hits;
```
text
ββnameββ¬βtypeββββ¬βdefault_typeββ¬βdefault_expressionββ¬βcommentββ¬βcodec_expressionββ
β url β String β β β β β
β from β IPv4 β β β β β
ββββββββ΄βββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββ
Or you can use the IPv4 domain as a key:
sql
CREATE TABLE hits (url String, from IPv4) ENGINE = MergeTree() ORDER BY from;
IPv4
domain supports custom input format as IPv4-strings:
```sql
INSERT INTO hits (url, from) VALUES ('https://wikipedia.org', '116.253.40.133')('https://clickhouse.com', '183.247.232.58')('https://clickhouse.com/docs/en/', '116.106.34.242');
SELECT * FROM hits;
```
text
ββurlβββββββββββββββββββββββββββββββββ¬βββββββββββfromββ
β https://clickhouse.com/docs/en/ β 116.106.34.242 β
β https://wikipedia.org β 116.253.40.133 β
β https://clickhouse.com β 183.247.232.58 β
ββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββ
Values are stored in compact binary form:
sql
SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
text
ββtoTypeName(from)ββ¬βhex(from)ββ
β IPv4 β B7F7E83A β
ββββββββββββββββββββ΄ββββββββββββ
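The same 4-byte encoding can be reproduced with Python's `ipaddress` module:

```python
import ipaddress

# IPv4 is stored as a UInt32; the hex of 183.247.232.58 matches the query above.
n = int(ipaddress.IPv4Address("183.247.232.58"))
print(f"{n:08X}")  # B7F7E83A
```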
IPv4 addresses can be directly compared to IPv6 addresses:
sql
SELECT toIPv4('127.0.0.1') = toIPv6('::ffff:127.0.0.1');
text
ββequals(toIPv4('127.0.0.1'), toIPv6('::ffff:127.0.0.1'))ββ
β 1 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
See Also
Functions for Working with IPv4 and IPv6 Addresses | {"source_file": "ipv4.md"}
description: 'Documentation for the Time data type in ClickHouse, which stores
the time range with second precision'
slug: /sql-reference/data-types/time
sidebar_position: 15
sidebar_label: 'Time'
title: 'Time'
doc_type: 'reference'
Time
Data type
Time
represents a time with hour, minute, and second components.
It is independent of any calendar date and is suitable for values which do not need day, month, and year components.
Syntax:
sql
Time
Text representation range: [-999:59:59, 999:59:59].
Resolution: 1 second.
Implementation details {#implementation-details}
Representation and Performance
.
Data type
Time
internally stores a signed 32-bit integer that encodes the seconds.
Values of type
Time
and
DateTime
have the same byte size and thus comparable performance.
Normalization
.
When parsing strings to
Time
, the time components are normalized and not validated.
For example,
25:70:70
is interpreted as
26:11:10
.
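The normalization rule is plain arithmetic on the total number of seconds. A Python sketch:

```python
# '25:70:70' normalizes to '26:11:10': carry the excess into higher units.
h, m, s = 25, 70, 70
total = h * 3600 + m * 60 + s
hh, rem = divmod(total, 3600)
mm, ss = divmod(rem, 60)
normalized = f"{hh}:{mm:02d}:{ss:02d}"
print(normalized)  # 26:11:10
```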
Negative values
.
Leading minus signs are supported and preserved.
Negative values typically arise from arithmetic operations on
Time
values.
For
Time
type, negative inputs are preserved for both text (e.g.,
'-01:02:03'
) and numeric inputs (e.g.,
-3723
).
Saturation
.
The time-of-day component is capped to the range [-999:59:59, 999:59:59].
Values with hours beyond 999 (or below -999) are represented and round-tripped via text as
999:59:59
(or
-999:59:59
).
Time zones
.
Time
does not support time zones, i.e.
Time
values are interpreted without regional context.
Specifying a time zone for
Time
as a type parameter or during value creation throws an error.
Likewise, attempts to apply or change the time zone on
Time
columns are not supported and result in an error.
Time
values are not silently reinterpreted under different time zones.
Examples {#examples}
1.
Creating a table with a
Time
-type column and inserting data into it:
sql
CREATE TABLE tab
(
`event_id` UInt8,
`time` Time
)
ENGINE = TinyLog;
```sql
-- Parse Time
-- - from string,
-- - from integer interpreted as number of seconds since 00:00:00.
INSERT INTO tab VALUES (1, '14:30:25'), (2, 52225);
SELECT * FROM tab ORDER BY event_id;
```
text
ββevent_idββ¬ββββββtimeββ
1. β 1 β 14:30:25 β
2. β 2 β 14:30:25 β
ββββββββββββ΄ββββββββββββ
2.
Filtering on
Time
values
sql
SELECT * FROM tab WHERE time = toTime('14:30:25')
text
ββevent_idββ¬ββββββtimeββ
1. β 1 β 14:30:25 β
2. β 2 β 14:30:25 β
ββββββββββββ΄ββββββββββββ
Time
column values can be filtered using a string value in
WHERE
predicate. It will be converted to
Time
automatically:
sql
SELECT * FROM tab WHERE time = '14:30:25'
text
ββevent_idββ¬ββββββtimeββ
1. β 1 β 14:30:25 β
2. β 2 β 14:30:25 β
ββββββββββββ΄ββββββββββββ
3.
Inspecting the resulting type:
sql
SELECT CAST('14:30:25' AS Time) AS column, toTypeName(column) AS type | {"source_file": "time.md"}
text
βββββcolumnββ¬βtypeββ
1. β 14:30:25 β Time β
βββββββββββββ΄βββββββ
See Also {#see-also}
Type conversion functions
Functions for working with dates and times
Functions for working with arrays
The
date_time_input_format
setting
The
date_time_output_format
setting
The
timezone
server configuration parameter
The
session_timezone
setting
The
DateTime
data type
The
Date
data type
description: 'Documentation for the Date32 data type in ClickHouse, which stores dates
with an extended range compared to Date'
sidebar_label: 'Date32'
sidebar_position: 14
slug: /sql-reference/data-types/date32
title: 'Date32'
doc_type: 'reference'
Date32
A date. Supports the date range same with
DateTime64
. Stored as a signed 32-bit integer in native byte order with the value representing the days since
1900-01-01
.
Important!
0 represents
1970-01-01
, and negative values represent the days before
1970-01-01
.
Examples
Creating a table with a
Date32
-type column and inserting data into it:
sql
CREATE TABLE dt32
(
`timestamp` Date32,
`event_id` UInt8
)
ENGINE = TinyLog;
```sql
-- Parse Date
-- - from string,
-- - from 'small' integer interpreted as number of days since 1970-01-01, and
-- - from 'big' integer interpreted as number of seconds since 1970-01-01.
INSERT INTO dt32 VALUES ('2100-01-01', 1), (47482, 2), (4102444800, 3);
SELECT * FROM dt32;
```
text
βββtimestampββ¬βevent_idββ
β 2100-01-01 β 1 β
β 2100-01-01 β 2 β
β 2100-01-01 β 3 β
ββββββββββββββ΄βββββββββββ
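The three insert formats can be cross-checked in Python (an illustration of the parsing rules above, not ClickHouse code):

```python
from datetime import date, timedelta

epoch = date(1970, 1, 1)
d_small = epoch + timedelta(days=47482)              # 'small' int: days
d_big = epoch + timedelta(days=4102444800 // 86400)  # 'big' int: seconds
print(d_small, d_big)  # 2100-01-01 2100-01-01
```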
See Also
toDate32
toDate32OrZero
toDate32OrNull | {"source_file": "date32.md"}
description: 'Documentation for geometric data types in ClickHouse used for representing
geographical objects and locations'
sidebar_label: 'Geo'
sidebar_position: 54
slug: /sql-reference/data-types/geo
title: 'Geometric'
doc_type: 'reference'
ClickHouse supports data types for representing geographical objects: locations, lands, etc.
See Also
-
Representing simple geographical features
.
Point {#point}
Point
is represented by its X and Y coordinates, stored as a
Tuple
(
Float64
,
Float64
).
Example
Query:
sql
CREATE TABLE geo_point (p Point) ENGINE = Memory();
INSERT INTO geo_point VALUES((10, 10));
SELECT p, toTypeName(p) FROM geo_point;
Result:
text
ββpββββββββ¬βtoTypeName(p)ββ
β (10,10) β Point β
βββββββββββ΄ββββββββββββββββ
Ring {#ring}
Ring
is a simple polygon without holes stored as an array of points:
Array
(
Point
).
Example
Query:
sql
CREATE TABLE geo_ring (r Ring) ENGINE = Memory();
INSERT INTO geo_ring VALUES([(0, 0), (10, 0), (10, 10), (0, 10)]);
SELECT r, toTypeName(r) FROM geo_ring;
Result:
text
ββrββββββββββββββββββββββββββββββ¬βtoTypeName(r)ββ
β [(0,0),(10,0),(10,10),(0,10)] β Ring β
βββββββββββββββββββββββββββββββββ΄ββββββββββββββββ
LineString {#linestring}
LineString
is a line stored as an array of points:
Array
(
Point
).
Example
Query:
sql
CREATE TABLE geo_linestring (l LineString) ENGINE = Memory();
INSERT INTO geo_linestring VALUES([(0, 0), (10, 0), (10, 10), (0, 10)]);
SELECT l, toTypeName(l) FROM geo_linestring;
Result:
text
ββlββββββββββββββββββββββββββββββ¬βtoTypeName(l)ββ
β [(0,0),(10,0),(10,10),(0,10)] β LineString β
βββββββββββββββββββββββββββββββββ΄ββββββββββββββββ
MultiLineString {#multilinestring}
MultiLineString
is multiple lines stored as an array of
LineString
:
Array
(
LineString
).
Example
Query:
sql
CREATE TABLE geo_multilinestring (l MultiLineString) ENGINE = Memory();
INSERT INTO geo_multilinestring VALUES([[(0, 0), (10, 0), (10, 10), (0, 10)], [(1, 1), (2, 2), (3, 3)]]);
SELECT l, toTypeName(l) FROM geo_multilinestring;
Result:
text
ββlββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βtoTypeName(l)ββββ
β [[(0,0),(10,0),(10,10),(0,10)],[(1,1),(2,2),(3,3)]] β MultiLineString β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββ
Polygon {#polygon}
Polygon
is a polygon with holes stored as an array of rings:
Array
(
Ring
). First element of outer array is the outer shape of polygon and all the following elements are holes.
Example
This is a polygon with one hole:
sql
CREATE TABLE geo_polygon (pg Polygon) ENGINE = Memory();
INSERT INTO geo_polygon VALUES([[(20, 20), (50, 20), (50, 50), (20, 50)], [(30, 30), (50, 50), (50, 30)]]);
SELECT pg, toTypeName(pg) FROM geo_polygon;
Result: | {"source_file": "geo.md"}
text
ββpgβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βtoTypeName(pg)ββ
β [[(20,20),(50,20),(50,50),(20,50)],[(30,30),(50,50),(50,30)]] β Polygon β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββ
MultiPolygon {#multipolygon}
MultiPolygon
consists of multiple polygons and is stored as an array of polygons:
Array
(
Polygon
).
Example
This multipolygon consists of two separate polygons: the first one without holes, and the second with one hole:
sql
CREATE TABLE geo_multipolygon (mpg MultiPolygon) ENGINE = Memory();
INSERT INTO geo_multipolygon VALUES([[[(0, 0), (10, 0), (10, 10), (0, 10)]], [[(20, 20), (50, 20), (50, 50), (20, 50)],[(30, 30), (50, 50), (50, 30)]]]);
SELECT mpg, toTypeName(mpg) FROM geo_multipolygon;
Result:
text
ββmpgββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βtoTypeName(mpg)ββ
β [[[(0,0),(10,0),(10,10),(0,10)]],[[(20,20),(50,20),(50,50),(20,50)],[(30,30),(50,50),(50,30)]]] β MultiPolygon β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββ
Geometry {#geometry}
Geometry
is a common type for all the types above. It is equivalent to a Variant of those types.
Example
sql
CREATE TABLE IF NOT EXISTS geo (geom Geometry) ENGINE = Memory();
INSERT INTO geo VALUES ((1, 2));
SELECT * FROM geo;
Result:
text
ββgeomβββ
1. β (1,2) β
βββββββββ
```sql
CREATE TABLE IF NOT EXISTS geo_dst (geom Geometry) ENGINE = Memory();
CREATE TABLE IF NOT EXISTS geo (geom String, id Int) ENGINE = Memory();
INSERT INTO geo VALUES ('POLYGON((1 0,10 0,10 10,0 10,1 0),(4 4,5 4,5 5,4 5,4 4))', 1);
INSERT INTO geo VALUES ('POINT(0 0)', 2);
INSERT INTO geo VALUES ('MULTIPOLYGON(((1 0,10 0,10 10,0 10,1 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))', 3);
INSERT INTO geo VALUES ('LINESTRING(1 0,10 0,10 10,0 10,1 0)', 4);
INSERT INTO geo VALUES ('MULTILINESTRING((1 0,10 0,10 10,0 10,1 0),(4 4,5 4,5 5,4 5,4 4))', 5);
INSERT INTO geo_dst SELECT readWkt(geom) FROM geo ORDER BY id;
SELECT * FROM geo_dst;
```
Result:
text
ββgeomββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
1. β [[(1,0),(10,0),(10,10),(0,10),(1,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]] β
2. β (0,0) β
3. β [[[(1,0),(10,0),(10,10),(0,10),(1,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]],[[(-10,-10),(-10,-9),(-9,10),(-10,-10)]]] β
4. β [(1,0),(10,0),(10,10),(0,10),(1,0)] β
5. β [[(1,0),(10,0),(10,10),(0,10),(1,0)],[(4,4),(5,4),(5,5),(4,5),(4,4)]] β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Related Content {#related-content}
Exploring massive, real-world data sets: 100+ Years of Weather Records in ClickHouse
description: 'Documentation for the Tuple data type in ClickHouse'
sidebar_label: 'Tuple(T1, T2, ...)'
sidebar_position: 34
slug: /sql-reference/data-types/tuple
title: 'Tuple(T1, T2, ...)'
doc_type: 'reference'
Tuple(T1, T2, ...)
A tuple of elements, each having an individual
type
. Tuple must contain at least one element.
Tuples are used for temporary column grouping. Columns can be grouped when an IN expression is used in a query, and for specifying certain formal parameters of lambda functions. For more information, see the sections
IN operators
and
Higher order functions
.
Tuples can be the result of a query. In this case, for text formats other than JSON, values are comma-separated in brackets. In JSON formats, tuples are output as arrays (in square brackets).
Creating Tuples {#creating-tuples}
You can use a function to create a tuple:
sql
tuple(T1, T2, ...)
Example of creating a tuple:
sql
SELECT tuple(1, 'a') AS x, toTypeName(x)
text
ββxββββββββ¬βtoTypeName(tuple(1, 'a'))ββ
β (1,'a') β Tuple(UInt8, String) β
βββββββββββ΄ββββββββββββββββββββββββββββ
A Tuple can contain a single element
Example:
sql
SELECT tuple('a') AS x;
text
ββxββββββ
β ('a') β
βββββββββ
Syntax
(tuple_element1, tuple_element2)
may be used to create a tuple of several elements without calling the
tuple()
function.
Example:
sql
SELECT (1, 'a') AS x, (today(), rand(), 'someString') AS y, ('a') AS not_a_tuple;
text
ββxββββββββ¬βyβββββββββββββββββββββββββββββββββββββββ¬βnot_a_tupleββ
β (1,'a') β ('2022-09-21',2006973416,'someString') β a β
βββββββββββ΄βββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββ
Data Type Detection {#data-type-detection}
When creating tuples on the fly, ClickHouse infers the type of each tuple argument as the smallest type which can hold the provided argument value. If the value is
NULL
, the inferred type is
Nullable
.
Example of automatic data type detection:
sql
SELECT tuple(1, NULL) AS x, toTypeName(x)
text
ββxββββββββββ¬βtoTypeName(tuple(1, NULL))βββββββ
β (1, NULL) β Tuple(UInt8, Nullable(Nothing)) β
βββββββββββββ΄ββββββββββββββββββββββββββββββββββ
Referring to Tuple Elements {#referring-to-tuple-elements}
Tuple elements can be referred to by name or by index:
```sql
CREATE TABLE named_tuples (`a` Tuple(s String, i Int64)) ENGINE = Memory;
INSERT INTO named_tuples VALUES (('y', 10)), (('x',-10));
SELECT a.s FROM named_tuples; -- by name
SELECT a.2 FROM named_tuples; -- by index
```
Result:
```text
ββa.sββ
β y β
β x β
βββββββ
ββtupleElement(a, 2)ββ
β 10 β
β -10 β
ββββββββββββββββββββββ
```
Comparison operations with Tuple {#comparison-operations-with-tuple} | {"source_file": "tuple.md"}
Two tuples are compared by sequentially comparing their elements from left to right. If the first tuple's element is greater (smaller) than the second tuple's corresponding element, then the first tuple is greater (smaller) than the second; otherwise (both elements are equal), the next element is compared.
Example:
sql
SELECT (1, 'z') > (1, 'a') c1, (2022, 01, 02) > (2023, 04, 02) c2, (1,2,3) = (3,2,1) c3;
text
ββc1ββ¬βc2ββ¬βc3ββ
β 1 β 0 β 0 β
ββββββ΄βββββ΄βββββ
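Python tuples use the same left-to-right element comparison, so the three results above can be mirrored directly:

```python
# Python tuples compare element by element, left to right, like ClickHouse.
c1 = (1, 'z') > (1, 'a')
c2 = (2022, 1, 2) > (2023, 4, 2)
c3 = (1, 2, 3) == (3, 2, 1)
print(int(c1), int(c2), int(c3))  # 1 0 0
```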
Real world examples:
```sql
CREATE TABLE test
(
    `year` Int16,
    `month` Int8,
    `day` Int8
)
ENGINE = Memory AS
SELECT *
FROM values((2022, 12, 31), (2000, 1, 1));
SELECT * FROM test;
ββyearββ¬βmonthββ¬βdayββ
β 2022 β 12 β 31 β
β 2000 β 1 β 1 β
ββββββββ΄ββββββββ΄ββββββ
SELECT *
FROM test
WHERE (year, month, day) > (2010, 1, 1);
ββyearββ¬βmonthββ¬βdayββ
β 2022 β 12 β 31 β
ββββββββ΄ββββββββ΄ββββββ
CREATE TABLE test
(
key
Int64,
duration
UInt32,
value
Float64
)
ENGINE = Memory AS
SELECT *
FROM values((1, 42, 66.5), (1, 42, 70), (2, 1, 10), (2, 2, 0));
SELECT * FROM test;
ββkeyββ¬βdurationββ¬βvalueββ
β 1 β 42 β 66.5 β
β 1 β 42 β 70 β
β 2 β 1 β 10 β
β 2 β 2 β 0 β
βββββββ΄βββββββββββ΄ββββββββ
-- Let's find a value for each key with the biggest duration, if durations are equal, select the biggest value
SELECT
key,
max(duration),
argMax(value, (duration, value))
FROM test
GROUP BY key
ORDER BY key ASC;
ββkeyββ¬βmax(duration)ββ¬βargMax(value, tuple(duration, value))ββ
β 1 β 42 β 70 β
β 2 β 2 β 0 β
βββββββ΄ββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββ
```
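The argMax-over-a-tuple trick can be sketched in Python to see why it picks 70 for key 1 and 0 for key 2 (a sketch of the query logic, not ClickHouse's implementation):

```python
# argMax(value, (duration, value)) per key: keep the row whose (duration, value)
# tuple compares greatest, then report its value.
rows = [(1, 42, 66.5), (1, 42, 70), (2, 1, 10), (2, 2, 0)]
best = {}
for key, duration, value in rows:
    if key not in best or (duration, value) > best[key]:
        best[key] = (duration, value)
result = {k: v[1] for k, v in sorted(best.items())}
print(result)  # {1: 70, 2: 0}
```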
description: 'Documentation for the Decimal data types in ClickHouse, which provide
fixed-point arithmetic with configurable precision'
sidebar_label: 'Decimal'
sidebar_position: 6
slug: /sql-reference/data-types/decimal
title: 'Decimal, Decimal(P), Decimal(P, S), Decimal32(S), Decimal64(S), Decimal128(S),
Decimal256(S)'
doc_type: 'reference'
Decimal, Decimal(P), Decimal(P, S), Decimal32(S), Decimal64(S), Decimal128(S), Decimal256(S)
Signed fixed-point numbers that keep precision during add, subtract and multiply operations. For division least significant digits are discarded (not rounded).
Parameters {#parameters}
P - precision. Valid range: [ 1 : 76 ]. Determines how many decimal digits number can have (including fraction). By default, the precision is 10.
S - scale. Valid range: [ 0 : P ]. Determines how many decimal digits fraction can have.
Decimal(P) is equivalent to Decimal(P, 0). Similarly, the syntax Decimal is equivalent to Decimal(10, 0).
Depending on P parameter value Decimal(P, S) is a synonym for:
- P from [ 1 : 9 ] - for Decimal32(S)
- P from [ 10 : 18 ] - for Decimal64(S)
- P from [ 19 : 38 ] - for Decimal128(S)
- P from [ 39 : 76 ] - for Decimal256(S)
Decimal Value Ranges {#decimal-value-ranges}
Decimal(P, S) - ( -1 * 10^(P - S), 1 * 10^(P - S) )
Decimal32(S) - ( -1 * 10^(9 - S), 1 * 10^(9 - S) )
Decimal64(S) - ( -1 * 10^(18 - S), 1 * 10^(18 - S) )
Decimal128(S) - ( -1 * 10^(38 - S), 1 * 10^(38 - S) )
Decimal256(S) - ( -1 * 10^(76 - S), 1 * 10^(76 - S) )
For example, Decimal32(4) can contain numbers from -99999.9999 to 99999.9999 with 0.0001 step.
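The range/step relationship follows directly from P and S. A Python sketch with the `decimal` module:

```python
from decimal import Decimal

P, S = 9, 4  # Decimal32(4)
step = Decimal(1).scaleb(-S)             # smallest increment
max_val = Decimal(10) ** (P - S) - step  # largest representable value
print(step, max_val)  # 0.0001 99999.9999
```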
Internal Representation {#internal-representation}
Internally data is represented as normal signed integers with the respective bit width. Real value ranges that can be stored in memory are a bit larger than specified above; these bounds are checked only on conversion from a string.
Because modern CPUs do not support 128-bit and 256-bit integers natively, operations on Decimal128 and Decimal256 are emulated. Thus, Decimal128 and Decimal256 work significantly slower than Decimal32/Decimal64.
Operations and Result Type {#operations-and-result-type}
Binary operations on Decimal result in wider result type (with any order of arguments).
Decimal64(S1) <op> Decimal32(S2) -> Decimal64(S)
Decimal128(S1) <op> Decimal32(S2) -> Decimal128(S)
Decimal128(S1) <op> Decimal64(S2) -> Decimal128(S)
Decimal256(S1) <op> Decimal<32|64|128>(S2) -> Decimal256(S)
Rules for scale:
add, subtract: S = max(S1, S2).
multiply: S = S1 + S2.
divide: S = S1.
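The scale rules can be written down as a small lookup (a sketch of the rules above, not ClickHouse's implementation):

```python
# Result scale for Decimal <op> Decimal, following the rules above.
def result_scale(op: str, s1: int, s2: int) -> int:
    return {"add": max(s1, s2), "subtract": max(s1, s2),
            "multiply": s1 + s2, "divide": s1}[op]

print(result_scale("add", 4, 2),
      result_scale("multiply", 4, 2),
      result_scale("divide", 4, 2))  # 4 6 4
```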
For similar operations between Decimal and integers, the result is Decimal of the same size as an argument. | {"source_file": "decimal.md"}
Operations between Decimal and Float32/Float64 are not defined. If you need them, you can explicitly cast one of argument using toDecimal32, toDecimal64, toDecimal128 or toFloat32, toFloat64 builtins. Keep in mind that the result will lose precision and type conversion is a computationally expensive operation.
Some functions on Decimal return result as Float64 (for example, var or stddev). Intermediate calculations might still be performed in Decimal, which might lead to different results between Float64 and Decimal inputs with the same values.
Overflow Checks {#overflow-checks}
During calculations on Decimal, integer overflows might happen. Excessive digits in a fraction are discarded (not rounded). Excessive digits in integer part will lead to an exception.
:::warning
Overflow check is not implemented for Decimal128 and Decimal256. In case of overflow incorrect result is returned, no exception is thrown.
:::
sql
SELECT toDecimal32(2, 4) AS x, x / 3
text
βββββββxββ¬βdivide(toDecimal32(2, 4), 3)ββ
β 2.0000 β 0.6666 β
ββββββββββ΄βββββββββββββββββββββββββββββββ
sql
SELECT toDecimal32(4.2, 8) AS x, x * x
text
DB::Exception: Scale is out of bounds.
sql
SELECT toDecimal32(4.2, 8) AS x, 6 * x
text
DB::Exception: Decimal math overflow.
Overflow checks slow down operations. If it is known that overflows are not possible, it makes sense to disable checks using the decimal_check_overflow setting. When checks are disabled and an overflow happens, the result will be incorrect:
sql
SET decimal_check_overflow = 0;
SELECT toDecimal32(4.2, 8) AS x, 6 * x
text
βββββββββββxββ¬βmultiply(6, toDecimal32(4.2, 8))ββ
β 4.20000000 β -17.74967296 β
ββββββββββββββ΄βββββββββββββββββββββββββββββββββββ
Overflow checks happen not only on arithmetic operations but also on value comparison:
sql
SELECT toDecimal32(1, 8) < 100
text
DB::Exception: Can't compare.
See also
- isDecimalOverflow
- countDigits
description: 'Documentation for the Boolean data type in ClickHouse'
sidebar_label: 'Boolean'
sidebar_position: 33
slug: /sql-reference/data-types/boolean
title: 'Bool'
doc_type: 'reference'
Bool
Type bool is internally stored as UInt8. Possible values are true (1), false (0).
```sql
SELECT true AS col, toTypeName(col);
ββcolβββ¬βtoTypeName(true)ββ
β true β Bool β
ββββββββ΄βββββββββββββββββββ
select true == 1 as col, toTypeName(col);
ββcolββ¬βtoTypeName(equals(true, 1))ββ
β 1 β UInt8 β
βββββββ΄ββββββββββββββββββββββββββββββ
```
```sql
CREATE TABLE test_bool
(
    `A` Int64,
    `B` Bool
)
ENGINE = Memory;

INSERT INTO test_bool VALUES (1, true),(2,0);

SELECT * FROM test_bool;

ββAββ¬βBββββββ
β 1 β true  β
β 2 β false β
βββββ΄ββββββββ
```
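Since Bool is stored as UInt8, numeric values can also be cast to it. A quick sketch (the zero/non-zero mapping is assumed from the UInt8 representation described above):

```sql
-- 0 maps to false; non-zero values map to true (assumed from the UInt8 storage)
SELECT CAST(0, 'Bool') AS a, CAST(1, 'Bool') AS b, toTypeName(b);
```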
description: 'Documentation for the JSON data type in ClickHouse, which provides native
support for working with JSON data'
keywords: ['json', 'data type']
sidebar_label: 'JSON'
sidebar_position: 63
slug: /sql-reference/data-types/newjson
title: 'JSON Data Type'
doc_type: 'reference'
import {CardSecondary} from '@clickhouse/click-ui/bundled';
import Link from '@docusaurus/Link'
The JSON type stores JavaScript Object Notation (JSON) documents in a single column.
:::note
In ClickHouse Open-Source, the JSON data type is marked as production-ready in version 25.3. It is not recommended to use this type in production in earlier versions.
:::
To declare a column of JSON type, you can use the following syntax:
sql
<column_name> JSON
(
max_dynamic_paths=N,
max_dynamic_types=M,
some.path TypeName,
SKIP path.to.skip,
SKIP REGEXP 'paths_regexp'
)
Where the parameters in the syntax above are defined as:
| Parameter | Description | Default Value |
|-----------|-------------|---------------|
| max_dynamic_paths | An optional parameter indicating how many paths can be stored separately as sub-columns across a single block of data that is stored separately (for example, across a single data part for a MergeTree table). If this limit is exceeded, all other paths will be stored together in a single structure. | 1024 |
| max_dynamic_types | An optional parameter between 1 and 255 indicating how many different data types can be stored inside a single path column with type Dynamic across a single block of data that is stored separately (for example, across a single data part for a MergeTree table). If this limit is exceeded, all new types will be converted to type String. | 32 |
| some.path TypeName | An optional type hint for a particular path in the JSON. Such paths will always be stored as sub-columns with the specified type. | |
| SKIP path.to.skip | An optional hint for a particular path that should be skipped during JSON parsing. Such paths will never be stored in the JSON column. If the specified path is a nested JSON object, the whole nested object will be skipped. | |
| SKIP REGEXP 'path_regexp' | An optional hint with a regular expression that is used to skip paths during JSON parsing. All paths that match this regular expression will never be stored in the JSON column. | |
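As an illustrative sketch, these parameters can be combined in a single declaration (the table, column, and path names here are hypothetical):

```sql
CREATE TABLE events
(
    data JSON(
        max_dynamic_paths = 256,
        max_dynamic_types = 16,
        user.id UInt64,              -- always stored as a typed sub-column
        SKIP debug.trace,            -- never stored
        SKIP REGEXP 'internal\\..*'  -- skip every path starting with "internal."
    )
)
ENGINE = MergeTree
ORDER BY tuple();
```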
Creating JSON {#creating-json}
In this section, we'll take a look at the various ways that you can create JSON.
Using JSON in a table column definition {#using-json-in-a-table-column-definition}
sql title="Query (Example 1)"
CREATE TABLE test (json JSON) ENGINE = Memory;
INSERT INTO test VALUES ('{"a" : {"b" : 42}, "c" : [1, 2, 3]}'), ('{"f" : "Hello, World!"}'), ('{"a" : {"b" : 43, "e" : 10}, "c" : [4, 5, 6]}');
SELECT json FROM test;
text title="Response (Example 1)"
ββjsonβββββββββββββββββββββββββββββββββββββββββ
β {"a":{"b":"42"},"c":["1","2","3"]} β
β {"f":"Hello, World!"} β
β {"a":{"b":"43","e":"10"},"c":["4","5","6"]} β
βββββββββββββββββββββββββββββββββββββββββββββββ
sql title="Query (Example 2)"
CREATE TABLE test (json JSON(a.b UInt32, SKIP a.e)) ENGINE = Memory;
INSERT INTO test VALUES ('{"a" : {"b" : 42}, "c" : [1, 2, 3]}'), ('{"f" : "Hello, World!"}'), ('{"a" : {"b" : 43, "e" : 10}, "c" : [4, 5, 6]}');
SELECT json FROM test;
text title="Response (Example 2)"
ββjsonβββββββββββββββββββββββββββββββ
β {"a":{"b":42},"c":["1","2","3"]} β
β {"a":{"b":0},"f":"Hello, World!"} β
β {"a":{"b":43},"c":["4","5","6"]} β
βββββββββββββββββββββββββββββββββββββ
Using CAST with ::JSON {#using-cast-with-json}
It is possible to cast various types using the special syntax ::JSON.
CAST from String to JSON {#cast-from-string-to-json}
sql title="Query"
SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::JSON AS json;
text title="Response"
ββjsonββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"a":{"b":"42"},"c":["1","2","3"],"d":"Hello, World!"} β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
CAST from Tuple to JSON {#cast-from-tuple-to-json}
sql title="Query"
SET enable_named_columns_in_function_tuple = 1;
SELECT (tuple(42 AS b) AS a, [1, 2, 3] AS c, 'Hello, World!' AS d)::JSON AS json;
text title="Response"
ββjsonββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"a":{"b":"42"},"c":["1","2","3"],"d":"Hello, World!"} β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
CAST from Map to JSON {#cast-from-map-to-json}
sql title="Query"
SET use_variant_as_common_type=1;
SELECT map('a', map('b', 42), 'c', [1,2,3], 'd', 'Hello, World!')::JSON AS json;
text title="Response"
ββjsonββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"a":{"b":"42"},"c":["1","2","3"],"d":"Hello, World!"} β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
CAST from deprecated Object('json') to JSON {#cast-from-deprecated-objectjson-to-json}
sql title="Query"
SET allow_experimental_object_type = 1;
SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::Object('json')::JSON AS json;
text title="Response"
ββjsonββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"a":{"b":"42"},"c":["1","2","3"],"d":"Hello, World!"} β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
:::note
JSON paths are stored flattened. This means that when a JSON object is formatted from a path like a.b.c, it is not possible to know whether the object should be constructed as { "a.b.c" : ... } or { "a" : {"b" : {"c" : ... }}}.
Our implementation will always assume the latter.
For example:
sql
SELECT CAST('{"a.b.c" : 42}', 'JSON') AS json
will return:
response
ββjsonββββββββββββββββββββ
1. β {"a":{"b":{"c":"42"}}} β
ββββββββββββββββββββββββββ
and not:
response
ββjsonββββββββββββ
1. β {"a.b.c":"42"} β
  ββββββββββββββββββ
:::
Reading JSON paths as sub-columns {#reading-json-paths-as-sub-columns}
The JSON type supports reading every path as a separate sub-column.
If the type of the requested path is not specified in the JSON type declaration, then the sub-column of the path will always have type Dynamic.
For example:
sql title="Query"
CREATE TABLE test (json JSON(a.b UInt32, SKIP a.e)) ENGINE = Memory;
INSERT INTO test VALUES ('{"a" : {"b" : 42, "g" : 42.42}, "c" : [1, 2, 3], "d" : "2020-01-01"}'), ('{"f" : "Hello, World!", "d" : "2020-01-02"}'), ('{"a" : {"b" : 43, "e" : 10, "g" : 43.43}, "c" : [4, 5, 6]}');
SELECT json FROM test;
text title="Response"
ββjsonβββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"a":{"b":42,"g":42.42},"c":["1","2","3"],"d":"2020-01-01"} β
β {"a":{"b":0},"d":"2020-01-02","f":"Hello, World!"} β
β {"a":{"b":43,"g":43.43},"c":["4","5","6"]} β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
sql title="Query (Reading JSON paths as sub-columns)"
SELECT json.a.b, json.a.g, json.c, json.d FROM test;
text title="Response (Reading JSON paths as sub-columns)"
ββjson.a.bββ¬βjson.a.gββ¬βjson.cβββ¬βjson.dββββββ
β 42 β 42.42 β [1,2,3] β 2020-01-01 β
β 0 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β 2020-01-02 β
β 43 β 43.43 β [4,5,6] β α΄Ία΅α΄Έα΄Έ β
ββββββββββββ΄βββββββββββ΄ββββββββββ΄βββββββββββββ
You can also use the getSubcolumn function to read sub-columns from the JSON type:
sql title="Query"
SELECT getSubcolumn(json, 'a.b'), getSubcolumn(json, 'a.g'), getSubcolumn(json, 'c'), getSubcolumn(json, 'd') FROM test;
text title="Response"
ββgetSubcolumn(json, 'a.b')ββ¬βgetSubcolumn(json, 'a.g')ββ¬βgetSubcolumn(json, 'c')ββ¬βgetSubcolumn(json, 'd')ββ
β 42 β 42.42 β [1,2,3] β 2020-01-01 β
β 0 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β 2020-01-02 β
β 43 β 43.43 β [4,5,6] β α΄Ία΅α΄Έα΄Έ β
βββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββ
If the requested path wasn't found in the data, it will be filled with NULL values:
sql title="Query"
SELECT json.non.existing.path FROM test;
text title="Response"
ββjson.non.existing.pathββ
β α΄Ία΅α΄Έα΄Έ β
β α΄Ία΅α΄Έα΄Έ β
β α΄Ία΅α΄Έα΄Έ β
ββββββββββββββββββββββββββ
Let's check the data types of the returned sub-columns:
sql title="Query"
SELECT toTypeName(json.a.b), toTypeName(json.a.g), toTypeName(json.c), toTypeName(json.d) FROM test;
text title="Response"
ββtoTypeName(json.a.b)ββ¬βtoTypeName(json.a.g)ββ¬βtoTypeName(json.c)ββ¬βtoTypeName(json.d)ββ
β UInt32 β Dynamic β Dynamic β Dynamic β
β UInt32 β Dynamic β Dynamic β Dynamic β
β UInt32 β Dynamic β Dynamic β Dynamic β
ββββββββββββββββββββββββ΄βββββββββββββββββββββββ΄βββββββββββββββββββββ΄βββββββββββββββββββββ
As we can see, for a.b the type is UInt32 as we specified in the JSON type declaration, and for all other sub-columns the type is Dynamic.
It is also possible to read sub-columns of a Dynamic type using the special syntax json.some.path.:TypeName:
sql title="Query"
SELECT
json.a.g.:Float64,
dynamicType(json.a.g),
json.d.:Date,
dynamicType(json.d)
FROM test
text title="Response"
ββjson.a.g.:`Float64`ββ¬βdynamicType(json.a.g)ββ¬βjson.d.:`Date`ββ¬βdynamicType(json.d)ββ
β 42.42 β Float64 β 2020-01-01 β Date β
β α΄Ία΅α΄Έα΄Έ β None β 2020-01-02 β Date β
β 43.43 β Float64 β α΄Ία΅α΄Έα΄Έ β None β
βββββββββββββββββββββββ΄ββββββββββββββββββββββββ΄βββββββββββββββββ΄ββββββββββββββββββββββ
Dynamic sub-columns can be cast to any data type. In this case, an exception will be thrown if the internal type inside Dynamic cannot be cast to the requested type:
sql title="Query"
SELECT json.a.g::UInt64 AS uint
FROM test;
text title="Response"
ββuintββ
β 42 β
β 0 β
β 43 β
ββββββββ
sql title="Query"
SELECT json.a.g::UUID AS uuid
FROM test;
text title="Response"
Received exception from server:
Code: 48. DB::Exception: Received from localhost:9000. DB::Exception:
Conversion between numeric types and UUID is not supported.
Probably the passed UUID is unquoted:
while executing 'FUNCTION CAST(__table1.json.a.g :: 2, 'UUID'_String :: 1) -> CAST(__table1.json.a.g, 'UUID'_String) UUID : 0'.
(NOT_IMPLEMENTED)
:::note
To read sub-columns efficiently from Compact MergeTree parts, make sure the MergeTree setting write_marks_for_substreams_in_compact_parts is enabled.
:::
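As a minimal sketch, the setting can be enabled per table at creation time (the table and column names here are hypothetical):

```sql
CREATE TABLE json_events
(
    json JSON
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS write_marks_for_substreams_in_compact_parts = 1;
```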
Reading JSON sub-objects as sub-columns {#reading-json-sub-objects-as-sub-columns}
The JSON type supports reading nested objects as sub-columns with type JSON using the special syntax json.^some.path:
sql title="Query"
CREATE TABLE test (json JSON) ENGINE = Memory;
INSERT INTO test VALUES ('{"a" : {"b" : {"c" : 42, "g" : 42.42}}, "c" : [1, 2, 3], "d" : {"e" : {"f" : {"g" : "Hello, World", "h" : [1, 2, 3]}}}}'), ('{"f" : "Hello, World!", "d" : {"e" : {"f" : {"h" : [4, 5, 6]}}}}'), ('{"a" : {"b" : {"c" : 43, "e" : 10, "g" : 43.43}}, "c" : [4, 5, 6]}');
SELECT json FROM test;
text title="Response"
ββjsonβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"a":{"b":{"c":"42","g":42.42}},"c":["1","2","3"],"d":{"e":{"f":{"g":"Hello, World","h":["1","2","3"]}}}} β
β {"d":{"e":{"f":{"h":["4","5","6"]}}},"f":"Hello, World!"} β
β {"a":{"b":{"c":"43","e":"10","g":43.43}},"c":["4","5","6"]} β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
sql title="Query"
SELECT json.^a.b, json.^d.e.f FROM test;
text title="Response"
ββjson.^`a`.bββββββββββββββββββββ¬βjson.^`d`.e.fβββββββββββββββββββββββββββ
β {"c":"42","g":42.42} β {"g":"Hello, World","h":["1","2","3"]} β
β {} β {"h":["4","5","6"]} β
β {"c":"43","e":"10","g":43.43} β {} β
βββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββ
:::note
Reading sub-objects as sub-columns may be inefficient, as this may require a near full scan of the JSON data.
:::
Type inference for paths {#type-inference-for-paths}
During parsing of JSON, ClickHouse tries to detect the most appropriate data type for each JSON path.
It works similarly to automatic schema inference from input data, and is controlled by the same settings:
input_format_try_infer_dates
input_format_try_infer_datetimes
schema_inference_make_columns_nullable
input_format_json_try_infer_numbers_from_strings
input_format_json_infer_incomplete_types_as_strings
input_format_json_read_numbers_as_strings
input_format_json_read_bools_as_strings
input_format_json_read_bools_as_numbers
input_format_json_read_arrays_as_strings
input_format_json_infer_array_of_dynamic_from_array_of_different_types
Let's take a look at some examples:
sql title="Query"
SELECT JSONAllPathsWithTypes('{"a" : "2020-01-01", "b" : "2020-01-01 10:00:00"}'::JSON) AS paths_with_types settings input_format_try_infer_dates=1, input_format_try_infer_datetimes=1;
text title="Response"
ββpaths_with_typesββββββββββββββββββ
β {'a':'Date','b':'DateTime64(9)'} β
ββββββββββββββββββββββββββββββββββββ
sql title="Query"
SELECT JSONAllPathsWithTypes('{"a" : "2020-01-01", "b" : "2020-01-01 10:00:00"}'::JSON) AS paths_with_types settings input_format_try_infer_dates=0, input_format_try_infer_datetimes=0;
text title="Response"
ββpaths_with_typesβββββββββββββ
β {'a':'String','b':'String'} β
βββββββββββββββββββββββββββββββ
sql title="Query"
SELECT JSONAllPathsWithTypes('{"a" : [1, 2, 3]}'::JSON) AS paths_with_types settings schema_inference_make_columns_nullable=1;
text title="Response"
ββpaths_with_typesββββββββββββββββ
β {'a':'Array(Nullable(Int64))'} β
ββββββββββββββββββββββββββββββββββ
sql title="Query"
SELECT JSONAllPathsWithTypes('{"a" : [1, 2, 3]}'::JSON) AS paths_with_types settings schema_inference_make_columns_nullable=0;
text title="Response"
ββpaths_with_typesββββββ
β {'a':'Array(Int64)'} β
ββββββββββββββββββββββββ
Handling arrays of JSON objects {#handling-arrays-of-json-objects}
JSON paths that contain an array of objects are parsed as type Array(JSON) and inserted into a Dynamic column for the path.
To read an array of objects, you can extract it from the Dynamic column as a sub-column:
sql title="Query"
CREATE TABLE test (json JSON) ENGINE = Memory;
INSERT INTO test VALUES
('{"a" : {"b" : [{"c" : 42, "d" : "Hello", "f" : [[{"g" : 42.42}]], "k" : {"j" : 1000}}, {"c" : 43}, {"e" : [1, 2, 3], "d" : "My", "f" : [[{"g" : 43.43, "h" : "2020-01-01"}]], "k" : {"j" : 2000}}]}}'),
('{"a" : {"b" : [1, 2, 3]}}'),
('{"a" : {"b" : [{"c" : 44, "f" : [[{"h" : "2020-01-02"}]]}, {"e" : [4, 5, 6], "d" : "World", "f" : [[{"g" : 44.44}]], "k" : {"j" : 3000}}]}}');
SELECT json FROM test;
text title="Response"
ββjsonβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"a":{"b":[{"c":"42","d":"Hello","f":[[{"g":42.42}]],"k":{"j":"1000"}},{"c":"43"},{"d":"My","e":["1","2","3"],"f":[[{"g":43.43,"h":"2020-01-01"}]],"k":{"j":"2000"}}]}} β
β {"a":{"b":["1","2","3"]}} β
β {"a":{"b":[{"c":"44","f":[[{"h":"2020-01-02"}]]},{"d":"World","e":["4","5","6"],"f":[[{"g":44.44}]],"k":{"j":"3000"}}]}} β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
sql title="Query"
SELECT json.a.b, dynamicType(json.a.b) FROM test;
text title="Response"
ββjson.a.bβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βdynamicType(json.a.b)βββββββββββββββββββββββββββββββββββββ
β ['{"c":"42","d":"Hello","f":[[{"g":42.42}]],"k":{"j":"1000"}}','{"c":"43"}','{"d":"My","e":["1","2","3"],"f":[[{"g":43.43,"h":"2020-01-01"}]],"k":{"j":"2000"}}'] β Array(JSON(max_dynamic_types=16, max_dynamic_paths=256)) β
β [1,2,3] β Array(Nullable(Int64)) β
β ['{"c":"44","f":[[{"h":"2020-01-02"}]]}','{"d":"World","e":["4","5","6"],"f":[[{"g":44.44}]],"k":{"j":"3000"}}'] β Array(JSON(max_dynamic_types=16, max_dynamic_paths=256)) β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
As you may have noticed, the max_dynamic_types/max_dynamic_paths parameters of the nested JSON type got reduced compared to the default values.
This is needed to avoid the number of sub-columns growing uncontrollably on nested arrays of JSON objects.
Let's try to read sub-columns from a nested JSON column:
sql title="Query"
SELECT json.a.b.:`Array(JSON)`.c, json.a.b.:`Array(JSON)`.f, json.a.b.:`Array(JSON)`.d FROM test;
text title="Response"
ββjson.a.b.:`Array(JSON)`.cββ¬βjson.a.b.:`Array(JSON)`.fββββββββββββββββββββββββββββββββββββ¬βjson.a.b.:`Array(JSON)`.dββ
β [42,43,NULL] β [[['{"g":42.42}']],NULL,[['{"g":43.43,"h":"2020-01-01"}']]] β ['Hello',NULL,'My'] β
β [] β [] β [] β
β [44,NULL] β [[['{"h":"2020-01-02"}']],[['{"g":44.44}']]] β [NULL,'World'] β
βββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ
We can avoid writing Array(JSON) sub-column names using a special syntax:
sql title="Query"
SELECT json.a.b[].c, json.a.b[].f, json.a.b[].d FROM test;
text title="Response"
ββjson.a.b.:`Array(JSON)`.cββ¬βjson.a.b.:`Array(JSON)`.fββββββββββββββββββββββββββββββββββββ¬βjson.a.b.:`Array(JSON)`.dββ
β [42,43,NULL] β [[['{"g":42.42}']],NULL,[['{"g":43.43,"h":"2020-01-01"}']]] β ['Hello',NULL,'My'] β
β [] β [] β [] β
β [44,NULL] β [[['{"h":"2020-01-02"}']],[['{"g":44.44}']]] β [NULL,'World'] β
βββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ
The number of [] after the path indicates the array level. For example, json.path[][] will be transformed to json.path.:Array(Array(JSON)).
Let's check the paths and types inside our Array(JSON):
sql title="Query"
SELECT DISTINCT arrayJoin(JSONAllPathsWithTypes(arrayJoin(json.a.b[]))) FROM test;
text title="Response"
ββarrayJoin(JSONAllPathsWithTypes(arrayJoin(json.a.b.:`Array(JSON)`)))βββ
β ('c','Int64') β
β ('d','String') β
β ('f','Array(Array(JSON(max_dynamic_types=8, max_dynamic_paths=64)))') β
β ('k.j','Int64') β
β ('e','Array(Nullable(Int64))') β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Let's read sub-columns from an Array(JSON) column:
sql title="Query"
SELECT json.a.b[].c.:Int64, json.a.b[].f[][].g.:Float64, json.a.b[].f[][].h.:Date FROM test;
text title="Response"
ββjson.a.b.:`Array(JSON)`.c.:`Int64`ββ¬βjson.a.b.:`Array(JSON)`.f.:`Array(Array(JSON))`.g.:`Float64`ββ¬βjson.a.b.:`Array(JSON)`.f.:`Array(Array(JSON))`.h.:`Date`ββ
β [42,43,NULL] β [[[42.42]],[],[[43.43]]] β [[[NULL]],[],[['2020-01-01']]] β
β [] β [] β [] β
β [44,NULL] β [[[NULL]],[[44.44]]] β [[['2020-01-02']],[[NULL]]] β
ββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
We can also read sub-object sub-columns from a nested JSON column:
sql title="Query"
SELECT json.a.b[].^k FROM test
text title="Response"
ββjson.a.b.:`Array(JSON)`.^`k`ββββββββββ
β ['{"j":"1000"}','{}','{"j":"2000"}'] β
β [] β
β ['{}','{"j":"3000"}'] β
ββββββββββββββββββββββββββββββββββββββββ
Handling JSON keys with NULL {#handling-json-keys-with-nulls}
In our JSON implementation, null and the absence of a value are considered equivalent:
sql title="Query"
SELECT '{}'::JSON AS json1, '{"a" : null}'::JSON AS json2, json1 = json2
text title="Response"
ββjson1ββ¬βjson2ββ¬βequals(json1, json2)ββ
β {} β {} β 1 β
βββββββββ΄ββββββββ΄βββββββββββββββββββββββ
It means that it is impossible to determine whether the original JSON data contained a path with a null value or did not contain that path at all.
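One consequence of this equivalence, sketched below, is that a path holding only null should not appear among the stored paths at all (the expectation follows from the behavior described above, not from a documented guarantee):

```sql
-- the null-valued path "a" is not stored, so only "b" should be listed
SELECT JSONAllPaths('{"a" : null, "b" : 42}'::JSON) AS paths;
```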
Handling JSON keys with dots {#handling-json-keys-with-dots}
Internally, the JSON column stores all paths and values in a flattened form. This means that by default these two objects are considered the same:
json
{"a" : {"b" : 42}}
{"a.b" : 42}
Both will be stored internally as the pair of path a.b and value 42. During formatting of JSON, we always form nested objects based on the path parts separated by dots:
sql title="Query"
SELECT '{"a" : {"b" : 42}}'::JSON AS json1, '{"a.b" : 42}'::JSON AS json2, JSONAllPaths(json1), JSONAllPaths(json2);
text title="Response"
ββjson1βββββββββββββ¬βjson2βββββββββββββ¬βJSONAllPaths(json1)ββ¬βJSONAllPaths(json2)ββ
β {"a":{"b":"42"}} β {"a":{"b":"42"}} β ['a.b'] β ['a.b'] β
ββββββββββββββββββββ΄βββββββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ
As you can see, the initial JSON {"a.b" : 42} is now formatted as {"a" : {"b" : 42}}.
This limitation also leads to the failure of parsing valid JSON objects like this:
sql title="Query"
SELECT '{"a.b" : 42, "a" : {"b" : "Hello World!"}}'::JSON AS json;
text title="Response"
Code: 117. DB::Exception: Cannot insert data into JSON column: Duplicate path found during parsing JSON object: a.b. You can enable setting type_json_skip_duplicated_paths to skip duplicated paths during insert: In scope SELECT CAST('{"a.b" : 42, "a" : {"b" : "Hello, World"}}', 'JSON') AS json. (INCORRECT_DATA)
If you want to keep keys with dots and avoid formatting them as nested objects, you can enable the setting json_type_escape_dots_in_keys (available starting from version 25.8). In this case, during parsing, all dots in JSON keys will be escaped to %2E and unescaped back during formatting.
sql title="Query"
SET json_type_escape_dots_in_keys=1;
SELECT '{"a" : {"b" : 42}}'::JSON AS json1, '{"a.b" : 42}'::JSON AS json2, JSONAllPaths(json1), JSONAllPaths(json2);
text title="Response"
ββjson1βββββββββββββ¬βjson2βββββββββ¬βJSONAllPaths(json1)ββ¬βJSONAllPaths(json2)ββ
β {"a":{"b":"42"}} β {"a.b":"42"} β ['a.b'] β ['a%2Eb'] β
ββββββββββββββββββββ΄βββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ
sql title="Query"
SET json_type_escape_dots_in_keys=1;
SELECT '{"a.b" : 42, "a" : {"b" : "Hello World!"}}'::JSON AS json, JSONAllPaths(json);
text title="Response"
ββjsonβββββββββββββββββββββββββββββββββββ¬βJSONAllPaths(json)ββ
β {"a.b":"42","a":{"b":"Hello World!"}} β ['a%2Eb','a.b'] β
βββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββ
To read a key with an escaped dot as a sub-column, you have to use the escaped dot in the sub-column name:
sql title="Query"
SET json_type_escape_dots_in_keys=1;
SELECT '{"a.b" : 42, "a" : {"b" : "Hello World!"}}'::JSON AS json, json.`a%2Eb`, json.a.b;
text title="Response"
ββjsonβββββββββββββββββββββββββββββββββββ¬βjson.a%2Ebββ¬βjson.a.bββββββ
β {"a.b":"42","a":{"b":"Hello World!"}} β 42 β Hello World! β
βββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββββββββββ
Note: due to identifier parser and analyzer limitations, the sub-column json.`a.b` is equivalent to the sub-column json.a.b and won't read the path with the escaped dot:
sql title="Query"
SET json_type_escape_dots_in_keys=1;
SELECT '{"a.b" : 42, "a" : {"b" : "Hello World!"}}'::JSON AS json, json.`a%2Eb`, json.`a.b`, json.a.b;
text title="Response"
ββjsonβββββββββββββββββββββββββββββββββββ¬βjson.a%2Ebββ¬βjson.a.bββββββ¬βjson.a.bββββββ
β {"a.b":"42","a":{"b":"Hello World!"}} β 42 β Hello World! β Hello World! β
βββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
Also, if you want to specify a hint for a JSON path that contains keys with dots (or use it in the SKIP / SKIP REGEXP sections), you have to use escaped dots in the hint:
sql title="Query"
SET json_type_escape_dots_in_keys=1;
SELECT '{"a.b" : 42, "a" : {"b" : "Hello World!"}}'::JSON(`a%2Eb` UInt8) as json, json.`a%2Eb`, toTypeName(json.`a%2Eb`);
text title="Response"
ββjsonβββββββββββββββββββββββββββββββββ¬βjson.a%2Ebββ¬βtoTypeName(json.a%2Eb)ββ
β {"a.b":42,"a":{"b":"Hello World!"}} β 42 β UInt8 β
βββββββββββββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββββββββββββββββββββ
sql title="Query"
SET json_type_escape_dots_in_keys=1;
SELECT '{"a.b" : 42, "a" : {"b" : "Hello World!"}}'::JSON(SKIP `a%2Eb`) as json, json.`a%2Eb`;
text title="Response"
ββjsonββββββββββββββββββββββββ¬βjson.a%2Ebββ
β {"a":{"b":"Hello World!"}} β α΄Ία΅α΄Έα΄Έ β
ββββββββββββββββββββββββββββββ΄βββββββββββββ
Reading JSON type from data {#reading-json-type-from-data}
All text formats (JSONEachRow, TSV, CSV, CustomSeparated, Values, etc.) support reading the JSON type.
Examples:
sql title="Query"
SELECT json FROM format(JSONEachRow, 'json JSON(a.b.c UInt32, SKIP a.b.d, SKIP d.e, SKIP REGEXP \'b.*\')', '
{"json" : {"a" : {"b" : {"c" : 1, "d" : [0, 1]}}, "b" : "2020-01-01", "c" : 42, "d" : {"e" : {"f" : ["s1", "s2"]}, "i" : [1, 2, 3]}}}
{"json" : {"a" : {"b" : {"c" : 2, "d" : [2, 3]}}, "b" : [1, 2, 3], "c" : null, "d" : {"e" : {"g" : 43}, "i" : [4, 5, 6]}}}
{"json" : {"a" : {"b" : {"c" : 3, "d" : [4, 5]}}, "b" : {"c" : 10}, "e" : "Hello, World!"}}
{"json" : {"a" : {"b" : {"c" : 4, "d" : [6, 7]}}, "c" : 43}}
{"json" : {"a" : {"b" : {"c" : 5, "d" : [8, 9]}}, "b" : {"c" : 11, "j" : [1, 2, 3]}, "d" : {"e" : {"f" : ["s3", "s4"], "g" : 44}, "h" : "2020-02-02 10:00:00"}}}
')
text title="Response"
ββjsonβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"a":{"b":{"c":1}},"c":"42","d":{"i":["1","2","3"]}} β
β {"a":{"b":{"c":2}},"d":{"i":["4","5","6"]}} β
β {"a":{"b":{"c":3}},"e":"Hello, World!"} β
β {"a":{"b":{"c":4}},"c":"43"} β
β {"a":{"b":{"c":5}},"d":{"h":"2020-02-02 10:00:00.000000000"}} β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
For text formats like
CSV
/
TSV
/ etc.,
JSON
is parsed from a string containing the JSON object:
sql title="Query"
SELECT json FROM format(TSV, 'json JSON(a.b.c UInt32, SKIP a.b.d, SKIP REGEXP \'b.*\')',
'{"a" : {"b" : {"c" : 1, "d" : [0, 1]}}, "b" : "2020-01-01", "c" : 42, "d" : {"e" : {"f" : ["s1", "s2"]}, "i" : [1, 2, 3]}}
{"a" : {"b" : {"c" : 2, "d" : [2, 3]}}, "b" : [1, 2, 3], "c" : null, "d" : {"e" : {"g" : 43}, "i" : [4, 5, 6]}}
{"a" : {"b" : {"c" : 3, "d" : [4, 5]}}, "b" : {"c" : 10}, "e" : "Hello, World!"}
{"a" : {"b" : {"c" : 4, "d" : [6, 7]}}, "c" : 43}
{"a" : {"b" : {"c" : 5, "d" : [8, 9]}}, "b" : {"c" : 11, "j" : [1, 2, 3]}, "d" : {"e" : {"f" : ["s3", "s4"], "g" : 44}, "h" : "2020-02-02 10:00:00"}}')
text title="Response"
ββjsonβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"a":{"b":{"c":1}},"c":"42","d":{"i":["1","2","3"]}} β
β {"a":{"b":{"c":2}},"d":{"i":["4","5","6"]}} β
β {"a":{"b":{"c":3}},"e":"Hello, World!"} β
β {"a":{"b":{"c":4}},"c":"43"} β
β {"a":{"b":{"c":5}},"d":{"h":"2020-02-02 10:00:00.000000000"}} β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Reaching the limit of dynamic paths inside JSON {#reaching-the-limit-of-dynamic-paths-inside-json}
The
JSON
data type can store only a limited number of paths as separate sub-columns internally.
By default, this limit is
1024
, but you can change it in the type declaration using parameter
max_dynamic_paths
.
When the limit is reached, all new paths inserted to a
JSON
column will be stored in a single shared data structure.
It's still possible to read such paths as sub-columns,
but it might be less efficient (
see section about shared data
).
This limit is needed to avoid having an enormous number of different sub-columns that can make the table unusable.
Let's see what happens when the limit is reached in a few different scenarios.
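The overflow behavior can be modeled in a few lines of Python. This is a hypothetical toy model for illustration, not ClickHouse's actual implementation: the first `max_dynamic_paths` distinct paths become real sub-columns, and anything beyond that lands in a per-row shared map.

```python
class JsonColumn:
    """Toy model: first max_dynamic_paths distinct paths become real
    sub-columns; later paths overflow into per-row shared data."""
    def __init__(self, max_dynamic_paths=3):
        self.max_dynamic_paths = max_dynamic_paths
        self.dynamic = {}   # path -> one value per row (a sub-column)
        self.shared = []    # per-row overflow map
        self.rows = 0

    def insert(self, obj):
        overflow = {}
        for path, value in obj.items():
            if path in self.dynamic:
                self.dynamic[path].append(value)
            elif len(self.dynamic) < self.max_dynamic_paths:
                # new sub-column: backfill NULLs for earlier rows
                self.dynamic[path] = [None] * self.rows + [value]
            else:
                overflow[path] = value  # limit reached -> shared data
        for col in self.dynamic.values():
            if len(col) <= self.rows:   # path absent in this row
                col.append(None)
        self.shared.append(overflow)
        self.rows += 1

col = JsonColumn(max_dynamic_paths=3)
col.insert({"a.b": 42, "c": [1, 2, 3]})
col.insert({"a.b": 43, "d": "2020-01-01"})
col.insert({"a.b": 43, "e": "Hello", "f.g": 42.42})

print(sorted(col.dynamic))  # ['a.b', 'c', 'd']
print(col.shared[2])        # {'e': 'Hello', 'f.g': 42.42}
```

The first three distinct paths (`a.b`, `c`, `d`) become sub-columns; `e` and `f.g` arrive after the limit is hit and go to the shared structure, matching what the introspection functions show below.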
Reaching the limit during data parsing {#reaching-the-limit-during-data-parsing}
During parsing of
JSON
objects from data, when the limit is reached for the current block of data,
all new paths will be stored in a shared data structure. We can observe this using the two introspection functions
JSONDynamicPaths
and
JSONSharedDataPaths
:
sql title="Query"
SELECT json, JSONDynamicPaths(json), JSONSharedDataPaths(json) FROM format(JSONEachRow, 'json JSON(max_dynamic_paths=3)', '
{"json" : {"a" : {"b" : 42}, "c" : [1, 2, 3]}}
{"json" : {"a" : {"b" : 43}, "d" : "2020-01-01"}}
{"json" : {"a" : {"b" : 44}, "c" : [4, 5, 6]}}
{"json" : {"a" : {"b" : 43}, "d" : "2020-01-02", "e" : "Hello", "f" : {"g" : 42.42}}}
{"json" : {"a" : {"b" : 43}, "c" : [7, 8, 9], "f" : {"g" : 43.43}, "h" : "World"}}
')
text title="Response"
ββjsonββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βJSONDynamicPaths(json)ββ¬βJSONSharedDataPaths(json)ββ
β {"a":{"b":"42"},"c":["1","2","3"]} β ['a.b','c','d'] β [] β
β {"a":{"b":"43"},"d":"2020-01-01"} β ['a.b','c','d'] β [] β
β {"a":{"b":"44"},"c":["4","5","6"]} β ['a.b','c','d'] β [] β
β {"a":{"b":"43"},"d":"2020-01-02","e":"Hello","f":{"g":42.42}} β ['a.b','c','d'] β ['e','f.g'] β
β {"a":{"b":"43"},"c":["7","8","9"],"f":{"g":43.43},"h":"World"} β ['a.b','c','d'] β ['f.g','h'] β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ
As we can see, after inserting paths
e
and
f.g
the limit was reached,
and they got inserted into a shared data structure.
During merges of data parts in MergeTree table engines {#during-merges-of-data-parts-in-mergetree-table-engines}
During a merge of several data parts in a
MergeTree
table the
JSON
column in the resulting data part can reach the limit of dynamic paths
and won't be able to store all paths from source parts as sub-columns.
In this case, ClickHouse chooses what paths will remain as sub-columns after merge and what paths will be stored in the shared data structure.
In most cases, ClickHouse tries to keep paths that contain
the largest number of non-null values and move the rarest paths to the shared data structure. This does, however, depend on the implementation.
Let's see an example of such a merge.
First, let's create a table with a
JSON
column, set the limit of dynamic paths to
3
and then insert values with
5
different paths:
sql title="Query"
CREATE TABLE test (id UInt64, json JSON(max_dynamic_paths=3)) ENGINE=MergeTree ORDER BY id;
SYSTEM STOP MERGES test;
INSERT INTO test SELECT number, formatRow('JSONEachRow', number as a) FROM numbers(5);
INSERT INTO test SELECT number, formatRow('JSONEachRow', number as b) FROM numbers(4);
INSERT INTO test SELECT number, formatRow('JSONEachRow', number as c) FROM numbers(3);
INSERT INTO test SELECT number, formatRow('JSONEachRow', number as d) FROM numbers(2);
INSERT INTO test SELECT number, formatRow('JSONEachRow', number as e) FROM numbers(1);
Each insert will create a separate data part with the
JSON
column containing a single path:
sql title="Query"
SELECT
count(),
groupArrayArrayDistinct(JSONDynamicPaths(json)) AS dynamic_paths,
groupArrayArrayDistinct(JSONSharedDataPaths(json)) AS shared_data_paths,
_part
FROM test
GROUP BY _part
ORDER BY _part ASC
text title="Response"
ββcount()ββ¬βdynamic_pathsββ¬βshared_data_pathsββ¬β_partββββββ
β 5 β ['a'] β [] β all_1_1_0 β
β 4 β ['b'] β [] β all_2_2_0 β
β 3 β ['c'] β [] β all_3_3_0 β
β 2 β ['d'] β [] β all_4_4_0 β
β 1 β ['e'] β [] β all_5_5_0 β
βββββββββββ΄ββββββββββββββββ΄ββββββββββββββββββββ΄ββββββββββββ
Now, let's merge all parts into one and see what will happen:
sql title="Query"
SELECT
count(),
groupArrayArrayDistinct(JSONDynamicPaths(json)) AS dynamic_paths,
groupArrayArrayDistinct(JSONSharedDataPaths(json)) AS shared_data_paths,
_part
FROM test
GROUP BY _part
ORDER BY _part ASC
text title="Response"
ββcount()ββ¬βdynamic_pathsββ¬βshared_data_pathsββ¬β_partββββββ
β 15 β ['a','b','c'] β ['d','e'] β all_1_5_2 β
βββββββββββ΄ββββββββββββββββ΄ββββββββββββββββββββ΄ββββββββββββ
As we can see, ClickHouse kept the most frequent paths
a
,
b
and
c
and moved paths
d
and
e
to a shared data structure.
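The selection above can be sketched as a ranking by non-null counts. This is an illustrative model only (tie-breaking and the exact statistics used are implementation details of ClickHouse):

```python
from collections import Counter

def select_paths(parts, max_dynamic_paths):
    """parts: per-part {path: non_null_count} statistics."""
    totals = Counter()
    for stats in parts:
        totals.update(stats)
    # Most non-null values first; ties broken alphabetically (assumption)
    ranked = sorted(totals, key=lambda p: (-totals[p], p))
    keep, shared = ranked[:max_dynamic_paths], ranked[max_dynamic_paths:]
    return sorted(keep), sorted(shared)

# Five single-path parts, as in the example above
parts = [{"a": 5}, {"b": 4}, {"c": 3}, {"d": 2}, {"e": 1}]
print(select_paths(parts, 3))  # (['a', 'b', 'c'], ['d', 'e'])
```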
Shared data structure {#shared-data-structure}
As was described in the previous section, when the
max_dynamic_paths
limit is reached all new paths are stored in a single shared data structure.
In this section we will look into the details of the shared data structure and how we read paths sub-columns from it.
See section
"introspection functions"
for details of functions used for inspecting the contents of a JSON column.
Shared data structure in memory {#shared-data-structure-in-memory}
In memory, shared data structure is just a sub-column with type
Map(String, String)
that stores mapping from a flattened JSON path to a binary encoded value.
To extract a path subcolumn from it, we just iterate over all rows in this
Map
column and try to find the requested path and its values.
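That lookup amounts to one linear scan over the per-row maps. A minimal Python sketch (values shown as plain strings here; ClickHouse stores them binary-encoded):

```python
def read_path_subcolumn(shared_data, path):
    # One pass over every row's map; rows without the path yield NULL.
    return [row.get(path) for row in shared_data]

shared_data = [
    {},
    {"e": "Hello", "f.g": "42.42"},
    {"f.g": "43.43", "h": "World"},
]
print(read_path_subcolumn(shared_data, "f.g"))  # [None, '42.42', '43.43']
```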
Shared data structure in MergeTree parts {#shared-data-structure-in-merge-tree-parts}
In
MergeTree
tables, data is stored in data parts, which keep everything on disk (local or remote). Data on disk can be stored differently than data in memory.
Currently, there are 3 different shared data structure serializations in MergeTree data parts:
map
,
map_with_buckets
and
advanced
.
The serialization version is controlled by the MergeTree
settings
object_shared_data_serialization_version
and
object_shared_data_serialization_version_for_zero_level_parts
(a zero-level part is created when data is inserted into the table; parts produced by merges have a higher level).
Note: changing the shared data structure serialization is supported only
for the
v3
object serialization version.
Map {#shared-data-map}
In the
map
serialization version, shared data is serialized as a single column with type
Map(String, String)
, the same as it is stored in
memory. To read a path sub-column from this type of serialization, ClickHouse reads the whole
Map
column and
extracts the requested path in memory.
This serialization is efficient for writing data and reading the whole
JSON
column, but it's not efficient for reading paths sub-columns.
Map with buckets {#shared-data-map-with-buckets}
In the
map_with_buckets
serialization version, shared data is serialized as
N
columns ("buckets") with type
Map(String, String)
.
Each such bucket contains only a subset of paths. To read a path sub-column from this type of serialization, ClickHouse
reads the whole
Map
column from a single bucket and extracts the requested path in memory.
This serialization is less efficient for writing data and reading the whole
JSON
column, but it's more efficient for reading paths sub-columns
because it reads data only from required buckets.
The number of buckets
N
is controlled by MergeTree settings
object_shared_data_buckets_for_compact_part
(8 by default)
and
object_shared_data_buckets_for_wide_part
(32 by default).
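The benefit comes from the fact that each path deterministically maps to one bucket, so a sub-column read only touches that bucket. A sketch using an assumed hash-based placement (the real bucket assignment is an implementation detail of ClickHouse):

```python
import zlib

N_BUCKETS = 8  # object_shared_data_buckets_for_compact_part default

def bucket_for(path: str, n_buckets: int = N_BUCKETS) -> int:
    # Assumed: stable hash of the path modulo the bucket count.
    return zlib.crc32(path.encode()) % n_buckets

# Reading path 'f.g' only needs the single bucket it maps to:
print(bucket_for('f.g'))
```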
Advanced {#shared-data-advanced}
In
advanced
serialization version shared data is serialized in a special data structure that maximizes the performance
of paths sub-columns reading by storing some additional information that allows to read only the data of requested paths.
This serialization also supports buckets, so each bucket contains only sub-set of paths.
This serialization is quite inefficient for writing data (so it's not recommended for zero-level parts); reading the whole
JSON
column is slightly less efficient compared to the
map
serialization, but it's very efficient for reading paths sub-columns.
Note: because additional information is stored inside the data structure, the disk storage size is higher with this serialization compared to the
map
and
map_with_buckets
serializations.
For a more detailed overview of the new shared data serializations and implementation details, read the
blog post
.
Introspection functions {#introspection-functions}
There are several functions that can help to inspect the content of the JSON column:
-
JSONAllPaths
-
JSONAllPathsWithTypes
-
JSONDynamicPaths
-
JSONDynamicPathsWithTypes
-
JSONSharedDataPaths
-
JSONSharedDataPathsWithTypes
-
distinctDynamicTypes
-
distinctJSONPaths and distinctJSONPathsAndTypes
Examples
Let's investigate the content of the
GH Archive
dataset for the date
2020-01-01
:
sql title="Query"
SELECT arrayJoin(distinctJSONPaths(json))
FROM s3('s3://clickhouse-public-datasets/gharchive/original/2020-01-01-*.json.gz', JSONAsObject)
text title="Response"
ββarrayJoin(distinctJSONPaths(json))ββββββββββββββββββββββββββ
β actor.avatar_url β
β actor.display_login β
β actor.gravatar_id β
β actor.id β
β actor.login β
β actor.url β
β created_at β
β id β
β org.avatar_url β
β org.gravatar_id β
β org.id β
β org.login β
β org.url β
β payload.action β
β payload.before β
β payload.comment._links.html.href β
β payload.comment._links.pull_request.href β
β payload.comment._links.self.href β
β payload.comment.author_association β
β payload.comment.body β
β payload.comment.commit_id β
β payload.comment.created_at β
β payload.comment.diff_hunk β
β payload.comment.html_url β
β payload.comment.id β
β payload.comment.in_reply_to_id β
β payload.comment.issue_url β
β payload.comment.line β
β payload.comment.node_id β
β payload.comment.original_commit_id β
β payload.comment.original_position β
β payload.comment.path β
β payload.comment.position β
β payload.comment.pull_request_review_id β
...
β payload.release.node_id β
β payload.release.prerelease β
β payload.release.published_at β
β payload.release.tag_name β
β payload.release.tarball_url β
β payload.release.target_commitish β
β payload.release.upload_url β
β payload.release.url β
β payload.release.zipball_url β
β payload.size β
β public β
β repo.id                                                   β
β repo.name β
β repo.url β
β type β
ββarrayJoin(distinctJSONPaths(json))ββββββββββββββββββββββββββ
sql
SELECT arrayJoin(distinctJSONPathsAndTypes(json))
FROM s3('s3://clickhouse-public-datasets/gharchive/original/2020-01-01-*.json.gz', JSONAsObject)
SETTINGS date_time_input_format = 'best_effort'
text
ββarrayJoin(distinctJSONPathsAndTypes(json))βββββββββββββββββββ
β ('actor.avatar_url',['String']) β
β ('actor.display_login',['String']) β
β ('actor.gravatar_id',['String']) β
β ('actor.id',['Int64']) β
β ('actor.login',['String']) β
β ('actor.url',['String']) β
β ('created_at',['DateTime']) β
β ('id',['String']) β
β ('org.avatar_url',['String']) β
β ('org.gravatar_id',['String']) β
β ('org.id',['Int64']) β
β ('org.login',['String']) β
β ('org.url',['String']) β
β ('payload.action',['String']) β
β ('payload.before',['String']) β
β ('payload.comment._links.html.href',['String']) β
β ('payload.comment._links.pull_request.href',['String']) β
β ('payload.comment._links.self.href',['String']) β
β ('payload.comment.author_association',['String']) β
β ('payload.comment.body',['String']) β
β ('payload.comment.commit_id',['String']) β
β ('payload.comment.created_at',['DateTime']) β
β ('payload.comment.diff_hunk',['String']) β
β ('payload.comment.html_url',['String']) β
β ('payload.comment.id',['Int64']) β
β ('payload.comment.in_reply_to_id',['Int64']) β
β ('payload.comment.issue_url',['String']) β
β ('payload.comment.line',['Int64']) β
β ('payload.comment.node_id',['String']) β
β ('payload.comment.original_commit_id',['String']) β
β ('payload.comment.original_position',['Int64']) β
β ('payload.comment.path',['String']) β
β ('payload.comment.position',['Int64']) β
β ('payload.comment.pull_request_review_id',['Int64']) β
...
β ('payload.release.node_id',['String']) β
β ('payload.release.prerelease',['Bool']) β
β ('payload.release.published_at',['DateTime']) β
β ('payload.release.tag_name',['String']) β
β ('payload.release.tarball_url',['String']) β
β ('payload.release.target_commitish',['String']) β
β ('payload.release.upload_url',['String']) β
β ('payload.release.url',['String']) β
β ('payload.release.zipball_url',['String']) β
β ('payload.size',['Int64']) β
β ('public',['Bool'])                                         β
β ('repo.id',['Int64']) β
β ('repo.name',['String']) β
β ('repo.url',['String']) β
β ('type',['String']) β
ββarrayJoin(distinctJSONPathsAndTypes(json))βββββββββββββββββββ
ALTER MODIFY COLUMN to JSON type {#alter-modify-column-to-json-type}
It's possible to alter an existing table and change the type of the column to the new
JSON
type. Right now only
ALTER
from a
String
type is supported.
Example
sql title="Query"
CREATE TABLE test (json String) ENGINE=MergeTree ORDER BY tuple();
INSERT INTO test VALUES ('{"a" : 42}'), ('{"a" : 43, "b" : "Hello"}'), ('{"a" : 44, "b" : [1, 2, 3]}'), ('{"c" : "2020-01-01"}');
ALTER TABLE test MODIFY COLUMN json JSON;
SELECT json, json.a, json.b, json.c FROM test;
text title="Response"
ββjsonββββββββββββββββββββββββββ¬βjson.aββ¬βjson.bβββ¬βjson.cββββββ
β {"a":"42"} β 42 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β
β {"a":"43","b":"Hello"} β 43 β Hello β α΄Ία΅α΄Έα΄Έ β
β {"a":"44","b":["1","2","3"]} β 44 β [1,2,3] β α΄Ία΅α΄Έα΄Έ β
β {"c":"2020-01-01"} β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β 2020-01-01 β
ββββββββββββββββββββββββββββββββ΄βββββββββ΄ββββββββββ΄βββββββββββββ
Comparison between values of the JSON type {#comparison-between-values-of-the-json-type}
JSON objects are compared similarly to Maps.
For example:
```sql title="Query"
CREATE TABLE test (json1 JSON, json2 JSON) ENGINE=Memory;
INSERT INTO test FORMAT JSONEachRow
{"json1" : {}, "json2" : {}}
{"json1" : {"a" : 42}, "json2" : {}}
{"json1" : {"a" : 42}, "json2" : {"a" : 41}}
{"json1" : {"a" : 42}, "json2" : {"a" : 42}}
{"json1" : {"a" : 42}, "json2" : {"a" : [1, 2, 3]}}
{"json1" : {"a" : 42}, "json2" : {"a" : "Hello"}}
{"json1" : {"a" : 42}, "json2" : {"b" : 42}}
{"json1" : {"a" : 42}, "json2" : {"a" : 42, "b" : 42}}
{"json1" : {"a" : 42}, "json2" : {"a" : 41, "b" : 42}}
SELECT json1, json2, json1 < json2, json1 = json2, json1 > json2 FROM test;
```
text title="Response"
ββjson1βββββββ¬βjson2ββββββββββββββββ¬βless(json1, json2)ββ¬βequals(json1, json2)ββ¬βgreater(json1, json2)ββ
β {} β {} β 0 β 1 β 0 β
β {"a":"42"} β {} β 0 β 0 β 1 β
β {"a":"42"} β {"a":"41"} β 0 β 0 β 1 β
β {"a":"42"} β {"a":"42"} β 0 β 1 β 0 β
β {"a":"42"} β {"a":["1","2","3"]} β 0 β 0 β 1 β
β {"a":"42"} β {"a":"Hello"} β 1 β 0 β 0 β
β {"a":"42"} β {"b":"42"} β 1 β 0 β 0 β
β {"a":"42"} β {"a":"42","b":"42"} β 1 β 0 β 0 β
β {"a":"42"} β {"a":"41","b":"42"} β 0 β 0 β 1 β
ββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββββ΄βββββββββββββββββββββββ΄ββββββββββββββββββββββββ
Note:
when 2 paths contain values of different data types, they are compared according to
comparison rule
of
Variant
data type.
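The map-style comparison can be sketched by turning each object into a sortable key of `(path, type rank, value)` tuples. The cross-type rank below (arrays < numbers < strings) is an assumption chosen to reproduce the results in the table above; the actual cross-type ordering is defined by the Variant comparison rule.

```python
# Assumed cross-type rank, for illustration only.
TYPE_RANK = {list: 0, int: 1, str: 2}

def cmp_key(obj):
    """JSON object -> sortable key: sorted (path, type rank, value) tuples."""
    return [(k, TYPE_RANK[type(v)], tuple(v) if isinstance(v, list) else v)
            for k, v in sorted(obj.items())]

print(cmp_key({"a": 42}) > cmp_key({"a": [1, 2, 3]}))    # True
print(cmp_key({"a": 42}) < cmp_key({"a": "Hello"}))      # True
print(cmp_key({"a": 42}) < cmp_key({"a": 42, "b": 42}))  # True
```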
Tips for better usage of the JSON type {#tips-for-better-usage-of-the-json-type}
Before creating
JSON
column and loading data into it, consider the following tips:
Investigate your data and specify as many path hints with types as you can. It will make storage and reading much more efficient.
Think about what paths you will need and what paths you will never need. Specify paths that you won't need in the
SKIP
section, and in the
SKIP REGEXP
section if needed. This will improve storage efficiency.
Don't set the
max_dynamic_paths
parameter to very high values, as it can make storage and reading less efficient.
While highly dependent on system parameters such as memory, CPU, etc., a general rule of thumb would be to not set
max_dynamic_paths
greater than 10 000 for the local filesystem storage and 1024 for the remote filesystem storage.
Further Reading {#further-reading}
How we built a new powerful JSON data type for ClickHouse
The billion docs JSON Challenge: ClickHouse vs. MongoDB, Elasticsearch, and more
description: 'Documentation for the Time64 data type in ClickHouse, which stores
the time range with sub-second precision'
slug: /sql-reference/data-types/time64
sidebar_position: 17
sidebar_label: 'Time64'
title: 'Time64'
doc_type: 'reference'
Time64
Data type
Time64
represents a time-of-day with fractional seconds.
It has no calendar date components (day, month, year).
The
precision
parameter defines the number of fractional digits and therefore the tick size.
Tick size (precision): 10^(-precision) seconds. Valid range: 0..9. Common choices are 3 (milliseconds), 6 (microseconds), and 9 (nanoseconds).
Syntax:
sql
Time64(precision)
Internally,
Time64
stores a signed 64-bit decimal (Decimal64) number of fractional seconds.
The tick resolution is determined by the
precision
parameter.
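The scaling described above can be illustrated with a small helper (the function name is hypothetical; only the Decimal64-style scaling is from the text):

```python
def to_time64_ticks(h, m, s, frac, precision):
    # One tick is 10**(-precision) seconds; the value is stored as a
    # signed 64-bit count of ticks (Decimal64 semantics).
    return (h * 3600 + m * 60 + s) * 10**precision + frac

# 14:30:25.123 at precision 3 (milliseconds)
print(to_time64_ticks(14, 30, 25, 123, 3))  # 52225123
```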
Time zones are not supported: specifying a time zone with
Time64
will throw an error.
Unlike
DateTime64
,
Time64
does not store a date component.
See also
Time
.
Text representation range: [-999:59:59.000, 999:59:59.999] for
precision = 3
. In general, the minimum is
-999:59:59
and the maximum is
999:59:59
with up to
precision
fractional digits (for
precision = 9
, the minimum is
-999:59:59.999999999
).
Implementation details {#implementation-details}
Representation
.
A signed
Decimal64
value counting fractional seconds with
precision
fractional digits.
Normalization
.
When parsing strings to
Time64
, the time components are normalized and not validated.
For example,
25:70:70
is interpreted as
26:11:10
.
Negative values
.
Leading minus signs are supported and preserved.
Negative values typically arise from arithmetic operations on
Time64
values.
For
Time64
, negative inputs are preserved for both text (e.g.,
'-01:02:03.123'
) and numeric inputs (e.g.,
-3723.123
).
Saturation
.
The time-of-day component is capped to the range [-999:59:59.xxx, 999:59:59.xxx] when converting to components or serialising to text.
The stored numeric value may exceed this range; however, any component extraction (hours, minutes, seconds) and textual representation use the saturated value.
Time zones
.
Time64
does not support time zones.
Specifying a time zone when creating a
Time64
type or value throws an error.
Likewise, attempts to apply or change the time zone on
Time64
columns are not supported and result in an error.
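The normalization rule above (components carried over rather than rejected) can be sketched in Python:

```python
def normalize_time(h, m, s):
    # Convert to total seconds, then carry overflow back into components.
    total = h * 3600 + m * 60 + s
    return total // 3600, (total % 3600) // 60, total % 60

print(normalize_time(25, 70, 70))  # (26, 11, 10)
print(normalize_time(14, 30, 25))  # (14, 30, 25)
```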
Examples {#examples}
Creating a table with a
Time64
-type column and inserting data into it:
sql
CREATE TABLE tab64
(
`event_id` UInt8,
`time` Time64(3)
)
ENGINE = TinyLog;
``` sql
-- Parse Time64
-- - from string,
-- - from a number of seconds since 00:00:00 (fractional part according to precision).
INSERT INTO tab64 VALUES (1, '14:30:25'), (2, 52225.123), (3, '14:30:25');
SELECT * FROM tab64 ORDER BY event_id;
```
text
ββevent_idββ¬ββββββββtimeββ
1. β 1 β 14:30:25.000 β
2. β 2 β 14:30:25.123 β
3. β 3 β 14:30:25.000 β
ββββββββββββ΄βββββββββββββββ
Filtering on
Time64
values
sql
SELECT * FROM tab64 WHERE time = toTime64('14:30:25', 3);
text
ββevent_idββ¬ββββββββtimeββ
1. β 1 β 14:30:25.000 β
2. β 3 β 14:30:25.000 β
ββββββββββββ΄βββββββββββββββ
sql
SELECT * FROM tab64 WHERE time = toTime64(52225.123, 3);
text
ββevent_idββ¬ββββββββtimeββ
1. β 2 β 14:30:25.123 β
ββββββββββββ΄βββββββββββββββ
Note:
toTime64
parses numeric literals as seconds with a fractional part according to the specified precision, so provide the intended fractional digits explicitly.
Inspecting the resulting type:
sql
SELECT CAST('14:30:25.250' AS Time64(3)) AS column, toTypeName(column) AS type;
text
βββββββββcolumnββ¬βtypeβββββββ
1. β 14:30:25.250 β Time64(3) β
βββββββββββββββββ΄ββββββββββββ
See Also
Type conversion functions
Functions for working with dates and times
The
date_time_input_format
setting
The
date_time_output_format
setting
The
timezone
server configuration parameter
The
session_timezone
setting
Operators for working with dates and times
Date
data type
Time
data type
DateTime
data type
description: 'Documentation for floating-point data types in ClickHouse: Float32,
Float64, and BFloat16'
sidebar_label: 'Float32 | Float64 | BFloat16'
sidebar_position: 4
slug: /sql-reference/data-types/float
title: 'Float32 | Float64 | BFloat16 Types'
doc_type: 'reference'
:::note
If you need accurate calculations, in particular if you work with financial or business data requiring a high precision, you should consider using
Decimal
instead.
Floating Point Numbers
might lead to inaccurate results as illustrated below:
```sql
CREATE TABLE IF NOT EXISTS float_vs_decimal
(
my_float Float64,
my_decimal Decimal64(3)
)
ENGINE=MergeTree
ORDER BY tuple();
-- Generate 1 000 000 random numbers with 2 decimal places and store them as a float and as a decimal
INSERT INTO float_vs_decimal SELECT round(randCanonical(), 3) AS res, res FROM system.numbers LIMIT 1000000;
sql
SELECT sum(my_float), sum(my_decimal) FROM float_vs_decimal;
βββββββsum(my_float)ββ¬βsum(my_decimal)ββ
β 499693.60500000004 β 499693.605 β
ββββββββββββββββββββββ΄ββββββββββββββββββ
SELECT sumKahan(my_float), sumKahan(my_decimal) FROM float_vs_decimal;
ββsumKahan(my_float)ββ¬βsumKahan(my_decimal)ββ
β 499693.605 β 499693.605 β
ββββββββββββββββββββββ΄βββββββββββββββββββββββ
```
:::
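The `sumKahan` result above comes from compensated (Kahan) summation, where a running correction term recovers the low-order bits that plain summation loses to rounding. A minimal sketch of the algorithm:

```python
def kahan_sum(values):
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

vals = [0.1] * 10
print(sum(vals))        # plain summation: 0.9999999999999999
print(kahan_sum(vals))  # compensated result, within one ulp of 1.0
```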
The equivalent types in ClickHouse and in C are given below:
Float32
β
float
.
Float64
β
double
.
Float types in ClickHouse have the following aliases:
Float32
β
FLOAT
,
REAL
,
SINGLE
.
Float64
β
DOUBLE
,
DOUBLE PRECISION
.
When creating tables, numeric parameters for floating point numbers can be set (e.g.
FLOAT(12)
,
FLOAT(15, 22)
,
DOUBLE(12)
,
DOUBLE(4, 18)
), but ClickHouse ignores them.
Using floating-point numbers {#using-floating-point-numbers}
Computations with floating-point numbers might produce a rounding error.
```sql
SELECT 1 - 0.9
ββββββββminus(1, 0.9)ββ
β 0.09999999999999998 β
βββββββββββββββββββββββ
```
The result of the calculation depends on the calculation method (the processor type and architecture of the computer system).
Floating-point calculations might result in numbers such as infinity (
Inf
) and "not-a-number" (
NaN
). This should be taken into account when processing the results of calculations.
When parsing floating-point numbers from text, the result might not be the nearest machine-representable number.
NaN and Inf {#nan-and-inf}
In contrast to standard SQL, ClickHouse supports the following categories of floating-point numbers:
Inf
β Infinity.
```sql
SELECT 0.5 / 0
ββdivide(0.5, 0)ββ
β inf β
ββββββββββββββββββ
```
-Inf
β Negative infinity.
```sql
SELECT -0.5 / 0
ββdivide(-0.5, 0)ββ
β -inf β
βββββββββββββββββββ
```
NaN
β Not a number.
```sql
SELECT 0 / 0
ββdivide(0, 0)ββ
β nan β
ββββββββββββββββ
```
See the rules for
NaN
sorting in the section
ORDER BY clause
.
BFloat16 {#bfloat16}
BFloat16
is a 16-bit floating-point data type with an 8-bit exponent, a sign bit, and a 7-bit mantissa.
It is useful for machine learning and AI applications.
ClickHouse supports conversions between
Float32
and
BFloat16
which
can be done using the
toFloat32()
or
toBFloat16()
functions.
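The bit-level relationship is simple: BFloat16 is the top half of a Float32. A Python sketch of the conversion (this sketch truncates the mantissa; the rounding mode actually used is an implementation detail):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    # BFloat16 keeps Float32's sign and 8-bit exponent and truncates
    # the 23-bit mantissa to 7 bits: just take the high 16 bits.
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    return struct.unpack('<f', struct.pack('<I', b << 16))[0]

b = float32_to_bfloat16_bits(3.14159)
print(hex(b))                       # 0x4049
print(bfloat16_bits_to_float32(b))  # 3.140625
```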
:::note
Most other operations are not supported.
:::
description: 'Documentation for Aggregate Function Combinators'
sidebar_label: 'Combinators'
sidebar_position: 37
slug: /sql-reference/aggregate-functions/combinators
title: 'Aggregate Function Combinators'
doc_type: 'reference'
Aggregate function combinators
The name of an aggregate function can have a suffix appended to it. This changes the way the aggregate function works.
-If {#-if}
The suffix -If can be appended to the name of any aggregate function. In this case, the aggregate function accepts an extra argument β a condition (UInt8 type). The aggregate function processes only the rows that trigger the condition. If the condition was not triggered even once, it returns a default value (usually zeros or empty strings).
Examples: sumIf(column, cond), countIf(cond), avgIf(x, cond), quantilesTimingIf(level1, level2)(x, cond), argMinIf(arg, val, cond), and so on.
With conditional aggregate functions, you can calculate aggregates for several conditions at once, without using subqueries and JOINs. For example, conditional aggregate functions can be used to implement the segment comparison functionality.
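For instance, a single scan can compare two segments side by side. The `visits` table and its `region` and `duration` columns below are hypothetical:

```sql
SELECT
    countIf(region = 'EU') AS eu_visits,
    countIf(region = 'US') AS us_visits,
    avgIf(duration, region = 'EU') AS eu_avg_duration,
    avgIf(duration, region = 'US') AS us_avg_duration
FROM visits;
```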
-Array {#-array}
The -Array suffix can be appended to any aggregate function. In this case, the aggregate function takes arguments of the 'Array(T)' type (arrays) instead of 'T' type arguments. If the aggregate function accepts multiple arguments, this must be arrays of equal lengths. When processing arrays, the aggregate function works like the original aggregate function across all array elements.
Example 1: sumArray(arr) β totals all the elements of all 'arr' arrays. In this example, it could have been written more simply: sum(arraySum(arr)).
Example 2: uniqArray(arr) β counts the number of unique elements in all 'arr' arrays. This could be done an easier way: uniq(arrayJoin(arr)), but it's not always possible to add 'arrayJoin' to a query.
-If and -Array can be combined. However, 'Array' must come first, then 'If'. Examples: uniqArrayIf(arr, cond), quantilesTimingArrayIf(level1, level2)(arr, cond). Due to this order, the 'cond' argument won't be an array.
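A minimal sketch of the combined suffixes; note that the condition applies per row, not per array element:

```sql
-- counts unique array elements, but only over rows where the condition holds
SELECT uniqArrayIf([number, number + 1], number % 2 = 1)
FROM numbers(4);
```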
-Map {#-map}
The -Map suffix can be appended to any aggregate function. This will create an aggregate function which gets Map type as an argument, and aggregates values of each key of the map separately using the specified aggregate function. The result is also of a Map type.
Example
```sql
CREATE TABLE map_map(
date Date,
timeslot DateTime,
status Map(String, UInt64)
) ENGINE = Log;
INSERT INTO map_map VALUES
('2000-01-01', '2000-01-01 00:00:00', (['a', 'b', 'c'], [10, 10, 10])),
('2000-01-01', '2000-01-01 00:00:00', (['c', 'd', 'e'], [10, 10, 10])),
('2000-01-01', '2000-01-01 00:01:00', (['d', 'e', 'f'], [10, 10, 10])),
('2000-01-01', '2000-01-01 00:01:00', (['f', 'g', 'g'], [10, 10, 10]));
SELECT
timeslot,
sumMap(status),
avgMap(status),
minMap(status)
FROM map_map
GROUP BY timeslot;
```

```text
βββββββββββββtimeslotββ¬βsumMap(status)ββββββββββββββββββββββββ¬βavgMap(status)ββββββββββββββββββββββββ¬βminMap(status)ββββββββββββββββββββββββ
β 2000-01-01 00:00:00 β {'a':10,'b':10,'c':20,'d':10,'e':10} β {'a':10,'b':10,'c':10,'d':10,'e':10} β {'a':10,'b':10,'c':10,'d':10,'e':10} β
β 2000-01-01 00:01:00 β {'d':10,'e':10,'f':20,'g':20} β {'d':10,'e':10,'f':10,'g':10} β {'d':10,'e':10,'f':10,'g':10} β
βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββ
```
-SimpleState {#-simplestate}
If you apply this combinator, the aggregate function returns the same value but with a different type. This is a SimpleAggregateFunction(...) that can be stored in a table to work with AggregatingMergeTree tables.
Syntax
sql
<aggFunction>SimpleState(x)
Arguments
x β Aggregate function parameters.
Returned values
The value of an aggregate function with the SimpleAggregateFunction(...) type.
Example
Query:
sql
WITH anySimpleState(number) AS c SELECT toTypeName(c), c FROM numbers(1);
Result:
text
ββtoTypeName(c)βββββββββββββββββββββββββ¬βcββ
β SimpleAggregateFunction(any, UInt64) β 0 β
ββββββββββββββββββββββββββββββββββββββββ΄ββββ
-State {#-state}
If you apply this combinator, the aggregate function does not return the resulting value (such as the number of unique values for the uniq function), but an intermediate state of the aggregation (for uniq, this is the hash table for calculating the number of unique values). This is an AggregateFunction(...) that can be used for further processing or stored in a table to finish aggregating later.
:::note
Note that -MapState is not invariant for the same data, because the order of data in the intermediate state can change. This does not, however, affect the ingestion of this data.
:::
To work with these states, use:
- AggregatingMergeTree table engine.
- finalizeAggregation function.
- runningAccumulate function.
- -Merge combinator.
- -MergeState combinator.
-Merge {#-merge}
If you apply this combinator, the aggregate function takes the intermediate aggregation state as an argument, combines the states to finish aggregation, and returns the resulting value.
-MergeState {#-mergestate}
Merges the intermediate aggregation states in the same way as the -Merge combinator. However, it does not return the resulting value, but an intermediate aggregation state, similar to the -State combinator.
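A minimal round trip through these combinators: -State produces an intermediate state, and -Merge finishes the aggregation:

```sql
SELECT uniqMerge(state) AS result
FROM
(
    SELECT uniqState(number) AS state
    FROM numbers(1000)
);
```

The inner query could equally write its states into an AggregateFunction(uniq, UInt64) column for later merging.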
-ForEach {#-foreach}
Converts an aggregate function for tables into an aggregate function for arrays that aggregates the corresponding array items and returns an array of results. For example, sumForEach for the arrays [1, 2], [3, 4, 5] and [6, 7] returns the result [10, 13, 5] after adding together the corresponding array items.
-Distinct {#-distinct}
Every unique combination of arguments will be aggregated only once. Repeating values are ignored.
Examples: sum(DISTINCT x) (or sumDistinct(x)), groupArray(DISTINCT x) (or groupArrayDistinct(x)), corrStable(DISTINCT x, y) (or corrStableDistinct(x, y)), and so on.
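A quick sketch of the behavior:

```sql
-- number % 3 takes only the distinct values 0, 1 and 2,
-- so the distinct sum is 3 regardless of how many rows there are
SELECT sum(DISTINCT number % 3) FROM numbers(10);
```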
-OrDefault {#-ordefault}
Changes the behavior of an aggregate function.
If an aggregate function does not have input values, with this combinator it returns the default value for its return data type. Applies to the aggregate functions that can take empty input data.
-OrDefault can be used with other combinators.
Syntax
sql
<aggFunction>OrDefault(x)
Arguments
x β Aggregate function parameters.
Returned values
Returns the default value of an aggregate function's return type if there is nothing to aggregate.
Type depends on the aggregate function used.
Example
Query:
sql
SELECT avg(number), avgOrDefault(number) FROM numbers(0)
Result:
text
ββavg(number)ββ¬βavgOrDefault(number)ββ
β nan β 0 β
βββββββββββββββ΄βββββββββββββββββββββββ
-OrDefault can also be used with other combinators. It is useful when the aggregate function does not accept empty input.
Query:
sql
SELECT avgOrDefaultIf(x, x > 10)
FROM
(
SELECT toDecimal32(1.23, 2) AS x
)
Result:
text
ββavgOrDefaultIf(x, greater(x, 10))ββ
β 0.00 β
βββββββββββββββββββββββββββββββββββββ
-OrNull {#-ornull}
Changes the behavior of an aggregate function.
This combinator converts the result of an aggregate function to the Nullable data type. If the aggregate function does not have values to calculate, it returns NULL.
-OrNull can be used with other combinators.
Syntax
sql
<aggFunction>OrNull(x)
Arguments
x β Aggregate function parameters.
Returned values
The result of the aggregate function, converted to the Nullable data type.
NULL, if there is nothing to aggregate.
Type: Nullable(aggregate function return type).
Example
Add -OrNull to the end of the aggregate function name.
Query:
sql
SELECT sumOrNull(number), toTypeName(sumOrNull(number)) FROM numbers(10) WHERE number > 10
Result:
text
ββsumOrNull(number)ββ¬βtoTypeName(sumOrNull(number))ββ
β α΄Ία΅α΄Έα΄Έ β Nullable(UInt64) β
βββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββ
-OrNull can also be used with other combinators. It is useful when the aggregate function does not accept empty input.
Query:
sql
SELECT avgOrNullIf(x, x > 10)
FROM
(
SELECT toDecimal32(1.23, 2) AS x
)
Result:
text
ββavgOrNullIf(x, greater(x, 10))ββ
β α΄Ία΅α΄Έα΄Έ β
ββββββββββββββββββββββββββββββββββ
-Resample {#-resample}
Lets you divide data into groups, and then separately aggregates the data in those groups. Groups are created by splitting the values from one column into intervals.
sql
<aggFunction>Resample(start, end, step)(<aggFunction_params>, resampling_key)
Arguments
start β Starting value of the whole required interval for resampling_key values.
stop β Ending value of the whole required interval for resampling_key values. The whole interval does not include the stop value: [start, stop).
step β Step for separating the whole interval into subintervals. The aggFunction is executed over each of those subintervals independently.
resampling_key β Column whose values are used for separating data into intervals.
aggFunction_params β aggFunction parameters.
Returned values
Array of aggFunction results for each subinterval.
Example
Consider the people table with the following data:
text
ββnameββββ¬βageββ¬βwageββ
β John β 16 β 10 β
β Alice β 30 β 15 β
β Mary β 35 β 8 β
β Evelyn β 48 β 11.5 β
β David β 62 β 9.9 β
β Brian β 60 β 16 β
ββββββββββ΄ββββββ΄βββββββ
Let's get the names of the people whose age lies in the intervals of [30,60) and [60,75). Since we use integer representation for age, we get ages in the [30, 59] and [60, 74] intervals.
To aggregate names in an array, we use the groupArray aggregate function. It takes one argument. In our case, it's the name column. The groupArrayResample function should use the age column to aggregate names by age. To define the required intervals, we pass the 30, 75, 30 arguments into the groupArrayResample function.
sql
SELECT groupArrayResample(30, 75, 30)(name, age) FROM people
text
ββgroupArrayResample(30, 75, 30)(name, age)ββββββ
β [['Alice','Mary','Evelyn'],['David','Brian']] β
βββββββββββββββββββββββββββββββββββββββββββββββββ
Consider the results. John is out of the sample because he's too young. Other people are distributed according to the specified age intervals.
Now let's count the total number of people and their average wage in the specified age intervals.
sql
SELECT
countResample(30, 75, 30)(name, age) AS amount,
avgResample(30, 75, 30)(wage, age) AS avg_wage
FROM people
text
ββamountββ¬βavg_wageβββββββββββββββββββ
β [3,2] β [11.5,12.949999809265137] β
ββββββββββ΄ββββββββββββββββββββββββββββ
-ArgMin {#-argmin}
The suffix -ArgMin can be appended to the name of any aggregate function. In this case, the aggregate function accepts an additional argument, which should be any comparable expression. The aggregate function processes only the rows that have the minimum value for the specified extra expression.
Examples: sumArgMin(column, expr), countArgMin(expr), avgArgMin(x, expr), and so on.
-ArgMax {#-argmax}
Similar to suffix -ArgMin but processes only the rows that have the maximum value for the specified extra expression.
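For example, to sum a metric only over the rows that carry the latest timestamp (the `sales` table with `amount` and `ts` columns is hypothetical):

```sql
SELECT sumArgMax(amount, ts) AS latest_total
FROM sales;
```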
Related Content {#related-content}
Blog:
Using Aggregate Combinators in ClickHouse
description: 'Documentation for Parametric Aggregate Functions'
sidebar_label: 'Parametric'
sidebar_position: 38
slug: /sql-reference/aggregate-functions/parametric-functions
title: 'Parametric Aggregate Functions'
doc_type: 'reference'
Parametric aggregate functions
Some aggregate functions can accept not only argument columns (used for compression), but a set of parameters β constants for initialization. The syntax is two pairs of brackets instead of one. The first is for parameters, and the second is for arguments.
histogram {#histogram}
Calculates an adaptive histogram. It does not guarantee precise results.
sql
histogram(number_of_bins)(values)
The function uses A Streaming Parallel Decision Tree Algorithm. The borders of histogram bins are adjusted as new data enters the function. In the common case, the widths of bins are not equal.
Arguments
values β Expression resulting in input values.
Parameters
number_of_bins β Upper limit for the number of bins in the histogram. The function automatically calculates the number of bins. It tries to reach the specified number of bins, but if it fails, it uses fewer bins.
Returned values
Array of Tuples of the following format:
```
[(lower_1, upper_1, height_1), ... (lower_N, upper_N, height_N)]
```
- `lower` β Lower bound of the bin.
- `upper` β Upper bound of the bin.
- `height` β Calculated height of the bin.
Example
sql
SELECT histogram(5)(number + 1)
FROM (
SELECT *
FROM system.numbers
LIMIT 20
)
text
ββhistogram(5)(plus(number, 1))ββββββββββββββββββββββββββββββββββββββββββββ
β [(1,4.5,4),(4.5,8.5,4),(8.5,12.75,4.125),(12.75,17,4.625),(17,20,3.25)] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
You can visualize a histogram with the
bar
function, for example:
sql
WITH histogram(5)(rand() % 100) AS hist
SELECT
arrayJoin(hist).3 AS height,
bar(height, 0, 6, 5) AS bar
FROM
(
SELECT *
FROM system.numbers
LIMIT 20
)
text
ββheightββ¬βbarββββ
β 2.125 β ββ β
β 3.25 β βββ β
β 5.625 β βββββ β
β 5.625 β βββββ β
β 3.375 β βββ β
ββββββββββ΄ββββββββ
In this case, you should remember that you do not know the histogram bin borders.
sequenceMatch {#sequencematch}
Checks whether the sequence contains an event chain that matches the pattern.
Syntax
sql
sequenceMatch(pattern)(timestamp, cond1, cond2, ...)
:::note
Events that occur at the same second may lay in the sequence in an undefined order affecting the result.
:::
Arguments
timestamp β Column considered to contain time data. Typical data types are Date and DateTime. You can also use any of the supported UInt data types.
cond1, cond2 β Conditions that describe the chain of events. Data type: UInt8. You can pass up to 32 condition arguments. The function takes only the events described in these conditions into account. If the sequence contains data that isn't described in a condition, the function skips them.
Parameters
pattern β Pattern string. See Pattern syntax.
Returned values
1, if the pattern is matched.
0, if the pattern isn't matched.
Type: UInt8.
Pattern syntax {#pattern-syntax}
(?N) β Matches the condition argument at position N. Conditions are numbered in the [1, 32] range. For example, (?1) matches the argument passed to the cond1 parameter.
.* β Matches any number of events. You do not need conditional arguments to match this element of the pattern.
(?t operator value) β Sets the time in seconds that should separate two events. For example, the pattern (?1)(?t>1800)(?2) matches events that occur more than 1800 seconds from each other. An arbitrary number of any events can lay between these events. You can use the >=, >, <, <=, == operators.
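A sketch combining the elements above (the `events` table with `ts` and `id` columns is hypothetical):

```sql
-- matches if an id = 2 event occurs more than 1800 seconds after an id = 1 event
SELECT sequenceMatch('(?1)(?t>1800)(?2)')(ts, id = 1, id = 2)
FROM events;
```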
Examples
Consider data in the
t
table:
text
ββtimeββ¬βnumberββ
β 1 β 1 β
β 2 β 3 β
β 3 β 2 β
ββββββββ΄βββββββββ
Perform the query:
sql
SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2) FROM t
text
ββsequenceMatch('(?1)(?2)')(time, equals(number, 1), equals(number, 2))ββ
β 1 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
The function found the event chain where number 2 follows number 1. It skipped number 3 between them, because the number is not described as an event. If we want to take this number into account when searching for the event chain given in the example, we should make a condition for it.
sql
SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2, number = 3) FROM t
text
ββsequenceMatch('(?1)(?2)')(time, equals(number, 1), equals(number, 2), equals(number, 3))ββ
β 0 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
In this case, the function couldn't find the event chain matching the pattern, because the event for number 3 occurred between 1 and 2. If in the same case we checked the condition for number 4, the sequence would match the pattern.
sql
SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2, number = 4) FROM t
text
ββsequenceMatch('(?1)(?2)')(time, equals(number, 1), equals(number, 2), equals(number, 4))ββ
β 1 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
See Also
sequenceCount
sequenceCount {#sequencecount}
Counts the number of event chains that matched the pattern. The function searches event chains that do not overlap. It starts to search for the next chain after the current chain is matched.
:::note
Events that occur at the same second may lay in the sequence in an undefined order affecting the result.
:::
Syntax
sql
sequenceCount(pattern)(timestamp, cond1, cond2, ...)
Arguments
timestamp β Column considered to contain time data. Typical data types are Date and DateTime. You can also use any of the supported UInt data types.
cond1, cond2 β Conditions that describe the chain of events. Data type: UInt8. You can pass up to 32 condition arguments. The function takes only the events described in these conditions into account. If the sequence contains data that isn't described in a condition, the function skips them.
Parameters
pattern β Pattern string. See Pattern syntax.
Returned values
Number of non-overlapping event chains that are matched.
Type: UInt64.
Example
Consider data in the
t
table:
text
ββtimeββ¬βnumberββ
β 1 β 1 β
β 2 β 3 β
β 3 β 2 β
β 4 β 1 β
β 5 β 3 β
β 6 β 2 β
ββββββββ΄βββββββββ
Count how many times the number 2 occurs after the number 1 with any amount of other numbers between them:
sql
SELECT sequenceCount('(?1).*(?2)')(time, number = 1, number = 2) FROM t
text
ββsequenceCount('(?1).*(?2)')(time, equals(number, 1), equals(number, 2))ββ
β 2 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
sequenceMatchEvents {#sequencematchevents}
Returns the event timestamps of the longest event chain that matched the pattern.
:::note
Events that occur at the same second may lay in the sequence in an undefined order affecting the result.
:::
Syntax
sql
sequenceMatchEvents(pattern)(timestamp, cond1, cond2, ...)
Arguments
timestamp β Column considered to contain time data. Typical data types are Date and DateTime. You can also use any of the supported UInt data types.
cond1, cond2 β Conditions that describe the chain of events. Data type: UInt8. You can pass up to 32 condition arguments. The function takes only the events described in these conditions into account. If the sequence contains data that isn't described in a condition, the function skips them.
Parameters
pattern β Pattern string. See Pattern syntax.
Returned values
Array of timestamps for matched condition arguments (?N) from the event chain. The position in the array matches the position of the condition argument in the pattern.
Type: Array.
Example
Consider data in the
t
table:
text
ββtimeββ¬βnumberββ
β 1 β 1 β
β 2 β 3 β
β 3 β 2 β
β 4 β 1 β
β 5 β 3 β
β 6 β 2 β
ββββββββ΄βββββββββ
Return the timestamps of events for the longest chain:
sql
SELECT sequenceMatchEvents('(?1).*(?2).*(?1)(?3)')(time, number = 1, number = 2, number = 4) FROM t
text
ββsequenceMatchEvents('(?1).*(?2).*(?1)(?3)')(time, equals(number, 1), equals(number, 2), equals(number, 4))ββ
β [1,3,4] β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
See Also
sequenceMatch
windowFunnel {#windowfunnel}
Searches for event chains in a sliding time window and calculates the maximum number of events that occurred from the chain.
The function works according to the algorithm:
- The function searches for data that triggers the first condition in the chain and sets the event counter to 1. This is the moment when the sliding window starts.
- If events from the chain occur sequentially within the window, the counter is incremented. If the sequence of events is disrupted, the counter isn't incremented.
- If the data has multiple event chains at varying points of completion, the function will only output the size of the longest chain.
Syntax
sql
windowFunnel(window, [mode, [mode, ... ]])(timestamp, cond1, cond2, ..., condN)
Arguments
timestamp β Name of the column containing the timestamp. Data types supported: Date, DateTime and other unsigned integer types (note that even though timestamp supports the UInt64 type, its value can't exceed the Int64 maximum, which is 2^63 - 1).
cond β Conditions or data describing the chain of events. UInt8.
Parameters
window β Length of the sliding window; it is the time interval between the first and the last condition. The unit of window depends on the timestamp itself and varies. Determined using the expression timestamp of cond1 <= timestamp of cond2 <= ... <= timestamp of condN <= timestamp of cond1 + window.
mode β An optional argument. One or more modes can be set.
'strict_deduplication' β If the same condition holds for the sequence of events, such a repeating event interrupts further processing. Note: it may work unexpectedly if several conditions hold for the same event.
'strict_order' β Don't allow interventions of other events. E.g. in the case of A->B->D->C, it stops finding A->B->C at the D and the max event level is 2.
'strict_increase' β Apply conditions only to events with strictly increasing timestamps.
'strict_once' β Count each event only once in the chain, even if it meets the condition several times.
Returned value
The maximum number of consecutive triggered conditions from the chain within the sliding time window.
All the chains in the selection are analyzed.
Type:
Integer
.
Example
Determine if a set period of time is enough for the user to select a phone and purchase it twice in the online store.
Set the following chain of events:
1. The user logged in to their account on the store (eventID = 1003).
2. The user searched for a phone (eventID = 1007, product = 'phone').
3. The user placed an order (eventID = 1009).
4. The user made the order again (eventID = 1010).
Input table:
text
ββevent_dateββ¬βuser_idββ¬βββββββββββtimestampββ¬βeventIDββ¬βproductββ
β 2019-01-28 β 1 β 2019-01-29 10:00:00 β 1003 β phone β
ββββββββββββββ΄ββββββββββ΄ββββββββββββββββββββββ΄ββββββββββ΄ββββββββββ
ββevent_dateββ¬βuser_idββ¬βββββββββββtimestampββ¬βeventIDββ¬βproductββ
β 2019-01-31 β 1 β 2019-01-31 09:00:00 β 1007 β phone β
ββββββββββββββ΄ββββββββββ΄ββββββββββββββββββββββ΄ββββββββββ΄ββββββββββ
ββevent_dateββ¬βuser_idββ¬βββββββββββtimestampββ¬βeventIDββ¬βproductββ
β 2019-01-30 β 1 β 2019-01-30 08:00:00 β 1009 β phone β
ββββββββββββββ΄ββββββββββ΄ββββββββββββββββββββββ΄ββββββββββ΄ββββββββββ
ββevent_dateββ¬βuser_idββ¬βββββββββββtimestampββ¬βeventIDββ¬βproductββ
β 2019-02-01 β 1 β 2019-02-01 08:00:00 β 1010 β phone β
ββββββββββββββ΄ββββββββββ΄ββββββββββββββββββββββ΄ββββββββββ΄ββββββββββ
Find out how far the user user_id could get through the chain in the period January-February 2019.
Query:
sql
SELECT
level,
count() AS c
FROM
(
SELECT
user_id,
windowFunnel(6048000000000000)(timestamp, eventID = 1003, eventID = 1009, eventID = 1007, eventID = 1010) AS level
FROM trend
WHERE (event_date >= '2019-01-01') AND (event_date <= '2019-02-02')
GROUP BY user_id
)
GROUP BY level
ORDER BY level ASC;
Result:
text
ββlevelββ¬βcββ
β 4 β 1 β
βββββββββ΄ββββ
retention {#retention}
The function takes as arguments a set of conditions from 1 to 32 arguments of type
UInt8
that indicate whether a certain condition was met for the event.
Any condition can be specified as an argument (as in
WHERE
).
The conditions, except the first, apply in pairs: the result of the second will be true if the first and second are true, of the third if the first and third are true, etc.
Syntax
sql
retention(cond1, cond2, ..., cond32);
Arguments
cond β An expression that returns a UInt8 result (1 or 0).
Returned value
The array of 1s or 0s.
1 β Condition was met for the event.
0 β Condition wasn't met for the event.
Type: UInt8.
Example
Let's consider an example of calculating the
retention
function to determine site traffic.
1.
Create a table to illustrate an example.
```sql
CREATE TABLE retention_test(date Date, uid Int32) ENGINE = Memory;
INSERT INTO retention_test SELECT '2020-01-01', number FROM numbers(5);
INSERT INTO retention_test SELECT '2020-01-02', number FROM numbers(10);
INSERT INTO retention_test SELECT '2020-01-03', number FROM numbers(15);
```
Input table:
Query:
sql
SELECT * FROM retention_test
Result:
text
ββββββββdateββ¬βuidββ
β 2020-01-01 β 0 β
β 2020-01-01 β 1 β
β 2020-01-01 β 2 β
β 2020-01-01 β 3 β
β 2020-01-01 β 4 β
ββββββββββββββ΄ββββββ
ββββββββdateββ¬βuidββ
β 2020-01-02 β 0 β
β 2020-01-02 β 1 β
β 2020-01-02 β 2 β
β 2020-01-02 β 3 β
β 2020-01-02 β 4 β
β 2020-01-02 β 5 β
β 2020-01-02 β 6 β
β 2020-01-02 β 7 β
β 2020-01-02 β 8 β
β 2020-01-02 β 9 β
ββββββββββββββ΄ββββββ
ββββββββdateββ¬βuidββ
β 2020-01-03 β 0 β
β 2020-01-03 β 1 β
β 2020-01-03 β 2 β
β 2020-01-03 β 3 β
β 2020-01-03 β 4 β
β 2020-01-03 β 5 β
β 2020-01-03 β 6 β
β 2020-01-03 β 7 β
β 2020-01-03 β 8 β
β 2020-01-03 β 9 β
β 2020-01-03 β 10 β
β 2020-01-03 β 11 β
β 2020-01-03 β 12 β
β 2020-01-03 β 13 β
β 2020-01-03 β 14 β
ββββββββββββββ΄ββββββ
2.
Group users by unique ID uid using the retention function.
Query:
sql
SELECT
uid,
retention(date = '2020-01-01', date = '2020-01-02', date = '2020-01-03') AS r
FROM retention_test
WHERE date IN ('2020-01-01', '2020-01-02', '2020-01-03')
GROUP BY uid
ORDER BY uid ASC
Result:
text
ββuidββ¬βrββββββββ
β 0 β [1,1,1] β
β 1 β [1,1,1] β
β 2 β [1,1,1] β
β 3 β [1,1,1] β
β 4 β [1,1,1] β
β 5 β [0,0,0] β
β 6 β [0,0,0] β
β 7 β [0,0,0] β
β 8 β [0,0,0] β
β 9 β [0,0,0] β
β 10 β [0,0,0] β
β 11 β [0,0,0] β
β 12 β [0,0,0] β
β 13 β [0,0,0] β
β 14 β [0,0,0] β
βββββββ΄ββββββββββ
3.
Calculate the total number of site visits per day.
Query:
sql
SELECT
sum(r[1]) AS r1,
sum(r[2]) AS r2,
sum(r[3]) AS r3
FROM
(
SELECT
uid,
retention(date = '2020-01-01', date = '2020-01-02', date = '2020-01-03') AS r
FROM retention_test
WHERE date IN ('2020-01-01', '2020-01-02', '2020-01-03')
GROUP BY uid
)
Result:
text
ββr1ββ¬βr2ββ¬βr3ββ
β 5 β 5 β 5 β
ββββββ΄βββββ΄βββββ
Where:
r1 β the number of unique visitors who visited the site during 2020-01-01 (the cond1 condition).
r2 β the number of unique visitors who visited the site during a specific time period between 2020-01-01 and 2020-01-02 (cond1 and cond2 conditions).
r3 β the number of unique visitors who visited the site during a specific time period on 2020-01-01 and 2020-01-03 (cond1 and cond3 conditions).
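The pairing rule means the aggregated array is effectively [c1, c1 AND c2, c1 AND c3, ...]. A self-contained sketch:

```sql
-- cond1 holds for every row, cond2 only for rows 6..9, cond3 never,
-- so the pairing yields the array [1, 1, 0]
SELECT retention(number >= 0, number > 5, number > 100)
FROM numbers(10);
```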
uniqUpTo(N)(x) {#uniquptonx}
Calculates the number of different values of the argument up to a specified limit, N. If the number of different argument values is greater than N, this function returns N + 1; otherwise it calculates the exact value.
Recommended for use with small Ns, up to 10. The maximum value of N is 100.
For the state of an aggregate function, this function uses an amount of memory equal to 1 + N * the size of one value in bytes.
When dealing with strings, this function stores a non-cryptographic hash of 8 bytes; the calculation is approximated for strings.
For example, suppose you have a table that logs every search query made by users on your website. Each row in the table represents a single search query, with columns for the user ID, the search query, and the timestamp of the query. You can use uniqUpTo to generate a report that shows only the keywords that produced at least 5 unique users.
sql
SELECT SearchPhrase
FROM SearchLog
GROUP BY SearchPhrase
HAVING uniqUpTo(4)(UserID) >= 5
uniqUpTo(4)(UserID) calculates the number of unique UserID values for each SearchPhrase, but it only counts up to 4 unique values. If there are more than 4 unique UserID values for a SearchPhrase, the function returns 5 (4 + 1). The HAVING clause then filters out the SearchPhrase values for which the number of unique UserID values is less than 5. This will give you a list of search keywords that were used by at least 5 unique users.
sumMapFiltered {#summapfiltered}
This function behaves the same as `sumMap` except that it also accepts an array of keys to filter with as a parameter. This can be especially useful when working with a high cardinality of keys.
Syntax
sumMapFiltered(keys_to_keep)(keys, values)
Parameters
`keys_to_keep`: Array of keys to filter with.
`keys`: Array of keys.
`values`: Array of values.
Returned Value
Returns a tuple of two arrays: keys in sorted order, and values summed for the corresponding keys.
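The filtering and per-key summation can be sketched in Python over already-flattened (key, value) pairs (an illustrative model, not the server implementation):

```python
def sum_map_filtered(keys_to_keep, keys, values):
    """Sketch of sumMapFiltered: sum values per key, keep only the
    requested keys, and return sorted keys with their sums."""
    keep = set(keys_to_keep)
    totals = {}
    for k, v in zip(keys, values):
        if k in keep:
            totals[k] = totals.get(k, 0) + v
    sorted_keys = sorted(totals)
    return sorted_keys, [totals[k] for k in sorted_keys]

# The status/requests pairs from the example table, concatenated:
keys = [1, 2, 3, 3, 4, 5, 4, 5, 6, 6, 7, 8]
values = [10] * 12
print(sum_map_filtered([1, 4, 8], keys, values))  # ([1, 4, 8], [10, 20, 10])
```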
Example
Query:
```sql
CREATE TABLE sum_map
(
    `date` Date,
    `timeslot` DateTime,
    `statusMap` Nested(status UInt16, requests UInt64)
)
ENGINE = Log;

INSERT INTO sum_map VALUES
    ('2000-01-01', '2000-01-01 00:00:00', [1, 2, 3], [10, 10, 10]),
    ('2000-01-01', '2000-01-01 00:00:00', [3, 4, 5], [10, 10, 10]),
    ('2000-01-01', '2000-01-01 00:01:00', [4, 5, 6], [10, 10, 10]),
    ('2000-01-01', '2000-01-01 00:01:00', [6, 7, 8], [10, 10, 10]);
```
sql
SELECT sumMapFiltered([1, 4, 8])(statusMap.status, statusMap.requests) FROM sum_map;
Result:
response
ββsumMapFiltered([1, 4, 8])(statusMap.status, statusMap.requests)ββ
1. β ([1,4,8],[10,20,10]) β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
sumMapFilteredWithOverflow {#summapfilteredwithoverflow}
This function behaves the same as `sumMap` except that it also accepts an array of keys to filter with as a parameter. This can be especially useful when working with a high cardinality of keys. It differs from the `sumMapFiltered` function in that it does summation with overflow - i.e. it returns the same data type for the summation as the argument data type.
Syntax
sumMapFilteredWithOverflow(keys_to_keep)(keys, values)
Parameters
`keys_to_keep`: Array of keys to filter with.
`keys`: Array of keys.
`values`: Array of values.
Returned Value
Returns a tuple of two arrays: keys in sorted order, and values summed for the corresponding keys.
Example
In this example we create a table `sum_map`, insert some data into it, and then use both `sumMapFilteredWithOverflow` and `sumMapFiltered` together with the `toTypeName` function to compare the results. Where `requests` was of type `UInt8` in the created table, `sumMapFiltered` has promoted the type of the summed values to `UInt64` to avoid overflow, whereas `sumMapFilteredWithOverflow` has kept the type as `UInt8`, which is not large enough to store the result - i.e. overflow has occurred.
Query:
```sql
CREATE TABLE sum_map
(
    `date` Date,
    `timeslot` DateTime,
    `statusMap` Nested(status UInt8, requests UInt8)
)
ENGINE = Log;

INSERT INTO sum_map VALUES
    ('2000-01-01', '2000-01-01 00:00:00', [1, 2, 3], [10, 10, 10]),
    ('2000-01-01', '2000-01-01 00:00:00', [3, 4, 5], [10, 10, 10]),
    ('2000-01-01', '2000-01-01 00:01:00', [4, 5, 6], [10, 10, 10]),
    ('2000-01-01', '2000-01-01 00:01:00', [6, 7, 8], [10, 10, 10]);
```
sql
SELECT sumMapFilteredWithOverflow([1, 4, 8])(statusMap.status, statusMap.requests) as summap_overflow, toTypeName(summap_overflow) FROM sum_map;
sql
SELECT sumMapFiltered([1, 4, 8])(statusMap.status, statusMap.requests) as summap, toTypeName(summap) FROM sum_map;
Result:
response
ββsummap_overflowβββββββ¬βtoTypeName(summap_overflow)ββββββββ
1. β ([1,4,8],[10,20,10]) β Tuple(Array(UInt8), Array(UInt8)) β
ββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββ
response
ββsummapββββββββββββββββ¬βtoTypeName(summap)ββββββββββββββββββ
1. β ([1,4,8],[10,20,10]) β Tuple(Array(UInt8), Array(UInt64)) β
ββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββ
sequenceNextNode {#sequencenextnode}
Returns a value of the next event that matched an event chain.
This is an experimental function; `SET allow_experimental_funnel_functions = 1` to enable it.
Syntax
sql
sequenceNextNode(direction, base)(timestamp, event_column, base_condition, event1, event2, event3, ...)
Parameters
`direction` — Used to navigate in a direction.
forward — Moving forward.
backward — Moving backward.
`base` — Used to set the base point.
head — Set the base point to the first event.
tail — Set the base point to the last event.
first_match — Set the base point to the first matched `event1`.
last_match — Set the base point to the last matched `event1`.
Arguments
`timestamp` — Name of the column containing the timestamp. Data types supported: Date, DateTime and other unsigned integer types.
`event_column` — Name of the column containing the value of the next event to be returned. Data types supported: String and Nullable(String).
`base_condition` — Condition that the base point must fulfill.
`event1`, `event2`, ... — Conditions describing the chain of events. UInt8.
Returned values
`event_column[next_index]` — If the pattern is matched and the next value exists.
NULL — If the pattern isn't matched or the next value doesn't exist.
Type: Nullable(String).
Example
It can be used when events are A->B->C->D->E and you want to know the event following B->C, which is D.
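Its matching logic for the ('forward', 'head') case can be modeled in Python (an illustrative sketch for a single id, not the server implementation; the helper name is made up for the example):

```python
def sequence_next_node_forward_head(events, base_condition, chain):
    """Sketch of sequenceNextNode('forward', 'head') for one id: events are
    (timestamp, value) pairs; base_condition and the chain conditions are
    predicates on the value. Returns the value after the matched chain."""
    events = sorted(events)  # order by timestamp
    values = [v for _, v in events]
    if not values or not base_condition(values[0]):
        return None          # the head must satisfy the base condition
    for i, cond in enumerate(chain):
        if i >= len(values) or not cond(values[i]):
            return None      # chain broken
    next_index = len(chain)
    return values[next_index] if next_index < len(values) else None

events = [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E')]
result = sequence_next_node_forward_head(
    events,
    lambda v: v == 'A',
    [lambda v: v == 'A', lambda v: v == 'B'],
)
print(result)  # 'C' - the event following the matched A->B chain
```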
The query statement searching the event following A->B:
```sql
CREATE TABLE test_flow (
dt DateTime,
id int,
page String)
ENGINE = MergeTree()
PARTITION BY toYYYYMMDD(dt)
ORDER BY id;
INSERT INTO test_flow VALUES (1, 1, 'A') (2, 1, 'B') (3, 1, 'C') (4, 1, 'D') (5, 1, 'E');
SELECT id, sequenceNextNode('forward', 'head')(dt, page, page = 'A', page = 'A', page = 'B') as next_flow FROM test_flow GROUP BY id;
```
Result:
text
ββidββ¬βnext_flowββ
β 1 β C β
ββββββ΄ββββββββββββ
Behavior for `forward` and `head`
```sql
ALTER TABLE test_flow DELETE WHERE 1 = 1 settings mutations_sync = 1;
INSERT INTO test_flow VALUES (1, 1, 'Home') (2, 1, 'Gift') (3, 1, 'Exit');
INSERT INTO test_flow VALUES (1, 2, 'Home') (2, 2, 'Home') (3, 2, 'Gift') (4, 2, 'Basket');
INSERT INTO test_flow VALUES (1, 3, 'Gift') (2, 3, 'Home') (3, 3, 'Gift') (4, 3, 'Basket');
```
```sql
SELECT id, sequenceNextNode('forward', 'head')(dt, page, page = 'Home', page = 'Home', page = 'Gift') FROM test_flow GROUP BY id;
dt id page
1970-01-01 09:00:01 1 Home // Base point, Matched with Home
1970-01-01 09:00:02 1 Gift // Matched with Gift
1970-01-01 09:00:03 1 Exit // The result
1970-01-01 09:00:01 2 Home // Base point, Matched with Home
1970-01-01 09:00:02 2 Home // Unmatched with Gift
1970-01-01 09:00:03 2 Gift
1970-01-01 09:00:04 2 Basket
1970-01-01 09:00:01 3 Gift // Base point, Unmatched with Home
1970-01-01 09:00:02 3 Home
1970-01-01 09:00:03 3 Gift
1970-01-01 09:00:04 3 Basket
```
Behavior for `backward` and `tail`
```sql
SELECT id, sequenceNextNode('backward', 'tail')(dt, page, page = 'Basket', page = 'Basket', page = 'Gift') FROM test_flow GROUP BY id;
dt id page
1970-01-01 09:00:01 1 Home
1970-01-01 09:00:02 1 Gift
1970-01-01 09:00:03 1 Exit // Base point, Unmatched with Basket
1970-01-01 09:00:01 2 Home
1970-01-01 09:00:02 2 Home // The result
1970-01-01 09:00:03 2 Gift // Matched with Gift
1970-01-01 09:00:04 2 Basket // Base point, Matched with Basket
1970-01-01 09:00:01 3 Gift
1970-01-01 09:00:02 3 Home // The result
1970-01-01 09:00:03 3 Gift // Base point, Matched with Gift
1970-01-01 09:00:04 3 Basket // Base point, Matched with Basket
```
Behavior for `forward` and `first_match`
```sql
SELECT id, sequenceNextNode('forward', 'first_match')(dt, page, page = 'Gift', page = 'Gift') FROM test_flow GROUP BY id;
dt id page
1970-01-01 09:00:01 1 Home
1970-01-01 09:00:02 1 Gift // Base point
1970-01-01 09:00:03 1 Exit // The result
1970-01-01 09:00:01 2 Home
1970-01-01 09:00:02 2 Home
1970-01-01 09:00:03 2 Gift // Base point
1970-01-01 09:00:04 2 Basket // The result
1970-01-01 09:00:01 3 Gift // Base point
1970-01-01 09:00:02 3 Home // The result
1970-01-01 09:00:03 3 Gift
1970-01-01 09:00:04 3 Basket
```
```sql
SELECT id, sequenceNextNode('forward', 'first_match')(dt, page, page = 'Gift', page = 'Gift', page = 'Home') FROM test_flow GROUP BY id;
dt id page
1970-01-01 09:00:01 1 Home
1970-01-01 09:00:02 1 Gift // Base point
1970-01-01 09:00:03 1 Exit // Unmatched with Home
1970-01-01 09:00:01 2 Home
1970-01-01 09:00:02 2 Home
1970-01-01 09:00:03 2 Gift // Base point
1970-01-01 09:00:04 2 Basket // Unmatched with Home
1970-01-01 09:00:01 3 Gift // Base point
1970-01-01 09:00:02 3 Home // Matched with Home
1970-01-01 09:00:03 3 Gift // The result
1970-01-01 09:00:04 3 Basket
```
Behavior for `backward` and `last_match`
```sql
SELECT id, sequenceNextNode('backward', 'last_match')(dt, page, page = 'Gift', page = 'Gift') FROM test_flow GROUP BY id;
dt id page
1970-01-01 09:00:01 1 Home // The result
1970-01-01 09:00:02 1 Gift // Base point
1970-01-01 09:00:03 1 Exit
1970-01-01 09:00:01 2 Home
1970-01-01 09:00:02 2 Home // The result
1970-01-01 09:00:03 2 Gift // Base point
1970-01-01 09:00:04 2 Basket
1970-01-01 09:00:01 3 Gift
1970-01-01 09:00:02 3 Home // The result
1970-01-01 09:00:03 3 Gift // Base point
1970-01-01 09:00:04 3 Basket
```
```sql
SELECT id, sequenceNextNode('backward', 'last_match')(dt, page, page = 'Gift', page = 'Gift', page = 'Home') FROM test_flow GROUP BY id;
dt id page
1970-01-01 09:00:01 1 Home // Matched with Home, the result is null
1970-01-01 09:00:02 1 Gift // Base point
1970-01-01 09:00:03 1 Exit
1970-01-01 09:00:01 2 Home // The result
1970-01-01 09:00:02 2 Home // Matched with Home
1970-01-01 09:00:03 2 Gift // Base point
1970-01-01 09:00:04 2 Basket
1970-01-01 09:00:01 3 Gift // The result
1970-01-01 09:00:02 3 Home // Matched with Home
1970-01-01 09:00:03 3 Gift // Base point
1970-01-01 09:00:04 3 Basket
```
Behavior for `base_condition`
```sql
CREATE TABLE test_flow_basecond
(
    `dt` DateTime,
    `id` int,
    `page` String,
    `ref` String
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(dt)
ORDER BY id;

INSERT INTO test_flow_basecond VALUES (1, 1, 'A', 'ref4') (2, 1, 'A', 'ref3') (3, 1, 'B', 'ref2') (4, 1, 'B', 'ref1');
```
```sql
SELECT id, sequenceNextNode('forward', 'head')(dt, page, ref = 'ref1', page = 'A') FROM test_flow_basecond GROUP BY id;
dt id page ref
1970-01-01 09:00:01 1 A ref4 // The head cannot be the base point because its ref column does not match 'ref1'.
1970-01-01 09:00:02 1 A ref3
1970-01-01 09:00:03 1 B ref2
1970-01-01 09:00:04 1 B ref1
```
```sql
SELECT id, sequenceNextNode('backward', 'tail')(dt, page, ref = 'ref4', page = 'B') FROM test_flow_basecond GROUP BY id;
dt id page ref
1970-01-01 09:00:01 1 A ref4
1970-01-01 09:00:02 1 A ref3
1970-01-01 09:00:03 1 B ref2
1970-01-01 09:00:04 1 B ref1 // The tail cannot be the base point because its ref column does not match 'ref4'.
```
```sql
SELECT id, sequenceNextNode('forward', 'first_match')(dt, page, ref = 'ref3', page = 'A') FROM test_flow_basecond GROUP BY id;
dt id page ref
1970-01-01 09:00:01 1 A ref4 // This row cannot be the base point because its ref column does not match 'ref3'.
1970-01-01 09:00:02 1 A ref3 // Base point
1970-01-01 09:00:03 1 B ref2 // The result
1970-01-01 09:00:04 1 B ref1
```
```sql
SELECT id, sequenceNextNode('backward', 'last_match')(dt, page, ref = 'ref2', page = 'B') FROM test_flow_basecond GROUP BY id;
dt id page ref
1970-01-01 09:00:01 1 A ref4
1970-01-01 09:00:02 1 A ref3 // The result
1970-01-01 09:00:03 1 B ref2 // Base point
1970-01-01 09:00:04 1 B ref1 // This row cannot be the base point because its ref column does not match 'ref2'.
```
description: 'Documentation for Aggregate Functions'
sidebar_label: 'Aggregate Functions'
sidebar_position: 33
slug: /sql-reference/aggregate-functions/
title: 'Aggregate Functions'
doc_type: 'reference'
Aggregate functions
Aggregate functions work in the normal way as expected by database experts.
ClickHouse also supports:
Parametric aggregate functions, which accept other parameters in addition to columns.
Combinators, which change the behavior of aggregate functions.
NULL processing {#null-processing}
During aggregation, all NULL arguments are skipped. If the aggregation has several arguments it will ignore any row in which one or more of them are NULL.
There is an exception to this rule: the functions `first_value`, `last_value` and their aliases (`any` and `anyLast` respectively) when followed by the modifier RESPECT NULLS. For example, FIRST_VALUE(b) RESPECT NULLS.
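A minimal Python sketch of this rule, with None standing in for NULL (illustrative only):

```python
# ys mirrors the y column of the t_null_big table below.
ys = [2, None, 2, 3, None]

# Aggregates skip NULL arguments, like sum(y) and groupArray(y):
total = sum(y for y in ys if y is not None)
grouped = [y for y in ys if y is not None]
print(total, grouped)  # 7 [2, 2, 3]

# first_value on a column whose first value is NULL:
col = [None, 5, None, 7]
print(next(v for v in col if v is not None))  # default / IGNORE NULLS: 5
print(col[0])                                 # RESPECT NULLS: None
```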
Examples:
Consider this table:
text
ββxββ¬ββββyββ
β 1 β 2 β
β 2 β α΄Ία΅α΄Έα΄Έ β
β 3 β 2 β
β 3 β 3 β
β 3 β α΄Ία΅α΄Έα΄Έ β
βββββ΄βββββββ
Let's say you need to total the values in the `y` column:
sql
SELECT sum(y) FROM t_null_big
text
ββsum(y)ββ
β 7 β
ββββββββββ
Now you can use the `groupArray` function to create an array from the `y` column:
sql
SELECT groupArray(y) FROM t_null_big
text
ββgroupArray(y)ββ
β [2,2,3] β
βββββββββββββββββ
`groupArray` does not include NULL in the resulting array.
You can use COALESCE to change NULL into a value that makes sense in your use case. For example, avg(COALESCE(column, 0)) will use the column value in the aggregation, or zero if it is NULL:
sql
SELECT
avg(y),
avg(coalesce(y, 0))
FROM t_null_big
text
ββββββββββββββavg(y)ββ¬βavg(coalesce(y, 0))ββ
β 2.3333333333333335 β 1.4 β
ββββββββββββββββββββββ΄ββββββββββββββββββββββ
You can also use Tuple to work around the NULL-skipping behavior. A Tuple that contains only a NULL value is not NULL, so the aggregate functions won't skip that row because of that NULL value.
```sql
SELECT
groupArray(y),
groupArray(tuple(y)).1
FROM t_null_big;
ββgroupArray(y)ββ¬βtupleElement(groupArray(tuple(y)), 1)ββ
β [2,2,3] β [2,NULL,2,3,NULL] β
βββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββ
```
Note that aggregations are skipped when the columns are used as arguments to an aggregated function. For example count without parameters (count()) or with constant ones (count(1)) will count all rows in the block (independently of the value of the GROUP BY column, as it's not an argument), while count(column) will only return the number of rows where the column is not NULL.
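The difference between count(1) and count(v) can be modeled on the same data (None stands in for NULL, mirroring if(number < 10, NULL, number % 3)):

```python
rows = [None if n < 10 else n % 3 for n in range(15)]

count_all = len(rows)                                # count() / count(1): every row
count_value = sum(1 for v in rows if v is not None)  # count(v): non-NULL values only
print(count_all, count_value)  # 15 5
```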
```sql
SELECT
v,
count(1),
count(v)
FROM
(
SELECT if(number < 10, NULL, number % 3) AS v
FROM numbers(15)
)
GROUP BY v
βββββvββ¬βcount()ββ¬βcount(v)ββ
β α΄Ία΅α΄Έα΄Έ β 10 β 0 β
β 0 β 1 β 1 β
β 1 β 2 β 2 β
β 2 β 2 β 2 β
ββββββββ΄ββββββββββ΄βββββββββββ
```
And here is an example of first_value with RESPECT NULLS where we can see that NULL inputs are respected and it will return the first value read, whether it's NULL or not:
```sql
SELECT
col || '_' || ((col + 1) * 5 - 1) AS range,
first_value(odd_or_null) AS first,
first_value(odd_or_null) IGNORE NULLS as first_ignore_null,
first_value(odd_or_null) RESPECT NULLS as first_respect_nulls
FROM
(
SELECT
intDiv(number, 5) AS col,
if(number % 2 == 0, NULL, number) AS odd_or_null
FROM numbers(15)
)
GROUP BY col
ORDER BY col
ββrangeββ¬βfirstββ¬βfirst_ignore_nullββ¬βfirst_respect_nullsββ
β 0_4 β 1 β 1 β α΄Ία΅α΄Έα΄Έ β
β 1_9 β 5 β 5 β 5 β
β 2_14 β 11 β 11 β α΄Ία΅α΄Έα΄Έ β
βββββββββ΄ββββββββ΄ββββββββββββββββββββ΄ββββββββββββββββββββββ
```
description: 'Documentation for the GROUPING aggregate function.'
slug: /sql-reference/aggregate-functions/grouping_function
title: 'GROUPING'
doc_type: 'reference'
GROUPING
GROUPING {#grouping}
ROLLUP and CUBE are modifiers to GROUP BY. Both of these calculate subtotals. ROLLUP takes an ordered list of columns, for example (day, month, year), and calculates subtotals at each level of the aggregation and then a grand total. CUBE calculates subtotals across all possible combinations of the columns specified. GROUPING identifies which rows returned by ROLLUP or CUBE are superaggregates, and which are rows that would be returned by an unmodified GROUP BY.
The GROUPING function takes multiple columns as an argument, and returns a bitmask.
- 1 indicates that a row returned by a ROLLUP or CUBE modifier to GROUP BY is a subtotal
- 0 indicates that a row returned by a ROLLUP or CUBE is a row that is not a subtotal
GROUPING SETS {#grouping-sets}
By default, the CUBE modifier calculates subtotals for all possible combinations of the columns passed to CUBE. GROUPING SETS allows you to specify the specific combinations to calculate.
Analyzing hierarchical data is a good use case for ROLLUP, CUBE, and GROUPING SETS modifiers. The sample here is a table containing data about what Linux distribution, and the version of that distribution is installed across two datacenters. It may be valuable to look at the data by distribution, version, and location.
Load sample data {#load-sample-data}
sql
CREATE TABLE servers ( datacenter VARCHAR(255),
                  distro VARCHAR(255) NOT NULL,
                  version VARCHAR(50) NOT NULL,
                  quantity INT
                )
ENGINE = MergeTree
ORDER BY (datacenter, distro, version)
sql
INSERT INTO servers(datacenter, distro, version, quantity)
VALUES ('Schenectady', 'Arch','2022.08.05',50),
('Westport', 'Arch','2022.08.05',40),
('Schenectady','Arch','2021.09.01',30),
('Westport', 'Arch','2021.09.01',20),
('Schenectady','Arch','2020.05.01',10),
('Westport', 'Arch','2020.05.01',5),
('Schenectady','RHEL','9',60),
('Westport','RHEL','9',70),
('Westport','RHEL','7',80),
('Schenectady','RHEL','7',80)
sql
SELECT
*
FROM
servers;
```response
ββdatacenterβββ¬βdistroββ¬βversionβββββ¬βquantityββ
β Schenectady β Arch β 2020.05.01 β 10 β
β Schenectady β Arch β 2021.09.01 β 30 β
β Schenectady β Arch β 2022.08.05 β 50 β
β Schenectady β RHEL β 7 β 80 β
β Schenectady β RHEL β 9 β 60 β
β Westport β Arch β 2020.05.01 β 5 β
β Westport β Arch β 2021.09.01 β 20 β
β Westport β Arch β 2022.08.05 β 40 β
β Westport β RHEL β 7 β 80 β
β Westport β RHEL β 9 β 70 β
βββββββββββββββ΄βββββββββ΄βββββββββββββ΄βββββββββββ
10 rows in set. Elapsed: 0.409 sec.
```
Simple queries {#simple-queries}
Get the count of servers in each data center by distribution:
sql
SELECT
datacenter,
distro,
SUM (quantity) qty
FROM
servers
GROUP BY
datacenter,
distro;
```response
ββdatacenterβββ¬βdistroββ¬βqtyββ
β Schenectady β RHEL β 140 β
β Westport β Arch β 65 β
β Schenectady β Arch β 90 β
β Westport β RHEL β 150 β
βββββββββββββββ΄βββββββββ΄ββββββ
4 rows in set. Elapsed: 0.212 sec.
```
sql
SELECT
datacenter,
SUM (quantity) qty
FROM
servers
GROUP BY
datacenter;
```response
ββdatacenterβββ¬βqtyββ
β Westport β 215 β
β Schenectady β 230 β
βββββββββββββββ΄ββββββ
2 rows in set. Elapsed: 0.277 sec.
```
sql
SELECT
distro,
SUM (quantity) qty
FROM
servers
GROUP BY
distro;
```response
ββdistroββ¬βqtyββ
β Arch β 155 β
β RHEL β 290 β
ββββββββββ΄ββββββ
2 rows in set. Elapsed: 0.352 sec.
```
sql
SELECT
SUM(quantity) qty
FROM
servers;
```response
ββqtyββ
β 445 β
βββββββ
1 row in set. Elapsed: 0.244 sec.
```
Comparing multiple GROUP BY statements with GROUPING SETS {#comparing-multiple-group-by-statements-with-grouping-sets}
Breaking down the data without CUBE, ROLLUP, or GROUPING SETS:
sql
SELECT
datacenter,
distro,
SUM (quantity) qty
FROM
servers
GROUP BY
datacenter,
distro
UNION ALL
SELECT
datacenter,
null,
SUM (quantity) qty
FROM
servers
GROUP BY
datacenter
UNION ALL
SELECT
null,
distro,
SUM (quantity) qty
FROM
servers
GROUP BY
distro
UNION ALL
SELECT
null,
null,
SUM(quantity) qty
FROM
servers;
```response
ββdatacenterββ¬βdistroββ¬βqtyββ
β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β 445 β
ββββββββββββββ΄βββββββββ΄ββββββ
ββdatacenterβββ¬βdistroββ¬βqtyββ
β Westport β α΄Ία΅α΄Έα΄Έ β 215 β
β Schenectady β α΄Ία΅α΄Έα΄Έ β 230 β
βββββββββββββββ΄βββββββββ΄ββββββ
ββdatacenterβββ¬βdistroββ¬βqtyββ
β Schenectady β RHEL β 140 β
β Westport β Arch β 65 β
β Schenectady β Arch β 90 β
β Westport β RHEL β 150 β
βββββββββββββββ΄βββββββββ΄ββββββ
ββdatacenterββ¬βdistroββ¬βqtyββ
β α΄Ία΅α΄Έα΄Έ β Arch β 155 β
β α΄Ία΅α΄Έα΄Έ β RHEL β 290 β
ββββββββββββββ΄βββββββββ΄ββββββ
9 rows in set. Elapsed: 0.527 sec.
```
Getting the same information using GROUPING SETS:
sql
SELECT
datacenter,
distro,
SUM (quantity) qty
FROM
servers
GROUP BY
GROUPING SETS(
(datacenter,distro),
(datacenter),
(distro),
()
)
```response
ββdatacenterβββ¬βdistroββ¬βqtyββ
β Schenectady β RHEL β 140 β
β Westport β Arch β 65 β
β Schenectady β Arch β 90 β
β Westport β RHEL β 150 β
βββββββββββββββ΄βββββββββ΄ββββββ
ββdatacenterβββ¬βdistroββ¬βqtyββ
β Westport β β 215 β
β Schenectady β β 230 β
βββββββββββββββ΄βββββββββ΄ββββββ
ββdatacenterββ¬βdistroββ¬βqtyββ
β β β 445 β
ββββββββββββββ΄βββββββββ΄ββββββ
ββdatacenterββ¬βdistroββ¬βqtyββ
β β Arch β 155 β
β β RHEL β 290 β
ββββββββββββββ΄βββββββββ΄ββββββ
9 rows in set. Elapsed: 0.427 sec.
```
Comparing CUBE with GROUPING SETS {#comparing-cube-with-grouping-sets}
The CUBE in the next query, CUBE(datacenter,distro,version), provides a hierarchy that may not make sense. It does not make sense to look at version across the two distributions (as Arch and RHEL do not have the same release cycle or version naming standards). The GROUPING SETS example following this one is more appropriate, as it groups `distro` and `version` in the same set.
sql
SELECT
datacenter,
distro,
version,
SUM(quantity)
FROM
servers
GROUP BY
CUBE(datacenter,distro,version)
ORDER BY
datacenter,
distro;
```response
ββdatacenterβββ¬βdistroββ¬βversionβββββ¬βsum(quantity)ββ
β β β 7 β 160 β
β β β 2020.05.01 β 15 β
β β β 2021.09.01 β 50 β
β β β 2022.08.05 β 90 β
β β β 9 β 130 β
β β β β 445 β
β β Arch β 2021.09.01 β 50 β
β β Arch β 2022.08.05 β 90 β
β β Arch β 2020.05.01 β 15 β
β β Arch β β 155 β
β β RHEL β 9 β 130 β
β β RHEL β 7 β 160 β
β β RHEL β β 290 β
β Schenectady β β 9 β 60 β
β Schenectady β β 2021.09.01 β 30 β
β Schenectady β β 7 β 80 β
β Schenectady β β 2022.08.05 β 50 β
β Schenectady β β 2020.05.01 β 10 β
β Schenectady β β β 230 β
β Schenectady β Arch β 2022.08.05 β 50 β
β Schenectady β Arch β 2021.09.01 β 30 β
β Schenectady β Arch β 2020.05.01 β 10 β
β Schenectady β Arch β β 90 β
β Schenectady β RHEL β 7 β 80 β
β Schenectady β RHEL β 9 β 60 β
β Schenectady β RHEL β β 140 β
β Westport β β 9 β 70 β
β Westport β β 2020.05.01 β 5 β
β Westport β β 2022.08.05 β 40 β
β Westport β β 7 β 80 β
β Westport β β 2021.09.01 β 20 β
β Westport β β β 215 β
β Westport β Arch β 2020.05.01 β 5 β
β Westport β Arch β 2021.09.01 β 20 β
β Westport β Arch β 2022.08.05 β 40 β
β Westport β Arch β β 65 β
β Westport β RHEL β 9 β 70 β
β Westport β RHEL β 7 β 80 β
β Westport β RHEL β β 150 β
βββββββββββββββ΄βββββββββ΄βββββββββββββ΄ββββββββββββββββ
39 rows in set. Elapsed: 0.355 sec.
```
:::note
Version in the above example may not make sense when it is not associated with a distro; if we were tracking the kernel version it might make sense, because the kernel version can be associated with either distro. Using GROUPING SETS, as in the next example, may be a better choice.
:::
sql
SELECT
datacenter,
distro,
version,
SUM(quantity)
FROM servers
GROUP BY
GROUPING SETS (
(datacenter, distro, version),
    (datacenter, distro))
```response
ββdatacenterβββ¬βdistroββ¬βversionβββββ¬βsum(quantity)ββ
β Westport β RHEL β 9 β 70 β
β Schenectady β Arch β 2022.08.05 β 50 β
β Schenectady β Arch β 2021.09.01 β 30 β
β Schenectady β RHEL β 7 β 80 β
β Westport β Arch β 2020.05.01 β 5 β
β Westport β RHEL β 7 β 80 β
β Westport β Arch β 2021.09.01 β 20 β
β Westport β Arch β 2022.08.05 β 40 β
β Schenectady β RHEL β 9 β 60 β
β Schenectady β Arch β 2020.05.01 β 10 β
βββββββββββββββ΄βββββββββ΄βββββββββββββ΄ββββββββββββββββ
ββdatacenterβββ¬βdistroββ¬βversionββ¬βsum(quantity)ββ
β Schenectady β RHEL β β 140 β
β Westport β Arch β β 65 β
β Schenectady β Arch β β 90 β
β Westport β RHEL β β 150 β
βββββββββββββββ΄βββββββββ΄ββββββββββ΄ββββββββββββββββ
14 rows in set. Elapsed: 1.036 sec.
```
description: 'Documentation for INSERT INTO Statement'
sidebar_label: 'INSERT INTO'
sidebar_position: 33
slug: /sql-reference/statements/insert-into
title: 'INSERT INTO Statement'
doc_type: 'reference'
INSERT INTO Statement
Inserts data into a table.
Syntax
sql
INSERT INTO [TABLE] [db.]table [(c1, c2, c3)] [SETTINGS ...] VALUES (v11, v12, v13), (v21, v22, v23), ...
You can specify a list of columns to insert using `(c1, c2, c3)`. You can also use an expression with a column matcher such as `*` and/or modifiers such as `APPLY`, `EXCEPT`, `REPLACE`.
For example, consider the table:
sql
SHOW CREATE insert_select_testtable;
text
CREATE TABLE insert_select_testtable
(
`a` Int8,
`b` String,
`c` Int8
)
ENGINE = MergeTree()
ORDER BY a
sql
INSERT INTO insert_select_testtable (*) VALUES (1, 'a', 1) ;
If you want to insert data into all of the columns, except column `b`, you can do so using the `EXCEPT` keyword. With reference to the syntax above, you will need to ensure that you insert as many values (`VALUES (v11, v13)`) as you specify columns (`(c1, c3)`):
sql
INSERT INTO insert_select_testtable (* EXCEPT(b)) Values (2, 2);
sql
SELECT * FROM insert_select_testtable;
text
ββaββ¬βbββ¬βcββ
β 2 β β 2 β
βββββ΄ββββ΄ββββ
ββaββ¬βbββ¬βcββ
β 1 β a β 1 β
βββββ΄ββββ΄ββββ
In this example, we see that the second inserted row has the `a` and `c` columns filled by the passed values, and `b` filled with the default value. It is also possible to use the `DEFAULT` keyword to insert default values:
sql
INSERT INTO insert_select_testtable VALUES (1, DEFAULT, 1) ;
If a list of columns does not include all existing columns, the rest of the columns are filled with:
The values calculated from the `DEFAULT` expressions specified in the table definition.
Zeros and empty strings, if `DEFAULT` expressions are not defined.
Data can be passed to the INSERT in any format supported by ClickHouse. The format must be specified explicitly in the query:
sql
INSERT INTO [db.]table [(c1, c2, c3)] FORMAT format_name data_set
For example, the following query format is identical to the basic version of `INSERT ... VALUES`:
sql
INSERT INTO [db.]table [(c1, c2, c3)] FORMAT Values (v11, v12, v13), (v21, v22, v23), ...
ClickHouse removes all spaces and one line feed (if there is one) before the data. When forming a query, we recommend putting the data on a new line after the query operators; this is important if the data begins with spaces.
Example:
sql
INSERT INTO t FORMAT TabSeparated
11 Hello, world!
22 Qwerty
You can insert data separately from the query by using the command-line client or the HTTP interface.
:::note
If you want to specify `SETTINGS` for an `INSERT` query then you have to do it before the `FORMAT` clause, since everything after `FORMAT format_name` is treated as data. For example:
sql
INSERT INTO table SETTINGS ... FORMAT format_name data_set
:::
Constraints {#constraints}
If a table has constraints, their expressions will be checked for each row of inserted data. If any of those constraints is not satisfied, the server will raise an exception containing the constraint name and expression, and the query will be stopped.
Inserting the Results of SELECT {#inserting-the-results-of-select}
Syntax
sql
INSERT INTO [TABLE] [db.]table [(c1, c2, c3)] SELECT ...
Columns are mapped according to their position in the `SELECT` clause. However, their names in the `SELECT` expression and the table for `INSERT` may differ. If necessary, type casting is performed.
None of the data formats except the `Values` format allow setting values to expressions such as `now()`, `1 + 2`, and so on. The `Values` format allows limited use of expressions, but this is not recommended, because in this case inefficient code is used for their execution.
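For illustration only (table `t` is hypothetical), an expression in the `Values` format looks like:

```sql
-- Allowed in the Values format, but discouraged: expressions are
-- evaluated with slower, per-row code.
INSERT INTO t VALUES (now(), 1 + 2);
```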
Other queries for modifying data parts are not supported: `UPDATE`, `DELETE`, `REPLACE`, `MERGE`, `UPSERT`, `INSERT UPDATE`.
However, you can delete old data using `ALTER TABLE ... DROP PARTITION`.
The `FORMAT` clause must be specified at the end of the query if the `SELECT` clause contains the table function `input()`.
To insert a default value instead of `NULL` into a column with a non-nullable data type, enable the `insert_null_as_default` setting.
`INSERT` also supports CTEs (common table expressions). For example, the following two statements are equivalent:
sql
INSERT INTO x WITH y AS (SELECT * FROM numbers(10)) SELECT * FROM y;
WITH y AS (SELECT * FROM numbers(10)) INSERT INTO x SELECT * FROM y;
Inserting Data from a File {#inserting-data-from-a-file}
Syntax
sql
INSERT INTO [TABLE] [db.]table [(c1, c2, c3)] FROM INFILE file_name [COMPRESSION type] [SETTINGS ...] [FORMAT format_name]
Use the syntax above to insert data from a file, or files, stored on the client side. `file_name` and `type` are string literals. The input file format must be set in the `FORMAT` clause.
Compressed files are supported. The compression type is detected from the file name extension, or it can be specified explicitly in a `COMPRESSION` clause. Supported types are: `'none'`, `'gzip'`, `'deflate'`, `'br'`, `'xz'`, `'zstd'`, `'lz4'`, `'bz2'`.
This functionality is available in the command-line client and clickhouse-local.
Examples
Single file with FROM INFILE {#single-file-with-from-infile}
Execute the following queries using the command-line client:
bash
echo 1,A > input.csv ; echo 2,B >> input.csv
clickhouse-client --query="CREATE TABLE table_from_file (id UInt32, text String) ENGINE=MergeTree() ORDER BY id;"
clickhouse-client --query="INSERT INTO table_from_file FROM INFILE 'input.csv' FORMAT CSV;"
clickhouse-client --query="SELECT * FROM table_from_file FORMAT PrettyCompact;"
Result:
text
ββidββ¬βtextββ
β 1 β A β
β 2 β B β
ββββββ΄βββββββ
Multiple files with FROM INFILE using globs {#multiple-files-with-from-infile-using-globs}
This example is very similar to the previous one, but inserts are performed from multiple files using `FROM INFILE 'input_*.csv'`.
bash
echo 1,A > input_1.csv ; echo 2,B > input_2.csv
clickhouse-client --query="CREATE TABLE infile_globs (id UInt32, text String) ENGINE=MergeTree() ORDER BY id;"
clickhouse-client --query="INSERT INTO infile_globs FROM INFILE 'input_*.csv' FORMAT CSV;"
clickhouse-client --query="SELECT * FROM infile_globs FORMAT PrettyCompact;"
:::tip
In addition to selecting multiple files with `*`, you can use ranges (`{1,2}` or `{1..9}`) and other glob substitutions. These three all would work with the example above:
sql
INSERT INTO infile_globs FROM INFILE 'input_*.csv' FORMAT CSV;
INSERT INTO infile_globs FROM INFILE 'input_{1,2}.csv' FORMAT CSV;
INSERT INTO infile_globs FROM INFILE 'input_?.csv' FORMAT CSV;
:::
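The difference between `*` and `?` can be checked locally with Python's `fnmatch`, which treats these two wildcards the same way (brace forms such as `{1..9}` are a ClickHouse extension and are not part of `fnmatch`):

```python
from fnmatch import fnmatch

files = ["input_1.csv", "input_2.csv", "input_10.csv", "other.csv"]

# '*' matches any run of characters, '?' matches exactly one.
star = [f for f in files if fnmatch(f, "input_*.csv")]
qmark = [f for f in files if fnmatch(f, "input_?.csv")]

print(star)   # input_1.csv, input_2.csv and input_10.csv
print(qmark)  # only input_1.csv and input_2.csv
```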
Inserting using a Table Function {#inserting-using-a-table-function}
Data can be inserted into tables referenced by table functions.
Syntax
sql
INSERT INTO [TABLE] FUNCTION table_func ...
Example
The `remote` table function is used in the following queries:
sql
CREATE TABLE simple_table (id UInt32, text String) ENGINE=MergeTree() ORDER BY id;
INSERT INTO TABLE FUNCTION remote('localhost', default.simple_table)
VALUES (100, 'inserted via remote()');
SELECT * FROM simple_table;
Result:
text
βββidββ¬βtextβββββββββββββββββββ
β 100 β inserted via remote() β
βββββββ΄ββββββββββββββββββββββββ
Inserting into ClickHouse Cloud {#inserting-into-clickhouse-cloud}
By default, services on ClickHouse Cloud provide multiple replicas for high availability. When you connect to a service, a connection is established to one of these replicas.
After an `INSERT` succeeds, data is written to the underlying storage. However, it may take some time for replicas to receive these updates. Therefore, if you use a different connection that executes a `SELECT` query on one of these other replicas, the updated data may not yet be reflected.
It is possible to use the `select_sequential_consistency` setting to force the replica to receive the latest updates. Here is an example of a `SELECT` query using this setting:
sql
SELECT .... SETTINGS select_sequential_consistency = 1;
Note that using `select_sequential_consistency` will increase the load on ClickHouse Keeper (used by ClickHouse Cloud internally) and may result in slower performance depending on the load on the service. We recommend against enabling this setting unless necessary. The recommended approach is to execute reads and writes in the same session, or to use a client driver that uses the native protocol (and thus supports sticky connections).
Inserting into a replicated setup {#inserting-into-a-replicated-setup}
In a replicated setup, data will be visible on other replicas after it has been replicated. Data begins being replicated (downloaded to other replicas) immediately after an `INSERT`. This differs from ClickHouse Cloud, where data is immediately written to shared storage and replicas subscribe to metadata changes.
Note that for replicated setups, `INSERT`s can sometimes take a considerable amount of time (on the order of one second) because they require committing to ClickHouse Keeper for distributed consensus. Using S3 for storage also adds latency.
Performance Considerations {#performance-considerations}
`INSERT` sorts the input data by primary key and splits it into partitions by a partition key. If you insert data into several partitions at once, it can significantly reduce the performance of the `INSERT` query. To avoid this:
- Add data in fairly large batches, such as 100,000 rows at a time.
- Group data by a partition key before uploading it to ClickHouse.
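On the client side, the two recommendations above amount to buffering rows and grouping them by partition key before sending each batch. A minimal sketch (all names here are hypothetical; an actual client would send each yielded batch in one `INSERT`):

```python
from itertools import groupby
from operator import itemgetter

def batches_by_partition(rows, partition_key, batch_size=100_000):
    """Yield batches of rows, one partition per batch, capped at batch_size."""
    rows = sorted(rows, key=partition_key)  # bring rows of the same partition together
    for _, group in groupby(rows, key=partition_key):
        group = list(group)
        for i in range(0, len(group), batch_size):
            yield group[i:i + batch_size]

rows = [
    {"day": "2024-01-02", "value": 1},
    {"day": "2024-01-01", "value": 2},
    {"day": "2024-01-01", "value": 3},
]
for batch in batches_by_partition(rows, partition_key=itemgetter("day")):
    print(len(batch), batch[0]["day"])
```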
Performance will not decrease if:
- Data is added in real time.
- You upload data that is usually sorted by time.
Asynchronous inserts {#asynchronous-inserts}
It is possible to asynchronously insert data in small but frequent inserts. The data from such insertions is combined into batches and then safely inserted into a table. To use asynchronous inserts, enable the `async_insert` setting.
Using `async_insert` or the Buffer table engine results in additional buffering.
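Asynchronous inserts can also be enabled per query in the `SETTINGS` clause (a sketch; table `t` is hypothetical):

```sql
-- Buffer this insert server-side and wait until the batch is flushed.
INSERT INTO t SETTINGS async_insert = 1, wait_for_async_insert = 1 VALUES (1, 'a');
```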
Large or long-running inserts {#large-or-long-running-inserts}
When you are inserting large amounts of data, ClickHouse will optimize write performance through a process called "squashing". Small blocks of inserted data in memory are merged and squashed into larger blocks before being written to disk. Squashing reduces the overhead associated with each write operation. In this process, inserted data will be available to query after ClickHouse completes writing each `max_insert_block_size` rows.
See Also
- async_insert
- wait_for_async_insert
- wait_for_async_insert_timeout
- async_insert_max_data_size
- async_insert_busy_timeout_ms
- async_insert_stale_timeout_ms
description: 'Documentation for MOVE access entity statement'
sidebar_label: 'MOVE'
sidebar_position: 54
slug: /sql-reference/statements/move
title: 'MOVE access entity statement'
doc_type: 'reference'
MOVE access entity statement
This statement allows moving an access entity from one access storage to another.
Syntax:
sql
MOVE {USER, ROLE, QUOTA, SETTINGS PROFILE, ROW POLICY} name1 [, name2, ...] TO access_storage_type
Currently, there are five access storages in ClickHouse:
- local_directory
- memory
- replicated
- users_xml (ro)
- ldap (ro)
Examples:
sql
MOVE USER test TO local_directory
sql
MOVE ROLE test TO memory
description: 'Documentation for REVOKE Statement'
sidebar_label: 'REVOKE'
sidebar_position: 39
slug: /sql-reference/statements/revoke
title: 'REVOKE Statement'
doc_type: 'reference'
REVOKE Statement
Revokes privileges from users or roles.
Syntax {#syntax}
Revoking privileges from users
sql
REVOKE [ON CLUSTER cluster_name] privilege[(column_name [,...])] [,...] ON {db.table|db.*|*.*|table|*} FROM {user | CURRENT_USER} [,...] | ALL | ALL EXCEPT {user | CURRENT_USER} [,...]
Revoking roles from users
sql
REVOKE [ON CLUSTER cluster_name] [ADMIN OPTION FOR] role [,...] FROM {user | role | CURRENT_USER} [,...] | ALL | ALL EXCEPT {user_name | role_name | CURRENT_USER} [,...]
Description {#description}
To revoke a privilege, you can use a privilege of a wider scope than the one you plan to revoke. For example, if a user has the `SELECT (x,y)` privilege, an administrator can execute `REVOKE SELECT(x,y) ...`, or `REVOKE SELECT * ...`, or even `REVOKE ALL PRIVILEGES ...` to revoke this privilege.
Partial Revokes {#partial-revokes}
You can revoke a part of a privilege. For example, if a user has the `SELECT *.*` privilege, you can revoke from it the privilege to read data from some table or database.
Examples {#examples}
Grant the `john` user account a privilege to select from all databases except the `accounts` one:
sql
GRANT SELECT ON *.* TO john;
REVOKE SELECT ON accounts.* FROM john;
Grant the `mira` user account a privilege to select from all columns of the `accounts.staff` table except the `wage` one:
sql
GRANT SELECT ON accounts.staff TO mira;
REVOKE SELECT(wage) ON accounts.staff FROM mira;
Original article
description: 'Documentation for GRANT Statement'
sidebar_label: 'GRANT'
sidebar_position: 38
slug: /sql-reference/statements/grant
title: 'GRANT Statement'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
GRANT Statement
Grants privileges to ClickHouse user accounts or roles, and assigns roles to user accounts or to other roles.
To revoke privileges, use the `REVOKE` statement. You can also list granted privileges with the `SHOW GRANTS` statement.
Granting Privilege Syntax {#granting-privilege-syntax}
sql
GRANT [ON CLUSTER cluster_name] privilege[(column_name [,...])] [,...] ON {db.table[*]|db[*].*|*.*|table[*]|*} TO {user | role | CURRENT_USER} [,...] [WITH GRANT OPTION] [WITH REPLACE OPTION]
- `privilege` – Type of privilege.
- `role` – ClickHouse user role.
- `user` – ClickHouse user account.
The `WITH GRANT OPTION` clause grants the `user` or `role` permission to execute the `GRANT` query. Users can grant privileges only of the same scope that they have, or narrower.
The `WITH REPLACE OPTION` clause replaces old privileges with new privileges for the `user` or `role`; if it is not specified, privileges are appended.
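A sketch of the append-versus-replace behavior (`john` and the databases are hypothetical):

```sql
GRANT SELECT ON db1.* TO john;
GRANT INSERT ON db2.* TO john;                      -- appends: SELECT ON db1.*, INSERT ON db2.*
GRANT SELECT ON db3.* TO john WITH REPLACE OPTION;  -- replaces: only SELECT ON db3.* remains
```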
Assigning Role Syntax {#assigning-role-syntax}
sql
GRANT [ON CLUSTER cluster_name] role [,...] TO {user | another_role | CURRENT_USER} [,...] [WITH ADMIN OPTION] [WITH REPLACE OPTION]
- `role` – ClickHouse user role.
- `user` – ClickHouse user account.
The `WITH ADMIN OPTION` clause grants the `ADMIN OPTION` privilege to the `user` or `role`.
The `WITH REPLACE OPTION` clause replaces old roles with the new role for the `user` or `role`; if it is not specified, roles are appended.
Grant Current Grants Syntax {#grant-current-grants-syntax}
sql
GRANT CURRENT GRANTS{(privilege[(column_name [,...])] [,...] ON {db.table|db.*|*.*|table|*}) | ON {db.table|db.*|*.*|table|*}} TO {user | role | CURRENT_USER} [,...] [WITH GRANT OPTION] [WITH REPLACE OPTION]
- `privilege` – Type of privilege.
- `role` – ClickHouse user role.
- `user` – ClickHouse user account.
Using the `CURRENT GRANTS` statement allows you to give all specified privileges to the given user or role. If none of the privileges were specified, then the given user or role will receive all available privileges of `CURRENT_USER`.
Usage {#usage}
To use `GRANT`, your account must have the `GRANT OPTION` privilege. You can grant privileges only within the scope of your own account's privileges.
For example, an administrator has granted privileges to the `john` account with the query:
sql
GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION
It means that `john` has permission to execute:
- `SELECT x,y FROM db.table`
- `SELECT x FROM db.table`
- `SELECT y FROM db.table`