2 rows in set. Elapsed: 0.006 sec.
```
Note how columns missing in rows are returned as `NULL`.

Additionally, a separate sub-column is created for each distinct type under the same path. For example, sub-columns exist for `company.labels.type` as both `String` and `Array(Nullable(String))`. While both will be returned where possible, we can target specific sub-columns using the `.:` syntax:
```sql
SELECT json.company.labels.type
FROM people

┌─json.company.labels.type─┐
│ database systems         │
│ ['real-time processing'] │
└──────────────────────────┘

2 rows in set. Elapsed: 0.007 sec.

SELECT json.company.labels.type.:String
FROM people

┌─json.company⋯e.:`String`─┐
│ ᴺᵁᴸᴸ                     │
│ database systems         │
└──────────────────────────┘

2 rows in set. Elapsed: 0.009 sec.
```
In order to return nested sub-objects, the `^` prefix is required. This is a design choice to avoid reading a high number of columns unless they are explicitly requested. Objects accessed without `^` will return `NULL`, as shown below:
```sql
-- sub objects will not be returned by default
SELECT json.company.labels
FROM people

┌─json.company.labels─┐
│ ᴺᵁᴸᴸ                │
│ ᴺᵁᴸᴸ                │
└─────────────────────┘

2 rows in set. Elapsed: 0.002 sec.

-- return sub objects using ^ notation
SELECT json.^company.labels
FROM people

┌─json.^`company`.labels─────────────────────────────────────────────────────────────────┐
│ {"employees":"250","founded":"2021","type":"database systems"}                         │
│ {"dissolved":"2023","employees":"10","founded":"2019","type":["real-time processing"]} │
└────────────────────────────────────────────────────────────────────────────────────────┘

2 rows in set. Elapsed: 0.004 sec.
```
## Targeted JSON column {#targeted-json-column}
While useful in prototyping and data engineering challenges, we recommend using an explicit schema in production where possible. Our previous example can be modeled with a single `JSON` column for the `company.labels` column.
```sql
CREATE TABLE people
(
    `id` Int64,
    `name` String,
    `username` String,
    `email` String,
    `address` Array(Tuple(city String, geo Tuple(lat Float32, lng Float32), street String, suite String, zipcode String)),
    `phone_numbers` Array(String),
    `website` String,
    `company` Tuple(catchPhrase String, name String, labels JSON),
    `dob` Date,
    `tags` String
)
ENGINE = MergeTree
ORDER BY username
```
We can insert into this table using the `JSONEachRow` format:
```sql
INSERT INTO people FORMAT JSONEachRow
{"id":1,"name":"Clicky McCliickHouse","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","suite":"Suite 879","city":"Wisokyburgh","zipcode":"90566-7771","geo":{"lat":-43.9509,"lng":-34.4618}}],"phone_numbers":["010-692-6593","020-192-3333"],"website":"clickhouse.com","company":{"name":"ClickHouse","catchPhrase":"The real-time data warehouse for analytics","labels":{"type":"database systems","founded":"2021","employees":250}},"dob":"2007-03-31","tags":{"hobby":"Databases","holidays":[{"year":2024,"location":"Azores, Portugal"}],"car":{"model":"Tesla","year":2023}}}
1 row in set. Elapsed: 0.450 sec.
INSERT INTO people FORMAT JSONEachRow
{"id":2,"name":"Analytica Rowe","username":"Analytica","address":[{"street":"Maple Avenue","suite":"Apt. 402","city":"Dataford","zipcode":"11223-4567","geo":{"lat":40.7128,"lng":-74.006}}],"phone_numbers":["123-456-7890","555-867-5309"],"website":"fastdata.io","company":{"name":"FastData Inc.","catchPhrase":"Streamlined analytics at scale","labels":{"type":["real-time processing"],"founded":2019,"dissolved":2023,"employees":10}},"dob":"1992-07-15","tags":{"hobby":"Running simulations","holidays":[{"year":2023,"location":"Kyoto, Japan"}],"car":{"model":"Audi e-tron","year":2022}}}
1 row in set. Elapsed: 0.440 sec.
```
```sql
SELECT *
FROM people
FORMAT Vertical
Row 1:
──────
id: 2
name: Analytica Rowe
username: Analytica
email:
address: [('Dataford',(40.7128,-74.006),'Maple Avenue','Apt. 402','11223-4567')]
phone_numbers: ['123-456-7890','555-867-5309']
website: fastdata.io
company: ('Streamlined analytics at scale','FastData Inc.','{"dissolved":"2023","employees":"10","founded":"2019","type":["real-time processing"]}')
dob: 1992-07-15
tags: {"hobby":"Running simulations","holidays":[{"year":2023,"location":"Kyoto, Japan"}],"car":{"model":"Audi e-tron","year":2022}}
Row 2:
──────
id: 1
name: Clicky McCliickHouse
username: Clicky
email: clicky@clickhouse.com
address: [('Wisokyburgh',(-43.9509,-34.4618),'Victor Plains','Suite 879','90566-7771')]
phone_numbers: ['010-692-6593','020-192-3333']
website: clickhouse.com
company: ('The real-time data warehouse for analytics','ClickHouse','{"employees":"250","founded":"2021","type":"database systems"}')
dob: 2007-03-31
tags: {"hobby":"Databases","holidays":[{"year":2024,"location":"Azores, Portugal"}],"car":{"model":"Tesla","year":2023}}
2 rows in set. Elapsed: 0.005 sec.
```
Introspection functions can be used to determine the inferred paths and types for the `company.labels` column.
```sql
SELECT JSONDynamicPathsWithTypes(company.labels) AS paths
FROM people
FORMAT PrettyJsonEachRow
{
"paths": {
"dissolved": "Int64",
"employees": "Int64",
"founded": "Int64",
"type": "Array(Nullable(String))"
}
}
{
"paths": {
"employees": "Int64",
"founded": "String",
"type": "String"
}
}
2 rows in set. Elapsed: 0.003 sec.
```
## Using type hints and skipping paths {#using-type-hints-and-skipping-paths}
Type hints allow us to specify the type for a path and its sub-column, preventing unnecessary type inference. Consider the following example, where we specify the types for the JSON keys `dissolved`, `employees`, and `founded` within the JSON column `company.labels`:
```sql
CREATE TABLE people
(
    `id` Int64,
    `name` String,
    `username` String,
    `email` String,
    `address` Array(Tuple(
        city String,
        geo Tuple(
            lat Float32,
            lng Float32),
        street String,
        suite String,
        zipcode String)),
    `phone_numbers` Array(String),
    `website` String,
    `company` Tuple(
        catchPhrase String,
        name String,
        labels JSON(dissolved UInt16, employees UInt16, founded UInt16)),
    `dob` Date,
    `tags` String
)
ENGINE = MergeTree
ORDER BY username
```
```sql
INSERT INTO people FORMAT JSONEachRow
{"id":1,"name":"Clicky McCliickHouse","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","suite":"Suite 879","city":"Wisokyburgh","zipcode":"90566-7771","geo":{"lat":-43.9509,"lng":-34.4618}}],"phone_numbers":["010-692-6593","020-192-3333"],"website":"clickhouse.com","company":{"name":"ClickHouse","catchPhrase":"The real-time data warehouse for analytics","labels":{"type":"database systems","founded":"2021","employees":250}},"dob":"2007-03-31","tags":{"hobby":"Databases","holidays":[{"year":2024,"location":"Azores, Portugal"}],"car":{"model":"Tesla","year":2023}}}
1 row in set. Elapsed: 0.450 sec.
INSERT INTO people FORMAT JSONEachRow
{"id":2,"name":"Analytica Rowe","username":"Analytica","address":[{"street":"Maple Avenue","suite":"Apt. 402","city":"Dataford","zipcode":"11223-4567","geo":{"lat":40.7128,"lng":-74.006}}],"phone_numbers":["123-456-7890","555-867-5309"],"website":"fastdata.io","company":{"name":"FastData Inc.","catchPhrase":"Streamlined analytics at scale","labels":{"type":["real-time processing"],"founded":2019,"dissolved":2023,"employees":10}},"dob":"1992-07-15","tags":{"hobby":"Running simulations","holidays":[{"year":2023,"location":"Kyoto, Japan"}],"car":{"model":"Audi e-tron","year":2022}}}
1 row in set. Elapsed: 0.440 sec.
```
Notice how these columns now have our explicit types:
```sql
SELECT JSONAllPathsWithTypes(company.labels) AS paths
FROM people
FORMAT PrettyJsonEachRow
{
"paths": {
"dissolved": "UInt16",
"employees": "UInt16",
"founded": "UInt16",
"type": "String"
}
}
{
"paths": {
"dissolved": "UInt16",
"employees": "UInt16",
"founded": "UInt16",
"type": "Array(Nullable(String))"
}
}
2 rows in set. Elapsed: 0.003 sec.
```
Additionally, we can skip paths within the JSON that we don't want to store using the `SKIP` and `SKIP REGEXP` parameters, in order to minimize storage and avoid unnecessary inference on unneeded paths. For example, suppose we use a single JSON column for the above data. We can skip the `address` and `company` paths:
```sql
CREATE TABLE people
(
    `json` JSON(username String, SKIP address, SKIP company)
)
ENGINE = MergeTree
ORDER BY json.username
INSERT INTO people FORMAT JSONAsObject
{"id":1,"name":"Clicky McCliickHouse","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","suite":"Suite 879","city":"Wisokyburgh","zipcode":"90566-7771","geo":{"lat":-43.9509,"lng":-34.4618}}],"phone_numbers":["010-692-6593","020-192-3333"],"website":"clickhouse.com","company":{"name":"ClickHouse","catchPhrase":"The real-time data warehouse for analytics","labels":{"type":"database systems","founded":"2021","employees":250}},"dob":"2007-03-31","tags":{"hobby":"Databases","holidays":[{"year":2024,"location":"Azores, Portugal"}],"car":{"model":"Tesla","year":2023}}}
1 row in set. Elapsed: 0.450 sec.
INSERT INTO people FORMAT JSONAsObject
{"id":2,"name":"Analytica Rowe","username":"Analytica","address":[{"street":"Maple Avenue","suite":"Apt. 402","city":"Dataford","zipcode":"11223-4567","geo":{"lat":40.7128,"lng":-74.006}}],"phone_numbers":["123-456-7890","555-867-5309"],"website":"fastdata.io","company":{"name":"FastData Inc.","catchPhrase":"Streamlined analytics at scale","labels":{"type":["real-time processing"],"founded":2019,"dissolved":2023,"employees":10}},"dob":"1992-07-15","tags":{"hobby":"Running simulations","holidays":[{"year":2023,"location":"Kyoto, Japan"}],"car":{"model":"Audi e-tron","year":2022}}}
1 row in set. Elapsed: 0.440 sec.
```
Note how our columns have been excluded from our data:
```sql
SELECT *
FROM people
FORMAT PrettyJSONEachRow
{
"json": {
"dob" : "1992-07-15",
"id" : "2",
"name" : "Analytica Rowe",
"phone_numbers" : [
"123-456-7890",
"555-867-5309"
],
"tags" : {
"car" : {
"model" : "Audi e-tron",
"year" : "2022"
},
"hobby" : "Running simulations",
"holidays" : [
{
"location" : "Kyoto, Japan",
"year" : "2023"
}
]
},
"username" : "Analytica",
"website" : "fastdata.io"
}
}
{
"json": {
"dob" : "2007-03-31",
"email" : "clicky@clickhouse.com",
"id" : "1",
"name" : "Clicky McCliickHouse",
"phone_numbers" : [
"010-692-6593",
"020-192-3333"
],
"tags" : {
"car" : {
"model" : "Tesla",
"year" : "2023"
},
"hobby" : "Databases",
"holidays" : [
{
"location" : "Azores, Portugal",
"year" : "2024"
}
]
},
"username" : "Clicky",
"website" : "clickhouse.com"
}
}
2 rows in set. Elapsed: 0.004 sec.
```
## Optimizing performance with type hints {#optimizing-performance-with-type-hints}
Type hints offer more than just a way to avoid unnecessary type inference: they eliminate storage and processing indirection entirely, as well as allowing optimal primitive types to be specified. JSON paths with type hints are always stored just like traditional columns, bypassing the need for discriminator columns or dynamic resolution during query time.
This means that with well-defined type hints, nested JSON keys achieve the same performance and efficiency as if they were modeled as top-level columns from the outset.
As a result, for datasets that are mostly consistent but still benefit from the flexibility of JSON, type hints provide a convenient way to preserve performance without needing to restructure your schema or ingest pipeline.
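As a minimal sketch (assuming the type-hinted `people` table defined above), a hinted path can be filtered like any conventional column, with no dynamic type resolution involved:

```sql
-- company.labels.employees is hinted as UInt16, so this predicate
-- runs against a plain UInt16 column file
SELECT count()
FROM people
WHERE company.labels.employees > 100;
```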
## Configuring dynamic paths {#configuring-dynamic-paths}
ClickHouse stores each JSON path as a subcolumn in a true columnar layout, enabling the same performance benefits seen with traditional columns, such as compression, SIMD-accelerated processing, and minimal disk I/O. Each unique path and type combination in your JSON data can become its own column file on disk.
For example, when two JSON paths are inserted with differing types, ClickHouse stores the values of each concrete type in distinct sub-columns. These sub-columns can be accessed independently, minimizing unnecessary I/O. Note that when querying a column with multiple types, its values are still returned as a single columnar response.
Additionally, by leveraging offsets, ClickHouse ensures that these sub-columns remain dense, with no default values stored for absent JSON paths. This approach maximizes compression and further reduces I/O.
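As a sketch (assuming the earlier `people` table whose single `json` column stores the full document, with no skipped paths), the concrete type behind each value of a multi-type path can be inspected with `dynamicType`:

```sql
-- Each value reports the sub-column type it was stored under,
-- e.g. String for one row and Array(Nullable(String)) for another
SELECT
    json.company.labels.type,
    dynamicType(json.company.labels.type) AS stored_type
FROM people;
```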
However, in scenarios with high-cardinality or highly variable JSON structures, such as telemetry pipelines, logs, or machine-learning feature stores, this behavior can lead to an explosion of column files. Each new unique JSON path results in a new column file, and each type variant under that path results in an additional column file. While this is optimal for read performance, it introduces operational challenges: file descriptor exhaustion, increased memory usage, and slower merges due to a high number of small files.
To mitigate this, ClickHouse introduces the concept of an overflow subcolumn: once the number of distinct JSON paths exceeds a threshold, additional paths are stored in a single shared file using a compact encoded format. This file is still queryable but does not benefit from the same performance characteristics as dedicated subcolumns.
This threshold is controlled by the `max_dynamic_paths` parameter in the JSON type declaration.
```sql
CREATE TABLE logs
(
    payload JSON(max_dynamic_paths = 500)
)
ENGINE = MergeTree
ORDER BY tuple();
```
**Avoid setting this parameter too high**: large values increase resource consumption and reduce efficiency. As a rule of thumb, keep it below 10,000. For workloads with highly dynamic structures, use type hints and `SKIP` parameters to restrict what's stored.
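A sketch combining these controls (the `telemetry` table and its paths are hypothetical):

```sql
CREATE TABLE telemetry
(
    payload JSON(
        max_dynamic_paths = 500,
        -- pin the hot, well-known paths with type hints
        timestamp DateTime,
        service String,
        -- drop noisy paths entirely
        SKIP debug,
        SKIP REGEXP '^internal\\.'
    )
)
ENGINE = MergeTree
ORDER BY tuple();
```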
For users curious about the implementation of this column type, we recommend reading our detailed blog post "A New Powerful JSON Data Type for ClickHouse".
---
sidebar_label: 'Overview'
sidebar_position: 10
title: 'Working with JSON'
slug: /integrations/data-formats/json/overview
description: 'Working with JSON in ClickHouse'
keywords: ['json', 'clickhouse']
score: 10
doc_type: 'guide'
---

# JSON Overview
ClickHouse provides several approaches for handling JSON, each with its respective pros and cons and usage. In this guide, we will cover how to load JSON and design your schema optimally. This guide consists of the following sections:

- **Loading JSON** - Loading and querying structured and semi-structured JSON in ClickHouse with simple schemas.
- **JSON schema inference** - Using JSON schema inference to query JSON and create table schemas.
- **Designing JSON schema** - Steps to design and optimize your JSON schema.
- **Exporting JSON** - How to export JSON.
- **Handling other JSON formats** - Some tips on handling JSON formats other than newline-delimited (NDJSON).
- **Other approaches to modeling JSON** - Legacy approaches to modeling JSON. Not recommended.
---
sidebar_label: 'Loading JSON'
sidebar_position: 20
title: 'Working with JSON'
slug: /integrations/data-formats/json/loading
description: 'Loading JSON'
keywords: ['json', 'clickhouse', 'inserting', 'loading']
score: 15
doc_type: 'guide'
---

## Loading JSON {#loading-json}
The following provides a simple example of loading structured and semi-structured JSON data. For more complex JSON, including nested structures, see the guide Designing JSON schema.
## Loading structured JSON {#loading-structured-json}
In this section, we assume the JSON data is in NDJSON (Newline Delimited JSON) format, known as `JSONEachRow` in ClickHouse, and well structured, i.e. the column names and types are fixed. NDJSON is the preferred format for loading JSON due to its brevity and efficient use of space, but others are supported for both input and output.

Consider the following JSON sample, representing a row from the Python PyPI dataset:
```json
{
  "date": "2022-11-15",
  "country_code": "ES",
  "project": "clickhouse-connect",
  "type": "bdist_wheel",
  "installer": "pip",
  "python_minor": "3.9",
  "system": "Linux",
  "version": "0.3.0"
}
```
In order to load this JSON object into ClickHouse, a table schema must be defined. In this simple case, our structure is static, our column names are known, and their types are well-defined.

While ClickHouse supports semi-structured data through a JSON type, where key names and their types can be dynamic, this is unnecessary here.
:::note Prefer static schemas where possible
In cases where your columns have fixed names and types, and new columns are not expected, always prefer a statically defined schema in production.
The JSON type is preferred for highly dynamic data, where the names and types of columns are subject to change. This type is also useful in prototyping and data exploration.
:::
A simple schema for this is shown below, where JSON keys are mapped to column names:
```sql
CREATE TABLE pypi (
    `date` Date,
    `country_code` String,
    `project` String,
    `type` String,
    `installer` String,
    `python_minor` String,
    `system` String,
    `version` String
)
ENGINE = MergeTree
ORDER BY (project, date)
```
:::note Ordering keys
We have selected an ordering key here via the `ORDER BY` clause. For further details on ordering keys and how to choose them, see here.
:::
ClickHouse can load JSON data in several formats, automatically inferring the format from the file extension and contents. We can read JSON files for the above table using the S3 function:
```sql
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/json/*.json.gz')
LIMIT 1
┌───────date─┬─country_code─┬─project────────────┬─type────────┬─installer────┬─python_minor─┬─system─┬─version─┐
│ 2022-11-15 │ CN           │ clickhouse-connect │ bdist_wheel │ bandersnatch │              │        │ 0.2.8   │
└────────────┴──────────────┴────────────────────┴─────────────┴──────────────┴──────────────┴────────┴─────────┘
1 row in set. Elapsed: 1.232 sec.
```
Note how we are not required to specify the file format. Instead, we use a glob pattern to read all `*.json.gz` files in the bucket. ClickHouse automatically infers the format is `JSONEachRow` (NDJSON) from the file extension and contents. A format can be manually specified through parameter functions in case ClickHouse is unable to detect it.
```sql
SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/json/*.json.gz', JSONEachRow)
```
:::note Compressed files
The above files are also compressed. This is automatically detected and handled by ClickHouse.
:::
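If detection were ever insufficient, the optional format, structure, and compression arguments of the `s3` table function can also be passed explicitly (a sketch; `'auto'` keeps structure inference enabled):

```sql
-- pass format, structure and compression rather than relying on auto-detection
SELECT count()
FROM s3(
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/json/*.json.gz',
    'JSONEachRow',
    'auto',
    'gzip'
);
```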
To load the rows in these files, we can use an `INSERT INTO SELECT`:
```sql
INSERT INTO pypi SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/json/*.json.gz')
Ok.
0 rows in set. Elapsed: 10.445 sec. Processed 19.49 million rows, 35.71 MB (1.87 million rows/s., 3.42 MB/s.)
SELECT * FROM pypi LIMIT 2
┌───────date─┬─country_code─┬─project────────────┬─type──┬─installer────┬─python_minor─┬─system─┬─version─┐
│ 2022-05-26 │ CN           │ clickhouse-connect │ sdist │ bandersnatch │              │        │ 0.0.7   │
│ 2022-05-26 │ CN           │ clickhouse-connect │ sdist │ bandersnatch │              │        │ 0.0.7   │
└────────────┴──────────────┴────────────────────┴───────┴──────────────┴──────────────┴────────┴─────────┘
2 rows in set. Elapsed: 0.005 sec. Processed 8.19 thousand rows, 908.03 KB (1.63 million rows/s., 180.38 MB/s.)
```
Rows can also be loaded inline using the `FORMAT` clause, e.g.
```sql
INSERT INTO pypi
FORMAT JSONEachRow
{"date":"2022-11-15","country_code":"CN","project":"clickhouse-connect","type":"bdist_wheel","installer":"bandersnatch","python_minor":"","system":"","version":"0.2.8"}
```
These examples assume the use of the `JSONEachRow` format. Other common JSON formats are supported, with examples of loading these provided here.
## Loading semi-structured JSON {#loading-semi-structured-json}
Our previous example loaded JSON which was static, with well-known key names and types. This is often not the case: keys can be added, or their types can change. This is common in use cases such as Observability data.

ClickHouse handles this through a dedicated `JSON` type.
Consider the following example from an extended version of the above Python PyPI dataset. Here we have added an arbitrary `tags` column with random key-value pairs.
```json
{
  "date": "2022-09-22",
  "country_code": "IN",
  "project": "clickhouse-connect",
  "type": "bdist_wheel",
  "installer": "bandersnatch",
  "python_minor": "",
  "system": "",
  "version": "0.2.8",
  "tags": {
    "5gTux": "f3to PMvaTYZsz! rtzX1",
    "nD8CV": "value"
  }
}
```
The `tags` column here is unpredictable and thus impossible for us to model. To load this data, we can use our previous schema but provide an additional `tags` column of type `JSON`:
```sql
SET enable_json_type = 1;

CREATE TABLE pypi_with_tags
(
    `date` Date,
    `country_code` String,
    `project` String,
    `type` String,
    `installer` String,
    `python_minor` String,
    `system` String,
    `version` String,
    `tags` JSON
)
ENGINE = MergeTree
ORDER BY (project, date);
```
We populate the table using the same approach as for the original dataset:
```sql
INSERT INTO pypi_with_tags SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/pypi_with_tags/sample.json.gz')
Ok.
0 rows in set. Elapsed: 255.679 sec. Processed 1.00 million rows, 29.00 MB (3.91 thousand rows/s., 113.43 KB/s.)
Peak memory usage: 2.00 GiB.
SELECT *
FROM pypi_with_tags
LIMIT 2
┌───────date─┬─country_code─┬─project────────────┬─type──┬─installer────┬─python_minor─┬─system─┬─version─┬─tags─────────────────────────────────────────────────────┐
│ 2022-05-26 │ CN           │ clickhouse-connect │ sdist │ bandersnatch │              │        │ 0.0.7   │ {"nsBM":"5194603446944555691"}                           │
│ 2022-05-26 │ CN           │ clickhouse-connect │ sdist │ bandersnatch │              │        │ 0.0.7   │ {"4zD5MYQz4JkP1QqsJIS":"0","name":"8881321089124243208"} │
└────────────┴──────────────┴────────────────────┴───────┴──────────────┴──────────────┴────────┴─────────┴──────────────────────────────────────────────────────────┘
2 rows in set. Elapsed: 0.149 sec.
```
Notice the performance difference here on loading data. The JSON column requires type inference at insert time, as well as additional storage if columns exist that have more than one type. Although the JSON type can be configured (see Designing JSON schema) for equivalent performance to explicitly declaring columns, it is intentionally flexible out-of-the-box. This flexibility, however, comes at some cost.
## When to use the JSON type {#when-to-use-the-json-type}
Use the JSON type when your data:

- Has **unpredictable keys** that can change over time.
- Contains **values with varying types** (e.g., a path might sometimes contain a string, sometimes a number).
- Requires schema flexibility where strict typing isn't viable.
If your data structure is known and consistent, there is rarely a need for the JSON type, even if your data is in JSON format. Specifically, if your data has:

- **A flat structure with known keys**: use standard column types e.g. String.
- **Predictable nesting**: use Tuple, Array, or Nested types for these structures.
- **Predictable structure with varying types**: consider Dynamic or Variant types instead.

You can also mix approaches, as we have done in the above example, using static columns for predictable top-level keys and a single JSON column for a dynamic section of the payload.
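For example, a path that sometimes holds a string and sometimes an integer could be declared explicitly with Variant instead of JSON (a sketch; the `events` table is hypothetical, and older ClickHouse versions require enabling the Variant type first):

```sql
SET allow_experimental_variant_type = 1;

CREATE TABLE events
(
    id UInt64,
    -- one declared column accepts either type, no JSON inference needed
    value Variant(String, Int64)
)
ENGINE = MergeTree
ORDER BY id;
```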
---
title: 'Other JSON approaches'
slug: /integrations/data-formats/json/other-approaches
description: 'Other approaches to modeling JSON'
keywords: ['json', 'formats']
doc_type: 'reference'
---

# Other approaches to modeling JSON
The following are alternatives to modeling JSON in ClickHouse. These are documented for completeness; they predate the development of the JSON type and are thus generally not recommended in most use cases.
:::note Apply an object-level approach
Different techniques may be applied to different objects in the same schema. For example, some objects can be best solved with a `String` type and others with a `Map` type. Note that once a `String` type is used, no further schema decisions need to be made. Conversely, it is possible to nest sub-objects within a `Map` key, including a `String` representing JSON, as we show below.
:::
## Using the String type {#using-string}
If the objects are highly dynamic, with no predictable structure, and contain arbitrary nested objects, users should use the `String` type. Values can be extracted at query time using JSON functions, as we show below.

Handling data using the structured approach described above is often not viable for those users with dynamic JSON, which is either subject to change or for which the schema is not well understood. For absolute flexibility, users can simply store JSON as `String`s before using functions to extract fields as required. This represents the extreme opposite of handling JSON as a structured object. This flexibility incurs significant disadvantages, primarily an increase in query syntax complexity as well as degraded performance.
As noted earlier, for the original person object, we cannot ensure the structure of the `tags` column. We insert the original row (including `company.labels`, which we ignore for now), declaring the `tags` column as a `String`:
```sql
CREATE TABLE people
(
    `id` Int64,
    `name` String,
    `username` String,
    `email` String,
    `address` Array(Tuple(city String, geo Tuple(lat Float32, lng Float32), street String, suite String, zipcode String)),
    `phone_numbers` Array(String),
    `website` String,
    `company` Tuple(catchPhrase String, name String),
    `dob` Date,
    `tags` String
)
ENGINE = MergeTree
ORDER BY username
INSERT INTO people FORMAT JSONEachRow
{"id":1,"name":"Clicky McCliickHouse","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","suite":"Suite 879","city":"Wisokyburgh","zipcode":"90566-7771","geo":{"lat":-43.9509,"lng":-34.4618}}],"phone_numbers":["010-692-6593","020-192-3333"],"website":"clickhouse.com","company":{"name":"ClickHouse","catchPhrase":"The real-time data warehouse for analytics","labels":{"type":"database systems","founded":"2021"}},"dob":"2007-03-31","tags":{"hobby":"Databases","holidays":[{"year":2024,"location":"Azores, Portugal"}],"car":{"model":"Tesla","year":2023}}}
Ok.
1 row in set. Elapsed: 0.002 sec.
```
We can select the `tags` column and see that the JSON has been inserted as a string:
```sql
SELECT tags
FROM people
┌─tags───────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ {"hobby":"Databases","holidays":[{"year":2024,"location":"Azores, Portugal"}],"car":{"model":"Tesla","year":2023}} │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
1 row in set. Elapsed: 0.001 sec.
```
The `JSONExtract` functions can be used to retrieve values from this JSON. Consider the simple example below:
```sql
SELECT JSONExtractString(tags, 'holidays') AS holidays FROM people
ββholidaysβββββββββββββββββββββββββββββββββββββββ
β [{"year":2024,"location":"Azores, Portugal"}] β
βββββββββββββββββββββββββββββββββββββββββββββββββ
1 row in set. Elapsed: 0.002 sec.
```
Notice how the functions require both a reference to the `String` column `tags` and a path in the JSON to extract. Nested paths require functions to be nested e.g. `JSONExtractUInt(JSONExtractString(tags, 'car'), 'year')`, which extracts the column `tags.car.year`. The extraction of nested paths can be simplified through the functions `JSON_QUERY` and `JSON_VALUE`.
Consider the extreme case with the `arxiv` dataset where we consider the entire body to be a `String`.

```sql
CREATE TABLE arxiv (
  body String
)
ENGINE = MergeTree ORDER BY ()
```
To insert into this schema, we need to use the `JSONAsString` format:
```sql
INSERT INTO arxiv SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/arxiv/arxiv.json.gz', 'JSONAsString')
0 rows in set. Elapsed: 25.186 sec. Processed 2.52 million rows, 1.38 GB (99.89 thousand rows/s., 54.79 MB/s.)
```
Suppose we wish to count the number of papers released by year. Contrast the following query using only a string versus the structured version of the schema:
```sql
-- using structured schema
SELECT
toYear(parseDateTimeBestEffort(versions.created[1])) AS published_year,
count() AS c
FROM arxiv_v2
GROUP BY published_year
ORDER BY published_year ASC
LIMIT 10
ββpublished_yearββ¬βββββcββ
β 1986 β 1 β
β 1988 β 1 β
β 1989 β 6 β
β 1990 β 26 β
β 1991 β 353 β
β 1992 β 3190 β
β 1993 β 6729 β
β 1994 β 10078 β
β 1995 β 13006 β
β 1996 β 15872 β
ββββββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 0.264 sec. Processed 2.31 million rows, 153.57 MB (8.75 million rows/s., 582.58 MB/s.)
-- using unstructured String
SELECT
toYear(parseDateTimeBestEffort(JSON_VALUE(body, '$.versions[0].created'))) AS published_year,
count() AS c
FROM arxiv
GROUP BY published_year
ORDER BY published_year ASC
LIMIT 10
ββpublished_yearββ¬βββββcββ
β 1986 β 1 β
β 1988 β 1 β
β 1989 β 6 β
β 1990 β 26 β
β 1991 β 353 β
β 1992 β 3190 β
β 1993 β 6729 β
β 1994 β 10078 β
β 1995 β 13006 β
β 1996 β 15872 β
ββββββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 1.281 sec. Processed 2.49 million rows, 4.22 GB (1.94 million rows/s., 3.29 GB/s.)
Peak memory usage: 205.98 MiB.
```
Notice the use of a JSONPath expression here to select the required field, i.e. `JSON_VALUE(body, '$.versions[0].created')`.
String functions are appreciably slower (> 10x) than explicit type conversions with indices. The above queries always require a full table scan and processing of every row. While these queries will still be fast on a small dataset such as this, performance will degrade on larger datasets.
This approach's flexibility comes at a clear performance and syntax cost, and it should be used only for highly dynamic objects in the schema.
Simple JSON functions {#simple-json-functions}
The above examples use the JSON* family of functions. These utilize a full JSON parser based on simdjson, which is rigorous in its parsing and will distinguish between the same field nested at different levels. These functions are able to deal with JSON that is syntactically correct but not well-formatted, e.g. double spaces between keys.
A faster and more strict set of functions is available. These `simpleJSON*` functions offer potentially superior performance, primarily by making strict assumptions as to the structure and format of the JSON. Specifically:

- Field names must be constants
- Consistent encoding of field names e.g. `simpleJSONHas('{"abc":"def"}', 'abc') = 1`, but `visitParamHas('{"\\u0061\\u0062\\u0063":"def"}', 'abc') = 0`
- The field names are unique across all nested structures. No differentiation is made between nesting levels and matching is indiscriminate. In the event of multiple matching fields, the first occurrence is used.
- No special characters outside of string literals. This includes spaces. The following is invalid and will not parse.

```json
{"@timestamp": 893964617, "clientip": "40.135.0.0", "request": {"method": "GET",
"path": "/images/hm_bg.jpg", "version": "HTTP/1.0"}, "status": 200, "size": 24736}
```
Whereas, the following will parse correctly:

```json
{"@timestamp":893964617,"clientip":"40.135.0.0","request":{"method":"GET",
"path":"/images/hm_bg.jpg","version":"HTTP/1.0"},"status":200,"size":24736}
```

In some circumstances, where performance is critical and your JSON meets the above requirements, these may be appropriate. An example of the earlier query, re-written to use `simpleJSON*` functions, is shown below:

```sql
SELECT
toYear(parseDateTimeBestEffort(simpleJSONExtractString(simpleJSONExtractRaw(body, 'versions'), 'created'))) AS published_year,
count() AS c
FROM arxiv
GROUP BY published_year
ORDER BY published_year ASC
LIMIT 10
ββpublished_yearββ¬βββββcββ
β 1986 β 1 β
β 1988 β 1 β
β 1989 β 6 β
β 1990 β 26 β
β 1991 β 353 β
β 1992 β 3190 β
β 1993 β 6729 β
β 1994 β 10078 β
β 1995 β 13006 β
β 1996 β 15872 β
ββββββββββββββββββ΄ββββββββ
10 rows in set. Elapsed: 0.964 sec. Processed 2.48 million rows, 4.21 GB (2.58 million rows/s., 4.36 GB/s.)
Peak memory usage: 211.49 MiB.
```
The above query uses `simpleJSONExtractString` to extract the `created` key, exploiting the fact that we want only the first value for the published date. In this case, the limitations of the `simpleJSON*` functions are acceptable for the gain in performance.
Using the Map type {#using-map}
If the object is used to store arbitrary keys, mostly of one type, consider using the `Map` type. Ideally, the number of unique keys should not exceed several hundred. The `Map` type can also be considered for objects with sub-objects, provided the latter have uniformity in their types. Generally, we recommend the `Map` type be used for labels and tags, e.g. Kubernetes pod labels in log data.

Although `Map`s give a simple way to represent nested structures, they have some notable limitations:

- The fields must all be of the same type.
- Accessing sub-columns requires a special map syntax since the fields don't exist as columns. The entire object *is* a column.
- Accessing a subcolumn loads the entire `Map` value i.e. all siblings and their respective values. For larger maps, this can result in a significant performance penalty.
:::note String keys
When modelling objects as `Map`s, a `String` key is used to store the JSON key name. The map will therefore always be `Map(String, T)`, where `T` depends on the data.
:::
Primitive values {#primitive-values}
The simplest application of a `Map` is when the object contains the same primitive type as values. In most cases, this involves using the `String` type for the value `T`.

Consider our earlier person JSON where the `company.labels` object was determined to be dynamic. Importantly, we only expect key-value pairs of type String to be added to this object. We can thus declare this as `Map(String, String)`:
```sql
CREATE TABLE people
(
    `id` Int64,
    `name` String,
    `username` String,
    `email` String,
    `address` Array(Tuple(city String, geo Tuple(lat Float32, lng Float32), street String, suite String, zipcode String)),
    `phone_numbers` Array(String),
    `website` String,
    `company` Tuple(catchPhrase String, name String, labels Map(String,String)),
    `dob` Date,
    `tags` String
)
ENGINE = MergeTree
ORDER BY username
```
We can insert our original complete JSON object:
```sql
INSERT INTO people FORMAT JSONEachRow
{"id":1,"name":"Clicky McCliickHouse","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","suite":"Suite 879","city":"Wisokyburgh","zipcode":"90566-7771","geo":{"lat":-43.9509,"lng":-34.4618}}],"phone_numbers":["010-692-6593","020-192-3333"],"website":"clickhouse.com","company":{"name":"ClickHouse","catchPhrase":"The real-time data warehouse for analytics","labels":{"type":"database systems","founded":"2021"}},"dob":"2007-03-31","tags":{"hobby":"Databases","holidays":[{"year":2024,"location":"Azores, Portugal"}],"car":{"model":"Tesla","year":2023}}}
Ok.
1 row in set. Elapsed: 0.002 sec.
```
Querying these fields within the `labels` object requires the map syntax e.g.:
```sql
SELECT company.labels FROM people
ββcompany.labelsββββββββββββββββββββββββββββββββ
β {'type':'database systems','founded':'2021'} β
ββββββββββββββββββββββββββββββββββββββββββββββββ
1 row in set. Elapsed: 0.001 sec.
SELECT company.labels['type'] AS type FROM people
ββtypeββββββββββββββ
β database systems β
ββββββββββββββββββββ
1 row in set. Elapsed: 0.001 sec.
```
A full set of `Map` functions is available to query this type, described here. If your data is not of a consistent type, functions exist to perform the necessary type coercion.
Object values {#object-values}
The `Map` type can also be considered for objects which have sub-objects, provided the latter have consistency in their types.

Suppose the `tags` key for our `persons` object requires a consistent structure, where the sub-object for each tag has a `name` and `time` column. A simplified example of such a JSON document might look like the following:
```json
{
  "id": 1,
  "name": "Clicky McCliickHouse",
  "username": "Clicky",
  "email": "clicky@clickhouse.com",
  "tags": {
    "hobby": {
      "name": "Diving",
      "time": "2024-07-11 14:18:01"
    },
    "car": {
      "name": "Tesla",
      "time": "2024-07-11 15:18:23"
    }
  }
}
```
This can be modelled with a `Map(String, Tuple(name String, time DateTime))` as shown below:
```sql
CREATE TABLE people
(
    `id` Int64,
    `name` String,
    `username` String,
    `email` String,
    `tags` Map(String, Tuple(name String, time DateTime))
)
ENGINE = MergeTree
ORDER BY username

INSERT INTO people FORMAT JSONEachRow
{"id":1,"name":"Clicky McCliickHouse","username":"Clicky","email":"clicky@clickhouse.com","tags":{"hobby":{"name":"Diving","time":"2024-07-11 14:18:01"},"car":{"name":"Tesla","time":"2024-07-11 15:18:23"}}}

Ok.

1 row in set. Elapsed: 0.002 sec.

SELECT tags['hobby'] AS hobby
FROM people
FORMAT JSONEachRow

{"hobby":{"name":"Diving","time":"2024-07-11 14:18:01"}}

1 row in set. Elapsed: 0.001 sec.
```
The application of maps in this case is typically rare, and suggests that the data should be remodelled such that dynamic key names do not have sub-objects. For example, the above could be remodelled as follows, allowing the use of `Array(Tuple(key String, name String, time DateTime))`.
```json
{
  "id": 1,
  "name": "Clicky McCliickHouse",
  "username": "Clicky",
  "email": "clicky@clickhouse.com",
  "tags": [
    {
      "key": "hobby",
      "name": "Diving",
      "time": "2024-07-11 14:18:01"
    },
    {
      "key": "car",
      "name": "Tesla",
      "time": "2024-07-11 15:18:23"
    }
  ]
}
```
Using the Nested type {#using-nested}
The Nested type can be used to model static objects which are rarely subject to change, offering an alternative to `Tuple` and `Array(Tuple)`. We generally recommend avoiding using this type for JSON as its behavior is often confusing. The primary benefit of `Nested` is that sub-columns can be used in ordering keys.

Below, we provide an example of using the Nested type to model a static object. Consider the following simple log entry in JSON:
```json
{
  "timestamp": 897819077,
  "clientip": "45.212.12.0",
  "request": {
    "method": "GET",
    "path": "/french/images/hm_nav_bar.gif",
    "version": "HTTP/1.0"
  },
  "status": 200,
  "size": 3305
}
```
We can declare the `request` key as `Nested`. Similar to `Tuple`, we are required to specify the sub columns.

```sql
-- default
SET flatten_nested=1

CREATE table http
(
   timestamp Int32,
   clientip IPv4,
   request Nested(method LowCardinality(String), path String, version LowCardinality(String)),
   status UInt16,
   size UInt32
) ENGINE = MergeTree() ORDER BY (status, timestamp);
```
flatten_nested {#flatten_nested}
The setting `flatten_nested` controls the behavior of `Nested`.
flatten_nested=1 {#flatten_nested1}
A value of `1` (the default) does not support an arbitrary level of nesting. With this value, it is easiest to think of a nested data structure as multiple `Array` columns of the same length. The fields `method`, `path`, and `version` are all separate `Array(Type)` columns in effect, with one critical constraint: **the length of the `method`, `path`, and `version` fields must be the same.** If we use `SHOW CREATE TABLE`, this is illustrated:
```sql
SHOW CREATE TABLE http

CREATE TABLE http
(
    `timestamp` Int32,
    `clientip` IPv4,
    `request.method` Array(LowCardinality(String)),
    `request.path` Array(String),
    `request.version` Array(LowCardinality(String)),
    `status` UInt16,
    `size` UInt32
)
ENGINE = MergeTree
ORDER BY (status, timestamp)
```
Below, we insert into this table:
```sql
SET input_format_import_nested_json = 1;

INSERT INTO http
FORMAT JSONEachRow
{"timestamp":897819077,"clientip":"45.212.12.0","request":[{"method":"GET","path":"/french/images/hm_nav_bar.gif","version":"HTTP/1.0"}],"status":200,"size":3305}
```

A few important points to note here:

- We need to use the setting `input_format_import_nested_json` to insert the JSON as a nested structure. Without this, we are required to flatten the JSON i.e.

```sql
INSERT INTO http FORMAT JSONEachRow
{"timestamp":897819077,"clientip":"45.212.12.0","request":{"method":["GET"],"path":["/french/images/hm_nav_bar.gif"],"version":["HTTP/1.0"]},"status":200,"size":3305}
```
- The nested fields `method`, `path`, and `version` need to be passed as JSON arrays i.e.

```json
{
  "@timestamp": 897819077,
  "clientip": "45.212.12.0",
  "request": {
    "method": [
      "GET"
    ],
    "path": [
      "/french/images/hm_nav_bar.gif"
    ],
    "version": [
      "HTTP/1.0"
    ]
  },
  "status": 200,
  "size": 3305
}
```
Columns can be queried using a dot notation:
```sql
SELECT clientip, status, size, `request.method` FROM http WHERE has(request.method, 'GET');

ββclientipβββββ¬βstatusββ¬βsizeββ¬βrequest.methodββ
β 45.212.12.0 β 200 β 3305 β ['GET'] β
βββββββββββββββ΄βββββββββ΄βββββββ΄βββββββββββββββββ

1 row in set. Elapsed: 0.002 sec.
```
Note the use of `Array` for the sub-columns means the full breadth of Array functions can potentially be exploited, including the `ARRAY JOIN` clause - useful if your columns have multiple values.
flatten_nested=0 {#flatten_nested0}
This allows an arbitrary level of nesting and means nested columns stay as a single array of `Tuple`s - effectively they become the same as `Array(Tuple)`.

This represents the preferred way, and often the simplest way, to use JSON with `Nested`. As we show below, it only requires all objects to be a list.

Below, we re-create our table and re-insert a row:
```sql
CREATE TABLE http
(
    `timestamp` Int32,
    `clientip` IPv4,
    `request` Nested(method LowCardinality(String), path String, version LowCardinality(String)),
    `status` UInt16,
    `size` UInt32
)
ENGINE = MergeTree
ORDER BY (status, timestamp)

SHOW CREATE TABLE http

-- note Nested type is preserved.
CREATE TABLE default.http
(
    `timestamp` Int32,
    `clientip` IPv4,
    `request` Nested(method LowCardinality(String), path String, version LowCardinality(String)),
    `status` UInt16,
    `size` UInt32
)
ENGINE = MergeTree
ORDER BY (status, timestamp)

INSERT INTO http
FORMAT JSONEachRow
{"timestamp":897819077,"clientip":"45.212.12.0","request":[{"method":"GET","path":"/french/images/hm_nav_bar.gif","version":"HTTP/1.0"}],"status":200,"size":3305}
```
A few important points to note here:

- `input_format_import_nested_json` is not required to insert.
- The `Nested` type is preserved in `SHOW CREATE TABLE`. Underneath, this column is effectively a `Array(Tuple(Nested(method LowCardinality(String), path String, version LowCardinality(String))))`.
- As a result, we are required to insert `request` as an array i.e.
```json
{
  "timestamp": 897819077,
  "clientip": "45.212.12.0",
  "request": [
    {
      "method": "GET",
      "path": "/french/images/hm_nav_bar.gif",
      "version": "HTTP/1.0"
    }
  ],
  "status": 200,
  "size": 3305
}
```
Columns can again be queried using a dot notation:
```sql
SELECT clientip, status, size, `request.method` FROM http WHERE has(request.method, 'GET');

ββclientipβββββ¬βstatusββ¬βsizeββ¬βrequest.methodββ
β 45.212.12.0 β 200 β 3305 β ['GET'] β
βββββββββββββββ΄βββββββββ΄βββββββ΄βββββββββββββββββ

1 row in set. Elapsed: 0.002 sec.
```
Example {#example}
A larger example of the above data is available in a public bucket in s3 at: `s3://datasets-documentation/http/`.
```sql
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/http/documents-01.ndjson.gz', 'JSONEachRow')
LIMIT 1
FORMAT PrettyJSONEachRow
{
"@timestamp": "893964617",
"clientip": "40.135.0.0",
"request": {
"method": "GET",
"path": "\/images\/hm_bg.jpg",
"version": "HTTP\/1.0"
},
"status": "200",
"size": "24736"
}
1 row in set. Elapsed: 0.312 sec.
```
Given the constraints and input format for the JSON, we insert this sample dataset using the following query. Here, we set `flatten_nested=0`.

The following statement inserts 10 million rows, so this may take a few minutes to execute. Apply a `LIMIT` if required:

```sql
INSERT INTO http
SELECT `@timestamp` AS `timestamp`, clientip, [request], status, size
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/http/documents-01.ndjson.gz', 'JSONEachRow');
```
Querying this data requires us to access the request fields as arrays. Below, we summarize the errors and http methods over a fixed time period.
```sql
SELECT status, request.method[1] AS method, count() AS c
FROM http
WHERE status >= 400
AND toDateTime(timestamp) BETWEEN '1998-01-01 00:00:00' AND '1998-06-01 00:00:00'
GROUP BY method, status
ORDER BY c DESC LIMIT 5;
ββstatusββ¬βmethodββ¬βββββcββ
β 404 β GET β 11267 β
β 404 β HEAD β 276 β
β 500 β GET β 160 β
β 500 β POST β 115 β
β 400 β GET β 81 β
ββββββββββ΄βββββββββ΄ββββββββ
5 rows in set. Elapsed: 0.007 sec.
```
Using pairwise arrays {#using-pairwise-arrays}
Pairwise arrays provide a balance between the flexibility of representing JSON as Strings and the performance of a more structured approach. The schema is flexible in that any new fields can be potentially added to the root. This, however, requires a significantly more complex query syntax and isn't compatible with nested structures.
As an example, consider the following table:
```sql
CREATE TABLE http_with_arrays (
    keys Array(String),
    values Array(String)
)
ENGINE = MergeTree ORDER BY tuple();
```
To insert into this table, we need to structure the JSON as a list of keys and values. The following query illustrates the use of `JSONExtractKeysAndValues` to achieve this:
```sql
SELECT
arrayMap(x -> (x.1), JSONExtractKeysAndValues(json, 'String')) AS keys,
arrayMap(x -> (x.2), JSONExtractKeysAndValues(json, 'String')) AS values
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/http/documents-01.ndjson.gz', 'JSONAsString')
LIMIT 1
FORMAT Vertical
Row 1:
ββββββ
keys: ['@timestamp','clientip','request','status','size']
values: ['893964617','40.135.0.0','{"method":"GET","path":"/images/hm_bg.jpg","version":"HTTP/1.0"}','200','24736']
1 row in set. Elapsed: 0.416 sec.
```
Note how the request column remains a nested structure represented as a string. We can insert any new keys to the root. We can also have arbitrary differences in the JSON itself. To insert into our local table, execute the following:
```sql
INSERT INTO http_with_arrays
SELECT
arrayMap(x -> (x.1), JSONExtractKeysAndValues(json, 'String')) AS keys,
arrayMap(x -> (x.2), JSONExtractKeysAndValues(json, 'String')) AS values
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/http/documents-01.ndjson.gz', 'JSONAsString')
0 rows in set. Elapsed: 12.121 sec. Processed 10.00 million rows, 107.30 MB (825.01 thousand rows/s., 8.85 MB/s.)
```
Querying this structure requires using the `indexOf` function to identify the index of the required key (which should be consistent with the order of the values). This can be used to access the values array column i.e. `values[indexOf(keys, 'status')]`. We still require a JSON parsing method for the request column - in this case, `simpleJSONExtractString`.
```sql
SELECT toUInt16(values[indexOf(keys, 'status')]) AS status,
simpleJSONExtractString(values[indexOf(keys, 'request')], 'method') AS method,
count() AS c
FROM http_with_arrays
WHERE status >= 400
AND toDateTime(values[indexOf(keys, '@timestamp')]) BETWEEN '1998-01-01 00:00:00' AND '1998-06-01 00:00:00'
GROUP BY method, status ORDER BY c DESC LIMIT 5;

ββstatusββ¬βmethodββ¬βββββcββ
β 404 β GET β 11267 β
β 404 β HEAD β 276 β
β 500 β GET β 160 β
β 500 β POST β 115 β
β 400 β GET β 81 β
ββββββββββ΄βββββββββ΄ββββββββ
5 rows in set. Elapsed: 0.383 sec. Processed 8.22 million rows, 1.97 GB (21.45 million rows/s., 5.15 GB/s.)
Peak memory usage: 51.35 MiB.
``` | {"source_file": "other.md"} | [
slug: /integrations/postgresql/inserting-data
title: 'How to insert data from PostgreSQL'
keywords: ['postgres', 'postgresql', 'inserts']
description: 'Page describing how to insert data from PostgreSQL using ClickPipes, PeerDB or the Postgres table function'
doc_type: 'guide'
We recommend reading this guide to learn best practices on inserting data to ClickHouse to optimize for insert performance.
For bulk loading data from PostgreSQL, users can use:

- ClickPipes, the managed integration service for ClickHouse Cloud.
- PeerDB by ClickHouse, an ETL tool specifically designed for PostgreSQL database replication to both self-hosted ClickHouse and ClickHouse Cloud.
- The Postgres table function to read data directly. This is typically appropriate if batch replication based on a known watermark, e.g. a timestamp, is sufficient, or if it's a one-off migration. This approach can scale to tens of millions of rows. Users looking to migrate larger datasets should consider multiple requests, each dealing with a chunk of the data. Staging tables can be used for each chunk prior to its partitions being moved to a final table. This allows failed requests to be retried. For further details on this bulk-loading strategy, see here.
Data can be exported from Postgres in CSV format. This can then be inserted into ClickHouse from either local files or via object storage using table functions. | {"source_file": "inserting-data.md"} | [
slug: /integrations/postgresql/connecting-to-postgresql
title: 'Connecting to PostgreSQL'
keywords: ['clickhouse', 'postgres', 'postgresql', 'connect', 'integrate', 'table', 'engine']
description: 'Page describing the various ways to connect PostgreSQL to ClickHouse'
show_related_blogs: true
doc_type: 'guide'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
Connecting ClickHouse to PostgreSQL

This page covers the following options for integrating PostgreSQL with ClickHouse:

- using the PostgreSQL table engine, for reading from a PostgreSQL table
- using the experimental MaterializedPostgreSQL database engine, for syncing a database in PostgreSQL with a database in ClickHouse

:::tip
We recommend using ClickPipes, a managed integration service for ClickHouse Cloud powered by PeerDB. Alternatively, PeerDB is available as an open-source CDC tool specifically designed for PostgreSQL database replication to both self-hosted ClickHouse and ClickHouse Cloud.
:::
Using the PostgreSQL table engine {#using-the-postgresql-table-engine}
The `PostgreSQL` table engine allows `SELECT` and `INSERT` operations on data stored on the remote PostgreSQL server from ClickHouse. This article illustrates basic methods of integration using one table.
1. Setting up PostgreSQL {#1-setting-up-postgresql}
In `postgresql.conf`, add the following entry to enable PostgreSQL to listen on the network interfaces:

```text
listen_addresses = '*'
```
Create a user to connect from ClickHouse. For demonstration purposes, this example grants full superuser rights.

```sql
CREATE ROLE clickhouse_user SUPERUSER LOGIN PASSWORD 'ClickHouse_123';
```
Create a new database in PostgreSQL:

```sql
CREATE DATABASE db_in_psg;
```
Create a new table:

```sql
CREATE TABLE table1 (
    id integer primary key,
    column1 varchar(10)
);
```
Let's add a few rows for testing:

```sql
INSERT INTO table1
(id, column1)
VALUES
(1, 'abc'),
(2, 'def');
```
To configure PostgreSQL to allow connections to the new database with the new user for replication, add the following entry to the `pg_hba.conf` file. Update the address line with either the subnet or IP address of your PostgreSQL server:

```text
# TYPE  DATABASE        USER            ADDRESS                 METHOD
host    db_in_psg       clickhouse_user 192.168.1.0/24          password
```
Reload the `pg_hba.conf` configuration (adjust this command depending on your version):

```text
/usr/pgsql-12/bin/pg_ctl reload
```
Verify the new `clickhouse_user` can login:

```text
psql -U clickhouse_user -W -d db_in_psg -h <your_postgresql_host>
```
:::note
If you are using this feature in ClickHouse Cloud, you may need to allow the ClickHouse Cloud IP addresses to access your PostgreSQL instance. Check the ClickHouse Cloud Endpoints API for egress traffic details.
:::
2. Define a Table in ClickHouse {#2-define-a-table-in-clickhouse}
Login to the `clickhouse-client`:

```bash
clickhouse-client --user default --password ClickHouse123!
```
Let's create a new database:

```sql
CREATE DATABASE db_in_ch;
```
Create a table that uses the `PostgreSQL` table engine:

```sql
CREATE TABLE db_in_ch.table1
(
    id UInt64,
    column1 String
)
ENGINE = PostgreSQL('postgres-host.domain.com:5432', 'db_in_psg', 'table1', 'clickhouse_user', 'ClickHouse_123');
```
The minimum parameters needed are:
|parameter|Description |example |
|---------|----------------------------|---------------------|
|host:port|hostname or IP and port |postgres-host.domain.com:5432|
|database |PostgreSQL database name |db_in_psg |
|user |username to connect to postgres|clickhouse_user |
|password |password to connect to postgres|ClickHouse_123 |
:::note
View the PostgreSQL table engine doc page for a complete list of parameters.
:::
3. Test the Integration {#3-test-the-integration}
In ClickHouse, view the initial rows:

```sql
SELECT * FROM db_in_ch.table1
```
The ClickHouse table should automatically be populated with the two rows that already existed in the table in PostgreSQL:
```response
Query id: 34193d31-fe21-44ac-a182-36aaefbd78bf
ββidββ¬βcolumn1ββ
β 1 β abc β
β 2 β def β
ββββββ΄ββββββββββ
```
Back in PostgreSQL, add a couple of rows to the table:

```sql
INSERT INTO table1
(id, column1)
VALUES
(3, 'ghi'),
(4, 'jkl');
```
Those two new rows should appear in your ClickHouse table:

```sql
SELECT * FROM db_in_ch.table1
```
The response should be:
```response
Query id: 86fa2c62-d320-4e47-b564-47ebf3d5d27b
ββidββ¬βcolumn1ββ
β 1 β abc β
β 2 β def β
β 3 β ghi β
β 4 β jkl β
ββββββ΄ββββββββββ
```
Let's see what happens when you add rows to the ClickHouse table:

```sql
INSERT INTO db_in_ch.table1
(id, column1)
VALUES
(5, 'mno'),
(6, 'pqr');
```
The rows added in ClickHouse should appear in the table in PostgreSQL:

```sql
db_in_psg=# SELECT * FROM table1;
 id | column1
----+---------
  1 | abc
  2 | def
  3 | ghi
  4 | jkl
  5 | mno
  6 | pqr
(6 rows)
```
This example demonstrated the basic integration between PostgreSQL and ClickHouse using the PostgreSQL table engine.
Check out the doc page for the PostgreSQL table engine for more features, such as specifying schemas, returning only a subset of columns, and connecting to multiple replicas. Also check out the ClickHouse and PostgreSQL - a match made in data heaven - part 1 blog.
Using the MaterializedPostgreSQL database engine {#using-the-materializedpostgresql-database-engine}
The MaterializedPostgreSQL database engine uses the PostgreSQL replication features to create a replica of the database with all or a subset of schemas and tables.
This article illustrates basic methods of integration using one database, one schema, and one table.
In the following procedures, the PostgreSQL CLI (psql) and the ClickHouse CLI (clickhouse-client) are used. The PostgreSQL server is installed on Linux. The following assumes minimal settings on a fresh PostgreSQL test installation.
1. In PostgreSQL {#1-in-postgresql}
In
postgresql.conf
, set minimum listen levels, replication wal level and replication slots:
add the following entries:
text
listen_addresses = '*'
max_replication_slots = 10
wal_level = logical
*ClickHouse needs a WAL level of at least
logical
and a minimum of
2
replication slots
Using an admin account, create a user to connect from ClickHouse:
sql
CREATE ROLE clickhouse_user SUPERUSER LOGIN PASSWORD 'ClickHouse_123';
*For demonstration purposes, full superuser rights have been granted.
create a new database:
sql
CREATE DATABASE db1;
connect to the new database in
psql
:
text
\connect db1
create a new table:
sql
CREATE TABLE table1 (
id integer primary key,
column1 varchar(10)
);
add initial rows:
sql
INSERT INTO table1
(id, column1)
VALUES
(1, 'abc'),
(2, 'def');
Configure PostgreSQL to allow connections to the new database with the new user for replication. Below is the minimum entry to add to the
pg_hba.conf
file:
```text
TYPE DATABASE USER ADDRESS METHOD
host db1 clickhouse_user 192.168.1.0/24 password
```
*For demonstration purposes, this uses the clear-text password authentication method. Update the address line with either the subnet or the address of the server per the PostgreSQL documentation.
reload the
pg_hba.conf
configuration with something like this (adjust for your version):
text
/usr/pgsql-12/bin/pg_ctl reload
Test the login with new
clickhouse_user
:
text
psql -U clickhouse_user -W -d db1 -h <your_postgresql_host>
2. In ClickHouse {#2-in-clickhouse}
log into the ClickHouse CLI
bash
clickhouse-client --user default --password ClickHouse123!
Enable the PostgreSQL experimental feature for the database engine:
sql
SET allow_experimental_database_materialized_postgresql=1
Create the new database to be replicated and define the initial table:
sql
CREATE DATABASE db1_postgres
ENGINE = MaterializedPostgreSQL('postgres-host.domain.com:5432', 'db1', 'clickhouse_user', 'ClickHouse_123')
SETTINGS materialized_postgresql_tables_list = 'table1';
minimum options:
|parameter|Description |example |
|---------|----------------------------|---------------------|
|host:port|hostname or IP and port |postgres-host.domain.com:5432|
|database |PostgreSQL database name |db1 |
|user |username to connect to postgres|clickhouse_user |
|password |password to connect to postgres|ClickHouse_123 |
|settings |additional settings for the engine| materialized_postgresql_tables_list = 'table1'|
:::info
For a complete guide to the PostgreSQL database engine, refer to https://clickhouse.com/docs/engines/database-engines/materialized-postgresql/#settings
:::
Verify the initial table has data:
```sql
ch_env_2 :) select * from db1_postgres.table1;
SELECT *
FROM db1_postgres.table1
Query id: df2381ac-4e30-4535-b22e-8be3894aaafc
ββidββ¬βcolumn1ββ
β 1 β abc β
ββββββ΄ββββββββββ
ββidββ¬βcolumn1ββ
β 2 β def β
ββββββ΄ββββββββββ
```
3. Test basic replication {#3-test-basic-replication}
In PostgreSQL, add new rows:
sql
INSERT INTO table1
(id, column1)
VALUES
(3, 'ghi'),
(4, 'jkl');
In ClickHouse, verify the new rows are visible:
```sql
ch_env_2 :) select * from db1_postgres.table1;
SELECT *
FROM db1_postgres.table1
Query id: b0729816-3917-44d3-8d1a-fed912fb59ce
ββidββ¬βcolumn1ββ
β 1 β abc β
ββββββ΄ββββββββββ
ββidββ¬βcolumn1ββ
β 4 β jkl β
ββββββ΄ββββββββββ
ββidββ¬βcolumn1ββ
β 3 β ghi β
ββββββ΄ββββββββββ
ββidββ¬βcolumn1ββ
β 2 β def β
ββββββ΄ββββββββββ
```
4. Summary {#4-summary}
This integration guide focused on a simple example of how to replicate a database with a table. However, more advanced options exist, including replicating the whole database or adding new tables and schemas to existing replications. Although DDL commands are not supported for this replication, the engine can be set to detect changes and reload the tables when structural changes are made.
:::info
For more features available for advanced options, please see the
reference documentation
.
:::
sidebar_label: 'DynamoDB'
sidebar_position: 10
slug: /integrations/dynamodb
description: 'ClickPipes allows you to connect ClickHouse to DynamoDB.'
keywords: ['DynamoDB']
title: 'CDC from DynamoDB to ClickHouse'
show_related_blogs: true
doc_type: 'guide'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import dynamodb_kinesis_stream from '@site/static/images/integrations/data-ingestion/dbms/dynamodb/dynamodb-kinesis-stream.png';
import dynamodb_s3_export from '@site/static/images/integrations/data-ingestion/dbms/dynamodb/dynamodb-s3-export.png';
import dynamodb_map_columns from '@site/static/images/integrations/data-ingestion/dbms/dynamodb/dynamodb-map-columns.png';
import Image from '@theme/IdealImage';
CDC from DynamoDB to ClickHouse
This page covers how to set up CDC from DynamoDB to ClickHouse using ClickPipes. There are 2 components to this integration:
1. The initial snapshot via S3 ClickPipes
2. Real-time updates via Kinesis ClickPipes
Data will be ingested into a
ReplacingMergeTree
. This table engine is commonly used for CDC scenarios to allow update operations to be applied. More on this pattern can be found in the following blog articles:
Change Data Capture (CDC) with PostgreSQL and ClickHouse - Part 1
Change Data Capture (CDC) with PostgreSQL and ClickHouse - Part 2
1. Set up Kinesis stream {#1-set-up-kinesis-stream}
First, you will want to enable a Kinesis stream on your DynamoDB table to capture changes in real-time. We want to do this before we create the snapshot to avoid missing any data.
Find the AWS guide located
here
.
2. Create the snapshot {#2-create-the-snapshot}
Next, we will create a snapshot of the DynamoDB table. This can be achieved through an AWS export to S3. Find the AWS guide located
here
.
You will want to do a "Full export" in the DynamoDB JSON format.
3. Load the snapshot into ClickHouse {#3-load-the-snapshot-into-clickhouse}
Create necessary tables {#create-necessary-tables}
The snapshot data from DynamoDB will look something like this:
json
{
"age": {
"N": "26"
},
"first_name": {
"S": "sally"
},
"id": {
"S": "0A556908-F72B-4BE6-9048-9E60715358D4"
}
}
Observe that the data is in a nested format. We will need to flatten this data before loading it into ClickHouse. This can be done using the
JSONExtract
function in ClickHouse in a materialized view.
We will want to create three tables:
1. A table to store the raw data from DynamoDB
2. A table to store the final flattened data (destination table)
3. A materialized view to flatten the data
For the example DynamoDB data above, the ClickHouse tables would look like this:
```sql
/* Snapshot table */
CREATE TABLE IF NOT EXISTS "default"."snapshot"
(
    `item` String
)
ORDER BY tuple();

/* Materialized view to flatten the data */
CREATE MATERIALIZED VIEW IF NOT EXISTS "default"."snapshot_mv" TO "default"."destination" AS
SELECT
    JSONExtractString(item, 'id', 'S') AS id,
    JSONExtractInt(item, 'age', 'N') AS age,
    JSONExtractString(item, 'first_name', 'S') AS first_name
FROM "default"."snapshot";

/* Table for final flattened data (destination table) */
CREATE TABLE IF NOT EXISTS "default"."destination" (
    "id" String,
    "first_name" String,
    "age" Int8,
    "version" Int64
)
ENGINE = ReplacingMergeTree("version")
ORDER BY id;
```
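As a sanity check, the flattening the materialized view performs can be sketched in Python. This is an illustration only; it mirrors the example item above and handles just the `S` and `N` type tags used there:

```python
import json

# Map DynamoDB type tags to Python converters (subset used in the example).
CONVERTERS = {"S": str, "N": int}

def flatten(item: dict) -> dict:
    """Flatten a DynamoDB-JSON item such as {"age": {"N": "26"}}."""
    out = {}
    for field, wrapper in item.items():
        # Each field is wrapped in a single-entry dict keyed by the type tag.
        (type_tag, raw_value), = wrapper.items()
        out[field] = CONVERTERS[type_tag](raw_value)
    return out

snapshot_row = json.loads(
    '{"age": {"N": "26"}, "first_name": {"S": "sally"}, '
    '"id": {"S": "0A556908-F72B-4BE6-9048-9E60715358D4"}}'
)
print(flatten(snapshot_row))
# {'age': 26, 'first_name': 'sally', 'id': '0A556908-F72B-4BE6-9048-9E60715358D4'}
```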
There are a few requirements for the destination table:
- This table must be a
ReplacingMergeTree
table
- The table must have a
version
column
- In later steps, we will be mapping the
ApproximateCreationDateTime
field from the Kinesis stream to the
version
column.
- The table should use the partition key as the sorting key (specified by
ORDER BY
)
- Rows with the same sorting key will be deduplicated based on the
version
column.
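A simplified model of this deduplication, not ClickHouse's actual merge implementation, can be sketched as:

```python
def deduplicate(rows):
    """Keep, per id, only the row with the highest version (ReplacingMergeTree model)."""
    latest = {}
    for row in rows:
        key = row["id"]
        if key not in latest or row["version"] > latest[key]["version"]:
            latest[key] = row
    return sorted(latest.values(), key=lambda r: r["id"])

rows = [
    {"id": "a", "first_name": "sally", "age": 26, "version": 1},
    {"id": "a", "first_name": "sally", "age": 27, "version": 2},  # later update wins
    {"id": "b", "first_name": "bob", "age": 30, "version": 1},
]
print(deduplicate(rows))
```

Because `ApproximateCreationDateTime` is mapped to `version`, the most recent change for each key survives merges.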
Create the snapshot ClickPipe {#create-the-snapshot-clickpipe}
Now you can create a ClickPipe to load the snapshot data from S3 into ClickHouse. Follow the S3 ClickPipe guide
here
, but use the following settings:
Ingest path
: You will need to locate the path of the exported json files in S3. The path will look something like this:
text
https://{bucket}.s3.amazonaws.com/{prefix}/AWSDynamoDB/{export-id}/data/*
Format
: JSONEachRow
Table
: Your snapshot table (e.g.
default.snapshot
in example above)
Once created, data will begin populating in the snapshot and destination tables. You do not need to wait for the snapshot load to finish before moving on to the next step.
4. Create the Kinesis ClickPipe {#4-create-the-kinesis-clickpipe}
Now we can set up the Kinesis ClickPipe to capture real-time changes from the Kinesis stream. Follow the Kinesis ClickPipe guide
here
, but use the following settings:
Stream
: The Kinesis stream used in step 1
Table
: Your destination table (e.g.
default.destination
in example above)
Flatten object
: true
Column mappings
:
ApproximateCreationDateTime
:
version
Map other fields to the appropriate destination columns as shown below
5. Cleanup (optional) {#5-cleanup-optional}
Once the snapshot ClickPipe has finished, you can delete the snapshot table and materialized view.
sql
DROP TABLE IF EXISTS "default"."snapshot";
DROP TABLE IF EXISTS "default"."snapshot_clickpipes_error";
DROP VIEW IF EXISTS "default"."snapshot_mv";
sidebar_label: 'Kafka Connector Sink on Confluent Cloud'
sidebar_position: 2
slug: /integrations/kafka/cloud/confluent/sink-connector
description: 'Guide to using the fully managed ClickHouse Connector Sink on Confluent Cloud'
title: 'Integrating Confluent Cloud with ClickHouse'
keywords: ['Kafka', 'Confluent Cloud']
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_ingestion'
- website: 'https://clickhouse.com/cloud/clickpipes'
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import Image from '@theme/IdealImage';
Integrating Confluent Cloud with ClickHouse
Prerequisites {#prerequisites}
We assume you are familiar with:
*
ClickHouse Connector Sink
* Confluent Cloud
The official Kafka connector from ClickHouse with Confluent Cloud {#the-official-kafka-connector-from-clickhouse-with-confluent-cloud}
Create a Topic {#create-a-topic}
Creating a topic on Confluent Cloud is fairly simple, and there are detailed instructions
here
.
Important notes {#important-notes}
The Kafka topic name must be the same as the ClickHouse table name. The way to tweak this is by using a transformer (for example
ExtractTopic
).
More partitions do not always mean more performance - see our upcoming guide for more details and performance tips.
Gather your connection details {#gather-your-connection-details}
Install Connector {#install-connector}
Install the fully managed ClickHouse Sink Connector on Confluent Cloud following the
official documentation
.
Configure the Connector {#configure-the-connector}
During the configuration of the ClickHouse Sink Connector, you will need to provide the following details:
- hostname of your ClickHouse server
- port of your ClickHouse server (default is 8443)
- username and password for your ClickHouse server
- database name in ClickHouse where the data will be written
- topic name in Kafka that will be used to write data to ClickHouse
The Confluent Cloud UI supports advanced configuration options to adjust poll intervals, batch sizes, and other parameters to optimize performance.
Known limitations {#known-limitations}
See the list of
Connectors limitations in the official docs
sidebar_label: 'HTTP Sink Connector for Confluent Platform'
sidebar_position: 4
slug: /integrations/kafka/cloud/confluent/http
description: 'Using HTTP Connector Sink with Kafka Connect and ClickHouse'
title: 'Confluent HTTP Sink Connector'
doc_type: 'guide'
keywords: ['Confluent HTTP Sink Connector', 'HTTP Sink ClickHouse', 'Kafka HTTP connector', 'ClickHouse HTTP integration', 'Confluent Cloud HTTP Sink']
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import Image from '@theme/IdealImage';
import createHttpSink from '@site/static/images/integrations/data-ingestion/kafka/confluent/create_http_sink.png';
import httpAuth from '@site/static/images/integrations/data-ingestion/kafka/confluent/http_auth.png';
import httpAdvanced from '@site/static/images/integrations/data-ingestion/kafka/confluent/http_advanced.png';
import createMessageInTopic from '@site/static/images/integrations/data-ingestion/kafka/confluent/create_message_in_topic.png';
Confluent HTTP sink connector
The HTTP Sink Connector is data-type agnostic, so it does not require a Kafka schema, and it supports ClickHouse-specific data types such as Maps and Arrays. This additional flexibility comes at a slight increase in configuration complexity.
Below we describe a simple installation, pulling messages from a single Kafka topic and inserting rows into a ClickHouse table.
:::note
The HTTP Connector is distributed under the
Confluent Enterprise License
.
:::
Quick start steps {#quick-start-steps}
1. Gather your connection details {#1-gather-your-connection-details}
2. Run Kafka Connect and the HTTP sink connector {#2-run-kafka-connect-and-the-http-sink-connector}
You have two options:
Self-managed:
Download the Confluent package and install it locally. Follow the installation instructions for installing the connector as documented
here
.
If you use the confluent-hub installation method, your local configuration files will be updated.
Confluent Cloud:
A fully managed version of HTTP Sink is available for those using Confluent Cloud for their Kafka hosting. This requires your ClickHouse environment to be accessible from Confluent Cloud.
:::note
The following examples are using Confluent Cloud.
:::
3. Create destination table in ClickHouse {#3-create-destination-table-in-clickhouse}
Before the connectivity test, let's start by creating a test table in ClickHouse Cloud, this table will receive the data from Kafka:
sql
CREATE TABLE default.my_table
(
`side` String,
`quantity` Int32,
`symbol` String,
`price` Int32,
`account` String,
`userid` String
)
ORDER BY tuple()
4. Configure HTTP Sink {#4-configure-http-sink}
Create a Kafka topic and an instance of HTTP Sink Connector:
Configure HTTP Sink Connector:
* Provide the topic name you created
* Authentication
*
HTTP Url
- ClickHouse Cloud URL with a
INSERT
query specified
<protocol>://<clickhouse_host>:<clickhouse_port>?query=INSERT%20INTO%20<database>.<table>%20FORMAT%20JSONEachRow
.
Note
: the query must be encoded.
*
Endpoint Authentication type
- BASIC
*
Auth username
- ClickHouse username
*
Auth password
- ClickHouse password
:::note
This HTTP Url is error-prone. Ensure escaping is precise to avoid issues.
:::
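One way to avoid escaping mistakes is to build the URL programmatically rather than by hand. The hostname below is a placeholder:

```python
from urllib.parse import quote

host = "your-instance.clickhouse.cloud"  # placeholder hostname
query = "INSERT INTO default.my_table FORMAT JSONEachRow"

# quote() percent-encodes the spaces as %20, as the connector expects.
url = f"https://{host}:8443?query={quote(query)}"
print(url)
# https://your-instance.clickhouse.cloud:8443?query=INSERT%20INTO%20default.my_table%20FORMAT%20JSONEachRow
```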
Configuration
Input Kafka record value format
Depends on your source data but in most cases JSON or Avro. We assume
JSON
in the following settings.
In
advanced configurations
section:
HTTP Request Method
- Set to POST
Request Body Format
- json
Batch max size
- Per ClickHouse recommendations, set this to
at least 1000
.
Batch json as array
- true
Retry on HTTP codes
- 400-500 but adapt as required e.g. this may change if you have an HTTP proxy in front of ClickHouse.
Maximum Retries
- the default (10) is appropriate but feel free to adjust for more robust retries.
5. Testing the connectivity {#5-testing-the-connectivity}
Create a message in a topic configured by your HTTP Sink
and verify the message has been written to your ClickHouse instance.
Troubleshooting {#troubleshooting}
HTTP Sink doesn't batch messages {#http-sink-doesnt-batch-messages}
From the
Sink documentation
:
The HTTP Sink connector does not batch requests for messages containing Kafka header values that are different.
Verify your Kafka records have the same key.
When you add parameters to the HTTP API URL, each record can result in a unique URL. For this reason, batching is disabled when using additional URL parameters.
400 bad request {#400-bad-request}
CANNOT_PARSE_QUOTED_STRING {#cannot_parse_quoted_string}
If HTTP Sink fails with the following message when inserting a JSON object into a
String
column:
response
Code: 26. DB::ParsingException: Cannot parse JSON string: expected opening quote: (while reading the value of key key_name): While executing JSONEachRowRowInputFormat: (at row 1). (CANNOT_PARSE_QUOTED_STRING)
Set
input_format_json_read_objects_as_strings=1
setting in URL as encoded string
SETTINGS%20input_format_json_read_objects_as_strings%3D1
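If helpful, the encoded form can be produced with Python's `urllib.parse.quote`, which percent-encodes both the space (`%20`) and the equals sign (`%3D`):

```python
from urllib.parse import quote

setting = "SETTINGS input_format_json_read_objects_as_strings=1"
encoded = quote(setting)
print(encoded)
# SETTINGS%20input_format_json_read_objects_as_strings%3D1
```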
Load the GitHub dataset (optional) {#load-the-github-dataset-optional}
Note that this example preserves the Array fields of the Github dataset. We assume you have an empty github topic in the examples and use
kcat
for message insertion to Kafka.
1. Prepare configuration {#1-prepare-configuration}
Follow
these instructions
for setting up Connect relevant to your installation type, noting the differences between a standalone and distributed cluster. If using Confluent Cloud, the distributed setup is relevant.
The most important parameter is the
http.api.url
. The
HTTP interface
for ClickHouse requires you to encode the INSERT statement as a parameter in the URL. This must include the format (
JSONEachRow
in this case) and target database. The format must be consistent with the Kafka data, which will be converted to a string in the HTTP payload. These parameters must be URL escaped. An example of this format for the Github dataset (assuming you are running ClickHouse locally) is shown below:
```response
<protocol>://<clickhouse_host>:<clickhouse_port>?query=INSERT%20INTO%20<database>.<table>%20FORMAT%20JSONEachRow

http://localhost:8123?query=INSERT%20INTO%20default.github%20FORMAT%20JSONEachRow
```
The following additional parameters are relevant to using the HTTP Sink with ClickHouse. A complete parameter list can be found
here
:
request.method
- Set to
POST
retry.on.status.codes
- Set to 400-500 to retry on any error codes. Refine based on the expected errors in your data.
request.body.format
- In most cases this will be JSON.
auth.type
- Set to BASIC if security is enabled on ClickHouse. Other ClickHouse-compatible authentication mechanisms are not currently supported.
ssl.enabled
- set to true if using SSL.
connection.user
- username for ClickHouse.
connection.password
- password for ClickHouse.
batch.max.size
- The number of rows to send in a single batch. Ensure this is set to an appropriately large number. Per ClickHouse
recommendations
a value of 1000 should be considered a minimum.
tasks.max
- The HTTP Sink connector supports running one or more tasks. This can be used to increase performance. Along with batch size this represents your primary means of improving performance.
key.converter
- set according to the types of your keys.
value.converter
- set based on the type of data on your topic. This data does not need a schema. The format here must be consistent with the FORMAT specified in the parameter
http.api.url
. The simplest here is to use JSON and the org.apache.kafka.connect.json.JsonConverter converter. Treating the value as a string, via the converter org.apache.kafka.connect.storage.StringConverter, is also possible - although this will require the user to extract a value in the insert statement using functions.
Avro format
is also supported in ClickHouse if using the io.confluent.connect.avro.AvroConverter converter.
A full list of settings, including how to configure a proxy, retries, and advanced SSL, can be found
here
.
Example configuration files for the Github sample data can be found
here
, assuming Connect is run in standalone mode and Kafka is hosted in Confluent Cloud.
2. Create the ClickHouse table {#2-create-the-clickhouse-table} | {"source_file": "kafka-connect-http.md"} | [
-0.017630094662308693,
-0.0865192711353302,
-0.10229849070310593,
0.05375611409544945,
-0.07415567338466644,
-0.037627145648002625,
-0.12252959609031677,
0.024395352229475975,
-0.002411108696833253,
0.042070332914590836,
0.013654491864144802,
-0.08987054228782654,
0.04168226197361946,
-0.0... |
507f97be-8596-4aa0-b157-d1cb60f9b097 | 2. Create the ClickHouse table {#2-create-the-clickhouse-table}
Ensure the table has been created. An example for a minimal github dataset using a standard MergeTree is shown below.
```sql
CREATE TABLE github
(
file_time DateTime,
event_type Enum('CommitCommentEvent' = 1, 'CreateEvent' = 2, 'DeleteEvent' = 3, 'ForkEvent' = 4,'GollumEvent' = 5, 'IssueCommentEvent' = 6, 'IssuesEvent' = 7, 'MemberEvent' = 8, 'PublicEvent' = 9, 'PullRequestEvent' = 10, 'PullRequestReviewCommentEvent' = 11, 'PushEvent' = 12, 'ReleaseEvent' = 13, 'SponsorshipEvent' = 14, 'WatchEvent' = 15, 'GistEvent' = 16, 'FollowEvent' = 17, 'DownloadEvent' = 18, 'PullRequestReviewEvent' = 19, 'ForkApplyEvent' = 20, 'Event' = 21, 'TeamAddEvent' = 22),
actor_login LowCardinality(String),
repo_name LowCardinality(String),
created_at DateTime,
updated_at DateTime,
action Enum('none' = 0, 'created' = 1, 'added' = 2, 'edited' = 3, 'deleted' = 4, 'opened' = 5, 'closed' = 6, 'reopened' = 7, 'assigned' = 8, 'unassigned' = 9, 'labeled' = 10, 'unlabeled' = 11, 'review_requested' = 12, 'review_request_removed' = 13, 'synchronize' = 14, 'started' = 15, 'published' = 16, 'update' = 17, 'create' = 18, 'fork' = 19, 'merged' = 20),
comment_id UInt64,
path String,
ref LowCardinality(String),
ref_type Enum('none' = 0, 'branch' = 1, 'tag' = 2, 'repository' = 3, 'unknown' = 4),
creator_user_login LowCardinality(String),
number UInt32,
title String,
labels Array(LowCardinality(String)),
state Enum('none' = 0, 'open' = 1, 'closed' = 2),
assignee LowCardinality(String),
assignees Array(LowCardinality(String)),
closed_at DateTime,
merged_at DateTime,
merge_commit_sha String,
requested_reviewers Array(LowCardinality(String)),
merged_by LowCardinality(String),
review_comments UInt32,
member_login LowCardinality(String)
) ENGINE = MergeTree ORDER BY (event_type, repo_name, created_at)
```
3. Add data to Kafka {#3-add-data-to-kafka}
Insert messages to Kafka. Below we use
kcat
to insert 10k messages.
bash
head -n 10000 github_all_columns.ndjson | kcat -b <host>:<port> -X security.protocol=sasl_ssl -X sasl.mechanisms=PLAIN -X sasl.username=<username> -X sasl.password=<password> -t github
A simple read on the target table "github" should confirm the insertion of data.
```sql
SELECT count() FROM default.github;
| count() |
| :--- |
| 10000 |
```
sidebar_label: 'Confluent Platform'
sidebar_position: 1
slug: /integrations/kafka/cloud/confluent
description: 'Kafka Connectivity with Confluent Cloud'
title: 'Integrating Confluent Cloud with ClickHouse'
doc_type: 'guide'
keywords: ['Confluent Cloud ClickHouse', 'Confluent ClickHouse integration', 'Kafka ClickHouse connector', 'Confluent Platform ClickHouse', 'ClickHouse Connect Sink']
Integrating Confluent Cloud with ClickHouse
Confluent platform provides several options to integrate with ClickHouse:
ClickHouse Connect Sink on Confluent Cloud
ClickHouse Connect Sink on Confluent Platform
using the custom connectors feature
HTTP Sink Connector for Confluent Platform
that integrates Apache Kafka with an API via HTTP or HTTPS
sidebar_label: 'Kafka Connector Sink on Confluent Platform'
sidebar_position: 3
slug: /integrations/kafka/cloud/confluent/custom-connector
description: 'Using ClickHouse Connector Sink with Kafka Connect and ClickHouse'
title: 'Integrating Confluent Cloud with ClickHouse'
keywords: ['Confluent ClickHouse integration', 'ClickHouse Kafka connector', 'Kafka Connect ClickHouse sink', 'Confluent Platform ClickHouse', 'custom connector Confluent']
doc_type: 'guide'
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import Image from '@theme/IdealImage';
import AddCustomConnectorPlugin from '@site/static/images/integrations/data-ingestion/kafka/confluent/AddCustomConnectorPlugin.png';
Integrating Confluent platform with ClickHouse
Prerequisites {#prerequisites}
We assume you are familiar with:
*
ClickHouse Connector Sink
* Confluent Platform and
Custom Connectors
.
The official Kafka connector from ClickHouse with Confluent Platform {#the-official-kafka-connector-from-clickhouse-with-confluent-platform}
Installing on Confluent platform {#installing-on-confluent-platform}
This is meant to be a quick guide to get you started with the ClickHouse Sink Connector on Confluent Platform.
For more details, please refer to the
official Confluent documentation
.
Create a Topic {#create-a-topic}
Creating a topic on Confluent Platform is fairly simple, and there are detailed instructions
here
.
Important notes {#important-notes}
The Kafka topic name must be the same as the ClickHouse table name. The way to tweak this is by using a transformer (for example
ExtractTopic
).
More partitions do not always mean more performance - see our upcoming guide for more details and performance tips.
Install connector {#install-connector}
You can download the connector from our
repository
- please feel free to submit comments and issues there as well!
Navigate to "Connector Plugins" -> "Add plugin" and using the following settings:
text
'Connector Class' - 'com.clickhouse.kafka.connect.ClickHouseSinkConnector'
'Connector type' - Sink
'Sensitive properties' - 'password'. This will ensure entries of the ClickHouse password are masked during configuration.
Example:
Gather your connection details {#gather-your-connection-details}
Configure the connector {#configure-the-connector}
Navigate to
Connectors
->
Add Connector
and use the following settings (note that the values are examples only):
json
{
"database": "<DATABASE_NAME>",
"errors.retry.timeout": "30",
"exactlyOnce": "false",
"schemas.enable": "false",
"hostname": "<CLICKHOUSE_HOSTNAME>",
"password": "<SAMPLE_PASSWORD>",
"port": "8443",
"ssl": "true",
"topics": "<TOPIC_NAME>",
"username": "<SAMPLE_USERNAME>",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "false"
}
Specify the connection endpoints {#specify-the-connection-endpoints}
You need to specify the allow-list of endpoints that the connector can access.
You must use a fully-qualified domain name (FQDN) when adding the networking egress endpoint(s).
Example:
u57swl97we.eu-west-1.aws.clickhouse.com:8443
:::note
You must specify HTTP(S) port. The Connector doesn't support Native protocol yet.
:::
Read the documentation.
You should be all set!
Known limitations {#known-limitations}
Custom Connectors must use public internet endpoints. Static IP addresses aren't supported.
You can override some Custom Connector properties. See the full
list in the official documentation.
Custom Connectors are available only in
some AWS regions
See the list of
Custom Connectors limitations in the official docs
sidebar_label: 'Amazon MSK with Kafka Connector Sink'
sidebar_position: 1
slug: /integrations/kafka/cloud/amazon-msk/
description: 'The official Kafka connector from ClickHouse with Amazon MSK'
keywords: ['integration', 'kafka', 'amazon msk', 'sink', 'connector']
title: 'Integrating Amazon MSK with ClickHouse'
doc_type: 'guide'
integration:
- support_level: 'community'
- category: 'data_ingestion'
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
Integrating Amazon MSK with ClickHouse
Note: The policy shown in the video is permissive and intended for quick start only. See least-privilege IAM guidance below.
Prerequisites {#prerequisites}
We assume:
* you are familiar with
ClickHouse Connector Sink
, Amazon MSK, and MSK Connectors. We recommend the Amazon MSK
Getting Started guide
and
MSK Connect guide
.
* The MSK broker is publicly accessible. See the
Public Access
section of the Developer Guide.
The official Kafka connector from ClickHouse with Amazon MSK {#the-official-kafka-connector-from-clickhouse-with-amazon-msk}
Gather your connection details {#gather-your-connection-details}
Steps {#steps}
Make sure you're familiar with the
ClickHouse Connector Sink
Create an MSK instance
.
Create and assign IAM role
.
Download a
jar
file from ClickHouse Connect Sink
Release page
.
Install the downloaded
jar
file on the
Custom plugin page
of the Amazon MSK console.
If the Connector communicates with a public ClickHouse instance,
enable internet access
.
Provide a topic name, ClickHouse instance hostname, and password in config.
```yml
connector.class=com.clickhouse.kafka.connect.ClickHouseSinkConnector
tasks.max=1
topics=<topic_name>
ssl=true
security.protocol=SSL
hostname=<hostname>
database=<database_name>
password=<password>
ssl.truststore.location=/tmp/kafka.client.truststore.jks
port=8443
value.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
exactlyOnce=true
username=default
schemas.enable=false
```
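A missing property in that config is a common source of failed connector deployments. As a sanity check, here is a short sketch (not an official validator; the property names simply follow the example config above) that reports which required keys are absent:

```python
# Hedged sketch: check that a key=value connector config carries the
# properties the example above sets. Purely illustrative.
REQUIRED = {"connector.class", "topics", "hostname", "password", "port", "username"}

def missing_properties(config_text: str) -> set:
    # collect the key of every key=value line
    keys = {line.split("=", 1)[0].strip()
            for line in config_text.splitlines() if "=" in line}
    return REQUIRED - keys

config = """\
connector.class=com.clickhouse.kafka.connect.ClickHouseSinkConnector
topics=events
hostname=my-instance.example.com
password=secret
port=8443
username=default
"""
print(missing_properties(config))  # set() -> nothing missing
```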
Recommended IAM permissions (least privilege) {#iam-least-privilege}
Use the smallest set of permissions required for your setup. Start with the baseline below and add optional services only if you use them. | {"source_file": "index.md"} | [
-0.048590488731861115,
-0.013323384337127209,
-0.10687703639268875,
0.0068840221501886845,
-0.00031472972477786243,
0.0006786398589611053,
-0.06333213299512863,
-0.03196452930569649,
-0.006103357300162315,
0.04470415040850639,
0.0035050318110734224,
-0.05632678046822548,
0.002012445125728845... |
6e392f73-b35a-4fdb-a4db-4a1e22fc607d | Use the smallest set of permissions required for your setup. Start with the baseline below and add optional services only if you use them.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MSKClusterAccess",
      "Effect": "Allow",
      "Action": [
        "kafka:DescribeCluster",
        "kafka:GetBootstrapBrokers",
        "kafka:DescribeClusterV2",
        "kafka:ListClusters",
        "kafka:ListClustersV2"
      ],
      "Resource": "*"
    },
    {
      "Sid": "KafkaAuthorization",
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeCluster",
        "kafka-cluster:DescribeGroup",
        "kafka-cluster:DescribeTopic",
        "kafka-cluster:ReadData"
      ],
      "Resource": "*"
    },
    {
      "Sid": "OptionalGlueSchemaRegistry",
      "Effect": "Allow",
      "Action": [
        "glue:GetSchema*",
        "glue:ListSchemas",
        "glue:ListSchemaVersions"
      ],
      "Resource": "*"
    },
    {
      "Sid": "OptionalSecretsManager",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Resource": [
        "arn:aws:secretsmanager:<region>:<account-id>:secret:<your-secret-name>*"
      ]
    },
    {
      "Sid": "OptionalS3Read",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::<your-bucket>/<optional-prefix>/*"
    }
  ]
}
```
Use the Glue block only if you use AWS Glue Schema Registry.
Use the Secrets Manager block only if you fetch credentials/truststores from Secrets Manager. Scope the ARN.
Use the S3 block only if you load artifacts (e.g., truststore) from S3. Scope to bucket/prefix.
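Trimming the optional statements can be scripted. The sketch below (an illustration, not an AWS tool) drops every statement whose `Sid` starts with `Optional`, following the naming in the example policy above, unless you explicitly keep it:

```python
# Hedged sketch: start from the least-privilege policy above and drop the
# optional statements (Glue / Secrets Manager / S3) you don't use.
def strip_optional(policy: dict, keep=frozenset()) -> dict:
    statements = [s for s in policy["Statement"]
                  if not s["Sid"].startswith("Optional") or s["Sid"] in keep]
    return {**policy, "Statement": statements}

# abbreviated version of the example policy, for demonstration only
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "MSKClusterAccess", "Effect": "Allow",
         "Action": ["kafka:DescribeCluster"], "Resource": "*"},
        {"Sid": "OptionalGlueSchemaRegistry", "Effect": "Allow",
         "Action": ["glue:ListSchemas"], "Resource": "*"},
    ],
}
trimmed = strip_optional(policy)
print([s["Sid"] for s in trimmed["Statement"]])  # ['MSKClusterAccess']
```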
See also:
Kafka best practices β IAM
.
Performance tuning {#performance-tuning}
One way of increasing performance is to adjust the batch size and the number of records that are fetched from Kafka by adding the following to the
worker
configuration:
```yml
consumer.max.poll.records=[NUMBER OF RECORDS]
consumer.max.partition.fetch.bytes=[NUMBER OF RECORDS * RECORD SIZE IN BYTES]
```
The specific values will vary based on the desired number of records and record size. For example, the default values are:
```yml
consumer.max.poll.records=500
consumer.max.partition.fetch.bytes=1048576
```
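The relationship between the two settings is simple arithmetic: the partition fetch size should be roughly the poll record count times the average record size. A quick sketch (the 2 KiB record size here is an assumed example, not a recommendation):

```python
# max.partition.fetch.bytes ~= max.poll.records * average record size in bytes
def partition_fetch_bytes(max_poll_records: int, avg_record_bytes: int) -> int:
    return max_poll_records * avg_record_bytes

print(partition_fetch_bytes(500, 2048))  # 1024000
```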
You can find more details (both implementation and other considerations) in the official
Kafka
and
Amazon MSK
documentation.
Notes on networking for MSK Connect {#notes-on-networking-for-msk-connect}
In order for MSK Connect to connect to ClickHouse, we recommend that your MSK cluster be in a private subnet with a NAT gateway attached for internet access. Instructions on how to set this up are provided below. Note that public subnets are supported but not recommended, due to the need to constantly assign an Elastic IP address to your ENI;
AWS provides more details here
-0.03775181621313095,
0.00214604614302516,
-0.08202138543128967,
-0.0022236888762563467,
-0.03470797836780548,
-0.01200113445520401,
-0.06455111503601074,
0.004498239140957594,
0.0032429855782538652,
0.032869819551706314,
-0.00019077665638178587,
-0.06205330416560173,
-0.004028513096272945,
... |
4cf813fb-fbbc-488b-94a1-af2f72132183 | Create a Private Subnet:
Create a new subnet within your VPC, designating it as a private subnet. This subnet should not have direct access to the internet.
Create a NAT Gateway:
Create a NAT gateway in a public subnet of your VPC. The NAT gateway enables instances in your private subnet to connect to the internet or other AWS services, but prevents the internet from initiating a connection with those instances.
Update the Route Table:
Add a route that directs internet-bound traffic to the NAT gateway.
Ensure Security Group(s) and Network ACLs Configuration:
Configure your
security groups
and
network ACLs (Access Control Lists)
to allow relevant traffic.
From MSK Connect worker ENIs to MSK brokers on TLS port (commonly 9094).
From MSK Connect worker ENIs to ClickHouse endpoint: 9440 (native TLS) or 8443 (HTTPS).
Allow inbound on broker SG from the MSK Connect worker SG.
For self-hosted ClickHouse, open the port configured in your server (default 8123 for HTTP).
Attach Security Group(s) to MSK:
Ensure that these security groups are attached to your MSK cluster and MSK Connect workers.
Connectivity to ClickHouse Cloud:
Public endpoint + IP allowlist: requires NAT egress from private subnets.
Private connectivity where available (e.g., VPC peering/PrivateLink/VPN). Ensure VPC DNS hostnames/resolution are enabled and DNS can resolve the private endpoint.
Validate connectivity (quick checklist):
From the connector environment, resolve MSK bootstrap DNS and connect via TLS to broker port.
Establish TLS connection to ClickHouse on port 9440 (or 8443 for HTTPS).
If using AWS services (Glue/Secrets Manager), allow egress to those endpoints. | {"source_file": "index.md"} | [
-0.048253342509269714,
0.04783910512924194,
-0.07822274416685104,
0.03242761641740799,
-0.0434127077460289,
0.02892587147653103,
0.024999765679240227,
-0.01691332273185253,
0.047324489802122116,
0.03161678835749626,
-0.04363773390650749,
-0.04966294392943382,
0.040593236684799194,
-0.06137... |
6e4f48a2-1e41-4f4e-839a-0f8a2fb66313 | sidebar_label: 'BigQuery To ClickHouse'
sidebar_position: 1
slug: /integrations/google-dataflow/templates/bigquery-to-clickhouse
description: 'Users can ingest data from BigQuery into ClickHouse using Google Dataflow Template'
title: 'Dataflow BigQuery to ClickHouse template'
doc_type: 'guide'
keywords: ['Dataflow', 'BigQuery']
import TOCInline from '@theme/TOCInline';
import Image from '@theme/IdealImage';
import dataflow_inqueue_job from '@site/static/images/integrations/data-ingestion/google-dataflow/dataflow-inqueue-job.png'
import dataflow_create_job_from_template_button from '@site/static/images/integrations/data-ingestion/google-dataflow/create_job_from_template_button.png'
import dataflow_template_clickhouse_search from '@site/static/images/integrations/data-ingestion/google-dataflow/template_clickhouse_search.png'
import dataflow_template_initial_form from '@site/static/images/integrations/data-ingestion/google-dataflow/template_initial_form.png'
import dataflow_extended_template_form from '@site/static/images/integrations/data-ingestion/google-dataflow/extended_template_form.png'
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Dataflow BigQuery to ClickHouse template
The BigQuery to ClickHouse template is a batch pipeline that ingests data from a BigQuery table into a ClickHouse table.
The template can read the entire table or filter specific records using a provided SQL query.
Pipeline requirements {#pipeline-requirements}
The source BigQuery table must exist.
The target ClickHouse table must exist.
The ClickHouse host must be accessible from the Dataflow worker machines.
Template parameters {#template-parameters} | {"source_file": "bigquery-to-clickhouse.md"} | [
-0.015834568068385124,
0.04446818679571152,
0.020771048963069916,
0.01365239080041647,
-0.006138595752418041,
-0.016363395377993584,
0.014638655818998814,
0.011708339676260948,
-0.09182792901992798,
-0.051277533173561096,
-0.031580545008182526,
-0.004114578012377024,
0.026498988270759583,
... |
f1f2ae9e-11b3-4e75-87de-af8ee9f48672 | | Parameter Name | Parameter Description | Required | Notes |
|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
jdbcUrl
| The ClickHouse JDBC URL in the format
jdbc:clickhouse://<host>:<port>/<schema>
. | β
| Don't add the username and password as JDBC options. Any other JDBC option could be added at the end of the JDBC URL. For ClickHouse Cloud users, add
ssl=true&sslmode=NONE
to the
jdbcUrl
. |
|
clickHouseUsername
| The ClickHouse username to authenticate with. | β
| |
|
clickHousePassword | {"source_file": "bigquery-to-clickhouse.md"} | [
-0.04223363846540451,
0.056630879640579224,
-0.051115553826093674,
-0.0010035844752565026,
-0.09104421734809875,
0.04580479860305786,
0.029734469950199127,
0.038698114454746246,
-0.03905998170375824,
-0.029149286448955536,
0.06775809079408646,
-0.07618376612663269,
0.02608567103743553,
-0.... |
1c543a02-74ef-4f65-9589-e4903b6e60cc | |
clickHousePassword
| The ClickHouse password to authenticate with. | β
| |
|
clickHouseTable
| The target ClickHouse table into which data will be inserted. | β
| |
|
maxInsertBlockSize
| The maximum block size for insertion, if we control the creation of blocks for insertion (ClickHouseIO option). | | A
ClickHouseIO
option. |
|
insertDistributedSync
| If enabled, an insert query into a distributed table waits until data is sent to all nodes in the cluster. (ClickHouseIO option). | | A
ClickHouseIO
option. |
|
insertQuorum
| For INSERT queries in the replicated table, wait for writing to the specified number of replicas and linearize the addition of the data. 0 - disabled. | | A
ClickHouseIO | {"source_file": "bigquery-to-clickhouse.md"} | [
-0.00426129437983036,
-0.08165998011827469,
-0.10138262063264847,
0.05370175465941429,
-0.0843576118350029,
-0.039246849715709686,
-0.05949900671839714,
-0.04900052398443222,
-0.08691255003213882,
0.04004938527941704,
0.04182041063904762,
-0.025504369288682938,
0.1291600465774536,
-0.10215... |
389e556f-0257-4acc-8590-3efef2ab3c40 | ClickHouseIO
option. This setting is disabled in default server settings. |
|
insertDeduplicate
| For INSERT queries in the replicated table, specifies that deduplication of inserting blocks should be performed. | | A
ClickHouseIO
option. |
|
maxRetries
| Maximum number of retries per insert. | | A
ClickHouseIO
option. |
|
inputTableSpec
| The BigQuery table to read from. Specify either
inputTableSpec
or
query
. When both are set, the
query
parameter takes precedence. Example:
<BIGQUERY_PROJECT>:<DATASET_NAME>.<INPUT_TABLE>
. | | Reads data directly from BigQuery storage using the
BigQuery Storage Read API
. Be aware of the
Storage Read API limitations
. |
|
outputDeadletterTable
| The BigQuery table for messages that failed to reach the output table. If the table doesn't exist, it is created during pipeline execution. If not specified,
<outputTableSpec>_error_records
is used. For example,
<PROJECT_ID>:<DATASET_NAME>.<DEADLETTER_TABLE>
. | | |
|
query
| The SQL query to use to read data from BigQuery. If the BigQuery dataset is in a different project than the Dataflow job, specify the full dataset name in the SQL query, for example:
<PROJECT_ID>.<DATASET_NAME>.<TABLE_NAME>
. Defaults to
GoogleSQL
unless
useLegacySql | {"source_file": "bigquery-to-clickhouse.md"} | [
0.02782341279089451,
-0.056776512414216995,
-0.07442590594291687,
0.05075347423553467,
-0.05781581997871399,
-0.06817132234573364,
-0.06264691799879074,
0.0021023578010499477,
-0.04484478384256363,
0.04176631197333336,
0.03507501259446144,
-0.0258049163967371,
0.08332082629203796,
-0.15149... |
4bb2f6dc-2032-4893-baa5-69234e1c7cab | <PROJECT_ID>.<DATASET_NAME>.<TABLE_NAME>
. Defaults to
GoogleSQL
unless
useLegacySql
is true. | | You must specify either
inputTableSpec
or
query
. If you set both parameters, the template uses the
query
parameter. Example:
SELECT * FROM sampledb.sample_table
. |
|
useLegacySql
| Set to
true
to use legacy SQL. This parameter only applies when using the
query
parameter. Defaults to
false
. | | |
|
queryLocation
| Needed when reading from an authorized view without the underlying table's permission. For example,
US
. | | |
|
queryTempDataset
| Set an existing dataset to create the temporary table to store the results of the query. For example,
temp_dataset
. | | |
|
KMSEncryptionKey
| If reading from BigQuery using the query source, use this Cloud KMS key to encrypt any temporary tables created. For example,
projects/your-project/locations/global/keyRings/your-keyring/cryptoKeys/your-key
. | | | | {"source_file": "bigquery-to-clickhouse.md"} | [
0.023609710857272148,
0.011883731000125408,
-0.08712224662303925,
0.004240481182932854,
-0.044293489307165146,
0.02266056276857853,
0.050987452268600464,
-0.022591589018702507,
-0.041952334344387054,
0.044404610991477966,
-0.009832017123699188,
-0.09749718010425568,
0.09358082711696625,
-0... |
eea61d4b-1dd0-4101-ae85-274d7f5d96cf | :::note
Default values for all
ClickHouseIO
parameters can be found in
ClickHouseIO
Apache Beam Connector
:::
Source and target tables schema {#source-and-target-tables-schema}
To effectively load the BigQuery dataset into ClickHouse, the pipeline performs a column inference process with the following phases:
The template builds a schema object based on the target ClickHouse table.
The template iterates over the BigQuery dataset and attempts to match columns based on their names.
:::important
Your BigQuery dataset (either table or query) must have exactly the same column names as your ClickHouse
target table.
:::
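The name-based matching described above can be sketched as a simple set comparison. This is an assumed simplification of the template's behavior, useful for checking your schemas before launching a job:

```python
# Hedged sketch of name-based column matching: BigQuery columns are matched to
# ClickHouse columns by exact name; anything unmatched is reported.
def match_columns(bq_columns, ch_columns):
    bq, ch = set(bq_columns), set(ch_columns)
    return {
        "matched": sorted(bq & ch),
        "missing_in_clickhouse": sorted(bq - ch),  # would fail to load
        "missing_in_bigquery": sorted(ch - bq),
    }

result = match_columns(["id", "name", "ts"], ["id", "name", "created_at"])
print(result["matched"])  # ['id', 'name']
```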
Data type mapping {#data-types-mapping}
The BigQuery types are converted based on your ClickHouse table definition. Therefore, the table below lists the
recommended mapping you should have in your target ClickHouse table (for a given BigQuery table/query):
0.07724833488464355,
-0.03821119666099548,
-0.004916714504361153,
0.024673711508512497,
-0.03615975007414818,
-0.0016842533368617296,
-0.005714231636375189,
-0.011337380856275558,
-0.11183377355337143,
-0.02055954374372959,
-0.00609980896115303,
-0.0626644492149353,
-0.063815176486969,
-0.... |
d6d9d94f-47c4-4b96-a083-3e487206b7e4 | | BigQuery Type | ClickHouse Type | Notes |
|-----------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
Array Type
|
Array Type
| The inner type must be one of the supported primitive data types listed in this table. |
|
Boolean Type
|
Bool Type
| |
|
Date Type
|
Date Type
| |
|
Datetime Type
|
Datetime Type
| Works as well with
Enum8
,
Enum16
and
FixedString | {"source_file": "bigquery-to-clickhouse.md"} | [
-0.01941719651222229,
-0.005722812842577696,
-0.017938338220119476,
0.015837689861655235,
-0.11440826952457428,
0.02380073443055153,
0.05037974938750267,
0.0029593128710985184,
-0.01576392352581024,
-0.039606306701898575,
0.021538034081459045,
0.018883733078837395,
-0.02266528271138668,
-0... |
392feb33-4f0b-4775-b726-8bb39eaa8250 | |
Datetime Type
|
Datetime Type
| Works as well with
Enum8
,
Enum16
and
FixedString
. |
|
String Type
|
String Type
| In BigQuery all Int types (
INT
,
SMALLINT
,
INTEGER
,
BIGINT
,
TINYINT
,
BYTEINT
) are aliases to
INT64
. We recommend setting the right integer size in ClickHouse, as the template will convert the column based on the defined column type (
Int8
,
Int16
,
Int32
,
Int64
). |
|
Numeric - Integer Types
|
Integer Types
| In BigQuery all Int types (
INT
,
SMALLINT
,
INTEGER
,
BIGINT
,
TINYINT
,
BYTEINT
) are aliases to
INT64
. We recommend setting the right integer size in ClickHouse, as the template will convert the column based on the defined column type (
Int8
,
Int16
,
Int32
,
Int64
). The template will also convert unsigned Int types if used in the ClickHouse table (
UInt8
,
UInt16
,
UInt32
,
UInt64
). |
|
Numeric - Float Types
|
Float Types
| Supported ClickHouse types:
Float32
and
Float64
| | {"source_file": "bigquery-to-clickhouse.md"} | [
0.10306571424007416,
0.03280071169137955,
-0.02205055207014084,
-0.009253870695829391,
-0.025842629373073578,
0.03178371116518974,
-0.03203047066926956,
0.053909510374069214,
-0.07173171639442444,
-0.013156757690012455,
-0.023099618032574654,
-0.09776650369167328,
-0.07369455695152283,
-0.... |
dcb8c4e3-71c8-404c-ac2a-f04b0d4f6d28 | Running the Template {#running-the-template}
The BigQuery to ClickHouse template is available for execution via the Google Cloud CLI.
:::note
Be sure to review this document, and specifically the above sections, to fully understand the template's configuration
requirements and prerequisites.
:::
Sign in to your Google Cloud Console and search for DataFlow.
Press the
CREATE JOB FROM TEMPLATE
button
Once the template form is open, enter a job name and select the desired region.
In the
DataFlow Template
input, type
ClickHouse
or
BigQuery
, and select the
BigQuery to ClickHouse
template
Once selected, the form will expand to allow you to provide additional details:
The ClickHouse server JDBC url, with the following format
jdbc:clickhouse://host:port/schema
.
The ClickHouse username.
The ClickHouse target table name.
:::note
The ClickHouse password option is marked as optional, for use cases where there is no password configured.
To add it, please scroll down to the
Password for ClickHouse Endpoint
option.
:::
Customize and add any BigQuery/ClickHouseIO related configurations, as detailed in
the
Template Parameters
section
Install & Configure
gcloud
CLI {#install--configure-gcloud-cli}
If not already installed, install the
gcloud
CLI
.
Follow the
Before you begin
section
in
this guide
to set
up the required configurations, settings, and permissions for running the DataFlow template.
Run command {#run-command}
Use the
gcloud dataflow flex-template run
command to run a Dataflow job that uses the Flex Template.
Below is an example of the command:
```bash
gcloud dataflow flex-template run "bigquery-clickhouse-dataflow-$(date +%Y%m%d-%H%M%S)" \
--template-file-gcs-location "gs://clickhouse-dataflow-templates/bigquery-clickhouse-metadata.json" \
--parameters inputTableSpec="<bigquery table id>",jdbcUrl="jdbc:clickhouse://<clickhouse host>:<clickhouse port>/<schema>?ssl=true&sslmode=NONE",clickHouseUsername="<username>",clickHousePassword="<password>",clickHouseTable="<clickhouse target table>"
```
Command breakdown {#command-breakdown}
Job Name:
The text following the
run
keyword is the unique job name.
Template File:
The JSON file specified by
--template-file-gcs-location
defines the template structure and
details about the accepted parameters. The mentioned file path is public and ready to use.
Parameters:
Parameters are separated by commas. For string-based parameters, enclose the values in double quotes.
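Assembling that comma-separated string by hand is error-prone. A small helper (a hypothetical convenience, not part of the gcloud CLI; parameter values shown are examples only) can build it from a dict:

```python
# Hedged helper: join template parameters into the single comma-separated
# string that the --parameters flag expects. Note: values containing commas
# would need additional quoting and are not handled here.
def build_parameters(params: dict) -> str:
    return ",".join(f"{key}={value}" for key, value in params.items())

flags = build_parameters({
    "inputTableSpec": "my-project:dataset.table",  # example values only
    "clickHouseUsername": "default",
    "clickHouseTable": "target_table",
})
print(flags)
```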
Expected response {#expected-response}
After running the command, you should see a response similar to the following:
```yaml
job:
  createTime: '2025-01-26T14:34:04.608442Z'
  currentStateTime: '1970-01-01T00:00:00Z'
  id: 2025-01-26_06_34_03-13881126003586053150
  location: us-central1
  name: bigquery-clickhouse-dataflow-20250126-153400
  projectId: ch-integrations
  startTime: '2025-01-26T14:34:04.608442Z'
```
-0.018487753346562386,
0.007006722502410412,
0.008982928469777107,
-0.07266908884048462,
-0.0397331640124321,
-0.0051642353646457195,
-0.04337428882718086,
-0.06294315308332443,
-0.02385910600423813,
0.004407702479511499,
-0.08706363290548325,
-0.04822220280766487,
-0.00985068827867508,
-0... |
caae3bc9-2be8-4d60-8536-7fcc64c5221d | Monitor the job {#monitor-the-job}
Navigate to the
Dataflow Jobs tab
in your Google Cloud Console to
monitor the status of the job. You'll find the job details, including progress and any errors:
Troubleshooting {#troubleshooting}
Memory limit (total) exceeded error (code 241) {#code-241-dbexception-memory-limit-total-exceeded}
This error occurs when ClickHouse runs out of memory while processing large batches of data. To resolve this issue:
Increase the instance resources: Upgrade your ClickHouse server to a larger instance with more memory to handle the data processing load.
Decrease the batch size: Adjust the batch size in your Dataflow job configuration to send smaller chunks of data to ClickHouse, reducing memory consumption per batch. These changes can help balance resource usage during data ingestion.
Template source code {#template-source-code}
The template's source code is available in ClickHouse's
DataflowTemplates
fork. | {"source_file": "bigquery-to-clickhouse.md"} | [
-0.012911940924823284,
-0.021782653406262398,
0.01895892061293125,
-0.02594945766031742,
-0.02528952620923519,
-0.07147251069545746,
-0.0271696038544178,
-0.05369821563363075,
-0.041253462433815,
0.04286370053887367,
-0.030138248577713966,
0.007978671230375767,
-0.04689788073301315,
-0.073... |
08460e15-7bdf-4715-be0d-715e50ea9c11 | slug: /integrations/iceberg
sidebar_label: 'Iceberg'
title: 'Iceberg'
description: 'Page describing the IcebergFunction which can be used to integrate ClickHouse with the Iceberg table format'
doc_type: 'guide'
keywords: ['iceberg table function', 'apache iceberg', 'data lake format']
hide_title: true
import IcebergFunction from '@site/docs/sql-reference/table-functions/iceberg.md'; | {"source_file": "iceberg.md"} | [
-0.033398255705833435,
0.02138698473572731,
-0.037103958427906036,
0.04152173176407814,
0.013450536876916885,
-0.03428514301776886,
0.013230538927018642,
0.058708228170871735,
-0.07415448874235153,
-0.013468567281961441,
0.03926428407430649,
-0.0049939751625061035,
0.08896831423044205,
-0.... |
01c69cff-36d4-4822-9f4f-c7e031ebea27 | slug: /integrations/rabbitmq
sidebar_label: 'RabbitMQ'
title: 'RabbitMQ'
hide_title: true
description: 'Page describing the RabbitMQEngine integration'
doc_type: 'reference'
keywords: ['rabbitmq', 'message queue', 'streaming', 'integration', 'data ingestion']
import RabbitMQEngine from '@site/docs/engines/table-engines/integrations/rabbitmq.md'; | {"source_file": "rabbitmq.md"} | [
0.0652281641960144,
0.01461931224912405,
-0.012893431819975376,
0.04217046499252319,
-0.024020040407776833,
-0.011378212831914425,
0.08506430685520172,
-0.0036896178498864174,
-0.04141649231314659,
0.007050030864775181,
0.021461518481373787,
-0.023382136598229408,
0.06782183051109314,
-0.0... |
16af168e-4a45-4cf4-bc0d-efc42b0c634e | slug: /integrations/rocksdb
sidebar_label: 'RocksDB'
title: 'RocksDB'
hide_title: true
description: 'Page describing the RocksDBTableEngine'
doc_type: 'reference'
keywords: ['rocksdb', 'embedded database', 'integration', 'storage engine', 'key-value store']
import RocksDBTableEngine from '@site/docs/engines/table-engines/integrations/embedded-rocksdb.md'; | {"source_file": "rocksdb.md"} | [
0.018647823482751846,
0.03632107377052307,
0.00019207542936783284,
0.04112572595477104,
0.028543595224618912,
0.0019116996554657817,
0.0012611259007826447,
0.01819351315498352,
-0.08495384454727173,
-0.013993565924465656,
0.02363641746342182,
-0.012447310611605644,
0.11422082781791687,
-0.... |
ae0b381f-b189-4b43-a99d-6edb0e9d8c67 | slug: /integrations/hive
sidebar_label: 'Hive'
title: 'Hive'
hide_title: true
description: 'Page describing the Hive table engine'
doc_type: 'reference'
keywords: ['hive', 'table engine', 'integration']
import HiveTableEngine from '@site/docs/engines/table-engines/integrations/hive.md'; | {"source_file": "hive.md"} | [
0.00519147515296936,
0.05856027826666832,
0.003078828100115061,
-0.01543708797544241,
0.01222438644617796,
0.025828827172517776,
0.04429808631539345,
0.01500678900629282,
-0.09404391050338745,
0.01748719811439514,
0.011655713431537151,
-0.0481138713657856,
0.04529297351837158,
-0.080966219... |
de87f148-8b5c-4f95-a276-3b73e8706772 | slug: /integrations/hudi
sidebar_label: 'Hudi'
title: 'Hudi'
hide_title: true
description: 'Page describing the Hudi table engine'
doc_type: 'reference'
keywords: ['hudi table engine', 'apache hudi', 'data lake integration']
import HudiTableEngine from '@site/docs/engines/table-engines/integrations/hudi.md'; | {"source_file": "hudi.md"} | [
-0.03884969279170036,
0.05202804505825043,
-0.051252320408821106,
0.01192468497902155,
0.039676617830991745,
-0.04350150749087334,
0.018125558272004128,
-0.00572452787309885,
-0.08009111881256104,
-0.0360073521733284,
0.06427882611751556,
-0.003078034147620201,
0.06698216497898102,
-0.1297... |
2628fe1c-20e6-4144-ad86-36c1b8093f35 | slug: /integrations/redis
sidebar_label: 'Redis'
title: 'Redis'
description: 'Page describing the Redis table function'
doc_type: 'reference'
hide_title: true
keywords: ['redis', 'cache', 'integration', 'data source', 'key-value store']
import RedisFunction from '@site/docs/sql-reference/table-functions/redis.md'; | {"source_file": "redis.md"} | [
-0.005636930000036955,
0.0020854382310062647,
-0.0870809257030487,
0.04029451683163643,
-0.02945529669523239,
-0.028205715119838715,
0.07562685012817383,
0.0535857193171978,
-0.035375818610191345,
-0.00848698802292347,
0.027881737798452377,
-0.009260110557079315,
0.10815896838903427,
-0.11... |
e97e580b-8963-4f69-bbab-b536541f80e1 | slug: /integrations/deltalake
sidebar_label: 'Delta Lake'
hide_title: true
title: 'Delta Lake'
description: 'Page describing how users can integrate with the Delta lake table format via the table function.'
doc_type: 'reference'
keywords: ['delta lake', 'table function', 'data lake format']
import DeltaLakeFunction from '@site/docs/sql-reference/table-functions/deltalake.md'; | {"source_file": "deltalake.md"} | [
-0.00778587581589818,
0.0649212896823883,
-0.010035893879830837,
0.007933546788990498,
-0.016425518319010735,
-0.005050854291766882,
0.04319128021597862,
0.04890729859471321,
-0.071022167801857,
-0.015055222436785698,
0.038809604942798615,
-0.04470166563987732,
0.06438102573156357,
-0.0968... |
54a49d65-f9b8-45ba-abf4-cb9cb4edc9fe | slug: /integrations/nats
sidebar_label: 'NATS'
title: 'NATS'
hide_title: true
description: 'Page describing integration with the NATS engine'
doc_type: 'reference'
keywords: ['nats', 'message queue', 'streaming', 'integration', 'data ingestion']
import NatsEngine from '@site/docs/engines/table-engines/integrations/nats.md'; | {"source_file": "nats.md"} | [
-0.006174854002892971,
0.06537798047065735,
-0.030842922627925873,
0.015227623283863068,
0.07370476424694061,
-0.037490516901016235,
0.013260945677757263,
0.008192827925086021,
-0.09188757091760635,
-0.012499912641942501,
-0.00119036587420851,
-0.037156280130147934,
0.013708069920539856,
-... |
8185873a-dbf1-4e86-8bdb-6f9f7beec51b | slug: /integrations/sqlite
sidebar_label: 'SQLite'
title: 'SQLite'
hide_title: true
description: 'Page describing integration using the SQLite engine'
doc_type: 'reference'
keywords: ['sqlite', 'embedded database', 'integration', 'data source', 'file database']
import SQLiteEngine from '@site/docs/engines/table-engines/integrations/sqlite.md'; | {"source_file": "sqlite.md"} | [
-0.021622544154524803,
0.037705209106206894,
-0.02285575307905674,
0.039871696382761,
0.022871628403663635,
-0.02033570595085621,
0.04800042510032654,
0.024420499801635742,
-0.07980886101722717,
-0.022728201001882553,
0.0169576033949852,
-0.019572971388697624,
0.06396176666021347,
-0.09154... |
95ce3920-4448-470c-8be1-e5f94e721736 | slug: /integrations/mongodb
sidebar_label: 'MongoDB'
title: 'MongoDB'
hide_title: true
description: 'Page describing integration using the MongoDB engine'
doc_type: 'reference'
keywords: ['mongodb', 'nosql', 'integration', 'data source', 'document database']
import MongoDBEngine from '@site/docs/engines/table-engines/integrations/mongodb.md'; | {"source_file": "mongodb.md"} | [
-0.0021843588910996914,
0.0911296084523201,
-0.015931464731693268,
0.059392206370830536,
0.04458664357662201,
-0.043165821582078934,
-0.044491272419691086,
0.015845634043216705,
-0.015063787810504436,
-0.0339815579354763,
-0.02871347777545452,
-0.038253530859947205,
0.030382337048649788,
-... |
02ec96c3-7949-425f-8a88-93207cba8057 | description: 'Documentation for Distributed Ddl'
sidebar_label: 'Distributed DDL'
slug: /sql-reference/other/distributed-ddl
title: 'Page for Distributed DDL'
doc_type: 'reference'
import Content from '@site/docs/sql-reference/distributed-ddl.md'; | {"source_file": "distributed-ddl.md"} | [
-0.0496206060051918,
-0.07740184664726257,
-0.07880290597677231,
-0.007791565265506506,
0.008564852178096771,
-0.039318498224020004,
-0.03996487706899643,
0.011663869954645634,
-0.05975184217095375,
-0.023518577218055725,
0.0035390122793614864,
0.024673230946063995,
0.04942161217331886,
-0... |
3a351343-208b-4a03-932c-b3c206818580 | description: 'Documentation for Operators'
displayed_sidebar: 'sqlreference'
sidebar_label: 'Operators'
sidebar_position: 38
slug: /sql-reference/operators/
title: 'Operators'
doc_type: 'reference'
Operators
ClickHouse transforms operators to their corresponding functions at the query parsing stage according to their priority, precedence, and associativity.
Access Operators {#access-operators}
a[N]
β Access to an element of an array. The
arrayElement(a, N)
function.
a.N
β Access to a tuple element. The
tupleElement(a, N)
function.
Numeric Negation Operator {#numeric-negation-operator}
-a
β The
negate(a)
function.
For tuple negation:
tupleNegate
.
Multiplication and Division Operators {#multiplication-and-division-operators}
a * b
β The
multiply(a, b)
function.
For multiplying tuple by number:
tupleMultiplyByNumber
, for scalar product:
dotProduct
.
a / b
β The
divide(a, b)
function.
For dividing tuple by number:
tupleDivideByNumber
.
a % b
β The
modulo(a, b)
function.
Addition and Subtraction Operators {#addition-and-subtraction-operators}
a + b
β The
plus(a, b)
function.
For tuple addition:
tuplePlus
.
a - b
β The
minus(a, b)
function.
For tuple subtraction:
tupleMinus
.
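The tuple variants mentioned above can be exercised with operator syntax directly (values are illustrative):

```sql
SELECT (1, 2) + (10, 20),          -- tuplePlus: (11, 22)
       (1, 2) * 3,                 -- tupleMultiplyByNumber: (3, 6)
       dotProduct((1, 2), (3, 4)); -- scalar product: 1*3 + 2*4 = 11
```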
Comparison Operators {#comparison-operators}
equals function {#equals-function}
a = b
β The
equals(a, b)
function.
a == b
β The
equals(a, b)
function.
notEquals function {#notequals-function}
a != b
β The
notEquals(a, b)
function.
a <> b
β The
notEquals(a, b)
function.
lessOrEquals function {#lessorequals-function}
a <= b
β The
lessOrEquals(a, b)
function.
greaterOrEquals function {#greaterorequals-function}
a >= b
β The
greaterOrEquals(a, b)
function.
less function {#less-function}
a < b
β The
less(a, b)
function.
greater function {#greater-function}
a > b
β The
greater(a, b)
function.
like function {#like-function}
a LIKE b
β The
like(a, b)
function.
notLike function {#notlike-function}
a NOT LIKE b
β The
notLike(a, b)
function.
ilike function {#ilike-function}
a ILIKE b
β The
ilike(a, b)
function.
BETWEEN function {#between-function}
a BETWEEN b AND c
β The same as
a >= b AND a <= c
.
a NOT BETWEEN b AND c
β The same as
a < b OR a > c
.
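Since BETWEEN expands to the conjunction above, both bounds are inclusive; a quick check:

```sql
SELECT number
FROM numbers(10)
WHERE number BETWEEN 3 AND 5;  -- same as number >= 3 AND number <= 5: returns 3, 4, 5
```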
Operators for Working with Data Sets {#operators-for-working-with-data-sets}
See
IN operators
and
EXISTS
operator.
in function {#in-function}
a IN ...
β The
in(a, b)
function.
notIn function {#notin-function}
a NOT IN ...
β The
notIn(a, b)
function.
globalIn function {#globalin-function}
a GLOBAL IN ...
β The
globalIn(a, b)
function.
globalNotIn function {#globalnotin-function}
a GLOBAL NOT IN ...
β The
globalNotIn(a, b)
function.
in subquery function {#in-subquery-function}
a = ANY (subquery)
β The
in(a, subquery)
function.
notIn subquery function {#notin-subquery-function}
a != ANY (subquery)
β The same as
a NOT IN (SELECT singleValueOrNull(*) FROM subquery)
.
in subquery function {#in-subquery-function-1}
a = ALL (subquery)
β The same as
a IN (SELECT singleValueOrNull(*) FROM subquery)
.
notIn subquery function {#notin-subquery-function-1}
a != ALL (subquery)
β The
notIn(a, subquery)
function.
Examples
Query with ALL:
sql
SELECT number AS a FROM numbers(10) WHERE a > ALL (SELECT number FROM numbers(3, 3));
Result:
text
ββaββ
β 6 β
β 7 β
β 8 β
β 9 β
βββββ
Query with ANY:
sql
SELECT number AS a FROM numbers(10) WHERE a > ANY (SELECT number FROM numbers(3, 3));
Result:
text
ββaββ
β 4 β
β 5 β
β 6 β
β 7 β
β 8 β
β 9 β
βββββ
Operators for Working with Dates and Times {#operators-for-working-with-dates-and-times}
EXTRACT {#extract}
sql
EXTRACT(part FROM date);
Extract parts from a given date. For example, you can retrieve a month from a given date, or a second from a time.
The
part
parameter specifies which part of the date to retrieve. The following values are available:
DAY
β The day of the month. Possible values: 1β31.
MONTH
β The number of a month. Possible values: 1β12.
YEAR
β The year.
SECOND
β The second. Possible values: 0β59.
MINUTE
β The minute. Possible values: 0β59.
HOUR
β The hour. Possible values: 0β23.
The
part
parameter is case-insensitive.
The
date
parameter specifies the date or the time to process. Either
Date
or
DateTime
type is supported.
Examples:
sql
SELECT EXTRACT(DAY FROM toDate('2017-06-15'));
SELECT EXTRACT(MONTH FROM toDate('2017-06-15'));
SELECT EXTRACT(YEAR FROM toDate('2017-06-15'));
In the following example we create a table and insert into it a value with the
DateTime
type.
sql
CREATE TABLE test.Orders
(
OrderId UInt64,
OrderName String,
OrderDate DateTime
)
ENGINE = Log;
sql
INSERT INTO test.Orders VALUES (1, 'Jarlsberg Cheese', toDateTime('2008-10-11 13:23:44'));
sql
SELECT
toYear(OrderDate) AS OrderYear,
toMonth(OrderDate) AS OrderMonth,
toDayOfMonth(OrderDate) AS OrderDay,
toHour(OrderDate) AS OrderHour,
toMinute(OrderDate) AS OrderMinute,
toSecond(OrderDate) AS OrderSecond
FROM test.Orders;
text
ββOrderYearββ¬βOrderMonthββ¬βOrderDayββ¬βOrderHourββ¬βOrderMinuteββ¬βOrderSecondββ
β 2008 β 10 β 11 β 13 β 23 β 44 β
βββββββββββββ΄βββββββββββββ΄βββββββββββ΄ββββββββββββ΄ββββββββββββββ΄ββββββββββββββ
You can see more examples in
tests
.
INTERVAL {#interval}
Creates an
Interval
-type value that should be used in arithmetical operations with
Date
and
DateTime
-type values.
Types of intervals:
- SECOND
- MINUTE
- HOUR
- DAY
- WEEK
- MONTH
- QUARTER
- YEAR
You can also use a string literal when setting the
INTERVAL
value. For example,
INTERVAL 1 HOUR
is identical to the
INTERVAL '1 hour'
or
INTERVAL '1' hour
.
:::tip
Intervals with different types can't be combined. You can't use expressions like
INTERVAL 4 DAY 1 HOUR
. Specify intervals in units that are smaller or equal to the smallest unit of the interval, for example,
INTERVAL 25 HOUR
. You can use consecutive operations, like in the example below.
:::
Examples:
sql
SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR;
text
ββββcurrent_date_timeββ¬βplus(plus(now(), toIntervalDay(4)), toIntervalHour(3))ββ
β 2020-11-03 22:09:50 β 2020-11-08 01:09:50 β
βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
sql
SELECT now() AS current_date_time, current_date_time + INTERVAL '4 day' + INTERVAL '3 hour';
text
ββββcurrent_date_timeββ¬βplus(plus(now(), toIntervalDay(4)), toIntervalHour(3))ββ
β 2020-11-03 22:12:10 β 2020-11-08 01:12:10 β
βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
sql
SELECT now() AS current_date_time, current_date_time + INTERVAL '4' day + INTERVAL '3' hour;
text
ββββcurrent_date_timeββ¬βplus(plus(now(), toIntervalDay('4')), toIntervalHour('3'))ββ
β 2020-11-03 22:33:19 β 2020-11-08 01:33:19 β
βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
:::note
The
INTERVAL
syntax or
addDays
function is preferred. Simple addition or subtraction (syntax like
now() + ...
) doesn't take time zone settings, such as daylight saving time, into account.
:::
Examples:
sql
SELECT toDateTime('2014-10-26 00:00:00', 'Asia/Istanbul') AS time, time + 60 * 60 * 24 AS time_plus_24_hours, time + toIntervalDay(1) AS time_plus_1_day;
text
βββββββββββββββββtimeββ¬ββtime_plus_24_hoursββ¬βββββtime_plus_1_dayββ
β 2014-10-26 00:00:00 β 2014-10-26 23:00:00 β 2014-10-27 00:00:00 β
βββββββββββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ
See Also
Interval
data type
toInterval
type conversion functions
Logical AND Operator {#logical-and-operator}
Syntax
SELECT a AND b
β calculates logical conjunction of
a
and
b
with the function
and
.
Logical OR Operator {#logical-or-operator}
Syntax
SELECT a OR b
β calculates logical disjunction of
a
and
b
with the function
or
.
Logical Negation Operator {#logical-negation-operator}
Syntax
SELECT NOT a
β calculates logical negation of
a
with the function
not
.
Conditional Operator {#conditional-operator}
a ? b : c
β The
if(a, b, c)
function.
Note:
The conditional operator calculates the values of b and c, then checks whether condition a is met, and then returns the corresponding value. If
b
or
c
is an
arrayJoin()
function, each row will be replicated regardless of the "a" condition.
Conditional Expression {#conditional-expression}
sql
CASE [x]
WHEN a THEN b
[WHEN ... THEN ...]
[ELSE c]
END
If
x
is specified, then
transform(x, [a, ...], [b, ...], c)
function is used. Otherwise β
multiIf(a, b, ..., c)
.
If there is no
ELSE c
clause in the expression, the default value is
NULL
.
The
transform
function does not work with
NULL
.
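As a sketch of both CASE forms described above (the branch values are illustrative):

```sql
-- With x: rewrites to transform(x, [0, 1], ['zero', 'one'], 'other')
SELECT CASE number % 3
           WHEN 0 THEN 'zero'
           WHEN 1 THEN 'one'
           ELSE 'other'
       END
FROM numbers(6);

-- Without x: rewrites to multiIf(cond1, res1, cond2, res2, default)
SELECT CASE
           WHEN number < 2 THEN 'small'
           WHEN number < 4 THEN 'medium'
           ELSE 'large'
       END
FROM numbers(6);
```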
Concatenation Operator {#concatenation-operator}
s1 || s2
β The
concat(s1, s2) function.
Lambda Creation Operator {#lambda-creation-operator}
x -> expr
β The
lambda(x, expr) function.
The following operators do not have a priority since they are brackets:
Array Creation Operator {#array-creation-operator}
[x1, ...]
β The
array(x1, ...) function.
Tuple Creation Operator {#tuple-creation-operator}
(x1, x2, ...)
β The
tuple(x1, x2, ...) function.
Associativity {#associativity}
All binary operators have left associativity. For example,
1 + 2 + 3
is transformed to
plus(plus(1, 2), 3)
.
Sometimes this does not work the way you expect. For example,
SELECT 4 > 2 > 3
will result in 0.
For efficiency, the
and
and
or
functions accept any number of arguments. The corresponding chains of
AND
and
OR
operators are transformed into a single call of these functions.
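The surprising chained comparison mentioned above can be seen directly; an explicit AND restores the intended meaning:

```sql
SELECT 4 > 3 > 2;        -- parsed as (4 > 3) > 2, i.e. 1 > 2, result: 0
SELECT 4 > 3 AND 3 > 2;  -- the intended chained comparison, result: 1
```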
Checking for
NULL
{#checking-for-null}
ClickHouse supports the
IS NULL
and
IS NOT NULL
operators.
IS NULL {#is_null}
For
Nullable
type values, the
IS NULL
operator returns:
1
, if the value is
NULL
.
0
otherwise.
For other values, the
IS NULL
operator always returns
0
.
Can be optimized by enabling the
optimize_functions_to_subcolumns
setting. With
optimize_functions_to_subcolumns = 1
the function reads only the
null
subcolumn instead of reading and processing the whole column data. The query
SELECT n IS NULL FROM table
transforms to
SELECT n.null FROM table
.
sql
SELECT x+100 FROM t_null WHERE y IS NULL
text
ββplus(x, 100)ββ
β 101 β
ββββββββββββββββ
IS NOT NULL {#is_not_null}
For
Nullable
type values, the
IS NOT NULL
operator returns:
0
, if the value is
NULL
.
1
otherwise.
For other values, the
IS NOT NULL
operator always returns
1
.
sql
SELECT * FROM t_null WHERE y IS NOT NULL
text
ββxββ¬βyββ
β 2 β 3 β
βββββ΄ββββ
Can be optimized by enabling the
optimize_functions_to_subcolumns
setting. With
optimize_functions_to_subcolumns = 1
the function reads only the
null
subcolumn instead of reading and processing the whole column data. The query
SELECT n IS NOT NULL FROM table
transforms to
SELECT NOT n.null FROM table
.
description: 'Documentation for the
EXISTS
operator'
slug: /sql-reference/operators/exists
title: 'EXISTS'
doc_type: 'reference'
EXISTS
The
EXISTS
operator checks whether the result of a subquery is empty. If it is empty, the operator returns
0
. Otherwise, it returns
1
.
EXISTS
can also be used in a
WHERE
clause.
:::tip
References to main query tables and columns are not supported in a subquery.
:::
Syntax
sql
EXISTS(subquery)
Example
Query checking existence of values in a subquery:
sql
SELECT EXISTS(SELECT * FROM numbers(10) WHERE number > 8), EXISTS(SELECT * FROM numbers(10) WHERE number > 11)
Result:
text
ββin(1, _subquery1)ββ¬βin(1, _subquery2)ββ
β 1 β 0 β
βββββββββββββββββββββ΄ββββββββββββββββββββ
Query with a subquery returning several rows:
sql
SELECT count() FROM numbers(10) WHERE EXISTS(SELECT number FROM numbers(10) WHERE number > 8);
Result:
text
ββcount()ββ
β 10 β
βββββββββββ
Query with a subquery that returns an empty result:
sql
SELECT count() FROM numbers(10) WHERE EXISTS(SELECT number FROM numbers(10) WHERE number > 11);
Result:
text
ββcount()ββ
β 0 β
βββββββββββ
description: 'Documentation for the IN operators excluding NOT IN, GLOBAL IN and GLOBAL
NOT IN operators which are covered separately'
slug: /sql-reference/operators/in
title: 'IN Operators'
doc_type: 'reference'
IN Operators
The
IN
,
NOT IN
,
GLOBAL IN
, and
GLOBAL NOT IN
operators are covered separately, since their functionality is quite rich.
The left side of the operator is either a single column or a tuple.
Examples:
sql
SELECT UserID IN (123, 456) FROM ...
SELECT (CounterID, UserID) IN ((34, 123), (101500, 456)) FROM ...
If the left side is a single column that is in the index, and the right side is a set of constants, the system uses the index for processing the query.
Don't list too many values explicitly (i.e. millions). If a data set is large, put it in a temporary table (for example, see the section
External data for query processing
), then use a subquery.
The right side of the operator can be a set of constant expressions, a set of tuples with constant expressions (shown in the examples above), or the name of a database table or
SELECT
subquery in brackets.
ClickHouse allows types to differ in the left and the right parts of the
IN
subquery.
In this case, it converts the right side value to the type of the left side, as
if the
accurateCastOrNull
function were applied to the right side.
This means that the data type becomes
Nullable
, and if the conversion
cannot be performed, it returns
NULL
.
Example
Query:
sql
SELECT '1' IN (SELECT 1);
Result:
text
ββin('1', _subquery49)ββ
β 1 β
ββββββββββββββββββββββββ
If the right side of the operator is the name of a table (for example,
UserID IN users
), this is equivalent to the subquery
UserID IN (SELECT * FROM users)
. Use this when working with external data that is sent along with the query. For example, the query can be sent together with a set of user IDs loaded to the 'users' temporary table, which should be filtered.
If the right side of the operator is a table name that has the Set engine (a prepared data set that is always in RAM), the data set will not be created over again for each query.
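As a sketch of the Set-engine variant just described (the `allowed_users` and `events` names are hypothetical):

```sql
-- A prepared, always-in-RAM set; it is not rebuilt for each query
CREATE TABLE allowed_users (UserID UInt64) ENGINE = Set;
INSERT INTO allowed_users VALUES (123), (456);

-- 'events' stands in for any table being filtered
SELECT count() FROM events WHERE UserID IN allowed_users;
```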
The subquery may specify more than one column for filtering tuples.
Example:
sql
SELECT (CounterID, UserID) IN (SELECT CounterID, UserID FROM ...) FROM ...
The columns to the left and right of the
IN
operator should have the same type.
The
IN
operator and subquery may occur in any part of the query, including in aggregate functions and lambda functions.
Example:
sql
SELECT
EventDate,
avg(UserID IN
(
SELECT UserID
FROM test.hits
WHERE EventDate = toDate('2014-03-17')
)) AS ratio
FROM test.hits
GROUP BY EventDate
ORDER BY EventDate ASC
text
βββEventDateββ¬ββββratioββ
β 2014-03-17 β 1 β
β 2014-03-18 β 0.807696 β
β 2014-03-19 β 0.755406 β
β 2014-03-20 β 0.723218 β
β 2014-03-21 β 0.697021 β
β 2014-03-22 β 0.647851 β
β 2014-03-23 β 0.648416 β
ββββββββββββββ΄βββββββββββ
For each day after March 17th, count the percentage of pageviews made by users who visited the site on March 17th.
A subquery in the
IN
clause is always run just one time on a single server. There are no dependent subqueries.
NULL Processing {#null-processing}
During request processing, the
IN
operator assumes that the result of an operation with
NULL
always equals
0
, regardless of whether
NULL
is on the right or left side of the operator.
NULL
values are not included in any dataset, do not correspond to each other and cannot be compared if
transform_null_in = 0
.
Here is an example with the
t_null
table:
text
ββxββ¬ββββyββ
β 1 β α΄Ία΅α΄Έα΄Έ β
β 2 β 3 β
βββββ΄βββββββ
Running the query
SELECT x FROM t_null WHERE y IN (NULL,3)
gives you the following result:
text
ββxββ
β 2 β
βββββ
You can see that the row in which
y = NULL
is thrown out of the query results. This is because ClickHouse can't decide whether
NULL
is included in the
(NULL,3)
set, returns
0
as the result of the operation, and
SELECT
excludes this row from the final output.
sql
SELECT y IN (NULL, 3)
FROM t_null
text
ββin(y, tuple(NULL, 3))ββ
β 0 β
β 1 β
βββββββββββββββββββββββββ
Distributed Subqueries {#distributed-subqueries}
There are two options for
IN
operators with subqueries (similar to
JOIN
operators): normal
IN
/
JOIN
and
GLOBAL IN
/
GLOBAL JOIN
. They differ in how they are run for distributed query processing.
:::note
Remember that the algorithms described below may work differently depending on the
settings
distributed_product_mode
setting.
:::
When using the regular
IN
, the query is sent to remote servers, and each of them runs the subqueries in the
IN
or
JOIN
clause.
When using
GLOBAL IN
/
GLOBAL JOIN
, first all the subqueries are run for
GLOBAL IN
/
GLOBAL JOIN
, and the results are collected in temporary tables. Then the temporary tables are sent to each remote server, where the queries are run using this temporary data.
For a non-distributed query, use the regular
IN
/
JOIN
.
Be careful when using subqueries in the
IN
/
JOIN
clauses for distributed query processing.
Let's look at some examples. Assume that each server in the cluster has a normal
local_table
. Each server also has a
distributed_table
table with the
Distributed
type, which looks at all the servers in the cluster.
For a query to the
distributed_table
, the query will be sent to all the remote servers and run on them using the
local_table
.
For example, the query
sql
SELECT uniq(UserID) FROM distributed_table
will be sent to all remote servers as
sql
SELECT uniq(UserID) FROM local_table
and run on each of them in parallel, until it reaches the stage where intermediate results can be combined. Then the intermediate results will be returned to the requestor server and merged on it, and the final result will be sent to the client.
Now let's examine a query with
IN
:
sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM local_table WHERE CounterID = 34)
Calculation of the intersection of audiences of two sites.
This query will be sent to all remote servers as
sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM local_table WHERE CounterID = 34)
In other words, the data set in the
IN
clause will be collected on each server independently, only across the data that is stored locally on each of the servers.
This will work correctly and optimally if you are prepared for this case and have spread data across the cluster servers such that the data for a single UserID resides entirely on a single server. In this case, all the necessary data will be available locally on each server. Otherwise, the result will be inaccurate. We refer to this variation of the query as "local IN".
To correct how the query works when data is spread randomly across the cluster servers, you could specify
distributed_table
inside a subquery. The query would look like this:
sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
This query will be sent to all remote servers as
sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
The subquery will begin running on each remote server. Since the subquery uses a distributed table, the subquery that is on each remote server will be resent to every remote server as:
sql
SELECT UserID FROM local_table WHERE CounterID = 34
For example, if you have a cluster of 100 servers, executing the entire query will require 10,000 elementary requests, which is generally considered unacceptable.
In such cases, you should always use
GLOBAL IN
instead of
IN
. Let's look at how it works for the query:
sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID GLOBAL IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
The requestor server will run the subquery:
sql
SELECT UserID FROM distributed_table WHERE CounterID = 34
and the result will be put in a temporary table in RAM. Then the request will be sent to each remote server as:
sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID GLOBAL IN _data1
The temporary table
_data1
will be sent to every remote server with the query (the name of the temporary table is implementation-defined).
This is more optimal than using the normal
IN
. However, keep the following points in mind:
When creating a temporary table, data is not made unique. To reduce the volume of data transmitted over the network, specify DISTINCT in the subquery. (You do not need to do this for a normal
IN
.)
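The de-duplication advice above can be applied like this (table names reuse the running example):

```sql
-- DISTINCT shrinks the temporary table shipped to each remote server
SELECT uniq(UserID)
FROM distributed_table
WHERE UserID GLOBAL IN
(
    SELECT DISTINCT UserID FROM distributed_table WHERE CounterID = 34
);
```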
The temporary table will be sent to all the remote servers. Transmission does not account for network topology. For example, if 10 remote servers reside in a datacenter that is very remote in relation to the requestor server, the data will be sent 10 times over the channel to the remote datacenter. Try to avoid large data sets when using
GLOBAL IN
.
When transmitting data to remote servers, restrictions on network bandwidth are not configurable. You might overload the network.
Try to distribute data across servers so that you do not need to use
GLOBAL IN
on a regular basis.
If you need to use
GLOBAL IN
often, plan the location of the ClickHouse cluster so that a single group of replicas resides in no more than one data center with a fast network between them, so that a query can be processed entirely within a single data center.
It also makes sense to specify a local table in the
GLOBAL IN
clause, in case this local table is only available on the requestor server and you want to use data from it on remote servers.
Distributed Subqueries and max_rows_in_set {#distributed-subqueries-and-max_rows_in_set}
You can use
max_rows_in_set
and
max_bytes_in_set
to control how much data is transferred during distributed queries.
This is especially important if the
GLOBAL IN
query returns a large amount of data. Consider the following SQL:
sql
SELECT * FROM table1 WHERE col1 GLOBAL IN (SELECT col1 FROM table2 WHERE <some_predicate>)
If
some_predicate
is not selective enough, it will return a large amount of data and cause performance issues. In such cases, it is wise to limit the data transfer over the network. Also, note that
set_overflow_mode
is set to
throw
(by default) meaning that an exception is raised when these thresholds are met.
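Assuming query-level settings, the limits above could be applied per query like so (`<some_predicate>` is the placeholder from the example):

```sql
SELECT * FROM table1
WHERE col1 GLOBAL IN (SELECT col1 FROM table2 WHERE <some_predicate>)
SETTINGS max_rows_in_set = 10000000, set_overflow_mode = 'throw';
```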
Distributed Subqueries and max_parallel_replicas {#distributed-subqueries-and-max_parallel_replicas}
When
max_parallel_replicas
is greater than 1, distributed queries are further transformed.
For example, the following:
sql
SELECT CounterID, count() FROM distributed_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
SETTINGS max_parallel_replicas=3
is transformed on each server into:
sql
SELECT CounterID, count() FROM local_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
SETTINGS parallel_replicas_count=3, parallel_replicas_offset=M
where
M
is between
1
and
3
depending on which replica the local query is executing on.
These settings affect every MergeTree-family table in the query and have the same effect as applying
SAMPLE 1/3 OFFSET (M-1)/3
on each table.
Therefore adding the
max_parallel_replicas
setting will only produce correct results if both tables have the same replication scheme and are sampled by UserID or a subkey of it. In particular, if
local_table_2
does not have a sampling key, incorrect results will be produced. The same rule applies to
JOIN
.
One workaround if
local_table_2
does not meet the requirements, is to use
GLOBAL IN
or
GLOBAL JOIN
.
If a table doesn't have a sampling key, the more flexible
parallel_replicas_custom_key
options can be used to produce different and more optimal behaviour.
description: 'Documentation for the SimpleAggregateFunction data type'
sidebar_label: 'SimpleAggregateFunction'
sidebar_position: 48
slug: /sql-reference/data-types/simpleaggregatefunction
title: 'SimpleAggregateFunction Type'
doc_type: 'reference'
SimpleAggregateFunction Type
Description {#description}
The
SimpleAggregateFunction
data type stores the intermediate state of an
aggregate function, but not its full state as the
AggregateFunction
type does.
This optimization can be applied to functions for which the following property
holds:
the result of applying a function
f
to a row set
S1 UNION ALL S2
can
be obtained by applying
f
to parts of the row set separately, and then again
applying
f
to the results:
f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))
.
This property guarantees that partial aggregation results are enough to compute
the combined one, so we do not have to store and process any extra data. For
example, the result of the
min
or
max
functions require no extra steps to
calculate the final result from the intermediate steps, whereas the
avg
function
requires keeping track of a sum and a count, which will be divided to get the
average in a final
Merge
step which combines the intermediate states.
Aggregate function values are commonly produced by calling an aggregate function
with the
-SimpleState
combinator appended to the function name.
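For example, appending -SimpleState to sum yields a value whose type is the matching SimpleAggregateFunction:

```sql
SELECT toTypeName(sumSimpleState(number)) FROM numbers(5);
-- SimpleAggregateFunction(sum, UInt64)
```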
Syntax {#syntax}
sql
SimpleAggregateFunction(aggregate_function_name, types_of_arguments...)
Parameters
aggregate_function_name
- The name of an aggregate function.
types_of_arguments
- Types of the aggregate function arguments.
Supported functions {#supported-functions}
The following aggregate functions are supported:
any
any_respect_nulls
anyLast
anyLast_respect_nulls
min
max
sum
sumWithOverflow
groupBitAnd
groupBitOr
groupBitXor
groupArrayArray
groupUniqArrayArray
groupUniqArrayArrayMap
sumMap
minMap
maxMap
:::note
Values of the
SimpleAggregateFunction(func, Type)
have the same
Type
,
so unlike with the
AggregateFunction
type there is no need to apply
-Merge
/
-State
combinators.
The
SimpleAggregateFunction
type has better performance than the
AggregateFunction
for the same aggregate functions.
:::
Example {#example}
sql
CREATE TABLE simple (id UInt64, val SimpleAggregateFunction(sum, Double)) ENGINE=AggregatingMergeTree ORDER BY id;
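Continuing the example, inserts write plain values and reads need no -Merge finalization (a minimal sketch):

```sql
INSERT INTO simple VALUES (1, 1.5), (1, 2.5), (2, 10);

-- Rows with the same id are summed on background merges; sum() here
-- finalizes any parts that have not been merged yet at query time.
SELECT id, sum(val) FROM simple GROUP BY id ORDER BY id;
```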
Related Content {#related-content}
Blog:
Using Aggregate Combinators in ClickHouse
AggregateFunction
type.
description: 'Documentation for the Dynamic data type in ClickHouse, which can store
values of different types in a single column'
sidebar_label: 'Dynamic'
sidebar_position: 62
slug: /sql-reference/data-types/dynamic
title: 'Dynamic'
doc_type: 'guide'
Dynamic
This type allows storing values of any type without knowing all of them in advance.
To declare a column of
Dynamic
type, use the following syntax:
sql
<column_name> Dynamic(max_types=N)
Where
N
is an optional parameter between
0
and
254
indicating how many different data types can be stored as separate subcolumns inside a column with type
Dynamic
across a single block of data that is stored separately (for example, across a single data part for a MergeTree table). If this limit is exceeded, all values with new types will be stored together in a special shared data structure in binary form. The default value of
max_types
is
32
.
Creating Dynamic {#creating-dynamic}
Using
Dynamic
type in table column definition:
sql
CREATE TABLE test (d Dynamic) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('Hello, World!'), ([1, 2, 3]);
SELECT d, dynamicType(d) FROM test;
text
ββdββββββββββββββ¬βdynamicType(d)ββ
β α΄Ία΅α΄Έα΄Έ β None β
β 42 β Int64 β
β Hello, World! β String β
β [1,2,3] β Array(Int64) β
βββββββββββββββββ΄βββββββββββββββββ
Using CAST from ordinary column:
sql
SELECT 'Hello, World!'::Dynamic AS d, dynamicType(d);
text
ββdββββββββββββββ¬βdynamicType(d)ββ
β Hello, World! β String β
βββββββββββββββββ΄βββββββββββββββββ
Using CAST from
Variant
column:
sql
SET use_variant_as_common_type = 1;
SELECT multiIf((number % 3) = 0, number, (number % 3) = 1, range(number + 1), NULL)::Dynamic AS d, dynamicType(d) FROM numbers(3)
text
ββdββββββ¬βdynamicType(d)ββ
β 0 β UInt64 β
β [0,1] β Array(UInt64) β
β α΄Ία΅α΄Έα΄Έ β None β
βββββββββ΄βββββββββββββββββ
Reading Dynamic nested types as subcolumns {#reading-dynamic-nested-types-as-subcolumns}
Dynamic
type supports reading a single nested type from a
Dynamic
column using the type name as a subcolumn.
So, if you have a column
d Dynamic
you can read a subcolumn of any valid type
T
using the syntax
d.T
.
This subcolumn will have type
Nullable(T)
if
T
can be inside
Nullable
and
T
otherwise. This subcolumn will
be the same size as the original
Dynamic
column and will contain
NULL
values (or empty values if
T
cannot be inside
Nullable
)
in all rows in which the original
Dynamic
column doesn't have type
T
.
Dynamic
subcolumns can also be read using the function
dynamicElement(dynamic_column, type_name)
.
Examples:
sql
CREATE TABLE test (d Dynamic) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('Hello, World!'), ([1, 2, 3]);
SELECT d, dynamicType(d), d.String, d.Int64, d.`Array(Int64)`, d.Date, d.`Array(String)` FROM test;
text
ββdββββββββββββββ¬βdynamicType(d)ββ¬βd.Stringβββββββ¬βd.Int64ββ¬βd.Array(Int64)ββ¬βd.Dateββ¬βd.Array(String)ββ
β α΄Ία΅α΄Έα΄Έ β None β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β [] β α΄Ία΅α΄Έα΄Έ β [] β
β 42 β Int64 β α΄Ία΅α΄Έα΄Έ β 42 β [] β α΄Ία΅α΄Έα΄Έ β [] β
β Hello, World! β String β Hello, World! β α΄Ία΅α΄Έα΄Έ β [] β α΄Ία΅α΄Έα΄Έ β [] β
β [1,2,3] β Array(Int64) β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β [1,2,3] β α΄Ία΅α΄Έα΄Έ β [] β
βββββββββββββββββ΄βββββββββββββββββ΄ββββββββββββββββ΄ββββββββββ΄βββββββββββββββββ΄βββββββββ΄ββββββββββββββββββ
sql
SELECT toTypeName(d.String), toTypeName(d.Int64), toTypeName(d.`Array(Int64)`), toTypeName(d.Date), toTypeName(d.`Array(String)`) FROM test LIMIT 1;
text
ββtoTypeName(d.String)ββ¬βtoTypeName(d.Int64)ββ¬βtoTypeName(d.Array(Int64))ββ¬βtoTypeName(d.Date)ββ¬βtoTypeName(d.Array(String))ββ
β Nullable(String) β Nullable(Int64) β Array(Int64) β Nullable(Date) β Array(String) β
ββββββββββββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββββββββββββββββββββββ
sql
SELECT d, dynamicType(d), dynamicElement(d, 'String'), dynamicElement(d, 'Int64'), dynamicElement(d, 'Array(Int64)'), dynamicElement(d, 'Date'), dynamicElement(d, 'Array(String)') FROM test;
text
ββdββββββββββββββ¬βdynamicType(d)ββ¬βdynamicElement(d, 'String')ββ¬βdynamicElement(d, 'Int64')ββ¬βdynamicElement(d, 'Array(Int64)')ββ¬βdynamicElement(d, 'Date')ββ¬βdynamicElement(d, 'Array(String)')ββ
β α΄Ία΅α΄Έα΄Έ β None β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β [] β α΄Ία΅α΄Έα΄Έ β [] β
β 42 β Int64 β α΄Ία΅α΄Έα΄Έ β 42 β [] β α΄Ία΅α΄Έα΄Έ β [] β
β Hello, World! β String β Hello, World! β α΄Ία΅α΄Έα΄Έ β [] β α΄Ία΅α΄Έα΄Έ β [] β
β [1,2,3] β Array(Int64) β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β [1,2,3] β α΄Ία΅α΄Έα΄Έ β [] β
βββββββββββββββββ΄βββββββββββββββββ΄ββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββ
To find out which type is stored in each row, the function
dynamicType(dynamic_column)
can be used. It returns a
String
with the value type name for each row (or
'None'
if the row is
NULL
).
Example:
sql
CREATE TABLE test (d Dynamic) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('Hello, World!'), ([1, 2, 3]);
SELECT dynamicType(d) FROM test;
text
ββdynamicType(d)ββ
β None β
β Int64 β
β String β
β Array(Int64) β
ββββββββββββββββββ
Conversion between Dynamic column and other columns {#conversion-between-dynamic-column-and-other-columns}
There are 4 possible conversions that can be performed with
Dynamic
column.
Converting an ordinary column to a Dynamic column {#converting-an-ordinary-column-to-a-dynamic-column}
sql
SELECT 'Hello, World!'::Dynamic AS d, dynamicType(d);
text
ββdββββββββββββββ¬βdynamicType(d)ββ
β Hello, World! β String β
βββββββββββββββββ΄βββββββββββββββββ
Converting a String column to a Dynamic column through parsing {#converting-a-string-column-to-a-dynamic-column-through-parsing}
To parse
Dynamic
type values from a
String
column you can enable setting
cast_string_to_dynamic_use_inference
:
sql
SET cast_string_to_dynamic_use_inference = 1;
SELECT CAST(materialize(map('key1', '42', 'key2', 'true', 'key3', '2020-01-01')), 'Map(String, Dynamic)') as map_of_dynamic, mapApply((k, v) -> (k, dynamicType(v)), map_of_dynamic) as map_of_dynamic_types;
text
ββmap_of_dynamicβββββββββββββββββββββββββββββββ¬βmap_of_dynamic_typesββββββββββββββββββββββββββ
β {'key1':42,'key2':true,'key3':'2020-01-01'} β {'key1':'Int64','key2':'Bool','key3':'Date'} β
βββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββ
Converting a Dynamic column to an ordinary column {#converting-a-dynamic-column-to-an-ordinary-column}
It is possible to convert a
Dynamic
column to an ordinary column. In this case all nested types will be converted to a destination type:
sql
CREATE TABLE test (d Dynamic) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('42.42'), (true), ('e10');
SELECT d::Nullable(Float64) FROM test;
text
ββCAST(d, 'Nullable(Float64)')ββ
β α΄Ία΅α΄Έα΄Έ β
β 42 β
β 42.42 β
β 1 β
β 0 β
ββββββββββββββββββββββββββββββββ
Converting a Variant column to Dynamic column {#converting-a-variant-column-to-dynamic-column}
sql
CREATE TABLE test (v Variant(UInt64, String, Array(UInt64))) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('String'), ([1, 2, 3]);
SELECT v::Dynamic AS d, dynamicType(d) FROM test;
text
ββdββββββββ¬βdynamicType(d)ββ
β α΄Ία΅α΄Έα΄Έ β None β
β 42 β UInt64 β
β String β String β
β [1,2,3] β Array(UInt64) β
βββββββββββ΄βββββββββββββββββ
Converting a Dynamic(max_types=N) column to another Dynamic(max_types=K) {#converting-a-dynamicmax_typesn-column-to-another-dynamicmax_typesk}
If
K >= N
then during conversion the data doesn't change:
sql
CREATE TABLE test (d Dynamic(max_types=3)) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), (43), ('42.42'), (true);
SELECT d::Dynamic(max_types=5) as d2, dynamicType(d2) FROM test;
text
ββdββββββ¬βdynamicType(d)ββ
β α΄Ία΅α΄Έα΄Έ β None β
β 42 β Int64 β
β 43 β Int64 β
β 42.42 β String β
β true β Bool β
βββββββββ΄βββββββββββββββββ
If
K < N
, then the values with the rarest types will be inserted into a single special subcolumn, but still will be accessible:
sql
CREATE TABLE test (d Dynamic(max_types=4)) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), (43), ('42.42'), (true), ([1, 2, 3]);
SELECT d, dynamicType(d), d::Dynamic(max_types=2) as d2, dynamicType(d2), isDynamicElementInSharedData(d2) FROM test;
text
ββdββββββββ¬βdynamicType(d)ββ¬βd2βββββββ¬βdynamicType(d2)ββ¬βisDynamicElementInSharedData(d2)ββ
β α΄Ία΅α΄Έα΄Έ β None β α΄Ία΅α΄Έα΄Έ β None β false β
β 42 β Int64 β 42 β Int64 β false β
β 43 β Int64 β 43 β Int64 β false β
β 42.42 β String β 42.42 β String β false β
β true β Bool β true β Bool β true β
β [1,2,3] β Array(Int64) β [1,2,3] β Array(Int64) β true β
βββββββββββ΄βββββββββββββββββ΄ββββββββββ΄ββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββ
The function
isDynamicElementInSharedData
returns
true
for rows that are stored in the special shared data structure inside
Dynamic
. As we can see, the resulting column contains only 2 types that are not stored in the shared data structure.
If
K=0
, all types will be inserted into a single special subcolumn:
sql
CREATE TABLE test (d Dynamic(max_types=4)) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), (43), ('42.42'), (true), ([1, 2, 3]);
SELECT d, dynamicType(d), d::Dynamic(max_types=0) as d2, dynamicType(d2), isDynamicElementInSharedData(d2) FROM test;
text
ββdββββββββ¬βdynamicType(d)ββ¬βd2βββββββ¬βdynamicType(d2)ββ¬βisDynamicElementInSharedData(d2)ββ
β α΄Ία΅α΄Έα΄Έ β None β α΄Ία΅α΄Έα΄Έ β None β false β
β 42 β Int64 β 42 β Int64 β true β
β 43 β Int64 β 43 β Int64 β true β
β 42.42 β String β 42.42 β String β true β
β true β Bool β true β Bool β true β
β [1,2,3] β Array(Int64) β [1,2,3] β Array(Int64) β true β
βββββββββββ΄βββββββββββββββββ΄ββββββββββ΄ββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββ
Reading Dynamic type from the data {#reading-dynamic-type-from-the-data}
All text formats (TSV, CSV, CustomSeparated, Values, JSONEachRow, etc.) support reading the
Dynamic
type. During data parsing, ClickHouse tries to infer the type of each value and uses it when inserting into the
Dynamic
column.
Example:
sql
SELECT
d,
dynamicType(d),
dynamicElement(d, 'String') AS str,
dynamicElement(d, 'Int64') AS num,
dynamicElement(d, 'Float64') AS float,
dynamicElement(d, 'Date') AS date,
dynamicElement(d, 'Array(Int64)') AS arr
FROM format(JSONEachRow, 'd Dynamic', $$
{"d" : "Hello, World!"},
{"d" : 42},
{"d" : 42.42},
{"d" : "2020-01-01"},
{"d" : [1, 2, 3]}
$$)
text
ββdββββββββββββββ¬βdynamicType(d)ββ¬βstrββββββββββββ¬ββnumββ¬βfloatββ¬βββββββdateββ¬βarrββββββ
β Hello, World! β String β Hello, World! β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β [] β
β 42 β Int64 β α΄Ία΅α΄Έα΄Έ β 42 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β [] β
β 42.42 β Float64 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β 42.42 β α΄Ία΅α΄Έα΄Έ β [] β
β 2020-01-01 β Date β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β 2020-01-01 β [] β
β [1,2,3] β Array(Int64) β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β [1,2,3] β
βββββββββββββββββ΄βββββββββββββββββ΄ββββββββββββββββ΄βββββββ΄ββββββββ΄βββββββββββββ΄ββββββββββ
Using Dynamic type in functions {#using-dynamic-type-in-functions}
Most functions support arguments of type
Dynamic
. In this case the function is executed separately for each internal data type stored inside the
Dynamic
column.
When the result type of the function depends on the argument types, the result of such a function executed with
Dynamic
arguments will be
Dynamic
. When the result type of the function doesn't depend on the argument types, the result will be
Nullable(T)
where
T
is the usual result type of this function.
Examples:
sql
CREATE TABLE test (d Dynamic) ENGINE=Memory;
INSERT INTO test VALUES (NULL), (1::Int8), (2::Int16), (3::Int32), (4::Int64);
sql
SELECT d, dynamicType(d) FROM test;
text
ββdβββββ¬βdynamicType(d)ββ
β α΄Ία΅α΄Έα΄Έ β None β
β 1 β Int8 β
β 2 β Int16 β
β 3 β Int32 β
β 4 β Int64 β
ββββββββ΄βββββββββββββββββ
sql
SELECT d, d + 1 AS res, toTypeName(res), dynamicType(res) FROM test;
text
ββdβββββ¬βresβββ¬βtoTypeName(res)ββ¬βdynamicType(res)ββ
β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β Dynamic β None β
β 1 β 2 β Dynamic β Int16 β
β 2 β 3 β Dynamic β Int32 β
β 3 β 4 β Dynamic β Int64 β
β 4 β 5 β Dynamic β Int64 β
ββββββββ΄βββββββ΄ββββββββββββββββββ΄βββββββββββββββββββ
sql
SELECT d, d + d AS res, toTypeName(res), dynamicType(res) FROM test;
text
ββdβββββ¬βresβββ¬βtoTypeName(res)ββ¬βdynamicType(res)ββ
β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β Dynamic β None β
β 1 β 2 β Dynamic β Int16 β
β 2 β 4 β Dynamic β Int32 β
β 3 β 6 β Dynamic β Int64 β
β 4 β 8 β Dynamic β Int64 β
ββββββββ΄βββββββ΄ββββββββββββββββββ΄βββββββββββββββββββ
sql
SELECT d, d < 3 AS res, toTypeName(res) FROM test;
text
ββdβββββ¬ββresββ¬βtoTypeName(res)ββ
β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β Nullable(UInt8) β
β 1 β 1 β Nullable(UInt8) β
β 2 β 1 β Nullable(UInt8) β
β 3 β 0 β Nullable(UInt8) β
β 4 β 0 β Nullable(UInt8) β
ββββββββ΄βββββββ΄ββββββββββββββββββ
sql
SELECT d, exp2(d) AS res, toTypeName(res) FROM test;
text
ββdβββββ¬ββresββ¬βtoTypeName(res)ββββ
β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β Nullable(Float64) β
β 1 β 2 β Nullable(Float64) β
β 2 β 4 β Nullable(Float64) β
β 3 β 8 β Nullable(Float64) β
β 4 β 16 β Nullable(Float64) β
ββββββββ΄βββββββ΄ββββββββββββββββββββ
sql
TRUNCATE TABLE test;
INSERT INTO test VALUES (NULL), ('str_1'), ('str_2');
SELECT d, dynamicType(d) FROM test;
text
ββdββββββ¬βdynamicType(d)ββ
β α΄Ία΅α΄Έα΄Έ β None β
β str_1 β String β
β str_2 β String β
βββββββββ΄βββββββββββββββββ
sql
SELECT d, upper(d) AS res, toTypeName(res) FROM test;
text
ββdββββββ¬βresββββ¬βtoTypeName(res)βββ
β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β Nullable(String) β
β str_1 β STR_1 β Nullable(String) β
β str_2 β STR_2 β Nullable(String) β
βββββββββ΄ββββββββ΄βββββββββββββββββββ
sql
SELECT d, extract(d, '([0-3])') AS res, toTypeName(res) FROM test;
text
ββdββββββ¬βresβββ¬βtoTypeName(res)βββ
β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β Nullable(String) β
β str_1 β 1 β Nullable(String) β
β str_2 β 2 β Nullable(String) β
βββββββββ΄βββββββ΄βββββββββββββββββββ
sql
TRUNCATE TABLE test;
INSERT INTO test VALUES (NULL), ([1, 2]), ([3, 4]);
SELECT d, dynamicType(d) FROM test;
text
ββdββββββ¬βdynamicType(d)ββ
β α΄Ία΅α΄Έα΄Έ β None β
β [1,2] β Array(Int64) β
β [3,4] β Array(Int64) β
βββββββββ΄βββββββββββββββββ
sql
SELECT d, d[1] AS res, toTypeName(res), dynamicType(res) FROM test;
text
ββdββββββ¬βresβββ¬βtoTypeName(res)ββ¬βdynamicType(res)ββ
β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β Dynamic β None β
β [1,2] β 1 β Dynamic β Int64 β
β [3,4] β 3 β Dynamic β Int64 β
βββββββββ΄βββββββ΄ββββββββββββββββββ΄βββββββββββββββββββ
If a function cannot be executed on some type inside a
Dynamic
column, an exception will be thrown:
sql
INSERT INTO test VALUES (42), (43), ('str_1');
SELECT d, dynamicType(d) FROM test;
text
ββdββββββ¬βdynamicType(d)ββ
β 42 β Int64 β
β 43 β Int64 β
β str_1 β String β
βββββββββ΄βββββββββββββββββ
ββdββββββ¬βdynamicType(d)ββ
β α΄Ία΅α΄Έα΄Έ β None β
β [1,2] β Array(Int64) β
β [3,4] β Array(Int64) β
βββββββββ΄βββββββββββββββββ
sql
SELECT d, d + 1 AS res, toTypeName(res), dynamicType(d) FROM test;
text
Received exception:
Code: 43. DB::Exception: Illegal types Array(Int64) and UInt8 of arguments of function plus: while executing 'FUNCTION plus(__table1.d : 3, 1_UInt8 :: 1) -> plus(__table1.d, 1_UInt8) Dynamic : 0'. (ILLEGAL_TYPE_OF_ARGUMENT)
We can filter out unneeded types:
sql
SELECT d, d + 1 AS res, toTypeName(res), dynamicType(res) FROM test WHERE dynamicType(d) NOT IN ('String', 'Array(Int64)', 'None')
text
ββdβββ¬βresββ¬βtoTypeName(res)ββ¬βdynamicType(res)ββ
β 42 β 43 β Dynamic β Int64 β
β 43 β 44 β Dynamic β Int64 β
ββββββ΄ββββββ΄ββββββββββββββββββ΄βββββββββββββββββββ
Or extract required type as subcolumn:
sql
SELECT d, d.Int64 + 1 AS res, toTypeName(res) FROM test;
text
ββdββββββ¬ββresββ¬βtoTypeName(res)ββ
β 42 β 43 β Nullable(Int64) β
β 43 β 44 β Nullable(Int64) β
β str_1 β α΄Ία΅α΄Έα΄Έ β Nullable(Int64) β
βββββββββ΄βββββββ΄ββββββββββββββββββ
ββdββββββ¬ββresββ¬βtoTypeName(res)ββ
β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β Nullable(Int64) β
β [1,2] β α΄Ία΅α΄Έα΄Έ β Nullable(Int64) β
β [3,4] β α΄Ία΅α΄Έα΄Έ β Nullable(Int64) β
βββββββββ΄βββββββ΄ββββββββββββββββββ
Using Dynamic type in ORDER BY and GROUP BY {#using-dynamic-type-in-order-by-and-group-by}
During
ORDER BY
and
GROUP BY
values of
Dynamic
types are compared similarly to values of
Variant
type:
The result of operator
<
for values
d1
with underlying type
T1
and
d2
with underlying type
T2
of a type
Dynamic
is defined as follows:
- If
T1 = T2 = T
, the result will be
d1.T < d2.T
(underlying values will be compared).
- If
T1 != T2
, the result will be
T1 < T2
(type names will be compared).
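The two rules above can be sketched in Python. This is an illustrative model of the documented comparison rule, not ClickHouse's implementation; each Dynamic value is represented here as a hypothetical (type_name, value) pair.

```python
from functools import cmp_to_key

def dynamic_cmp(d1, d2):
    """Model of the documented '<' rule: same underlying type -> compare
    values; different types -> compare type names."""
    (t1, v1), (t2, v2) = d1, d2
    if t1 == t2:
        return (v1 > v2) - (v1 < v2)   # compare underlying values
    return (t1 > t2) - (t1 < t2)       # compare type names

rows = [("Int64", 42), ("String", "abc"), ("Array(Int64)", [1, 2, 3]),
        ("Array(Int64)", []), ("Int64", 43), ("String", "abd")]
print([v for _, v in sorted(rows, key=cmp_to_key(dynamic_cmp))])
# -> [[], [1, 2, 3], 42, 43, 'abc', 'abd']
```

This reproduces the ordering of the ORDER BY example below: arrays sort first because the type name Array(Int64) sorts before Int64 and String.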
By default, the
Dynamic
type is not allowed in
GROUP BY
/
ORDER BY
keys. If you want to use it, consider its special comparison rule and enable the
allow_suspicious_types_in_group_by
/
allow_suspicious_types_in_order_by
settings.
Examples:
sql
CREATE TABLE test (d Dynamic) ENGINE=Memory;
INSERT INTO test VALUES (42), (43), ('abc'), ('abd'), ([1, 2, 3]), ([]), (NULL);
sql
SELECT d, dynamicType(d) FROM test;
text
ββdββββββββ¬βdynamicType(d)ββ
β 42 β Int64 β
β 43 β Int64 β
β abc β String β
β abd β String β
β [1,2,3] β Array(Int64) β
β [] β Array(Int64) β
β α΄Ία΅α΄Έα΄Έ β None β
βββββββββββ΄βββββββββββββββββ
sql
SELECT d, dynamicType(d) FROM test ORDER BY d SETTINGS allow_suspicious_types_in_order_by=1;
text
ββdββββββββ¬βdynamicType(d)ββ
β [] β Array(Int64) β
β [1,2,3] β Array(Int64) β
β 42 β Int64 β
β 43 β Int64 β
β abc β String β
β abd β String β
β α΄Ία΅α΄Έα΄Έ β None β
βββββββββββ΄βββββββββββββββββ
Note:
values of Dynamic type with different numeric types are considered different values and are not compared with each other; their type names are compared instead.
Example:
sql
CREATE TABLE test (d Dynamic) ENGINE=Memory;
INSERT INTO test VALUES (1::UInt32), (1::Int64), (100::UInt32), (100::Int64);
SELECT d, dynamicType(d) FROM test ORDER BY d SETTINGS allow_suspicious_types_in_order_by=1;
text
ββvββββ¬βdynamicType(v)ββ
β 1 β Int64 β
β 100 β Int64 β
β 1 β UInt32 β
β 100 β UInt32 β
βββββββ΄βββββββββββββββββ
sql
SELECT d, dynamicType(d) FROM test GROUP BY d SETTINGS allow_suspicious_types_in_group_by=1;
text
ββdββββ¬βdynamicType(d)ββ
β 1 β Int64 β
β 100 β UInt32 β
β 1 β UInt32 β
β 100 β Int64 β
βββββββ΄βββββββββββββββββ
Note:
the described comparison rule is not applied during execution of comparison functions like
<
/
>
/
=
and others because of
the special handling
of functions with the
Dynamic
type.
Reaching the limit in number of different data types stored inside Dynamic {#reaching-the-limit-in-number-of-different-data-types-stored-inside-dynamic}
Dynamic
data type can store only a limited number of different data types as separate subcolumns. By default, this limit is 32, but you can change it in the type declaration using the syntax
Dynamic(max_types=N)
where N is between 0 and 254 (due to implementation details, it's impossible to have more than 254 different data types that can be stored as separate subcolumns inside Dynamic).
When the limit is reached, all new data types inserted into a
Dynamic
column will go into a single shared data structure that stores values of different data types in binary form.
Let's see what happens when the limit is reached in different scenarios.
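As a rough mental model (not the actual implementation), the routing can be sketched like this: within a block, the first max_types distinct types seen get their own subcolumn, and every value of a later type is flagged for the shared structure. All names below are hypothetical.

```python
def route_values(typed_values, max_types):
    """Toy model: `typed_values` is a list of (type_name, value) pairs.
    The first `max_types` distinct types get dedicated subcolumns;
    values of any type seen after that are flagged as shared (True)."""
    subcolumn_types = []
    flags = []
    for type_name, _ in typed_values:
        if type_name not in subcolumn_types and len(subcolumn_types) < max_types:
            subcolumn_types.append(type_name)
        flags.append(type_name not in subcolumn_types)
    return flags

values = [("Int64", 42), ("Array(Int64)", [1, 2, 3]), ("String", "Hello, World!"),
          ("Date", "2020-01-01"), ("Array(String)", ["str1", "str2", "str3"])]
print(route_values(values, max_types=3))
# -> [False, False, False, True, True]
```

The flags mirror the isDynamicElementInSharedData column in the parsing example that follows.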
Reaching the limit during data parsing {#reaching-the-limit-during-data-parsing}
During parsing of
Dynamic
values from the data, when the limit is reached for the current block of data, all new values will be inserted into the shared data structure:
sql
SELECT d, dynamicType(d), isDynamicElementInSharedData(d) FROM format(JSONEachRow, 'd Dynamic(max_types=3)', '
{"d" : 42}
{"d" : [1, 2, 3]}
{"d" : "Hello, World!"}
{"d" : "2020-01-01"}
{"d" : ["str1", "str2", "str3"]}
{"d" : {"a" : 1, "b" : [1, 2, 3]}}
')
text
ββdβββββββββββββββββββββββ¬βdynamicType(d)ββββββββββββββββββ¬βisDynamicElementInSharedData(d)ββ
β 42 β Int64 β false β
β [1,2,3] β Array(Int64) β false β
β Hello, World! β String β false β
β 2020-01-01 β Date β true β
β ['str1','str2','str3'] β Array(String) β true β
β (1,[1,2,3]) β Tuple(a Int64, b Array(Int64)) β true β
ββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββ
As we can see, after inserting 3 different data types
Int64
,
Array(Int64)
and
String
all new types were inserted into the special shared data structure.
During merges of data parts in MergeTree table engines {#during-merges-of-data-parts-in-mergetree-table-engines}
During a merge of several data parts in a MergeTree table, the
Dynamic
column in the resulting data part can reach the limit of different data types that can be stored as separate subcolumns, and won't be able to keep all types from the source parts as subcolumns.
In this case, ClickHouse chooses which types will remain as separate subcolumns after the merge and which types will be inserted into the shared data structure. In most cases, ClickHouse tries to keep the most frequent types and store the rarest types in the shared data structure, but this depends on the implementation.
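A simplified model of that heuristic can be sketched in Python. The real selection logic is implementation-defined; this only illustrates the "keep the most frequent types" idea with hypothetical per-part counts.

```python
from collections import Counter

def choose_subcolumn_types(parts, max_types):
    """Toy heuristic: merge per-part type counts and keep the
    `max_types` most frequent types as dedicated subcolumns."""
    merged = Counter()
    for part_counts in parts:
        merged.update(part_counts)
    return {t for t, _ in merged.most_common(max_types)}

# Hypothetical counts matching the five single-type parts created below.
parts = [{"UInt64": 5}, {"Array(UInt64)": 4}, {"Date": 3},
         {"Map(UInt64, UInt64)": 2}, {"String": 1}]
print(sorted(choose_subcolumn_types(parts, max_types=3)))
# -> ['Array(UInt64)', 'Date', 'UInt64']
```

With max_types=3, the two rarest types would end up in the shared data structure, matching the merge example below.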
Let's see an example of such merge. First, let's create a table with
Dynamic
column, set the limit of different data types to
3
and insert values with
5
different types:
sql
CREATE TABLE test (id UInt64, d Dynamic(max_types=3)) ENGINE=MergeTree ORDER BY id;
SYSTEM STOP MERGES test;
INSERT INTO test SELECT number, number FROM numbers(5);
INSERT INTO test SELECT number, range(number) FROM numbers(4);
INSERT INTO test SELECT number, toDate(number) FROM numbers(3);
INSERT INTO test SELECT number, map(number, number) FROM numbers(2);
INSERT INTO test SELECT number, 'str_' || toString(number) FROM numbers(1);
Each insert will create a separate data part with a
Dynamic
column containing a single type:
sql
SELECT count(), dynamicType(d), isDynamicElementInSharedData(d), _part FROM test GROUP BY _part, dynamicType(d), isDynamicElementInSharedData(d) ORDER BY _part, count();
text
ββcount()ββ¬βdynamicType(d)βββββββ¬βisDynamicElementInSharedData(d)ββ¬β_partββββββ
β 5 β UInt64 β false β all_1_1_0 β
β 4 β Array(UInt64) β false β all_2_2_0 β
β 3 β Date β false β all_3_3_0 β
β 2 β Map(UInt64, UInt64) β false β all_4_4_0 β
β 1 β String β false β all_5_5_0 β
βββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββ΄ββββββββββββ
Now, let's merge all parts into one and see what will happen:
sql
SYSTEM START MERGES test;
OPTIMIZE TABLE test FINAL;
SELECT count(), dynamicType(d), isDynamicElementInSharedData(d), _part FROM test GROUP BY _part, dynamicType(d), isDynamicElementInSharedData(d) ORDER BY _part, count() desc;
text
ββcount()ββ¬βdynamicType(d)βββββββ¬βisDynamicElementInSharedData(d)ββ¬β_partββββββ
β 5 β UInt64 β false β all_1_5_2 β
β 4 β Array(UInt64) β false β all_1_5_2 β
β 3 β Date β false β all_1_5_2 β
β 2 β Map(UInt64, UInt64) β true β all_1_5_2 β
β 1 β String β true β all_1_5_2 β
βββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββ΄ββββββββββββ
As we can see, ClickHouse kept the most frequent types
UInt64
,
Array(UInt64)
and
Date
as subcolumns and inserted all other types into shared data.
JSONExtract functions with Dynamic {#jsonextract-functions-with-dynamic}
All
JSONExtract*
functions support
Dynamic
type:
sql
SELECT JSONExtract('{"a" : [1, 2, 3]}', 'a', 'Dynamic') AS dynamic, dynamicType(dynamic) AS dynamic_type;
text
ββdynamicββ¬βdynamic_typeββββββββββββ
β [1,2,3] β Array(Nullable(Int64)) β
βββββββββββ΄βββββββββββββββββββββββββ
sql
SELECT JSONExtract('{"obj" : {"a" : 42, "b" : "Hello", "c" : [1,2,3]}}', 'obj', 'Map(String, Dynamic)') AS map_of_dynamics, mapApply((k, v) -> (k, dynamicType(v)), map_of_dynamics) AS map_of_dynamic_types
text
ββmap_of_dynamicsβββββββββββββββββββ¬βmap_of_dynamic_typesβββββββββββββββββββββββββββββββββββββ
β {'a':42,'b':'Hello','c':[1,2,3]} β {'a':'Int64','b':'String','c':'Array(Nullable(Int64))'} β
ββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
sql
SELECT JSONExtractKeysAndValues('{"a" : 42, "b" : "Hello", "c" : [1,2,3]}', 'Dynamic') AS dynamics, arrayMap(x -> (x.1, dynamicType(x.2)), dynamics) AS dynamic_types
text
ββdynamicsββββββββββββββββββββββββββββββββ¬βdynamic_typesββββββββββββββββββββββββββββββββββββββββββββββββββ
β [('a',42),('b','Hello'),('c',[1,2,3])] β [('a','Int64'),('b','String'),('c','Array(Nullable(Int64))')] β
ββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Binary output format {#binary-output-format}
In RowBinary format values of
Dynamic
type are serialized in the following format:
text
<binary_encoded_data_type><value_in_binary_format_according_to_the_data_type>
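A minimal sketch of this layout in Python, for illustration only: the one-byte type tags used below are assumptions, not authoritative values (those live in the data types binary encoding specification), and only two types are modeled.

```python
import struct

# Assumed one-byte type tags, for illustration only.
TYPE_TAG = {"Int64": b"\x0a", "String": b"\x15"}

def varuint(n):
    """LEB128 variable-length unsigned integer (used for string lengths)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if n == 0:
            return bytes(out)

def serialize_dynamic(type_name, value):
    """Encode a Dynamic value as <type_tag><value> per the layout above."""
    if type_name == "Int64":
        body = struct.pack("<q", value)       # little-endian 64-bit integer
    elif type_name == "String":
        data = value.encode("utf-8")
        body = varuint(len(data)) + data      # length-prefixed bytes
    else:
        raise NotImplementedError(type_name)
    return TYPE_TAG[type_name] + body

print(serialize_dynamic("Int64", 42).hex())
# -> 0a2a00000000000000
```

The point is the shape of the encoding: one binary type descriptor, then the value serialized exactly as an ordinary column of that type would be.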
description: 'Documentation for the QBit data type in ClickHouse, which allows fine-grained quantization for approximate vector search'
keywords: ['qbit', 'data type']
sidebar_label: 'QBit'
sidebar_position: 64
slug: /sql-reference/data-types/qbit
title: 'QBit Data Type'
doc_type: 'reference'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
The
QBit
data type reorganizes vector storage for faster approximate searches. Instead of storing each vector's elements together, it groups the same binary digit positions across all vectors.
This stores vectors at full precision while letting you choose the fine-grained quantization level at search time: read fewer bits for less I/O and faster calculations, or more bits for higher accuracy. You get the speed benefits of reduced data transfer and computation from quantization, but all the original data remains available when needed.
:::note
QBit
data type and its associated distance functions are currently experimental.
To enable them, please first run
SET allow_experimental_qbit_type = 1
.
If you run into problems, kindly open an issue in the
ClickHouse repository
.
:::
To declare a column of
QBit
type, use the following syntax:
sql
column_name QBit(element_type, dimension)
element_type
β the type of each vector element. The allowed types are
BFloat16
,
Float32
and
Float64
dimension
β the number of elements in each vector
Creating QBit {#creating-qbit}
Using the
QBit
type in table column definition:
sql
CREATE TABLE test (id UInt32, vec QBit(Float32, 8)) ENGINE = Memory;
INSERT INTO test VALUES (1, [1, 2, 3, 4, 5, 6, 7, 8]), (2, [9, 10, 11, 12, 13, 14, 15, 16]);
SELECT vec FROM test ORDER BY id;
text
ββvecβββββββββββββββββββββββ
β [1,2,3,4,5,6,7,8] β
β [9,10,11,12,13,14,15,16] β
ββββββββββββββββββββββββββββ
QBit subcolumns {#qbit-subcolumns}
QBit
implements a subcolumn access pattern that allows you to access individual bit planes of the stored vectors. Each bit position can be accessed using the
.N
syntax, where
N
is the bit position:
sql
CREATE TABLE test (id UInt32, vec QBit(Float32, 8)) ENGINE = Memory;
INSERT INTO test VALUES (1, [0, 0, 0, 0, 0, 0, 0, 0]);
INSERT INTO test VALUES (1, [-0, -0, -0, -0, -0, -0, -0, -0]);
SELECT bin(vec.1) FROM test;
text
ββbin(tupleElement(vec, 1))ββ
β 00000000 β
β 11111111 β
βββββββββββββββββββββββββββββ
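The sign-bit output above can be reproduced outside ClickHouse with a short Python sketch that extracts one bit position across all elements of a Float32 vector. This models the transposed bit-plane idea only; it is not QBit's storage code.

```python
import struct

def bit_plane(vec, n):
    """Bit position n (1-based, counted from the most significant bit)
    of each float32 element; bit 1 is the IEEE 754 sign bit."""
    bits = ""
    for x in vec:
        word = struct.unpack(">I", struct.pack(">f", x))[0]
        bits += str((word >> (32 - n)) & 1)
    return bits

print(bit_plane([0.0] * 8, 1))    # -> 00000000
print(bit_plane([-0.0] * 8, 1))   # -> 11111111
```

Reading only the first few bit planes of every vector is what lets a search touch far less data than reading full-precision elements.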
The number of accessible subcolumns depends on the element type:
BFloat16
: 16 subcolumns (1-16)
Float32
: 32 subcolumns (1-32)
Float64
: 64 subcolumns (1-64)
Vector search functions {#vector-search-functions}
These are the distance functions for vector similarity search that use
QBit
data type:
L2DistanceTransposed
description: 'Documentation for the Map data type in ClickHouse'
sidebar_label: 'Map(K, V)'
sidebar_position: 36
slug: /sql-reference/data-types/map
title: 'Map(K, V)'
doc_type: 'reference'
Map(K, V)
Data type
Map(K, V)
stores key-value pairs.
Unlike other databases, maps are not unique in ClickHouse, i.e. a map can contain two elements with the same key.
(The reason for that is that maps are internally implemented as
Array(Tuple(K, V))
.)
You can use the syntax
m[k]
to obtain the value for key
k
in map
m
.
Also,
m[k]
scans the map, i.e. the runtime of the operation is linear in the size of the map.
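That behavior can be modeled with a plain list of key-value pairs. This is an illustrative sketch; in particular, returning the first matching pair for duplicate keys is an assumption here, not a documented guarantee.

```python
def map_element(m, k, default=0):
    """m models Map(K, V) as a list of (key, value) pairs, mirroring the
    internal Array(Tuple(K, V)). Lookup is a linear scan; a missing key
    yields the value type's default (e.g. 0 for integers)."""
    for key, value in m:
        if key == k:          # assumption: first matching pair wins
            return value
    return default

m = [("key1", 1), ("key2", 10)]
print(map_element(m, "key2"))     # -> 10
print(map_element(m, "missing"))  # -> 0
```

The linear scan is why repeated m[k] lookups on large maps can be costly compared to reading the keys/values subcolumns once.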
Parameters
K
β The type of the Map keys. Arbitrary type except
Nullable
and
LowCardinality
nested with
Nullable
types.
V
β The type of the Map values. Arbitrary type.
Examples
Create a table with a column of type map:
sql
CREATE TABLE tab (m Map(String, UInt64)) ENGINE=Memory;
INSERT INTO tab VALUES ({'key1':1, 'key2':10}), ({'key1':2,'key2':20}), ({'key1':3,'key2':30});
To select
key2
values:
sql
SELECT m['key2'] FROM tab;
Result:
text
ββarrayElement(m, 'key2')ββ
β 10 β
β 20 β
β 30 β
βββββββββββββββββββββββββββ
If the requested key `k` is not contained in the map, `m[k]` returns the value type's default value, e.g. `0` for integer types and `''` for string types.
To check whether a key exists in a map, you can use the function `mapContains`.

```sql
CREATE TABLE tab (m Map(String, UInt64)) ENGINE=Memory;
INSERT INTO tab VALUES ({'key1':100}), ({});
SELECT m['key1'] FROM tab;
```

Result:

```text
ββarrayElement(m, 'key1')ββ
β                     100 β
β                       0 β
βββββββββββββββββββββββββββ
```
## Converting Tuple to Map {#converting-tuple-to-map}

Values of type `Tuple()` can be cast to values of type `Map()` using function `CAST`:

**Example**

Query:

```sql
SELECT CAST(([1, 2, 3], ['Ready', 'Steady', 'Go']), 'Map(UInt8, String)') AS map;
```

Result:

```text
ββmapββββββββββββββββββββββββββββ
β {1:'Ready',2:'Steady',3:'Go'} β
βββββββββββββββββββββββββββββββββ
```
## Reading subcolumns of Map {#reading-subcolumns-of-map}

To avoid reading the entire map, you can use subcolumns `keys` and `values` in some cases.

**Example**

Query:
```sql
CREATE TABLE tab (m Map(String, UInt64)) ENGINE = Memory;
INSERT INTO tab VALUES (map('key1', 1, 'key2', 2, 'key3', 3));
SELECT m.keys FROM tab; -- same as mapKeys(m)
SELECT m.values FROM tab; -- same as mapValues(m)
```
Result:
```text
ββm.keysββββββββββββββββββ
β ['key1','key2','key3'] β
ββββββββββββββββββββββββββ
ββm.valuesββ
β [1,2,3] β
ββββββββββββ
```
## See Also

- `map()` function
- `CAST()` function
- `-Map` combinator for Map datatype

## Related content {#related-content}

- Blog: Building an Observability Solution with ClickHouse - Part 2 - Traces
---
description: 'Documentation for the Data types binary encoding specification'
sidebar_label: 'Data types binary encoding specification.'
sidebar_position: 56
slug: /sql-reference/data-types/data-types-binary-encoding
title: 'Data types binary encoding specification'
doc_type: 'reference'
---

# Data types binary encoding specification

This specification describes the binary format that can be used for binary encoding and decoding of ClickHouse data types. This format is used in `Dynamic` column binary serialization and can be used in the input/output formats `RowBinaryWithNamesAndTypes` and `Native` under the corresponding settings.

The table below describes how each data type is represented in binary format. Each data type encoding consists of 1 byte that indicates the type and some optional additional information.
`var_uint` in the binary encoding means that the size is encoded using Variable-Length Quantity compression.
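For illustration, a `var_uint` value can be produced with the usual 7-bits-per-byte loop, least-significant group first, with the high bit marking continuation. A minimal sketch, assuming that little-endian VLQ scheme (the function name is hypothetical):

```python
def encode_var_uint(n: int) -> bytes:
    # Emit 7 bits per byte, least-significant group first; the high
    # bit of each byte signals that more bytes follow.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

print(encode_var_uint(1).hex())    # 01
print(encode_var_uint(300).hex())  # ac02
```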
| ClickHouse data type | Binary encoding |
|----------------------|-----------------|
| `Nothing` | `0x00` |
| `UInt8` | `0x01` |
| `UInt16` | `0x02` |
| `UInt32` | `0x03` |
| `UInt64` | `0x04` |
| `UInt128` | `0x05` |
| `UInt256` | `0x06` |
| `Int8` | `0x07` |
| `Int16` | `0x08` |
| `Int32` | `0x09` |
| `Int64` | `0x0A` |
| `Int128` | `0x0B` |
| `Int256` | `0x0C` |
| `Float32` | `0x0D` |
| `Float64` | `0x0E` |
| `Date` | `0x0F` |
| `Date32` | `0x10` |
| `DateTime` | `0x11` |
| `DateTime(time_zone)` | `0x12<var_uint_time_zone_name_size><time_zone_name_data>` |
| `DateTime64(P)` | `0x13<uint8_precision>` |
| `DateTime64(P, time_zone)` | `0x14<uint8_precision><var_uint_time_zone_name_size><time_zone_name_data>` |
| `String` | `0x15` |
| `FixedString(N)` | `0x16<var_uint_size>` |
| `Enum8` | `0x17<var_uint_number_of_elements><var_uint_name_size_1><name_data_1><int8_value_1>...<var_uint_name_size_N><name_data_N><int8_value_N>` |
| `Enum16` | `0x18<var_uint_number_of_elements><var_uint_name_size_1><name_data_1><int16_little_endian_value_1>...<var_uint_name_size_N><name_data_N><int16_little_endian_value_N>` |
| `Decimal32(P, S)` | `0x19<uint8_precision><uint8_scale>` |
| `Decimal64(P, S)` | `0x1A<uint8_precision><uint8_scale>` |
| `Decimal128(P, S)` | `0x1B<uint8_precision><uint8_scale>` |
| `Decimal256(P, S)` | `0x1C<uint8_precision><uint8_scale>` |
| `UUID` | `0x1D` |
| `Array(T)` | `0x1E<nested_type_encoding>` |
| `Tuple(T1, ..., TN)` | `0x1F<var_uint_number_of_elements><nested_type_encoding_1>...<nested_type_encoding_N>` |
| `Tuple(name1 T1, ..., nameN TN)` | `0x20<var_uint_number_of_elements><var_uint_name_size_1><name_data_1><nested_type_encoding_1>...<var_uint_name_size_N><name_data_N><nested_type_encoding_N>` |
| `Set` | `0x21` |
| `Interval` | `0x22<interval_kind>` (see [interval kind binary encoding](#interval-kind-binary-encoding)) |
| `Nullable(T)` | `0x23<nested_type_encoding>` |
| `Function` | `0x24<var_uint_number_of_arguments><argument_type_encoding_1>...<argument_type_encoding_N><return_type_encoding>` |
| `AggregateFunction(function_name(param_1, ..., param_N), arg_T1, ..., arg_TN)` | `0x25<var_uint_version><var_uint_function_name_size><function_name_data><var_uint_number_of_parameters><param_1>...<param_N><var_uint_number_of_arguments><argument_type_encoding_1>...<argument_type_encoding_N>` (see [aggregate function parameter binary encoding](#aggregate-function-parameter-binary-encoding)) |
| `LowCardinality(T)` | `0x26<nested_type_encoding>` |
| `Map(K, V)` | `0x27<key_type_encoding><value_type_encoding>` |
| `IPv4` | `0x28` |
| `IPv6` | `0x29` |
| `Variant(T1, ..., TN)` | `0x2A<var_uint_number_of_variants><variant_type_encoding_1>...<variant_type_encoding_N>` |
| `Dynamic(max_types=N)` | `0x2B<uint8_max_types>` |
| Custom type (`Ring`, `Polygon`, etc) | `0x2C<var_uint_type_name_size><type_name_data>` |
| `Bool` | `0x2D` |
| `SimpleAggregateFunction(function_name(param_1, ..., param_N), arg_T1, ..., arg_TN)` | `0x2E<var_uint_function_name_size><function_name_data><var_uint_number_of_parameters><param_1>...<param_N><var_uint_number_of_arguments><argument_type_encoding_1>...<argument_type_encoding_N>` (see [aggregate function parameter binary encoding](#aggregate-function-parameter-binary-encoding)) |
| `Nested(name1 T1, ..., nameN TN)` | `0x2F<var_uint_number_of_elements><var_uint_name_size_1><name_data_1><nested_type_encoding_1>...<var_uint_name_size_N><name_data_N><nested_type_encoding_N>` |
| `JSON(max_dynamic_paths=N, max_dynamic_types=M, path Type, SKIP skip_path, SKIP REGEXP skip_path_regexp)` | `0x30<uint8_serialization_version><var_int_max_dynamic_paths><uint8_max_dynamic_types><var_uint_number_of_typed_paths><var_uint_path_name_size_1><path_name_data_1><encoded_type_1>...<var_uint_number_of_skip_paths><var_uint_skip_path_size_1><skip_path_data_1>...<var_uint_number_of_skip_path_regexps><var_uint_skip_path_regexp_size_1><skip_path_data_regexp_1>...` |
| `BFloat16` | `0x31` |
| `Time` | `0x32` |
| `Time64(P)` | `0x34<uint8_precision>` |
| `QBit(T, N)` | `0x36<element_type_encoding><var_uint_dimension>` |
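As a worked example of the table above, a wrapper type encodes as its one-byte tag followed by the encoding of its nested type, so nested types compose recursively. A minimal sketch covering only a few parameterless and single-nested-type rows (illustrative, not the full specification; names are hypothetical):

```python
# One-byte tags for a few parameterless types from the table above.
SIMPLE_TAGS = {"String": 0x15, "UUID": 0x1D, "IPv4": 0x28, "Bool": 0x2D}
# Wrapper types that prepend a tag to their nested type's encoding.
WRAPPER_TAGS = {"Array": 0x1E, "Nullable": 0x23, "LowCardinality": 0x26}

def encode_type(t) -> bytes:
    # A parameterless type is a single tag byte; a wrapper type is
    # its tag byte followed by the nested type's encoding.
    if isinstance(t, str):
        return bytes([SIMPLE_TAGS[t]])
    wrapper, nested = t  # e.g. ("Array", "String")
    return bytes([WRAPPER_TAGS[wrapper]]) + encode_type(nested)

# Array(Nullable(String)) -> 0x1E 0x23 0x15
print(encode_type(("Array", ("Nullable", "String"))).hex())  # 1e2315
```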
4771c713-89be-4ae1-b1a5-a96728f1b78f | For type
JSON
byte
uint8_serialization_version
indicates the version of the serialization. Right now the version is always 0 but can change in future if new arguments will be introduced for
JSON
type.
## Interval kind binary encoding {#interval-kind-binary-encoding}

The table below describes how the different interval kinds of the `Interval` data type are encoded.

| Interval kind | Binary encoding |
|---------------|-----------------|
| `Nanosecond` | `0x00` |
| `Microsecond` | `0x01` |
| `Millisecond` | `0x02` |
| `Second` | `0x03` |
| `Minute` | `0x04` |
| `Hour` | `0x05` |
| `Day` | `0x06` |
| `Week` | `0x07` |
| `Month` | `0x08` |
| `Quarter` | `0x09` |
| `Year` | `0x0A` |
## Aggregate function parameter binary encoding {#aggregate-function-parameter-binary-encoding}

The table below describes how parameters of `AggregateFunction` and `SimpleAggregateFunction` are encoded.
The encoding of a parameter consists of 1 byte indicating the type of the parameter and the value itself.
ae9ef3d7-c74a-48d5-ac31-5bb64d10d76f | | Parameter type | Binary encoding |
|--------------------------|--------------------------------------------------------------------------------------------------------------------------------|
|
Null
|
0x00
|
|
UInt64
|
0x01<var_uint_value>
|
|
Int64
|
0x02<var_int_value>
|
|
UInt128
|
0x03<uint128_little_endian_value>
|
|
Int128
|
0x04<int128_little_endian_value>
|
|
UInt128
|
0x05<uint128_little_endian_value>
|
|
Int128
|
0x06<int128_little_endian_value>
|
|
Float64
|
0x07<float64_little_endian_value>
|
|
Decimal32
|
0x08<var_uint_scale><int32_little_endian_value>
|
|
Decimal64
|
0x09<var_uint_scale><int64_little_endian_value>
|
|
Decimal128
|
0x0A<var_uint_scale><int128_little_endian_value>
|
|
Decimal256
|
0x0B<var_uint_scale><int256_little_endian_value>
|
|
String
|
0x0C<var_uint_size><data>
|
|
Array
|
0x0D<var_uint_size><value_encoding_1>...<value_encoding_N>
|
|
Tuple
|
0x0E<var_uint_size><value_encoding_1>...<value_encoding_N>
|
|
Map
|
0x0F<var_uint_size><key_encoding_1><value_encoding_1>...<key_encoding_N><value_encoding_N>
|
|
IPv4
|
0x10<uint32_little_endian_value> | {"source_file": "data-types-binary-encoding.md"} | [
0.054553210735321045,
0.036614615470170975,
-0.14550922811031342,
-0.054860975593328476,
-0.08486264199018478,
-0.07605811953544617,
0.020238177850842476,
0.051303502172231674,
-0.07312335073947906,
-0.046395983546972275,
0.04829276353120804,
-0.10731247067451477,
0.04095959663391113,
-0.0... |
bfa3fa93-7d85-4296-b045-189195d8ff69 | 0x0F<var_uint_size><key_encoding_1><value_encoding_1>...<key_encoding_N><value_encoding_N>
|
|
IPv4
|
0x10<uint32_little_endian_value>
|
|
IPv6
|
0x11<uint128_little_endian_value>
|
|
UUID
|
0x12<uuid_value>
|
|
Bool
|
0x13<bool_value>
|
|
Object
|
0x14<var_uint_size><var_uint_key_size_1><key_data_1><value_encoding_1>...<var_uint_key_size_N><key_data_N><value_encoding_N>
|
|
AggregateFunctionState
|
0x15<var_uint_name_size><name_data><var_uint_data_size><data>
|
|
Negative infinity
|
0xFE
|
|
Positive infinity
|
0xFF
| | {"source_file": "data-types-binary-encoding.md"} | [
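For example, a `String` parameter is the tag byte `0x0C`, a `var_uint` byte length, then the raw bytes. A minimal sketch assuming the common 7-bits-per-byte `var_uint` scheme (function names are hypothetical):

```python
def encode_var_uint(n: int) -> bytes:
    # 7 bits per byte, least-significant group first; high bit
    # marks continuation.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | 0x80 if n else byte)
        if not n:
            return bytes(out)

def encode_string_param(s: str) -> bytes:
    # String parameter: tag 0x0C, var_uint byte length, raw bytes.
    data = s.encode("utf-8")
    return b"\x0c" + encode_var_uint(len(data)) + data

print(encode_string_param("max").hex())  # 0c036d6178
```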
---
description: 'Documentation for the Variant data type in ClickHouse'
sidebar_label: 'Variant(T1, T2, ...)'
sidebar_position: 40
slug: /sql-reference/data-types/variant
title: 'Variant(T1, T2, ...)'
doc_type: 'reference'
---
# Variant(T1, T2, ...)

This type represents a union of other data types. Type `Variant(T1, T2, ..., TN)` means that each row of this type has a value of either type `T1` or `T2` or ... or `TN` or none of them (a `NULL` value).

The order of nested types doesn't matter: `Variant(T1, T2)` = `Variant(T2, T1)`.
Nested types can be arbitrary types except `Nullable(...)`, `LowCardinality(Nullable(...))` and `Variant(...)` types.

:::note
It's not recommended to use similar types as variants (for example, different numeric types like `Variant(UInt32, Int64)` or different date types like `Variant(Date, DateTime)`), because working with values of such types can lead to ambiguity. By default, creating such a `Variant` type leads to an exception, but this can be allowed using the setting `allow_suspicious_variant_types`.
:::
## Creating Variant {#creating-variant}

Using the `Variant` type in a table column definition:

```sql
CREATE TABLE test (v Variant(UInt64, String, Array(UInt64))) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('Hello, World!'), ([1, 2, 3]);
SELECT v FROM test;
```

```text
ββvββββββββββββββ
β α΄Ία΅α΄Έα΄Έ          β
β 42            β
β Hello, World! β
β [1,2,3]       β
βββββββββββββββββ
```

Using CAST from ordinary columns:

```sql
SELECT toTypeName(variant) AS type_name, 'Hello, World!'::Variant(UInt64, String, Array(UInt64)) as variant;
```

```text
ββtype_nameβββββββββββββββββββββββββββββββ¬βvariantββββββββ
β Variant(Array(UInt64), String, UInt64) β Hello, World! β
ββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββ
```
Using functions `if`/`multiIf` when arguments don't have a common type (the setting `use_variant_as_common_type` should be enabled for it):

```sql
SET use_variant_as_common_type = 1;
SELECT if(number % 2, number, range(number)) as variant FROM numbers(5);
```

```text
ββvariantββββ
β []        β
β 1         β
β [0,1]     β
β 3         β
β [0,1,2,3] β
βββββββββββββ
```

```sql
SET use_variant_as_common_type = 1;
SELECT multiIf((number % 4) = 0, 42, (number % 4) = 1, [1, 2, 3], (number % 4) = 2, 'Hello, World!', NULL) AS variant FROM numbers(4);
```

```text
ββvariantββββββββ
β 42            β
β [1,2,3]       β
β Hello, World! β
β α΄Ία΅α΄Έα΄Έ          β
βββββββββββββββββ
```
Using functions `array`/`map` if array elements/map values don't have a common type (the setting `use_variant_as_common_type` should be enabled for it):

```sql
SET use_variant_as_common_type = 1;
SELECT array(range(number), number, 'str_' || toString(number)) as array_of_variants FROM numbers(3);
```

```text
ββarray_of_variantsββ
β [[],0,'str_0']    β
β [[0],1,'str_1']   β
β [[0,1],2,'str_2'] β
βββββββββββββββββββββ
```

```sql
SET use_variant_as_common_type = 1;
SELECT map('a', range(number), 'b', number, 'c', 'str_' || toString(number)) as map_of_variants FROM numbers(3);
```
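Conceptually, each `Variant` row carries which of the nested types (if any) its value has. A minimal Python model of that semantics, mirroring what ClickHouse's `variantElement` function does (a sketch of the semantics only, not ClickHouse's storage layout):

```python
# Model a Variant(UInt64, String, Array(UInt64)) column as a list of
# (type_name, value) pairs, with None standing in for NULL rows.
rows = [None, ("UInt64", 42), ("String", "Hello, World!"),
        ("Array(UInt64)", [1, 2, 3])]

def variant_element(row, wanted_type):
    # Return the value if the row holds the requested nested type,
    # otherwise NULL (None) - like variantElement(v, 'String').
    if row is not None and row[0] == wanted_type:
        return row[1]
    return None

print([variant_element(r, "String") for r in rows])
# [None, None, 'Hello, World!', None]
```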