---
slug: /native-protocol/columns
sidebar_position: 4
title: 'Column types'
description: 'Column types for the native protocol'
keywords: ['native protocol columns', 'column types', 'data types', 'protocol data types', 'binary encoding']
doc_type: 'reference'
---
# Column types

See Data Types for general reference.
## Numeric types {#numeric-types}

:::tip
Numeric type encoding matches the in-memory layout of little-endian CPUs such as AMD64 or ARM64.

This allows for very efficient encoding and decoding.
:::
## Integers {#integers}

Int and UInt of 8, 16, 32, 64, 128 or 256 bits, in little endian.
## Floats {#floats}

Float32 and Float64 in IEEE 754 binary representation.
## String {#string}

Just an array of String, i.e. (len, value).
## FixedString(N) {#fixedstringn}

An array of N-byte sequences.
## IP {#ip}

IPv4 is an alias of the `UInt32` numeric type and is represented as UInt32.

IPv6 is an alias of `FixedString(16)` and is represented as binary directly.
## Tuple {#tuple}

Tuple is just an array of columns. For example, Tuple(String, UInt8) is just two columns encoded contiguously.
## Map {#map}

`Map(K, V)` consists of three columns: `Offsets ColUInt64, Keys K, Values V`.

The row count in the `Keys` and `Values` columns is the last value from `Offsets`.
## Array {#array}

`Array(T)` consists of two columns: `Offsets ColUInt64, Data T`.

The row count in `Data` is the last value from `Offsets`.
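The cumulative-offsets scheme can be sketched as follows; `sliceByOffsets` is an illustrative helper name, not part of any client library, and the same scheme applies to the `Map` columns above:

```go
package main

import "fmt"

// sliceByOffsets reconstructs per-row arrays from the flat Data column
// using the cumulative Offsets column, as in Array(T) encoding.
func sliceByOffsets(offsets []uint64, data []string) [][]string {
	rows := make([][]string, len(offsets))
	var start uint64
	for i, end := range offsets {
		rows[i] = data[start:end]
		start = end
	}
	return rows
}

func main() {
	// Two rows: ["a", "b"] and ["c"]; the Data row count is the last offset (3).
	offsets := []uint64{2, 3}
	data := []string{"a", "b", "c"}
	fmt.Println(sliceByOffsets(offsets, data)) // [[a b] [c]]
}
```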
## Nullable {#nullable}

`Nullable(T)` consists of `Nulls ColUInt8, Values T` with the same row count.

```go
// Nulls is a nullability "mask" on the Values column.
// For example, to encode [null, "", "hello", null, "world"]:
//   Values: ["", "", "hello", "", "world"] (len: 5)
//   Nulls:  [ 1,  0,       0,  1,       0] (len: 5)
```
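An executable version of the comment above might look like this; the `encodeNullable` helper is illustrative, not part of any client library:

```go
package main

import "fmt"

// encodeNullable splits a slice of optional strings into the Nulls mask
// and the Values column, both with the same row count.
func encodeNullable(in []*string) (nulls []uint8, values []string) {
	for _, v := range in {
		if v == nil {
			nulls = append(nulls, 1)
			values = append(values, "") // zero-value placeholder for null
		} else {
			nulls = append(nulls, 0)
			values = append(values, *v)
		}
	}
	return
}

func main() {
	s := func(v string) *string { return &v }
	nulls, values := encodeNullable([]*string{nil, s(""), s("hello"), nil, s("world")})
	fmt.Println(nulls) // [1 0 0 1 0]
	fmt.Println(values)
}
```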
## UUID {#uuid}

Alias of `FixedString(16)`; the UUID value is represented as binary.
## Enum {#enum}

Alias of `Int8` or `Int16`, but each integer is mapped to some `String` value.
## LowCardinality type {#low-cardinality}

`LowCardinality(T)` consists of `Index T, Keys K`,
where `K` is one of (UInt8, UInt16, UInt32, UInt64), depending on the size of `Index`.

```go
// The Index (i.e. dictionary) column contains unique values; the Keys column
// contains a sequence of indexes into the Index column that represent the
// actual values.
//
// For example, ["Eko", "Eko", "Amadela", "Amadela", "Amadela", "Amadela"] can
// be encoded as:
//   Index: ["Eko", "Amadela"] (String)
//   Keys:  [0, 0, 1, 1, 1, 1] (UInt8)
//
// The key type is chosen depending on the Index size, i.e. the maximum value
// of the chosen type should be able to represent any index of an Index element.
```
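A minimal dictionary-encoding sketch, assuming UInt8 keys are wide enough (a real encoder picks the key width from the dictionary size, as noted above):

```go
package main

import "fmt"

// encodeLowCardinality builds the Index (dictionary) and Keys columns.
// UInt8 keys are assumed here for brevity.
func encodeLowCardinality(values []string) (index []string, keys []uint8) {
	pos := map[string]uint8{} // value -> position in Index
	for _, v := range values {
		k, ok := pos[v]
		if !ok {
			k = uint8(len(index))
			pos[v] = k
			index = append(index, v)
		}
		keys = append(keys, k)
	}
	return
}

func main() {
	index, keys := encodeLowCardinality([]string{"Eko", "Eko", "Amadela", "Amadela", "Amadela", "Amadela"})
	fmt.Println(index) // [Eko Amadela]
	fmt.Println(keys)  // [0 0 1 1 1 1]
}
```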
## Bool {#bool}

Alias of `UInt8`, where `0` is false and `1` is true.
---
slug: /native-protocol/hash
sidebar_position: 5
title: 'CityHash'
description: 'Native protocol hash'
doc_type: 'reference'
keywords: ['CityHash', 'native protocol hash', 'hash function', 'Google CityHash', 'protocol hashing']
---
# CityHash

ClickHouse uses one of the previous versions of CityHash from Google.
:::info
CityHash has changed the algorithm after we have added it into ClickHouse.

CityHash documentation specifically notes that the user should not rely on
specific hash values and should not save them anywhere or use them as a sharding key.

But as we exposed this function to the user, we had to fix the version of CityHash (to 1.0.2). And now we guarantee that the behaviour of CityHash functions available in SQL will not change.

— Alexey Milovidov
:::
:::note Note
The current version of Google's CityHash differs from the ClickHouse `cityHash64` variant.

Don't use `farmHash64` to get Google's CityHash value! `FarmHash` is a successor to CityHash, but they are not fully compatible.

| String | ClickHouse64 | CityHash64 | FarmHash64 |
|---|---|---|---|
| `Moscow` | 12507901496292878638 | 5992710078453357409 | 5992710078453357409 |
| `How can you write a big system without C++? -Paul Glick` | 6237945311650045625 | 749291162957442504 | 11716470977470720228 |
:::
Also see Introducing CityHash for a description and the reasoning behind its creation. TL;DR: a non-cryptographic hash that is faster than MurmurHash, but more complex.

## Implementations {#implementations}

### Go {#go}

You can use the `go-faster/city` Go package, which implements both variants.
---
slug: /native-protocol/server
sidebar_position: 3
title: 'Server packets'
description: 'Native protocol server'
doc_type: 'reference'
keywords: ['native protocol', 'tcp protocol', 'client-server', 'protocol specification', 'networking']
---
# Server packets

| value | name | description |
|---|---|---|
| 0 | Hello | Server handshake response |
| 1 | Data | Same as client data |
| 2 | Exception | Query processing exception |
| 3 | Progress | Query progress |
| 4 | Pong | Ping response |
| 5 | EndOfStream | All packets were transferred |
| 6 | ProfileInfo | Profiling data |
| 7 | Totals | Total values |
| 8 | Extremes | Extreme values (min, max) |
| 9 | TablesStatusResponse | Response to TableStatus request |
| 10 | Log | Query system log |
| 11 | TableColumns | Columns description |
| 12 | UUIDs | List of unique parts ids |
| 13 | ReadTaskRequest | String (UUID) describes a request for which next task is needed |
| 14 | ProfileEvents | Packet with profile events from server |

The `Data`, `Totals` and `Extremes` packets can be compressed.
## Hello {#hello}

Response to client hello.

| field | type | value | description |
|---|---|---|---|
| name | String | `Clickhouse` | Server name |
| version_major | UVarInt | `21` | Server major version |
| version_minor | UVarInt | `12` | Server minor version |
| revision | UVarInt | `54452` | Server revision |
| tz | String | `Europe/Moscow` | Server timezone |
| display_name | String | `Clickhouse` | Server name for UI |
| version_patch | UVarInt | `3` | Server patch version |
## Exception {#exception}

Server exception during query processing.
| field | type | value | description |
|---|---|---|---|
| code | Int32 | `60` | See ErrorCodes.cpp |
| name | String | `DB::Exception` | Exception name |
| message | String | `DB::Exception: Table X doesn't exist` | Exception message |
| stack_trace | String | ~ | C++ stack trace |
| nested | Bool | `true` | More errors |

This can be a continuous list of exceptions until `nested` is `false`.
## Progress {#progress}

Progress of query execution, periodically reported by the server.

:::tip
Progress is reported in **deltas**. For totals, accumulate it on the client.
:::

| field | type | value | description |
|---|---|---|---|
| rows | UVarInt | `65535` | Row count |
| bytes | UVarInt | `871799` | Byte count |
| total_rows | UVarInt | `0` | Total rows |
| wrote_rows | UVarInt | `0` | Rows from client |
| wrote_bytes | UVarInt | `0` | Bytes from client |
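Client-side accumulation can be as simple as the sketch below; field names mirror the table above, and the `Progress` struct is illustrative rather than any client library's type:

```go
package main

import "fmt"

// Progress mirrors a subset of the Progress packet fields.
type Progress struct {
	Rows, Bytes, TotalRows uint64
}

// Add accumulates a delta reported by the server into a running total.
func (p *Progress) Add(delta Progress) {
	p.Rows += delta.Rows
	p.Bytes += delta.Bytes
	p.TotalRows += delta.TotalRows
}

func main() {
	var total Progress
	// Each packet carries only the change since the previous report.
	for _, d := range []Progress{{Rows: 100, Bytes: 1024}, {Rows: 50, Bytes: 512}} {
		total.Add(d)
	}
	fmt.Println(total.Rows, total.Bytes) // 150 1536
}
```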
## Pong {#pong}

Response to a client ping; no packet body.

## End of stream {#end-of-stream}

No more `Data` packets will be sent; the query result is fully streamed from server to client.

No packet body.
## Profile info {#profile-info}

| field | type |
|---|---|
| rows | UVarInt |
| blocks | UVarInt |
| bytes | UVarInt |
| applied_limit | Bool |
| rows_before_limit | UVarInt |
| calculated_rows_before_limit | Bool |
## Log {#log}

Data block with server log.

:::tip
Encoded as a data block of columns, but never compressed.
:::

| column | type |
|---|---|
| time | DateTime |
| time_micro | UInt32 |
| host_name | String |
| query_id | String |
| thread_id | UInt64 |
| priority | Int8 |
| source | String |
| text | String |
## Profile events {#profile-events}

Data block with profile events.

:::tip
Encoded as a data block of columns, but never compressed.

The `value` type is `UInt64` or `Int64`, depending on the server revision.
:::

| column | type |
|---|---|
| host_name | String |
| current_time | DateTime |
| thread_id | UInt64 |
| type | Int8 |
| name | String |
| value | UInt64 or Int64 |
---
slug: /native-protocol/client
sidebar_position: 2
title: 'Native client packets'
description: 'Native protocol client'
doc_type: 'reference'
keywords: ['client packets', 'native protocol client', 'protocol packets', 'client communication', 'TCP client']
---
# Client packets

| value | name | description |
|---|---|---|
| 0 | Hello | Client handshake start |
| 1 | Query | Query request |
| 2 | Data | Block with data |
| 3 | Cancel | Cancel query |
| 4 | Ping | Ping request |
| 5 | TableStatus | Table status request |

The `Data` packet can be compressed.
## Hello {#hello}

For example, we are the Go Client v1.10, supporting protocol version `54451`, and we want to connect to the `default` database with the `default` user and the `secret` password.

| field | type | value | description |
|---|---|---|---|
| client_name | String | `"Go Client"` | Client implementation name |
| version_major | UVarInt | `1` | Client major version |
| version_minor | UVarInt | `10` | Client minor version |
| protocol_version | UVarInt | `54451` | TCP protocol version |
| database | String | `"default"` | Database name |
| username | String | `"default"` | Username |
| password | String | `"secret"` | Password |
## Protocol version {#protocol-version}

The protocol version is the TCP protocol version of the client.

It is usually equal to the latest compatible server revision, but should not be confused with it.

## Defaults {#defaults}

All values should be **explicitly set**; there are no defaults on the server side.
On the client side, use the `"default"` database, the `"default"` username, and a `""` (blank string) password as defaults.
## Query {#query}

| field | type | value | description |
|---|---|---|---|
| query_id | String | `1ff-a123` | Query ID, can be UUIDv4 |
| client_info | ClientInfo | See type | Data about the client |
| settings | Settings | See type | List of settings |
| secret | String | `secret` | Inter-server secret |
| stage | UVarInt | `2` | Execute until query stage |
| compression | UVarInt | `0` | Disabled=0, enabled=1 |
| body | String | `SELECT 1` | Query text |

## Client info {#client-info}
| field | type | description |
|-------------------|-----------------|--------------------------------|
| query_kind | byte | None=0, Initial=1, Secondary=2 |
| initial_user | String | Initial user |
| initial_query_id | String | Initial query id |
| initial_address | String | Initial address |
| initial_time | Int64 | Initial time |
| interface | byte | TCP=1, HTTP=2 |
| os_user | String | OS User |
| client_hostname | String | Client Hostname |
| client_name | String | Client Name |
| version_major | UVarInt | Client major version |
| version_minor | UVarInt | Client minor version |
| protocol_version | UVarInt | Client protocol version |
| quota_key | String | Quota key |
| distributed_depth | UVarInt | Distributed depth |
| version_patch | UVarInt | Client patch version |
| otel | Bool | Trace fields are present |
| trace_id | FixedString(16) | Trace ID |
| span_id | FixedString(8) | Span ID |
| trace_state | String | Tracing state |
| trace_flags | Byte | Tracing flags |
## Settings {#settings}

| field | type | value | description |
|---|---|---|---|
| key | String | `send_logs_level` | Key of setting |
| value | String | `trace` | Value of setting |
| important | Bool | `true` | Can be ignored or not |

Encoded as a list; a blank key and value denote the end of the list.
## Stage {#stage}
| value | name | description |
|-------|--------------------|---------------------------------------------|
| 0 | FetchColumns | Only fetch column types |
| 1 | WithMergeableState | Until mergeable state |
| 2 | Complete | Until full completeness (should be default) |
## Data {#data}
| field | type | description |
|---------|---------------------|--------------------|
| info | BlockInfo | Encoded block info |
| columns | UVarInt | Columns count |
| rows | UVarInt | Rows count |
| columns | []Column | Columns with data |

## Column {#column}
| field | type | value | description |
|---|---|---|---|
| name | String | `foo` | Column name |
| type | String | `DateTime64(9)` | Column type |
| data | bytes | ~ | Column data |

## Cancel {#cancel}

No packet body. The server should cancel the query.

## Ping {#ping}

No packet body. The server should respond with a pong.
---
slug: /data-compression/compression-modes
sidebar_position: 6
title: 'Compression Modes'
description: 'ClickHouse column compression modes'
keywords: ['compression', 'codec', 'encoding', 'modes']
doc_type: 'reference'
---

import CompressionBlock from '@site/static/images/data-compression/ch_compression_block.png';
import Image from '@theme/IdealImage';
# Compression modes

The ClickHouse protocol supports data block compression with checksums.
Use `LZ4` if you are not sure which mode to pick.

:::tip
Learn more about the column compression codecs available and specify them when creating your tables, or afterward.
:::
## Modes {#modes}

| value | name | description |
|---|---|---|
| `0x02` | None | No compression, only checksums |
| `0x82` | LZ4 | Extremely fast, good compression |
| `0x90` | ZSTD | Zstandard, pretty fast, best compression |

Both LZ4 and ZSTD were made by the same author, but with different tradeoffs.
From Facebook benchmarks:
| name | ratio | encoding | decoding |
|---|---|---|---|
| `zstd` 1.4.5 -1 | 2.8 | 500 MB/s | 1660 MB/s |
| `lz4` 1.9.2 | 2.1 | 740 MB/s | 4530 MB/s |
## Block {#block}

| field | type | description |
|---|---|---|
| checksum | uint128 | Hash of (header + compressed data) |
| raw_size | uint32 | Raw size without header |
| data_size | uint32 | Uncompressed data size |
| mode | byte | Compression mode |
| compressed_data | binary | Block of compressed data |

The header is (raw_size + data_size + mode); raw_size is len(header + compressed_data).
The checksum is `hash(header + compressed_data)`, using ClickHouse CityHash.
## None mode {#none-mode}

If `None` mode is used, `compressed_data` is equal to the original data.

The no-compression mode is useful for ensuring additional data integrity with checksums, because the hashing overhead is negligible.
---
slug: /data-compression/compression-in-clickhouse
title: 'Compression in ClickHouse'
description: 'Choosing ClickHouse compression algorithms'
keywords: ['compression', 'codec', 'encoding']
doc_type: 'reference'
---
One of the secrets to ClickHouse query performance is compression.
Less data on disk means less I/O and faster queries and inserts. The overhead of any compression algorithm with respect to CPU is in most cases outweighed by the reduction in IO. Improving the compression of the data should therefore be the first focus when working on ensuring ClickHouse queries are fast.
For why ClickHouse compresses data so well, we recommend this article. In summary, as a column-oriented database, values will be written in column order. If these values are sorted, the same values will be adjacent to each other. Compression algorithms exploit contiguous patterns of data. On top of this, ClickHouse has codecs and granular data types which allow users to tune the compression techniques further.
Compression in ClickHouse will be impacted by 3 principal factors:
- The ordering key
- The data types
- Which codecs are used
All of these are configured through the schema.
## Choose the right data type to optimize compression {#choose-the-right-data-type-to-optimize-compression}

Let's use the Stack Overflow dataset as an example. Let's compare compression statistics for the following schemas for the `posts` table:

- `posts` - A non-type-optimized schema with no ordering key.
- `posts_v3` - A type-optimized schema with the appropriate type and bit size for each column, with ordering key `(PostTypeId, toDate(CreationDate), CommentCount)`.

Using the following queries, we can measure the current compressed and uncompressed size of each column. Let's examine the size of the initial `posts` schema with no ordering key.
```sql
SELECT name,
formatReadableSize(sum(data_compressed_bytes)) AS compressed_size,
formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed_size,
round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS ratio
FROM system.columns
WHERE table = 'posts'
GROUP BY name
┌─name──────────────────┬─compressed_size─┬─uncompressed_size─┬───ratio────┐
│ Body │ 46.14 GiB │ 127.31 GiB │ 2.76 │
│ Title │ 1.20 GiB │ 2.63 GiB │ 2.19 │
│ Score │ 84.77 MiB │ 736.45 MiB │ 8.69 │
│ Tags │ 475.56 MiB │ 1.40 GiB │ 3.02 │
│ ParentId │ 210.91 MiB │ 696.20 MiB │ 3.3 │
│ Id │ 111.17 MiB │ 736.45 MiB │ 6.62 │
│ AcceptedAnswerId │ 81.55 MiB │ 736.45 MiB │ 9.03 │
│ ClosedDate │ 13.99 MiB │ 517.82 MiB │ 37.02 │
│ LastActivityDate │ 489.84 MiB │ 964.64 MiB │ 1.97 │
│ CommentCount │ 37.62 MiB │ 565.30 MiB │ 15.03 │
│ OwnerUserId │ 368.98 MiB │ 736.45 MiB │ 2 │
│ AnswerCount │ 21.82 MiB │ 622.35 MiB │ 28.53 │
│ FavoriteCount │ 280.95 KiB │ 508.40 MiB │ 1853.02 │
│ ViewCount │ 95.77 MiB │ 736.45 MiB │ 7.69 │
│ LastEditorUserId │ 179.47 MiB │ 736.45 MiB │ 4.1 │
│ ContentLicense │ 5.45 MiB │ 847.92 MiB │ 155.5 │
│ OwnerDisplayName │ 14.30 MiB │ 142.58 MiB │ 9.97 │
│ PostTypeId │ 20.93 MiB │ 565.30 MiB │ 27 │
│ CreationDate │ 314.17 MiB │ 964.64 MiB │ 3.07 │
│ LastEditDate │ 346.32 MiB │ 964.64 MiB │ 2.79 │
│ LastEditorDisplayName │ 5.46 MiB │ 124.25 MiB │ 22.75 │
│ CommunityOwnedDate │ 2.21 MiB │ 509.60 MiB │ 230.94 │
└───────────────────────┴─────────────────┴───────────────────┴────────────┘
```
A note on compact versus wide parts
If you are seeing `compressed_size` or `uncompressed_size` values equal to `0`, this could be because the type of the
parts are `compact` and not `wide` (see description for `part_type` in [`system.parts`](/operations/system-tables/parts)).
The part format is controlled by settings [`min_bytes_for_wide_part`](/operations/settings/merge-tree-settings#min_bytes_for_wide_part)
and [`min_rows_for_wide_part`](/operations/settings/merge-tree-settings#min_rows_for_wide_part) meaning that if the inserted
data results in a part which does not exceed the values of the aforementioned settings, the part will be compact rather
than wide and you will not see the values for `compressed_size` or `uncompressed_size`.
To demonstrate:
```sql title="Query"
-- Create a table with compact parts
CREATE TABLE compact (
number UInt32
)
ENGINE = MergeTree()
ORDER BY number
AS SELECT * FROM numbers(100000); -- Not big enough to exceed default of min_bytes_for_wide_part = 10485760
-- Check the type of the parts
SELECT table, name, part_type from system.parts where table = 'compact';
-- Get the compressed and uncompressed column sizes for the compact table
SELECT name,
formatReadableSize(sum(data_compressed_bytes)) AS compressed_size,
formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed_size,
round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS ratio
FROM system.columns
WHERE table = 'compact'
GROUP BY name;
-- Create a table with wide parts
CREATE TABLE wide (
number UInt32
)
ENGINE = MergeTree()
ORDER BY number
SETTINGS min_bytes_for_wide_part=0
AS SELECT * FROM numbers(100000);
-- Check the type of the parts
SELECT table, name, part_type from system.parts where table = 'wide';
-- Get the compressed and uncompressed sizes for the wide table
SELECT name,
formatReadableSize(sum(data_compressed_bytes)) AS compressed_size,
formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed_size,
round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS ratio
FROM system.columns
WHERE table = 'wide'
GROUP BY name;
```
```response title="Response"
┌─table───┬─name──────┬─part_type─┐
1. │ compact │ all_1_1_0 │ Compact │
└─────────┴───────────┴───────────┘
┌─name───┬─compressed_size─┬─uncompressed_size─┬─ratio─┐
1. │ number │ 0.00 B │ 0.00 B │ nan │
└────────┴─────────────────┴───────────────────┴───────┘
┌─table─┬─name──────┬─part_type─┐
1. │ wide │ all_1_1_0 │ Wide │
└───────┴───────────┴───────────┘
┌─name───┬─compressed_size─┬─uncompressed_size─┬─ratio─┐
1. │ number │ 392.31 KiB │ 390.63 KiB │ 1 │
└────────┴─────────────────┴───────────────────┴───────┘
```
We show both a compressed and an uncompressed size here. Both are important. The compressed size equates to what we will need to read off disk - something we want to minimize for query performance (and storage cost). This data will need to be decompressed prior to reading. The uncompressed size depends on the data type used. Minimizing this size will reduce the memory overhead of queries and the amount of data which has to be processed by the query, improving utilization of caches and ultimately query times.
The above query relies on the table `columns` in the `system` database. This database is managed by ClickHouse and is a treasure trove of useful information, from query performance metrics to background cluster logs. We recommend "System Tables and a Window into the Internals of ClickHouse" and the accompanying articles [1] [2] for the curious reader.

To summarize the total size of the table, we can simplify the above query:
```sql
SELECT formatReadableSize(sum(data_compressed_bytes)) AS compressed_size,
formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed_size,
round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS ratio
FROM system.columns
WHERE table = 'posts'
┌─compressed_size─┬─uncompressed_size─┬─ratio─┐
│ 50.16 GiB │ 143.47 GiB │ 2.86 │
└─────────────────┴───────────────────┴───────┘
```
Repeating this query for `posts_v3`, the table with an optimized type and ordering key, we can see a significant reduction in uncompressed and compressed sizes.

```sql
SELECT
    formatReadableSize(sum(data_compressed_bytes)) AS compressed_size,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed_size,
    round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS ratio
FROM system.columns
WHERE `table` = 'posts_v3'

┌─compressed_size─┬─uncompressed_size─┬─ratio─┐
│ 25.15 GiB       │ 68.87 GiB         │  2.74 │
└─────────────────┴───────────────────┴───────┘
```
The full column breakdown shows considerable savings for the `Body`, `Title`, `Tags` and `CreationDate` columns, achieved by ordering the data prior to compression and using the appropriate types.

```sql
SELECT
    name,
    formatReadableSize(sum(data_compressed_bytes)) AS compressed_size,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed_size,
    round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS ratio
FROM system.columns
WHERE `table` = 'posts_v3'
GROUP BY name
┌─name──────────────────┬─compressed_size─┬─uncompressed_size─┬───ratio─┐
│ Body │ 23.10 GiB │ 63.63 GiB │ 2.75 │
│ Title │ 614.65 MiB │ 1.28 GiB │ 2.14 │
│ Score │ 40.28 MiB │ 227.38 MiB │ 5.65 │
│ Tags │ 234.05 MiB │ 688.49 MiB │ 2.94 │
│ ParentId │ 107.78 MiB │ 321.33 MiB │ 2.98 │
│ Id │ 159.70 MiB │ 227.38 MiB │ 1.42 │
│ AcceptedAnswerId │ 40.34 MiB │ 227.38 MiB │ 5.64 │
│ ClosedDate │ 5.93 MiB │ 9.49 MiB │ 1.6 │
│ LastActivityDate │ 246.55 MiB │ 454.76 MiB │ 1.84 │
│ CommentCount │ 635.78 KiB │ 56.84 MiB │ 91.55 │
│ OwnerUserId │ 183.86 MiB │ 227.38 MiB │ 1.24 │
│ AnswerCount │ 9.67 MiB │ 113.69 MiB │ 11.76 │
│ FavoriteCount │ 19.77 KiB │ 147.32 KiB │ 7.45 │
│ ViewCount │ 45.04 MiB │ 227.38 MiB │ 5.05 │
│ LastEditorUserId │ 86.25 MiB │ 227.38 MiB │ 2.64 │
│ ContentLicense │ 2.17 MiB │ 57.10 MiB │ 26.37 │
│ OwnerDisplayName │ 5.95 MiB │ 16.19 MiB │ 2.72 │
│ PostTypeId │ 39.49 KiB │ 56.84 MiB │ 1474.01 │
│ CreationDate │ 181.23 MiB │ 454.76 MiB │ 2.51 │
│ LastEditDate │ 134.07 MiB │ 454.76 MiB │ 3.39 │
│ LastEditorDisplayName │ 2.15 MiB │ 6.25 MiB │ 2.91 │
│ CommunityOwnedDate │ 824.60 KiB │ 1.34 MiB │ 1.66 │
└───────────────────────┴─────────────────┴───────────────────┴─────────┘
```
## Choosing the right column compression codec {#choosing-the-right-column-compression-codec}
With column compression codecs, we can change the algorithm (and its settings) used to encode and compress each column.
Encodings and compression work slightly differently with the same objective: to reduce our data size. Encodings apply a mapping to our data, transforming the values based on a function by exploiting properties of the data type. Conversely, compression uses a generic algorithm to compress data at a byte level.
Typically, encodings are applied first before compression is used. Since different encodings and compression algorithms are effective on different value distributions, we must understand our data.
ClickHouse supports a large number of codecs and compression algorithms. The following are some recommendations in order of importance:
| Recommendation | Reasoning |
|---|---|
ZSTD
all the way
|
ZSTD
compression offers the best rates of compression.
ZSTD(1)
should be the default for most common types. Higher rates of compression can be tried by modifying the numeric value. We rarely see sufficient benefits on values higher than 3 for the increased cost of compression (slower insertion).
Delta
for date and integer sequences
|
Delta
-based codecs work well whenever you have monotonic sequences or small deltas in consecutive values. More specifically, the Delta codec works well, provided the derivatives yield small numbers. If not,
DoubleDelta
is worth trying (this typically adds little if the first-level derivative from
Delta
is already very small). Sequences where the monotonic increment is uniform, will compress even better e.g. DateTime fields.
Delta
improves
ZSTD
|
ZSTD
is an effective codec on delta data - conversely, delta encoding can improve
ZSTD
compression. In the presence of
ZSTD
, other codecs rarely offer further improvement.
LZ4
over
ZSTD
if possible
| if you get comparable compression between
LZ4
and
ZSTD
, favor the former since it offers faster decompression and needs less CPU. However,
ZSTD
will outperform
LZ4
by a significant margin in most cases. Some of these codecs may work faster in combination with
LZ4
while providing similar compression compared to
ZSTD
without a codec. This will be data specific, however, and requires testing.
T64
for sparse or small ranges
|
T64
can be effective on sparse data or when the range in a block is small. Avoid
T64
for random numbers.
Gorilla
and
T64
for unknown patterns?
| If the data has an unknown pattern, it may be worth trying
Gorilla
and
T64
.
Gorilla
for gauge data
|
Gorilla
can be effective on floating point data, specifically that which represents gauge readings, i.e. random spikes.
See
here
for further options.
Below we specify the `Delta` codec for the `Id`, `ViewCount` and `AnswerCount` columns, hypothesizing these will be linearly correlated with the ordering key and thus should benefit from Delta encoding.
```sql
CREATE TABLE posts_v4
(
    `Id` Int32 CODEC(Delta, ZSTD),
    `PostTypeId` Enum('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime64(3, 'UTC'),
    `Score` Int32,
    `ViewCount` UInt32 CODEC(Delta, ZSTD),
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime64(3, 'UTC'),
    `LastActivityDate` DateTime64(3, 'UTC'),
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16 CODEC(Delta, ZSTD),
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime64(3, 'UTC'),
    `ClosedDate` DateTime64(3, 'UTC')
)
ENGINE = MergeTree
ORDER BY (PostTypeId, toDate(CreationDate), CommentCount)
```
The compression improvements for these columns are shown below:
```sql
SELECT
    `table`,
    name,
    formatReadableSize(sum(data_compressed_bytes)) AS compressed_size,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed_size,
    round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS ratio
FROM system.columns
WHERE (name IN ('Id', 'ViewCount', 'AnswerCount')) AND (`table` IN ('posts_v3', 'posts_v4'))
GROUP BY
    `table`,
    name
ORDER BY
    name ASC,
    `table` ASC
┌─table────┬─name────────┬─compressed_size─┬─uncompressed_size─┬─ratio─┐
│ posts_v3 │ AnswerCount │ 9.67 MiB │ 113.69 MiB │ 11.76 │
│ posts_v4 │ AnswerCount │ 10.39 MiB │ 111.31 MiB │ 10.71 │
│ posts_v3 │ Id │ 159.70 MiB │ 227.38 MiB │ 1.42 │
│ posts_v4 │ Id │ 64.91 MiB │ 222.63 MiB │ 3.43 │
│ posts_v3 │ ViewCount │ 45.04 MiB │ 227.38 MiB │ 5.05 │
│ posts_v4 │ ViewCount │ 52.72 MiB │ 222.63 MiB │ 4.22 │
└──────────┴─────────────┴─────────────────┴───────────────────┴───────┘
6 rows in set. Elapsed: 0.008 sec
```
Compression in ClickHouse Cloud {#compression-in-clickhouse-cloud}
In ClickHouse Cloud, we utilize the `ZSTD` compression algorithm (with a default value of 1) by default. While compression speeds can vary for this algorithm, depending on the compression level (higher = slower), it has the advantage of being consistently fast on decompression (around 20% variance) and also benefiting from the ability to be parallelized. Our historical tests also suggest that this algorithm is often sufficiently effective and can even outperform `LZ4` combined with a codec. It is effective on most data types and information distributions, and is thus a sensible general-purpose default; this is why our initial compression is already excellent even without optimization.
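The codecs configured for a table's columns can be inspected through `system.columns`; a minimal sketch, assuming the `posts_v4` table above exists in the current database:

```sql
SELECT name, compression_codec
FROM system.columns
WHERE (database = currentDatabase()) AND (table = 'posts_v4') AND (compression_codec != '')
```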
slug: /data-modeling/denormalization
title: 'Denormalizing Data'
description: 'How to use denormalization to improve query performance'
keywords: ['data denormalization', 'denormalize', 'query optimization']
doc_type: 'guide'
import denormalizationDiagram from '@site/static/images/data-modeling/denormalization-diagram.png';
import denormalizationSchema from '@site/static/images/data-modeling/denormalization-schema.png';
import Image from '@theme/IdealImage';
Denormalizing Data
Data denormalization is a technique in ClickHouse to use flattened tables to help minimize query latency by avoiding joins.
Comparing Normalized vs. Denormalized Schemas {#comparing-normalized-vs-denormalized-schemas}
Denormalizing data involves intentionally reversing the normalization process to optimize database performance for specific query patterns. In normalized databases, data is split into multiple related tables to minimize redundancy and ensure data integrity. Denormalization reintroduces redundancy by combining tables, duplicating data, and incorporating calculated fields into either a single table or fewer tables - effectively moving any joins from query to insert time.
This process reduces the need for complex joins at query time and can significantly speed up read operations, making it ideal for applications with heavy read requirements and complex queries. However, it can increase the complexity of write operations and maintenance, as any changes to the duplicated data must be propagated across all instances to maintain consistency.
A common technique popularized by NoSQL solutions is to denormalize data in the absence of `JOIN` support, effectively storing all statistics or related rows on a parent row as columns and nested objects. For example, in an example schema for a blog, we can store all `Comments` as an `Array` of objects on their respective posts.
When to use denormalization {#when-to-use-denormalization}
In general, we would recommend denormalizing in the following cases:
- Denormalize tables which change infrequently or for which a delay before data is available for analytical queries can be tolerated i.e. the data can be completely reloaded in a batch.
- Avoid denormalizing many-to-many relationships. This can result in the need to update many rows if a single source row changes.
- Avoid denormalizing high cardinality relationships. If each row in a table has thousands of related entries in another table, these will need to be represented as an `Array` - either of a primitive type or tuples. Generally, arrays with more than 1000 tuples would not be recommended.
- Rather than denormalizing all columns as nested objects, consider denormalizing just a statistic using materialized views (see below).
- All information doesn't need to be denormalized - just the key information that needs to be frequently accessed.
- The denormalization work can be handled in either ClickHouse or upstream e.g. using Apache Flink.
Avoid denormalization on frequently updated data {#avoid-denormalization-on-frequently-updated-data}
For ClickHouse, denormalization is one of several options users can use in order to optimize query performance but should be used carefully. If data is updated frequently and needs to be updated in near-real time, this approach should be avoided. Use this if the main table is largely append only or can be reloaded periodically as a batch e.g. daily.
As an approach it suffers from one principal challenge - write performance and updating data. More specifically, denormalization effectively shifts the responsibility of the data join from query time to ingestion time. While this can significantly improve query performance, it complicates ingestion and means that data pipelines need to re-insert a row into ClickHouse should any of the rows which were used to compose it change. This can mean that a change in one source row potentially means many rows in ClickHouse need to be updated. In complicated schemas, where rows have been composed from complex joins, a single row change in a nested component of a join can potentially mean millions of rows need to be updated.
Achieving this in real-time is often unrealistic and requires significant engineering, due to two challenges:
Triggering the correct join statements when a table row changes. This should ideally not cause all objects for the join to be updated - rather just those that have been impacted. Modifying the joins to filter to the correct rows efficiently, and achieving this under high throughput, requires external tooling or engineering.
Row updates in ClickHouse need to be carefully managed, introducing additional complexity.
A batch update process is thus more common, where all of the denormalized objects are periodically reloaded.
Practical cases for denormalization {#practical-cases-for-denormalization}
Let's consider a few practical examples where denormalizing might make sense, and others where alternative approaches are more desirable.
Consider a `Posts` table that has already been denormalized with statistics such as `AnswerCount` and `CommentCount` - the source data is provided in this form. In reality, we may want to actually normalize this information as it's likely to be subject to frequent change. Many of these columns are also available through other tables e.g. comments for a post are available via the `PostId` column and `Comments` table. For the purposes of example, we assume posts are reloaded in a batch process.
We also only consider denormalizing other tables onto `Posts`, as we consider this our main table for analytics. Denormalizing in the other direction would also be appropriate for some queries, with the same above considerations applying.
For each of the following examples, assume a query exists which requires both tables to be used in a join.
Posts and Votes {#posts-and-votes}
Votes for posts are represented as separate tables. The optimized schema for this is shown below as well as the insert command to load the data:
```sql
CREATE TABLE votes
(
    `Id` UInt32,
    `PostId` Int32,
    `VoteTypeId` UInt8,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `BountyAmount` UInt8
)
ENGINE = MergeTree
ORDER BY (VoteTypeId, CreationDate, PostId)

INSERT INTO votes SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/*.parquet')

0 rows in set. Elapsed: 26.272 sec. Processed 238.98 million rows, 2.13 GB (9.10 million rows/s., 80.97 MB/s.)
```
At first glance, these might be candidates for denormalizing onto the posts table. There are a few challenges to this approach.
Votes are added frequently to posts. While this might diminish per post over time, the following query shows that we have around 40k votes per hour over 30k posts.
```sql
SELECT round(avg(c)) AS avg_votes_per_hr, round(avg(posts)) AS avg_posts_per_hr
FROM
(
SELECT
toStartOfHour(CreationDate) AS hr,
count() AS c,
uniq(PostId) AS posts
FROM votes
GROUP BY hr
)
┌─avg_votes_per_hr─┬─avg_posts_per_hr─┐
│ 41759 │ 33322 │
└──────────────────┴──────────────────┘
```
This could be addressed by batching if a delay can be tolerated, but this still requires us to handle updates unless we periodically reload all posts (unlikely to be desirable).
More troublesome is some posts have an extremely high number of votes:
```sql
SELECT PostId, concat('https://stackoverflow.com/questions/', PostId) AS url, count() AS c
FROM votes
GROUP BY PostId
ORDER BY c DESC
LIMIT 5
┌───PostId─┬─url──────────────────────────────────────────┬─────c─┐
│ 11227902 │ https://stackoverflow.com/questions/11227902 │ 35123 │
│ 927386 │ https://stackoverflow.com/questions/927386 │ 29090 │
│ 11227809 │ https://stackoverflow.com/questions/11227809 │ 27475 │
│ 927358 │ https://stackoverflow.com/questions/927358 │ 26409 │
│ 2003515 │ https://stackoverflow.com/questions/2003515 │ 25899 │
└──────────┴──────────────────────────────────────────────┴───────┘
```
The main observation here is that aggregated vote statistics for each post would be sufficient for most analysis - we do not need to denormalize all of the vote information. For example, the current `Score` column represents such a statistic i.e. total up votes minus down votes. Ideally, we would just be able to retrieve these statistics at query time with a simple lookup (see dictionaries).
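As an illustrative sketch of such a lookup (the dictionary name, layout, and lifetime below are assumptions, not part of the original example):

```sql
-- Build a dictionary keyed on PostId over the aggregated votes
CREATE DICTIONARY votes_per_post
(
    PostId UInt64,
    Votes UInt64
)
PRIMARY KEY PostId
SOURCE(CLICKHOUSE(QUERY 'SELECT PostId, count() AS Votes FROM votes GROUP BY PostId'))
LIFETIME(MIN 600 MAX 900)
LAYOUT(HASHED())

-- Retrieve the statistic at query time with a simple lookup, no join required
SELECT dictGet('votes_per_post', 'Votes', toUInt64(11227902)) AS votes
```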
Users and Badges {#users-and-badges}
Now let's consider our `Users` and `Badges` tables:
We first insert the data with the following command:
```sql
CREATE TABLE users
(
    `Id` Int32,
    `Reputation` LowCardinality(String),
    `CreationDate` DateTime64(3, 'UTC') CODEC(Delta(8), ZSTD(1)),
    `DisplayName` String,
    `LastAccessDate` DateTime64(3, 'UTC'),
    `AboutMe` String,
    `Views` UInt32,
    `UpVotes` UInt32,
    `DownVotes` UInt32,
    `WebsiteUrl` String,
    `Location` LowCardinality(String),
    `AccountId` Int32
)
ENGINE = MergeTree
ORDER BY (Id, CreationDate)
```
```sql
CREATE TABLE badges
(
    `Id` UInt32,
    `UserId` Int32,
    `Name` LowCardinality(String),
    `Date` DateTime64(3, 'UTC'),
    `Class` Enum8('Gold' = 1, 'Silver' = 2, 'Bronze' = 3),
    `TagBased` Bool
)
ENGINE = MergeTree
ORDER BY UserId

INSERT INTO users SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/users.parquet')

0 rows in set. Elapsed: 26.229 sec. Processed 22.48 million rows, 1.36 GB (857.21 thousand rows/s., 51.99 MB/s.)

INSERT INTO badges SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/badges.parquet')

0 rows in set. Elapsed: 18.126 sec. Processed 51.29 million rows, 797.05 MB (2.83 million rows/s., 43.97 MB/s.)
```
While users may acquire badges frequently, this is unlikely to be a dataset we need to update more than daily. The relationship between badges and users is one-to-many. Maybe we can simply denormalize badges onto users as a list of tuples? While possible, a quick check to confirm the highest number of badges per user suggests this isn't ideal:
```sql
SELECT UserId, count() AS c FROM badges GROUP BY UserId ORDER BY c DESC LIMIT 5
┌─UserId─┬─────c─┐
│ 22656 │ 19334 │
│ 6309 │ 10516 │
│ 100297 │ 7848 │
│ 157882 │ 7574 │
│ 29407 │ 6512 │
└────────┴───────┘
```
It's probably not realistic to denormalize 19k objects onto a single row. This relationship may be best left as separate tables or with statistics added.
We may wish to denormalize statistics from badges onto users e.g. the number of badges. We consider such an example when using dictionaries for this dataset at insert time.
Posts and PostLinks {#posts-and-postlinks}
`PostLinks` connect `Posts` which users consider to be related or duplicated. The following query shows the schema and load command:
```sql
CREATE TABLE postlinks
(
    `Id` UInt64,
    `CreationDate` DateTime64(3, 'UTC'),
    `PostId` Int32,
    `RelatedPostId` Int32,
    `LinkTypeId` Enum('Linked' = 1, 'Duplicate' = 3)
)
ENGINE = MergeTree
ORDER BY (PostId, RelatedPostId)

INSERT INTO postlinks SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/postlinks.parquet')

0 rows in set. Elapsed: 4.726 sec. Processed 6.55 million rows, 129.70 MB (1.39 million rows/s., 27.44 MB/s.)
```
We can confirm that no posts have an excessive number of links preventing denormalization:
```sql
SELECT PostId, count() AS c
FROM postlinks
GROUP BY PostId
ORDER BY c DESC LIMIT 5
┌───PostId─┬───c─┐
│ 22937618 │ 125 │
│ 9549780 │ 120 │
│ 3737139 │ 109 │
│ 18050071 │ 103 │
│ 25889234 │ 82 │
└──────────┴─────┘
```
Likewise, these links are not events which occur overly frequently:
```sql
SELECT
round(avg(c)) AS avg_votes_per_hr,
round(avg(posts)) AS avg_posts_per_hr
FROM
(
SELECT
toStartOfHour(CreationDate) AS hr,
count() AS c,
uniq(PostId) AS posts
FROM postlinks
GROUP BY hr
)
┌─avg_votes_per_hr─┬─avg_posts_per_hr─┐
│ 54 │ 44 │
└──────────────────┴──────────────────┘
```
We use this as our denormalization example below.
Simple statistic example {#simple-statistic-example}
In most cases, denormalization requires adding a single column or statistic to a parent row. For example, we may just wish to enrich our posts with the number of duplicate posts, and we simply need to add a column.
```sql
CREATE TABLE posts_with_duplicate_count
(
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
    ... -- other columns
    `DuplicatePosts` UInt16
) ENGINE = MergeTree
ORDER BY (PostTypeId, toDate(CreationDate), CommentCount)
```
To populate this table, we utilize an `INSERT INTO SELECT` joining our duplicate statistic with our posts.
```sql
INSERT INTO posts_with_duplicate_count SELECT
    posts.*,
    DuplicatePosts
FROM posts AS posts
LEFT JOIN
(
    SELECT PostId, countIf(LinkTypeId = 'Duplicate') AS DuplicatePosts
    FROM postlinks
    GROUP BY PostId
) AS postlinks ON posts.Id = postlinks.PostId
```
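Once populated, the statistic can be read with no join at query time. For example (a sketch, assuming the insert above has completed; `Title` is a column of the original `posts` schema):

```sql
SELECT Id, Title, DuplicatePosts
FROM posts_with_duplicate_count
ORDER BY DuplicatePosts DESC
LIMIT 5
```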
Exploiting complex types for one-to-many relationships {#exploiting-complex-types-for-one-to-many-relationships}
In order to perform denormalization, we often need to exploit complex types. If a one-to-one relationship is being denormalized, with a low number of columns, users can simply add these as columns with their original types as shown above. However, this is often undesirable for larger objects and not possible for one-to-many relationships.
In cases of complex objects or one-to-many relationships, users can use:
- Named Tuples - These allow a related structure to be represented as a set of columns.
- Array(Tuple) or Nested - An array of named tuples, also known as Nested, with each entry representing an object. Applicable to one-to-many relationships.
As an example, we demonstrate denormalizing `PostLinks` onto `Posts` below.
Each post can contain a number of links to other posts as shown in the `PostLinks` schema earlier. As a Nested type, we might represent these linked and duplicate posts as follows:
```sql
SET flatten_nested=0
CREATE TABLE posts_with_links
(
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
    ... -- other columns
    `LinkedPosts` Nested(CreationDate DateTime64(3, 'UTC'), PostId Int32),
    `DuplicatePosts` Nested(CreationDate DateTime64(3, 'UTC'), PostId Int32)
) ENGINE = MergeTree
ORDER BY (PostTypeId, toDate(CreationDate), CommentCount)
```
Note the use of the setting `flatten_nested=0`. We recommend disabling the flattening of nested data.
We can perform this denormalization using an `INSERT INTO SELECT` with an `OUTER JOIN` query:
```sql
INSERT INTO posts_with_links
SELECT
posts.*,
arrayMap(p -> (p.1, p.2), arrayFilter(p -> p.3 = 'Linked' AND p.2 != 0, Related)) AS LinkedPosts,
arrayMap(p -> (p.1, p.2), arrayFilter(p -> p.3 = 'Duplicate' AND p.2 != 0, Related)) AS DuplicatePosts
FROM posts
LEFT JOIN (
SELECT
PostId,
groupArray((CreationDate, RelatedPostId, LinkTypeId)) AS Related
FROM postlinks
GROUP BY PostId
) AS postlinks ON posts.Id = postlinks.PostId
0 rows in set. Elapsed: 155.372 sec. Processed 66.37 million rows, 76.33 GB (427.18 thousand rows/s., 491.25 MB/s.)
Peak memory usage: 6.98 GiB.
```
Note the timing here. We've managed to denormalize 66m rows in around 2 minutes. As we'll see later, this is an operation we can schedule.
Note the use of the `groupArray` function to collapse the `PostLinks` down into an array for each `PostId`, prior to joining. This array is then filtered into two sublists: `LinkedPosts` and `DuplicatePosts`, which also exclude any empty results from the outer join.
We can select some rows to see our new denormalized structure:
```sql
SELECT LinkedPosts, DuplicatePosts
FROM posts_with_links
WHERE (length(LinkedPosts) > 2) AND (length(DuplicatePosts) > 0)
LIMIT 1
FORMAT Vertical
Row 1:
──────
LinkedPosts: [('2017-04-11 11:53:09.583',3404508),('2017-04-11 11:49:07.680',3922739),('2017-04-11 11:48:33.353',33058004)]
DuplicatePosts: [('2017-04-11 12:18:37.260',3922739),('2017-04-11 12:18:37.260',33058004)]
```
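The nested arrays can also be unrolled back into rows at query time with `ARRAY JOIN`; a sketch (column selection here is illustrative):

```sql
SELECT Id, LinkedPosts.PostId AS LinkedPostId
FROM posts_with_links
ARRAY JOIN LinkedPosts
LIMIT 5
```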
Orchestrating and scheduling denormalization {#orchestrating-and-scheduling-denormalization}
Batch {#batch}
Exploiting denormalization requires a process in which the transformation can be performed and orchestrated.
We have shown above how ClickHouse can be used to perform this transformation once data has been loaded through an `INSERT INTO SELECT`. This is appropriate for periodic batch transformations.
Users have several options for orchestrating this in ClickHouse, assuming a periodic batch load process is acceptable:
- Refreshable Materialized Views - Refreshable materialized views can be used to periodically schedule a query with the results sent to a target table. On query execution, the view ensures the target table is atomically updated. This provides a ClickHouse native means of scheduling this work.
- External tooling - Utilizing tools such as dbt and Airflow to periodically schedule the transformation. The ClickHouse integration for dbt ensures this is performed atomically with a new version of the target table created and then atomically swapped with the version receiving queries (via the `EXCHANGE` command).
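As a sketch of the first option (the view name and schedule below are assumptions), the earlier `PostLinks` denormalization could be scheduled natively with a refreshable materialized view:

```sql
-- Re-runs the full denormalization query once a day,
-- atomically replacing the contents of posts_with_links
CREATE MATERIALIZED VIEW posts_with_links_rmv
REFRESH EVERY 1 DAY TO posts_with_links
AS SELECT
    posts.*,
    arrayMap(p -> (p.1, p.2), arrayFilter(p -> p.3 = 'Linked' AND p.2 != 0, Related)) AS LinkedPosts,
    arrayMap(p -> (p.1, p.2), arrayFilter(p -> p.3 = 'Duplicate' AND p.2 != 0, Related)) AS DuplicatePosts
FROM posts
LEFT JOIN (
    SELECT PostId, groupArray((CreationDate, RelatedPostId, LinkTypeId)) AS Related
    FROM postlinks
    GROUP BY PostId
) AS postlinks ON posts.Id = postlinks.PostId
```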
Streaming {#streaming}
Users may alternatively wish to perform this outside of ClickHouse, prior to insertion, using streaming technologies such as Apache Flink. Alternatively, incremental materialized views can be used to perform this process as data is inserted.
slug: /data-modeling/backfilling
title: 'Backfilling Data'
description: 'How to backfill large datasets in ClickHouse'
keywords: ['materialized views', 'backfilling', 'inserting data', 'resilient data load']
doc_type: 'guide'
import nullTableMV from '@site/static/images/data-modeling/null_table_mv.png';
import Image from '@theme/IdealImage';
Backfilling Data
Whether new to ClickHouse or responsible for an existing deployment, users will invariably need to backfill tables with historical data. In some cases, this is relatively simple but can become more complex when materialized views need to be populated. This guide documents some processes for this task that users can apply to their use case.
:::note
This guide assumes users are already familiar with the concept of Incremental Materialized Views and data loading using table functions such as s3 and gcs. We also recommend users read our guide on optimizing insert performance from object storage, the advice of which can be applied to inserts throughout this guide.
:::
Example dataset {#example-dataset}
Throughout this guide, we use a PyPI dataset. Each row in this dataset represents a Python package download using a tool such as `pip`.
For example, the subset used here covers a single day - 2024-12-17 - and is available publicly at https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/2024-12-17/. Users can query it with:
```sql
SELECT count()
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/2024-12-17/*.parquet')
┌────count()─┐
│ 2039988137 │ -- 2.04 billion
└────────────┘
1 row in set. Elapsed: 32.726 sec. Processed 2.04 billion rows, 170.05 KB (62.34 million rows/s., 5.20 KB/s.)
Peak memory usage: 239.50 MiB.
```
The full dataset for this bucket contains over 320 GB of parquet files. In the examples below, we intentionally target subsets using glob patterns.
We assume the user is consuming a stream of this data e.g. from Kafka or object storage, for data after this date. The schema for this data is shown below:
```sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/2024-12-17/*.parquet')
FORMAT PrettyCompactNoEscapesMonoBlock
SETTINGS describe_compact_output = 1
┌─name───────────────┬─type────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ timestamp │ Nullable(DateTime64(6)) │
│ country_code │ Nullable(String) │
│ url │ Nullable(String) │
│ project │ Nullable(String) │
│ file │ Tuple(filename Nullable(String), project Nullable(String), version Nullable(String), type Nullable(String)) │
│ installer │ Tuple(name Nullable(String), version Nullable(String)) │
│ python │ Nullable(String) │
│ implementation │ Tuple(name Nullable(String), version Nullable(String)) │
│ distro │ Tuple(name Nullable(String), version Nullable(String), id Nullable(String), libc Tuple(lib Nullable(String), version Nullable(String))) │
│ system │ Tuple(name Nullable(String), release Nullable(String)) │
│ cpu │ Nullable(String) │
│ openssl_version │ Nullable(String) │
│ setuptools_version │ Nullable(String) │
│ rustc_version │ Nullable(String) │
│ tls_protocol │ Nullable(String) │
│ tls_cipher │ Nullable(String) │
└────────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
:::note
The full PyPI dataset, consisting of over 1 trillion rows, is available in our public demo environment clickpy.clickhouse.com. For further details on this dataset, including how the demo exploits materialized views for performance and how the data is populated daily, see here.
:::
Backfilling scenarios {#backfilling-scenarios}
Backfilling is typically needed when a stream of data is being consumed from a point in time. This data is being inserted into ClickHouse tables with incremental materialized views, triggering on blocks as they are inserted. These views may be transforming the data prior to insert or computing aggregates and sending results to target tables for later use in downstream applications.
We will attempt to cover the following scenarios:
- Backfilling data with existing data ingestion - New data is being loaded, and historical data needs to be backfilled. This historical data has been identified.
- Adding materialized views to existing tables - New materialized views need to be added to a setup for which historical data has been populated and data is already streaming.
We assume data will be backfilled from object storage. In all cases, we aim to avoid pauses in data insertion.
We recommend backfilling historical data from object storage. Data should be exported to Parquet where possible for optimal read performance and compression (reduced network transfer). A file size of around 150MB is typically preferred, but ClickHouse supports over 70 file formats and is capable of handling files of all sizes.
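Exporting data to Parquet on object storage can itself be done from ClickHouse. A sketch (the bucket URL is a placeholder, not from this guide, and credentials are omitted):

```sql
-- Write a table out as Parquet to an (assumed) S3 bucket
INSERT INTO FUNCTION s3('https://example-bucket.s3.amazonaws.com/backfill/pypi.parquet', 'Parquet')
SELECT * FROM pypi
```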
Using duplicate tables and views {#using-duplicate-tables-and-views}
For all of the scenarios, we rely on the concept of "duplicate tables and views". These tables and views represent copies of those used for the live streaming data and allow the backfill to be performed in isolation with an easy means of recovery should failure occur. For example, we have the following main `pypi` table and materialized view, which computes the number of downloads per Python project:
```sql
CREATE TABLE pypi
(
    `timestamp` DateTime,
    `country_code` LowCardinality(String),
    `project` String,
    `type` LowCardinality(String),
    `installer` LowCardinality(String),
    `python_minor` LowCardinality(String),
    `system` LowCardinality(String),
    `version` String
)
ENGINE = MergeTree
ORDER BY (project, timestamp)

CREATE TABLE pypi_downloads
(
    `project` String,
    `count` Int64
)
ENGINE = SummingMergeTree
ORDER BY project

CREATE MATERIALIZED VIEW pypi_downloads_mv TO pypi_downloads
AS SELECT
    project,
    count() AS count
FROM pypi
GROUP BY project
```
We populate the main table and associated view with a subset of the data:
```sql
INSERT INTO pypi SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/2024-12-17/1734393600-000000000{000..100}.parquet')
0 rows in set. Elapsed: 15.702 sec. Processed 41.23 million rows, 3.94 GB (2.63 million rows/s., 251.01 MB/s.)
Peak memory usage: 977.49 MiB.
SELECT count() FROM pypi
┌──count()─┐
│ 20612750 │ -- 20.61 million
└──────────┘
1 row in set. Elapsed: 0.004 sec.
SELECT sum(count)
FROM pypi_downloads
┌─sum(count)─┐
│ 20612750 │ -- 20.61 million
└────────────┘
1 row in set. Elapsed: 0.006 sec. Processed 96.15 thousand rows, 769.23 KB (16.53 million rows/s., 132.26 MB/s.)
Peak memory usage: 682.38 KiB.
```
Suppose we wish to load another subset
{101..200}
. While we could insert directly into
pypi
, we can do this backfill in isolation by creating duplicate tables.
Should the backfill fail, we have not impacted our main tables and can simply
truncate
our duplicate tables and repeat.
To create new copies of these views, we can use the
CREATE TABLE AS
clause with the suffix
_v2
:
```sql
CREATE TABLE pypi_v2 AS pypi
CREATE TABLE pypi_downloads_v2 AS pypi_downloads
CREATE MATERIALIZED VIEW pypi_downloads_mv_v2 TO pypi_downloads_v2
AS SELECT
project,
count() AS count
FROM pypi_v2
GROUP BY project
```
We populate this with our second subset of approximately the same size and confirm the successful load.
```sql
INSERT INTO pypi_v2 SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/2024-12-17/1734393600-000000000{101..200}.parquet')
0 rows in set. Elapsed: 17.545 sec. Processed 40.80 million rows, 3.90 GB (2.33 million rows/s., 222.29 MB/s.)
Peak memory usage: 991.50 MiB.
SELECT count()
FROM pypi_v2
┌──count()─┐
│ 20400020 │ -- 20.40 million
└──────────┘
1 row in set. Elapsed: 0.004 sec.
SELECT sum(count)
FROM pypi_downloads_v2
┌─sum(count)─┐
│ 20400020 │ -- 20.40 million
└────────────┘
1 row in set. Elapsed: 0.006 sec. Processed 95.49 thousand rows, 763.90 KB (14.81 million rows/s., 118.45 MB/s.)
Peak memory usage: 688.77 KiB.
```
If we experienced a failure at any point during this second load, we could simply
truncate
our
pypi_v2
and
pypi_downloads_v2
and repeat the data load.
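This truncate-and-repeat loop is easy to automate from any client. A minimal sketch, where `run` stands in for executing a statement through whichever ClickHouse client you use (the retry count is an arbitrary choice):

```python
def load_with_recovery(run, insert_sql, attempts=3):
    """Run the backfill insert against the duplicate tables; on failure,
    truncate them and retry. The live tables are never touched, so a
    failed attempt is fully recoverable."""
    for _ in range(attempts):
        try:
            run(insert_sql)
            return True
        except Exception:
            # Recovery is just a truncate of the duplicate tables.
            run("TRUNCATE TABLE pypi_v2")
            run("TRUNCATE TABLE pypi_downloads_v2")
    return False
```

On success, the loaded partitions can then be moved to the main tables as described next.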
With our data load complete, we can move the data from our duplicate tables to the main tables using the
ALTER TABLE MOVE PARTITION
clause.
```sql
ALTER TABLE pypi_v2 MOVE PARTITION () TO pypi
0 rows in set. Elapsed: 1.401 sec.
ALTER TABLE pypi_downloads_v2 MOVE PARTITION () TO pypi_downloads
0 rows in set. Elapsed: 0.389 sec.
```
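Where the table is partitioned, one such statement is needed per partition. A minimal sketch that generates the statements from a partition list, which in practice would come from the `system.parts` table (the monthly partition names shown are hypothetical):

```python
def move_partition_statements(partitions, src="pypi_v2", dst="pypi"):
    """Generate one ALTER TABLE ... MOVE PARTITION statement per partition.

    `partitions` would come from:
        SELECT DISTINCT partition FROM system.parts WHERE table = 'pypi_v2'
    For an unpartitioned table this is the single partition tuple().
    """
    return [f"ALTER TABLE {src} MOVE PARTITION {p} TO {dst}" for p in partitions]

# A table partitioned by month would need one call per month:
for stmt in move_partition_statements(["'202411'", "'202412'"]):
    print(stmt)
```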
:::note Partition names
The above
MOVE PARTITION
call uses the partition name
()
. This represents the single partition for this table (which isn't partitioned). For tables that are partitioned, users will need to invoke multiple
MOVE PARTITION
calls - one for each partition. The name of the current partitions can be established from the
system.parts
table e.g.
SELECT DISTINCT partition FROM system.parts WHERE (table = 'pypi_v2')
.
:::
We can now confirm
pypi
and
pypi_downloads
contain the complete data.
pypi_downloads_v2
and
pypi_v2
can be safely dropped.
```sql
SELECT count()
FROM pypi
┌──count()─┐
│ 41012770 │ -- 41.01 million
└──────────┘
1 row in set. Elapsed: 0.003 sec.
SELECT sum(count)
FROM pypi_downloads
┌─sum(count)─┐
│ 41012770 │ -- 41.01 million
└────────────┘
1 row in set. Elapsed: 0.007 sec. Processed 191.64 thousand rows, 1.53 MB (27.34 million rows/s., 218.74 MB/s.)
SELECT count()
FROM pypi_v2
```
Importantly, the
MOVE PARTITION
operation is both lightweight (exploiting hard links) and atomic, i.e. it either fails or succeeds with no intermediate state.
We exploit this process heavily in our backfilling scenarios below.
Notice how this process requires users to choose the size of each insert operation.
Larger inserts, i.e. more rows, will mean fewer
MOVE PARTITION
operations are required. However, this must be balanced against the cost of recovery in the event of an insert failure, e.g. due to a network interruption. Users can complement this process with batching files to reduce the risk. This can be performed with either range queries e.g.
WHERE timestamp BETWEEN 2024-12-17 09:00:00 AND 2024-12-17 10:00:00
or glob patterns. For example,
```sql
INSERT INTO pypi_v2 SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/2024-12-17/1734393600-000000000{101..200}.parquet')

INSERT INTO pypi_v2 SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/2024-12-17/1734393600-000000000{201..300}.parquet')

INSERT INTO pypi_v2 SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/2024-12-17/1734393600-000000000{301..400}.parquet')

--continued to all files loaded OR MOVE PARTITION call is performed
```
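These batched glob patterns follow a simple arithmetic progression, so they can be generated rather than written by hand. A hedged sketch - the prefix mirrors the example file names above; adjust for your own layout:

```python
def batch_globs(prefix, start, end, batch_size=100):
    """Yield glob patterns covering file numbers [start, end] in fixed-size
    batches, using the {lo..hi} brace syntax accepted by the s3 function."""
    lo = start
    while lo <= end:
        hi = min(lo + batch_size - 1, end)
        yield f"{prefix}{{{lo}..{hi}}}.parquet"
        lo = hi + 1

for pattern in batch_globs("1734393600-000000000", 101, 400):
    print(pattern)
# 1734393600-000000000{101..200}.parquet
# 1734393600-000000000{201..300}.parquet
# 1734393600-000000000{301..400}.parquet
```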
:::note
ClickPipes uses this approach when loading data from object storage, automatically creating duplicates of the target table and its materialized views and avoiding the need for the user to perform the above steps. By also using multiple worker threads, each handling different subsets (via glob patterns) and with its own duplicate tables, data can be loaded quickly with exactly-once semantics. For those interested, further details can be found
in this blog
.
:::
Scenario 1: Backfilling data with existing data ingestion {#scenario-1-backfilling-data-with-existing-data-ingestion}
In this scenario, we assume that the data to backfill is not in an isolated bucket and thus filtering is required. Data is already inserting and a timestamp or monotonically increasing column can be identified from which historical data needs to be backfilled.
This process follows the following steps:
Identify the checkpoint - either a timestamp or column value from which historical data needs to be restored.
Create duplicates of the main table and target tables for materialized views.
Create copies of any materialized views pointing to the target tables created in step (2).
Insert into our duplicate main table created in step (2).
Move all partitions from the duplicate tables to their original versions. Drop duplicate tables.
For example, in our PyPI data suppose we have data loaded. We can identify the minimum timestamp and, thus, our "checkpoint".
```sql
SELECT min(timestamp)
FROM pypi
┌──────min(timestamp)─┐
│ 2024-12-17 09:00:00 │
└─────────────────────┘
1 row in set. Elapsed: 0.163 sec. Processed 1.34 billion rows, 5.37 GB (8.24 billion rows/s., 32.96 GB/s.)
Peak memory usage: 227.84 MiB.
```
From the above, we know we need to load data prior to
2024-12-17 09:00:00
. Using our earlier process, we create duplicate tables and views and load the subset using a filter on the timestamp.
```sql
CREATE TABLE pypi_v2 AS pypi
CREATE TABLE pypi_downloads_v2 AS pypi_downloads
CREATE MATERIALIZED VIEW pypi_downloads_mv_v2 TO pypi_downloads_v2
AS SELECT project, count() AS count
FROM pypi_v2
GROUP BY project
INSERT INTO pypi_v2 SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/2024-12-17/1734393600-*.parquet')
WHERE timestamp < '2024-12-17 09:00:00'
0 rows in set. Elapsed: 500.152 sec. Processed 2.74 billion rows, 364.40 GB (5.47 million rows/s., 728.59 MB/s.)
```
:::note
Filtering on timestamp columns in Parquet can be very efficient. ClickHouse will only read the timestamp column to identify the full data ranges to load, minimizing network traffic. Parquet indices, such as min-max, can also be exploited by the ClickHouse query engine.
:::
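The min-max pruning mentioned above amounts to an interval test per file or row group. A simplified illustration (a model of the idea, not ClickHouse's actual implementation):

```python
def row_groups_to_read(groups, cutoff):
    """Keep only row groups whose min-max timestamp range can contain rows
    matching `timestamp < cutoff`; a group whose minimum is already at or
    past the cutoff is skipped without being fetched."""
    return [(lo, hi) for lo, hi in groups if lo < cutoff]

# Two hypothetical row groups with their min/max timestamp statistics:
groups = [("2024-12-17 07:00:00", "2024-12-17 08:59:59"),
          ("2024-12-17 09:00:00", "2024-12-17 10:59:59")]
print(row_groups_to_read(groups, "2024-12-17 09:00:00"))
# [('2024-12-17 07:00:00', '2024-12-17 08:59:59')]
```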
Once this insert is complete, we can move the associated partitions.
```sql
ALTER TABLE pypi_v2 MOVE PARTITION () TO pypi
ALTER TABLE pypi_downloads_v2 MOVE PARTITION () TO pypi_downloads
```
If the historical data is in an isolated bucket, the above time filter is not required. If a time or monotonic column is unavailable, isolate your historical data.
:::note Just use ClickPipes in ClickHouse Cloud
ClickHouse Cloud users should use ClickPipes for restoring historical backups if the data can be isolated in its own bucket (and a filter is not required). As well as parallelizing the load with multiple workers, thus reducing the load time, ClickPipes automates the above process - creating duplicate tables for both the main table and materialized views.
:::
Scenario 2: Adding materialized views to existing tables {#scenario-2-adding-materialized-views-to-existing-tables}
It is not uncommon for new materialized views to need to be added to a setup for which significant data has been populated and data is being inserted. A timestamp or monotonically increasing column, which can be used to identify a point in the stream, is useful here and avoids pauses in data ingestion. In the examples below, we assume both cases, preferring approaches that avoid pauses in ingestion.
:::note Avoid POPULATE
We do not recommend using the
POPULATE
command for backfilling materialized views for anything other than small datasets where ingest is paused. This operator can miss rows inserted into its source table while the populate runs, as the materialized view is only created once the populate has finished. Furthermore, the populate runs against all data and is vulnerable to interruptions or memory limits on large datasets.
:::
Timestamp or Monotonically increasing column available {#timestamp-or-monotonically-increasing-column-available}
In this case, we recommend the new materialized view include a filter that restricts rows to those greater than an arbitrary time in the near future. The materialized view can subsequently be backfilled from this date using historical data from the main table. The backfilling approach depends on the data size and the complexity of the associated query.
Our simplest approach involves the following steps:
Create our materialized view with a filter that only considers rows greater than an arbitrary time in the near future.
Run an
INSERT INTO SELECT
query which inserts into our materialized view's target table, reading from the source table with the views aggregation query.
This can be further enhanced to target subsets of data in step (2) and/or use a duplicate target table for the materialized view (attach partitions to the original once the insert is complete) for easier recovery after failure.
Consider the following materialized view, which computes the most popular projects per hour.
```sql
CREATE TABLE pypi_downloads_per_day
(
    `hour` DateTime,
    `project` String,
    `count` Int64
)
ENGINE = SummingMergeTree
ORDER BY (project, hour)
CREATE MATERIALIZED VIEW pypi_downloads_per_day_mv TO pypi_downloads_per_day
AS SELECT
toStartOfHour(timestamp) as hour,
project,
count() AS count
FROM pypi
GROUP BY
hour,
project
```
While we can add the target table, prior to adding the materialized view we modify its
SELECT
clause to include a filter which only considers rows greater than an arbitrary time in the near future - in this case, we assume
2024-12-17 09:00:00
is a few minutes in the future.
```sql
CREATE MATERIALIZED VIEW pypi_downloads_per_day_mv TO pypi_downloads_per_day
AS SELECT
    toStartOfHour(timestamp) AS hour,
    project, count() AS count
FROM pypi WHERE timestamp >= '2024-12-17 09:00:00'
GROUP BY hour, project
```
Once this view is added, we can backfill all data for the materialized view prior to this date.
The simplest means of doing this is to simply run the query from the materialized view on the main table with a filter that ignores recently added data, inserting the results into our view's target table via an
INSERT INTO SELECT
. For example, for the above view:
```sql
INSERT INTO pypi_downloads_per_day SELECT
toStartOfHour(timestamp) AS hour,
project,
count() AS count
FROM pypi
WHERE timestamp < '2024-12-17 09:00:00'
GROUP BY
hour,
project
Ok.
0 rows in set. Elapsed: 2.830 sec. Processed 798.89 million rows, 17.40 GB (282.28 million rows/s., 6.15 GB/s.)
Peak memory usage: 543.71 MiB.
```
:::note
In the above example our target table is a
SummingMergeTree
. In this case we can simply use our original aggregation query. For more complex use cases which exploit the
AggregatingMergeTree
, users will use
-State
functions for the aggregates. An example of this can be found
here
.
:::
In our case, this is a relatively lightweight aggregation that completes in under 3s and uses less than 600MiB of memory. For more complex or longer-running aggregations, users can make this process more resilient by using the earlier duplicate table approach i.e. create a shadow target table, e.g.,
pypi_downloads_per_day_v2
, insert into this, and attach its resulting partitions to
pypi_downloads_per_day
.
Often a materialized view's query can be more complex (not uncommon, as otherwise users wouldn't use a view!) and consume resources. In rarer cases, the resources for the query are beyond those of the server. This highlights one of the advantages of ClickHouse materialized views - they are incremental and don't process the entire dataset in one go!
In this case, users have several options:
Modify your query to backfill ranges e.g.
WHERE timestamp BETWEEN 2024-12-17 08:00:00 AND 2024-12-17 09:00:00
,
WHERE timestamp BETWEEN 2024-12-17 07:00:00 AND 2024-12-17 08:00:00
etc.
Use a
Null table engine
to fill the materialized view. This replicates the typical incremental population of a materialized view, executing its query over blocks of data (of configurable size).
(1) represents the simplest approach and is often sufficient. We do not include examples for brevity.
We explore (2) further below.
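The range filters for option (1) can themselves be generated programmatically, one hour at a time, working backwards from the checkpoint. A minimal sketch:

```python
from datetime import datetime, timedelta

def hourly_filters(start, end):
    """Yield one BETWEEN clause per hour covering [start, end), newest
    first, mirroring the ranges shown above."""
    hi = end
    while hi > start:
        lo = hi - timedelta(hours=1)
        yield (f"WHERE timestamp BETWEEN '{lo:%Y-%m-%d %H:%M:%S}' "
               f"AND '{hi:%Y-%m-%d %H:%M:%S}'")
        hi = lo

for clause in hourly_filters(datetime(2024, 12, 17, 7), datetime(2024, 12, 17, 9)):
    print(clause)
# WHERE timestamp BETWEEN '2024-12-17 08:00:00' AND '2024-12-17 09:00:00'
# WHERE timestamp BETWEEN '2024-12-17 07:00:00' AND '2024-12-17 08:00:00'
```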
Using a Null table engine for filling materialized views {#using-a-null-table-engine-for-filling-materialized-views}
The
Null table engine
provides a storage engine which doesn't persist data (think of it as the
/dev/null
of the table engine world). While this seems contradictory, materialized views will still execute on data inserted into this table engine. This allows materialized views to be constructed without persisting the original data - avoiding I/O and the associated storage.
Importantly, any materialized views attached to the table engine still execute over blocks of data as it is inserted - sending their results to a target table. These blocks are of a configurable size. While larger blocks can potentially be more efficient (and faster to process), they consume more resources (principally memory). Use of this table engine means we can build our materialized view incrementally, i.e. a block at a time, avoiding the need to hold the entire aggregation in memory.
Consider the following example:
```sql
CREATE TABLE pypi_v2
(
    `timestamp` DateTime,
    `project` String
)
ENGINE = Null
CREATE MATERIALIZED VIEW pypi_downloads_per_day_mv_v2 TO pypi_downloads_per_day
AS SELECT
toStartOfHour(timestamp) as hour,
project,
count() AS count
FROM pypi_v2
GROUP BY
hour,
project
```
Here, we create a Null table,
pypi_v2,
to receive the rows that will be used to build our materialized view. Note how we limit the schema to only the columns we need. Our materialized view performs an aggregation over rows inserted into this table (one block at a time), sending the results to our target table,
pypi_downloads_per_day
.
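Conceptually, this performs the aggregation one block at a time, merging each block's partial counts into the target, much as the SummingMergeTree later collapses rows with the same key. A simplified Python model of that flow (not ClickHouse internals):

```python
from collections import Counter

def insert_blocks(rows, block_size=2):
    """Model of a materialized view over a Null table: the source rows are
    never stored; each insert block is aggregated on its own, and only its
    partial (hour, project) counts are merged into the target."""
    target = Counter()
    for i in range(0, len(rows), block_size):
        block = rows[i:i + block_size]
        target.update(Counter(block))  # one block's partials in memory at a time
    return target

rows = [("09:00", "numpy"), ("09:00", "numpy"), ("09:00", "boto3"),
        ("10:00", "numpy")]
print(insert_blocks(rows))  # counts per (hour, project) pair
```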
:::note
We have used
pypi_downloads_per_day
as our target table here. For additional resiliency, users could create a duplicate table,
pypi_downloads_per_day_v2
, and use this as the target table of the view, as shown in previous examples. On completion of the insert, partitions in
pypi_downloads_per_day_v2
could, in turn, be moved to
pypi_downloads_per_day.
This would allow recovery in the case our insert fails due to memory issues or server interruptions i.e. we just truncate
pypi_downloads_per_day_v2
, tune settings, and retry.
:::
To populate this materialized view, we simply insert the relevant data to backfill into
pypi_v2
from
pypi.
```sql
INSERT INTO pypi_v2 SELECT timestamp, project FROM pypi WHERE timestamp < '2024-12-17 09:00:00'
0 rows in set. Elapsed: 27.325 sec. Processed 1.50 billion rows, 33.48 GB (54.73 million rows/s., 1.23 GB/s.)
Peak memory usage: 639.47 MiB.
```
Notice our memory usage here is
639.47 MiB
.
Tuning performance & resources {#tuning-performance--resources}
Several factors will determine the performance and resources used in the above scenario. Before attempting to tune, we recommend readers understand the insert mechanics documented in detail in the
Using Threads for Reads
section of the
Optimizing for S3 Insert and Read Performance guide
. In summary:
Read Parallelism
- The number of threads used to read. Controlled through
max_threads
. In ClickHouse Cloud this is determined by the instance size with it defaulting to the number of vCPUs. Increasing this value may improve read performance at the expense of greater memory usage.
Insert Parallelism
- The number of insert threads used to insert. Controlled through
max_insert_threads
. In ClickHouse Cloud this is determined by the instance size (between 2 and 4) and is set to 1 in OSS. Increasing this value may improve performance at the expense of greater memory usage.
Insert Block Size
- data is processed in a loop where it is pulled, parsed, and formed into in-memory insert blocks based on the
partitioning key
. These blocks are sorted, optimized, compressed, and written to storage as new
data parts
. The size of the insert block, controlled by settings
min_insert_block_size_rows
and
min_insert_block_size_bytes
(uncompressed), impacts memory usage and disk I/O. Larger blocks use more memory but create fewer parts, reducing I/O and background merges. These settings represent minimum thresholds (whichever is reached first triggers a flush).
Materialized view block size
- As well as the above mechanics for the main insert, prior to insertion into materialized views, blocks are also squashed for more efficient processing. The size of these blocks is determined by the settings
min_insert_block_size_bytes_for_materialized_views
and
min_insert_block_size_rows_for_materialized_views
. Larger blocks allow more efficient processing at the expense of greater memory usage. By default, these settings revert to the values of the source table settings
min_insert_block_size_rows
and
min_insert_block_size_bytes
, respectively.
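The block-size settings above are minimum thresholds: a flush is triggered as soon as either one is reached. That decision can be sketched as follows (a simplified model, not ClickHouse's implementation; the default values shown are illustrative):

```python
def should_flush(rows, bytes_, min_rows=1_048_449, min_bytes=268_402_944):
    """Flush an in-memory insert block once EITHER minimum is reached.

    Setting a threshold to 0 disables it as a criterion, as in the
    memory-tuning example later in this section."""
    reached_rows = min_rows > 0 and rows >= min_rows
    reached_bytes = min_bytes > 0 and bytes_ >= min_bytes
    return reached_rows or reached_bytes

# With rows disabled and bytes lowered to 10 MiB, only bytes decide:
print(should_flush(rows=50_000, bytes_=10_485_760,
                   min_rows=0, min_bytes=10_485_760))  # True
```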
For improving performance, users can follow the guidelines outlined in the
Tuning Threads and Block Size for Inserts
section of the
Optimizing for S3 Insert and Read Performance guide
. It should not be necessary to also modify
min_insert_block_size_bytes_for_materialized_views
and
min_insert_block_size_rows_for_materialized_views
to improve performance in most cases. If these are modified, use the same best practices as discussed for
min_insert_block_size_rows
and
min_insert_block_size_bytes
.
To minimize memory, users may wish to experiment with these settings. This will invariably lower performance. Using the earlier query, we show examples below.
Lowering
max_insert_threads
to 1 reduces our memory overhead.
```sql
INSERT INTO pypi_v2
SELECT
timestamp,
project
FROM pypi
WHERE timestamp < '2024-12-17 09:00:00'
SETTINGS max_insert_threads = 1
0 rows in set. Elapsed: 27.752 sec. Processed 1.50 billion rows, 33.48 GB (53.89 million rows/s., 1.21 GB/s.)
Peak memory usage: 506.78 MiB.
```
We can lower memory further by reducing our
max_threads
setting to 1.
```sql
INSERT INTO pypi_v2
SELECT timestamp, project
FROM pypi
WHERE timestamp < '2024-12-17 09:00:00'
SETTINGS max_insert_threads = 1, max_threads = 1
Ok.
0 rows in set. Elapsed: 43.907 sec. Processed 1.50 billion rows, 33.48 GB (34.06 million rows/s., 762.54 MB/s.)
Peak memory usage: 272.53 MiB.
```
Finally, we can reduce memory further by setting
min_insert_block_size_rows
to 0 (disables it as a deciding factor on block size) and
min_insert_block_size_bytes
to 10485760 (10MiB).
```sql
INSERT INTO pypi_v2
SELECT
timestamp,
project
FROM pypi
WHERE timestamp < '2024-12-17 09:00:00'
SETTINGS max_insert_threads = 1, max_threads = 1, min_insert_block_size_rows = 0, min_insert_block_size_bytes = 10485760
0 rows in set. Elapsed: 43.293 sec. Processed 1.50 billion rows, 33.48 GB (34.54 million rows/s., 773.36 MB/s.)
Peak memory usage: 218.64 MiB.
```
Finally, be aware that lowering block sizes produces more parts and causes greater merge pressure. As discussed
here
, these settings should be changed cautiously.
No timestamp or monotonically increasing column {#no-timestamp-or-monotonically-increasing-column}
The above processes rely on the user having a timestamp or monotonically increasing column. In some cases, this is simply not available. In this case, we recommend the following process, which exploits many of the steps outlined previously but requires users to pause ingest.
Pause inserts into your main table.
Create a duplicate of your main target table using the
CREATE AS
syntax.
Attach partitions from the original target table to the duplicate using
ALTER TABLE ATTACH
.
Note:
This attach operation differs from the earlier move: while it also relies on hard links, the data in the original table is preserved.
Create new materialized views.
Restart inserts.
Note:
Inserts will only update the target table, and not the duplicate, which will reference only the original data.
Backfill the materialized view, applying the same process used above for data with timestamps, using the duplicate table as the source.
Consider the following example using PyPI and our previous new materialized view
pypi_downloads_per_day
(we'll assume we can't use the timestamp):
```sql
SELECT count() FROM pypi
┌────count()─┐
│ 2039988137 │ -- 2.04 billion
└────────────┘
1 row in set. Elapsed: 0.003 sec.
-- (1) Pause inserts
-- (2) Create a duplicate of our target table
CREATE TABLE pypi_v2 AS pypi
SELECT count() FROM pypi_v2
┌────count()─┐
│ 2039988137 │ -- 2.04 billion
└────────────┘
1 row in set. Elapsed: 0.004 sec.
-- (3) Attach partitions from the original target table to the duplicate.
ALTER TABLE pypi_v2
(ATTACH PARTITION tuple() FROM pypi)
-- (4) Create our new materialized views
CREATE TABLE pypi_downloads_per_day
(
    `hour` DateTime,
    `project` String,
    `count` Int64
)
ENGINE = SummingMergeTree
ORDER BY (project, hour)
CREATE MATERIALIZED VIEW pypi_downloads_per_day_mv TO pypi_downloads_per_day
AS SELECT
toStartOfHour(timestamp) as hour,
project,
count() AS count
FROM pypi
GROUP BY
hour,
project
-- (5) Restart inserts. We replicate here by inserting a single row.
INSERT INTO pypi SELECT *
FROM pypi
LIMIT 1
SELECT count() FROM pypi
┌────count()─┐
│ 2039988138 │ -- 2.04 billion
└────────────┘
1 row in set. Elapsed: 0.003 sec.
-- notice how pypi_v2 contains same number of rows as before
SELECT count() FROM pypi_v2
┌────count()─┐
│ 2039988137 │ -- 2.04 billion
└────────────┘
-- (6) Backfill the view using the backup pypi_v2
INSERT INTO pypi_downloads_per_day SELECT
toStartOfHour(timestamp) as hour,
project,
count() AS count
FROM pypi_v2
GROUP BY
hour,
project
0 rows in set. Elapsed: 3.719 sec. Processed 2.04 billion rows, 47.15 GB (548.57 million rows/s., 12.68 GB/s.)
DROP TABLE pypi_v2;
```
In the penultimate step we backfill
pypi_downloads_per_day
using our simple
INSERT INTO SELECT
approach described
earlier
. This can also be enhanced using the Null table approach documented
above
, with the optional use of a duplicate table for more resiliency.
While this operation does require inserts to be paused, the intermediate operations can typically be completed quickly - minimizing any data interruption.
slug: /data-modeling/overview
title: 'Data Modelling Overview'
description: 'Overview of Data Modelling'
keywords: ['data modelling', 'schema design', 'dictionary', 'materialized view', 'data compression', 'denormalizing data']
doc_type: 'landing-page'
Data Modeling
This section is about data modeling in ClickHouse and contains the following topics:
| Page | Description |
|-----------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
Schema Design
| Discusses ClickHouse schema design for optimal performance, considering factors like queries, data updates, latency, and volume. |
|
Dictionary
| An explainer on how to define and use dictionaries to improve query performance and enrich data. |
|
Materialized Views
| Information on Materialized Views and Refreshable Materialized Views in ClickHouse. |
|
Projections
| Information on Projections in ClickHouse.|
|
Data Compression
| Discusses various compression modes in ClickHouse and how to optimize data storage and query performance by choosing the right compression method for your specific data types and workloads. |
|
Denormalizing Data
| Discusses the denormalization approach used in ClickHouse which aims to improve query performance by storing related data in a single table. |
slug: /data-modeling/schema-design
title: 'Schema Design'
description: 'Optimizing ClickHouse schema for query performance'
keywords: ['schema', 'schema design', 'query optimization']
doc_type: 'guide'
import stackOverflowSchema from '@site/static/images/data-modeling/stackoverflow-schema.png';
import schemaDesignIndices from '@site/static/images/data-modeling/schema-design-indices.png';
import Image from '@theme/IdealImage';
Understanding effective schema design is key to optimizing ClickHouse performance and includes choices that often involve trade-offs, with the optimal approach depending on the queries being served as well as factors such as data update frequency, latency requirements, and data volume. This guide provides an overview of schema design best practices and data modeling techniques for optimizing ClickHouse performance.
Stack Overflow dataset {#stack-overflow-dataset}
For the examples in this guide, we use a subset of the Stack Overflow dataset. This contains every post, vote, user, comment and badge that has occurred on Stack Overflow from 2008 to Apr 2024. This data is available in Parquet using the schemas below under the S3 bucket
s3://datasets-documentation/stackoverflow/parquet/
:
The primary keys and relationships indicated are not enforced through constraints (Parquet is file not table format) and purely indicate how the data is related and the unique keys it possesses.
The Stack Overflow dataset contains a number of related tables. In any data modeling task, we recommend users focus on loading their primary table first. This may not necessarily be the largest table but rather the one on which you expect to receive most analytical queries. This will allow you to familiarize yourself with the main ClickHouse concepts and types, especially important if coming from a predominantly OLTP background. This table may require remodeling as additional tables are added to fully exploit ClickHouse features and obtain optimal performance.
The above schema is intentionally not optimal for the purposes of this guide.
Establish initial schema {#establish-initial-schema}
Since the
posts
table will be the target for most analytics queries, we focus on establishing a schema for this table. This data is available in the public S3 bucket
s3://datasets-documentation/stackoverflow/parquet/posts/*.parquet
with a file per year.
Loading data from S3 in Parquet format represents the most common and preferred way to load data into ClickHouse. ClickHouse is optimized for processing Parquet and can potentially read and insert 10s of millions of rows from S3 per second.
ClickHouse provides a schema inference capability to automatically identify the types for a dataset. This is supported for all data formats, including Parquet. We can exploit this feature to identify the ClickHouse types for the data via the s3 table function and
DESCRIBE
command. Note below we use the glob pattern
*.parquet
to read all files in the
stackoverflow/parquet/posts
folder.
```sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/*.parquet')
SETTINGS describe_compact_output = 1
┌─name──────────────────┬─type───────────────────────────┐
│ Id │ Nullable(Int64) │
│ PostTypeId │ Nullable(Int64) │
│ AcceptedAnswerId │ Nullable(Int64) │
│ CreationDate │ Nullable(DateTime64(3, 'UTC')) │
│ Score │ Nullable(Int64) │
│ ViewCount │ Nullable(Int64) │
│ Body │ Nullable(String) │
│ OwnerUserId │ Nullable(Int64) │
│ OwnerDisplayName │ Nullable(String) │
│ LastEditorUserId │ Nullable(Int64) │
│ LastEditorDisplayName │ Nullable(String) │
│ LastEditDate │ Nullable(DateTime64(3, 'UTC')) │
│ LastActivityDate │ Nullable(DateTime64(3, 'UTC')) │
│ Title │ Nullable(String) │
│ Tags │ Nullable(String) │
│ AnswerCount │ Nullable(Int64) │
│ CommentCount │ Nullable(Int64) │
│ FavoriteCount │ Nullable(Int64) │
│ ContentLicense │ Nullable(String) │
│ ParentId │ Nullable(String) │
│ CommunityOwnedDate │ Nullable(DateTime64(3, 'UTC')) │
│ ClosedDate │ Nullable(DateTime64(3, 'UTC')) │
└───────────────────────┴────────────────────────────────┘
```
The
s3 table function
allows data in S3 to be queried in-place from ClickHouse. This function is compatible with all of the file formats ClickHouse supports.
This provides us with an initial non-optimized schema. By default, ClickHouse maps these to equivalent Nullable types. We can create a ClickHouse table using these types with a simple `CREATE TABLE ... EMPTY AS SELECT` command.
```sql
CREATE TABLE posts
ENGINE = MergeTree
ORDER BY () EMPTY AS
SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/*.parquet')
```
A few important points:

- Our posts table is empty after running this command. No data has been loaded.
- We have specified MergeTree as our table engine. MergeTree is the most common ClickHouse table engine you will likely use. It's the multi-tool in your ClickHouse box, capable of handling PB of data, and serves most analytical use cases. Other table engines exist for use cases such as CDC which need to support efficient updates.
- The clause `ORDER BY ()` means we have no index, and more specifically no order in our data. More on this later. For now, just know all queries will require a linear scan.
To confirm the table has been created:
```sql
SHOW CREATE TABLE posts

CREATE TABLE posts
(
    `Id` Nullable(Int64),
    `PostTypeId` Nullable(Int64),
    `AcceptedAnswerId` Nullable(Int64),
    `CreationDate` Nullable(DateTime64(3, 'UTC')),
    `Score` Nullable(Int64),
    `ViewCount` Nullable(Int64),
    `Body` Nullable(String),
    `OwnerUserId` Nullable(Int64),
    `OwnerDisplayName` Nullable(String),
    `LastEditorUserId` Nullable(Int64),
    `LastEditorDisplayName` Nullable(String),
    `LastEditDate` Nullable(DateTime64(3, 'UTC')),
    `LastActivityDate` Nullable(DateTime64(3, 'UTC')),
    `Title` Nullable(String),
    `Tags` Nullable(String),
    `AnswerCount` Nullable(Int64),
    `CommentCount` Nullable(Int64),
    `FavoriteCount` Nullable(Int64),
    `ContentLicense` Nullable(String),
    `ParentId` Nullable(String),
    `CommunityOwnedDate` Nullable(DateTime64(3, 'UTC')),
    `ClosedDate` Nullable(DateTime64(3, 'UTC'))
)
ENGINE = MergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY tuple()
```
With our initial schema defined, we can populate the data using an `INSERT INTO ... SELECT`, reading the data using the s3 table function. The following loads the `posts` data in around 2 mins on an 8-core ClickHouse Cloud instance.
```sql
INSERT INTO posts SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/*.parquet')
0 rows in set. Elapsed: 148.140 sec. Processed 59.82 million rows, 38.07 GB (403.80 thousand rows/s., 257.00 MB/s.)
```
The above query loads 60m rows. While small for ClickHouse, users with slower internet connections may wish to load a subset of data. This can be achieved by simply specifying the years they wish to load via a glob pattern e.g. `https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/2008.parquet` or `https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/{2008,2009}.parquet`. See here for how glob patterns can be used to target subsets of files.
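For example, the following sketch (reusing the bucket path from above) loads only the first two years of posts:

```sql
-- Load a subset of the dataset by restricting the glob to specific years
INSERT INTO posts
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/{2008,2009}.parquet')
```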
Optimizing Types {#optimizing-types}
One of the secrets to ClickHouse query performance is compression.
Less data on disk means less I/O and thus faster queries and inserts. The CPU overhead of any compression algorithm will in most cases be outweighed by the reduction in I/O. Improving the compression of the data should therefore be the first focus when working on ensuring ClickHouse queries are fast.
For why ClickHouse compresses data so well, we recommend
this article
. In summary, as a column-oriented database, values will be written in column order. If these values are sorted, the same values will be adjacent to each other. Compression algorithms exploit contiguous patterns of data. On top of this, ClickHouse has codecs and granular data types which allow users to tune the compression techniques further.
Compression in ClickHouse will be impacted by 3 main factors: the ordering key, the data types, and any codecs used. All of these are configured through the schema.
The largest initial improvement in compression and query performance can be obtained through a simple process of type optimization. A few simple rules can be applied to optimize the schema:
- Use strict types - Our initial schema used Strings for many columns which are clearly numerics. Usage of the correct types will ensure the expected semantics when filtering and aggregating. The same applies to date types, which have been correctly provided in the Parquet files.
- Avoid Nullable columns - By default the above columns have been assumed to be Nullable. The Nullable type allows queries to determine the difference between an empty and Null value. This creates a separate column of UInt8 type. This additional column has to be processed every time a user works with a nullable column, leading to additional storage space used and almost always negatively affecting query performance. Only use Nullable if there is a difference between the default empty value for a type and Null. For example, a value of 0 for empty values in the `ViewCount` column will likely be sufficient for most queries and not impact results. If empty values should be treated differently, they can often also be excluded from queries with a filter.
- Use the minimal precision for numeric types - ClickHouse has a number of numeric types designed for different numeric ranges and precision. Always aim to minimize the number of bits used to represent a column. As well as integers of different sizes e.g. Int16, ClickHouse offers unsigned variants whose minimum value is 0. These can allow fewer bits to be used for a column e.g. UInt16 has a maximum value of 65535, twice that of an Int16. Prefer these types over larger signed variants if possible.
- Use the minimal precision for date types - ClickHouse supports a number of date and datetime types. Date and Date32 can be used for storing pure dates, with the latter supporting a larger date range at the expense of more bits. DateTime and DateTime64 provide support for date times. DateTime is limited to second granularity and uses 32 bits. DateTime64, as the name suggests, uses 64 bits but provides support up to nanosecond granularity. As ever, choose the coarsest version acceptable for queries, minimizing the number of bits needed.
- Use LowCardinality - Numbers, strings, Date or DateTime columns with a low number of unique values can potentially be encoded using the LowCardinality type. This dictionary-encodes values, reducing the size on disk. Consider this for columns with fewer than 10k unique values.
- FixedString for special cases - Strings which have a fixed length can be encoded with the FixedString type e.g. language and currency codes. This is efficient when data has a length of precisely N bytes. In all other cases, it is likely to reduce efficiency and LowCardinality is preferred.
- Enums for data validation - The Enum type can be used to efficiently encode enumerated types. Enums can either be 8 or 16 bits, depending on the number of unique values they are required to store. Consider using this if you need either the associated validation at insert time (undeclared values will be rejected) or wish to perform queries which exploit a natural ordering in the Enum values e.g. imagine a feedback column containing user responses `Enum(':(' = 1, ':|' = 2, ':)' = 3)`.
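As an illustration, a hypothetical feedback table (the table and column names are illustrative, not from the dataset) shows the insert-time validation Enums provide:

```sql
-- Hypothetical table: 'feedback' is illustrative only
CREATE TABLE feedback
(
    `comment` String,
    `response` Enum(':(' = 1, ':|' = 2, ':)' = 3)
)
ENGINE = MergeTree
ORDER BY tuple();

-- Accepted: ':)' is a declared Enum value
INSERT INTO feedback VALUES ('loved it', ':)');

-- Rejected with an exception: 'meh' is not declared in the Enum
INSERT INTO feedback VALUES ('unsure', 'meh');
```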
Tip: To find the range of all columns, and the number of distinct values, users can use the simple query `SELECT * APPLY min, * APPLY max, * APPLY uniq FROM table FORMAT Vertical`. We recommend performing this over a smaller subset of the data as this can be expensive. This query requires numerics to be at least defined as such for an accurate result i.e. not a String.
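Applied to a sample of the posts table, the tip above might look like the following sketch (the LIMIT subquery is our own addition to bound the cost):

```sql
-- Column transformers applied over a bounded sample of posts
SELECT * APPLY min, * APPLY max, * APPLY uniq
FROM
(
    SELECT *
    FROM posts
    LIMIT 1000000
)
FORMAT Vertical
```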
By applying these simple rules to our posts table, we can identify an optimal type for each column:
| Column | Is Numeric | Min, Max | Unique Values | Nulls | Comment | Optimized Type |
|---|---|---|---|---|---|---|
| PostTypeId | Yes | 1, 8 | 8 | No | | Enum('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8) |
| AcceptedAnswerId | Yes | 0, 78285170 | 12282094 | Yes | Differentiate Null with the 0 value | UInt32 |
| CreationDate | No | 2008-07-31 21:42:52.667000000, 2024-03-31 23:59:17.697000000 | - | No | Millisecond granularity is not required, use DateTime | DateTime |
| Score | Yes | -217, 34970 | 3236 | No | | Int32 |
| ViewCount | Yes | 2, 13962748 | 170867 | No | | UInt32 |
| Body | No | - | - | No | | String |
| OwnerUserId | Yes | -1, 4056915 | 6256237 | Yes | | Int32 |
| OwnerDisplayName | No | - | 181251 | Yes | Consider Null to be an empty string | String |
| LastEditorUserId | Yes | -1, 9999993 | 1104694 | Yes | 0 is an unused value and can be used for Nulls | Int32 |
| LastEditorDisplayName | No | - | 70952 | Yes | Consider Null to be an empty string. Tested LowCardinality and no benefit | String |
| LastEditDate | No | 2008-08-01 13:24:35.051000000, 2024-04-06 21:01:22.697000000 | - | No | Millisecond granularity is not required, use DateTime | DateTime |
| LastActivityDate | No | 2008-08-01 12:19:17.417000000, 2024-04-06 21:01:22.697000000 | - | No | Millisecond granularity is not required, use DateTime | DateTime |
| Title | No | - | - | No | Consider Null to be an empty string | String |
| Tags | No | - | - | No | Consider Null to be an empty string | String |
| AnswerCount | Yes | 0, 518 | 216 | No | Consider Null and 0 to be the same | UInt16 |
| CommentCount | Yes | 0, 135 | 100 | No | Consider Null and 0 to be the same | UInt8 |
| FavoriteCount | Yes | 0, 225 | 6 | Yes | Consider Null and 0 to be the same | UInt8 |
| ContentLicense | No | - | 3 | No | LowCardinality outperforms FixedString | LowCardinality(String) |
| ParentId | No | - | 20696028 | Yes | Consider Null to be an empty string | String |
| CommunityOwnedDate | No | 2008-08-12 04:59:35.017000000, 2024-04-01 05:36:41.380000000 | - | Yes | Consider default 1970-01-01 for Nulls. Millisecond granularity is not required, use DateTime | DateTime |
| ClosedDate | No | 2008-09-04 20:56:44, 2024-04-06 18:49:25.393000000 | - | Yes | Consider default 1970-01-01 for Nulls. Millisecond granularity is not required, use DateTime | DateTime |
0.07706430554389954,
0.07191555202007294,
-0.02257552370429039,
0.029711956158280373,
-0.06261958181858063,
0.007698889821767807,
-0.009800597093999386,
0.08025017380714417,
-0.015289925038814545,
-0.001012114342302084,
0.10052545368671417,
-0.13369879126548767,
-0.02058214694261551,
-0.05... |
The above gives us the following schema:
```sql
CREATE TABLE posts_v2
(
    `Id` Int32,
    `PostTypeId` Enum('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime,
    `Score` Int32,
    `ViewCount` UInt32,
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime,
    `LastActivityDate` DateTime,
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16,
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime,
    `ClosedDate` DateTime
)
ENGINE = MergeTree
ORDER BY tuple()
COMMENT 'Optimized types'
```
We can populate this with a simple
INSERT INTO SELECT
, reading the data from our previous table and inserting into this one:
```sql
INSERT INTO posts_v2 SELECT * FROM posts
0 rows in set. Elapsed: 146.471 sec. Processed 59.82 million rows, 83.82 GB (408.40 thousand rows/s., 572.25 MB/s.)
```
We don't retain any nulls in our new schema. The above insert converts these implicitly to default values for their respective types - 0 for integers and an empty value for strings. ClickHouse also automatically converts any numerics to their target precision.
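A quick way to spot-check the conversion (the column chosen here is illustrative) is to compare the Null count in the original table with the default-value count in the new one:

```sql
-- Nulls in the original, Nullable schema...
SELECT countIf(isNull(FavoriteCount)) AS null_count FROM posts;

-- ...now appear as the type's default (0) in the optimized schema
SELECT countIf(FavoriteCount = 0) AS zero_count FROM posts_v2;
```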
Primary (Ordering) Keys in ClickHouse {#primary-ordering-keys-in-clickhouse}
Users coming from OLTP databases often look for the equivalent concept in ClickHouse.
Choosing an ordering key {#choosing-an-ordering-key}
At the scale at which ClickHouse is often used, memory and disk efficiency are paramount. Data is written to ClickHouse tables in chunks known as parts, with rules applied for merging the parts in the background. In ClickHouse, each part has its own primary index. When parts are merged, the merged part's primary indexes are also merged. The primary index for a part has one index entry per group of rows - this technique is called sparse indexing.
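To make this concrete, the parts that make up a table, together with their row counts and sparse-index marks, can be inspected via the `system.parts` system table (the table name here is just our example):

```sql
-- One row per active data part of the table
SELECT name, rows, marks
FROM system.parts
WHERE (table = 'posts') AND active
```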
The selected key in ClickHouse will determine not only the index, but also order in which data is written on disk. Because of this, it can dramatically impact compression levels which can in turn affect query performance. An ordering key which causes the values of most columns to be written in contiguous order will allow the selected compression algorithm (and codecs) to compress the data more effectively.
All columns in a table will be sorted based on the value of the specified ordering key, regardless of whether they are included in the key itself. For instance, if
CreationDate
is used as the key, the order of values in all other columns will correspond to the order of values in the
CreationDate
column. Multiple ordering keys can be specified - this will order with the same semantics as an
ORDER BY
clause in a
SELECT
query.
Some simple rules can be applied to help choose an ordering key. The following can sometimes be in conflict, so consider these in order. Users can identify a number of keys from this process, with 4-5 typically sufficient:

- Select columns which align with your common filters. If a column is used frequently in `WHERE` clauses, prioritize including these in your key over those which are used less frequently.
- Prefer columns which help exclude a large percentage of the total rows when filtered, thus reducing the amount of data which needs to be read.
- Prefer columns which are likely to be highly correlated with other columns in the table. This will help ensure these values are also stored contiguously, improving compression.
- `GROUP BY` and `ORDER BY` operations for columns in the ordering key can be made more memory efficient.
When identifying the subset of columns for the ordering key, declare the columns in a specific order. This order can significantly influence both the efficiency of the filtering on secondary key columns in queries, and the compression ratio for the table's data files. In general, it is best to order the keys in ascending order of cardinality. This should be balanced against the fact that filtering on columns that appear later in the ordering key will be less efficient than filtering on those that appear earlier in the tuple. Balance these behaviors and consider your access patterns (and most importantly test variants).
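For instance, one way to compare candidate key columns by cardinality before fixing their order is a sketch like:

```sql
-- Candidate ordering-key columns, lowest cardinality first
SELECT
    uniq(PostTypeId) AS post_type_ids,       -- low cardinality, a good first entry
    uniq(toDate(CreationDate)) AS creation_days,
    uniq(CommentCount) AS comment_counts
FROM posts_v2
```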
Example {#example}
Applying the above guidelines to our
posts
table, let's assume that our users wish to perform analytics which filter by date and post type e.g.:
"Which questions had the most comments in the last 3 months".
The query for this question using our earlier
posts_v2
table with optimized types but no ordering key:
```sql
SELECT
Id,
Title,
CommentCount
FROM posts_v2
WHERE (CreationDate >= '2024-01-01') AND (PostTypeId = 'Question')
ORDER BY CommentCount DESC
LIMIT 3
┌───────Id─┬─Title─────────────────────────────────────────────────────────────┬─CommentCount─┐
│ 78203063 │ How to avoid default initialization of objects in std::vector? │ 74 │
│ 78183948 │ About memory barrier │ 52 │
│ 77900279 │ Speed Test for Buffer Alignment: IBM's PowerPC results vs. my CPU │ 49 │
└──────────┴───────────────────────────────────────────────────────────────────┴──────────────┘
10 rows in set. Elapsed: 0.070 sec. Processed 59.82 million rows, 569.21 MB (852.55 million rows/s., 8.11 GB/s.)
Peak memory usage: 429.38 MiB.
```
The query here is very fast even though all 60m rows have been linearly scanned - ClickHouse is just fast :) You'll have to trust us that ordering keys are worth it at TB and PB scale!
Let's select the columns `PostTypeId` and `CreationDate` as our ordering keys.
Suppose, in our case, we expect users to always filter by `PostTypeId`. This has a cardinality of 8 and represents the logical choice for the first entry in our ordering key. Recognizing that date-granularity filtering is likely to be sufficient (it will still benefit datetime filters), we use `toDate(CreationDate)` as the second component of our key. This will also produce a smaller index, as a date can be represented by 16 bits, speeding up filtering. Our final key entry is `CommentCount` to assist with finding the most commented posts (the final sort).
```sql
CREATE TABLE posts_v3
(
    `Id` Int32,
    `PostTypeId` Enum('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime,
    `Score` Int32,
    `ViewCount` UInt32,
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime,
    `LastActivityDate` DateTime,
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16,
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime,
    `ClosedDate` DateTime
)
ENGINE = MergeTree
ORDER BY (PostTypeId, toDate(CreationDate), CommentCount)
COMMENT 'Ordering Key'

--populate table from existing table
INSERT INTO posts_v3 SELECT * FROM posts_v2

0 rows in set. Elapsed: 158.074 sec. Processed 59.82 million rows, 76.21 GB (378.42 thousand rows/s., 482.14 MB/s.)
Peak memory usage: 6.41 GiB.
```

Our previous query improves the query response time by over 3x:

```sql
SELECT
    Id,
    Title,
    CommentCount
FROM posts_v3
WHERE (CreationDate >= '2024-01-01') AND (PostTypeId = 'Question')
ORDER BY CommentCount DESC
LIMIT 3

10 rows in set. Elapsed: 0.020 sec. Processed 290.09 thousand rows, 21.03 MB (14.65 million rows/s., 1.06 GB/s.)
```
For users interested in the compression improvements achieved by using specific types and appropriate ordering keys, see
Compression in ClickHouse
. If users need to further improve compression we also recommend the section
Choosing the right column compression codec
.
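As a sketch, the effect of each schema iteration on storage can be compared via the `system.columns` system table:

```sql
-- Compare on-disk size across the three schema iterations
SELECT
    table,
    formatReadableSize(sum(data_compressed_bytes)) AS compressed,
    formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed
FROM system.columns
WHERE table IN ('posts', 'posts_v2', 'posts_v3')
GROUP BY table
```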
Next: Data Modeling Techniques {#next-data-modeling-techniques}
Until now, we've migrated only a single table. While this has allowed us to introduce some core ClickHouse concepts, most schemas are unfortunately not this simple.
In the other guides listed below, we will explore a number of techniques to restructure our wider schema for optimal ClickHouse querying. Throughout this process we aim for
Posts
to remain our central table through which most analytical queries are performed. While other tables can still be queried in isolation, we assume most analytics want to be performed in the context of
posts
.
Through this section, we use optimized variants of our other tables. While we provide the schemas for these, for the sake of brevity we omit the decisions made. These are based on the rules described earlier and we leave inferring the decisions to the reader.
The following approaches all aim to minimize the need to use JOINs to optimize reads and improve query performance. While JOINs are fully supported in ClickHouse, we recommend they are used sparingly (2 to 3 tables in a JOIN query is fine) to achieve optimal performance.
ClickHouse has no notion of foreign keys. This does not prohibit joins but means referential integrity is left to the user to manage at an application level. In OLAP systems like ClickHouse, data integrity is often managed at the application level or during the data ingestion process rather than being enforced by the database itself where it incurs a significant overhead. This approach allows for more flexibility and faster data insertion. This aligns with ClickHouse's focus on speed and scalability of read and insert queries with very large datasets.
In order to minimize the use of JOINs at query time, users have several tools/approaches:

- Denormalizing data - Denormalize data by combining tables and using complex types for non 1:1 relationships. This often involves moving any joins from query time to insert time.
- Dictionaries - A ClickHouse-specific feature for handling direct joins and key-value lookups.
- Incremental Materialized Views - A ClickHouse feature for shifting the cost of a computation from query time to insert time, including the ability to incrementally compute aggregate values.
- Refreshable Materialized Views - Similar to materialized views used in other database products, this allows the results of a query to be periodically computed and the result cached.
We explore each of these approaches in each guide, highlighting when each is appropriate with an example showing how it can be applied to solving questions for the Stack Overflow dataset.
slug: /best-practices/use-json-where-appropriate
sidebar_position: 10
sidebar_label: 'Using JSON'
title: 'Use JSON where appropriate'
description: 'Page describing when to use JSON'
keywords: ['JSON']
show_related_blogs: true
doc_type: 'reference'
ClickHouse now offers a native JSON column type designed for semi-structured and dynamic data. It's important to clarify that
this is a column type, not a data format
—you can insert JSON into ClickHouse as a string or via supported formats like
JSONEachRow
, but that does not imply using the JSON column type. Users should only use the JSON type when the structure of their data is dynamic, not when they simply happen to store JSON.
When to use the JSON type {#when-to-use-the-json-type}
Use the JSON type when your data:

- Has unpredictable keys that can change over time.
- Contains values with varying types (e.g., a path might sometimes contain a string, sometimes a number).
- Requires schema flexibility where strict typing isn't viable.
If your data structure is known and consistent, there is rarely a need for the JSON type, even if your data is in JSON format. Specifically, if your data has:

- A flat structure with known keys: use standard column types e.g. String.
- Predictable nesting: use Tuple, Array, or Nested types for these structures.
- Predictable structure with varying types: consider Dynamic or Variant types instead.
You can also mix approaches—for example, use static columns for predictable top-level fields and a single JSON column for a dynamic section of the payload.
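For example, a hypothetical events table (all names here are illustrative) mixing typed columns with a JSON column for the dynamic portion of the payload:

```sql
-- Static, predictable fields as ordinary columns; JSON only for the dynamic part
CREATE TABLE events
(
    `timestamp` DateTime,
    `event_type` LowCardinality(String),
    `payload` JSON  -- only the unpredictable section of each event
)
ENGINE = MergeTree
ORDER BY (event_type, timestamp)
```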
Considerations and tips for using JSON {#considerations-and-tips-for-using-json}
The JSON type enables efficient columnar storage by flattening paths into subcolumns. But with flexibility comes responsibility. To use it effectively:

- Specify path types using hints in the column definition to specify types for known subcolumns, avoiding unnecessary type inference.
- Skip paths if you don't need the values, with SKIP and SKIP REGEXP, to reduce storage and improve performance.
- Avoid setting max_dynamic_paths too high - large values increase resource consumption and reduce efficiency. As a rule of thumb, keep it below 10,000.
:::note Type hints
Type hints offer more than just a way to avoid unnecessary type inference—they eliminate storage and processing indirection entirely. JSON paths with type hints are always stored just like traditional columns, bypassing the need for
discriminator columns
or dynamic resolution during query time. This means that with well-defined type hints, nested JSON fields achieve the same performance and efficiency as if they were modeled as top-level fields from the outset. As a result, for datasets that are mostly consistent but still benefit from the flexibility of JSON, type hints provide a convenient way to preserve performance without needing to restructure your schema or ingest pipeline.
:::
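Combining these tips, a hypothetical logs table (all names and paths are illustrative) might declare its JSON column as:

```sql
CREATE TABLE logs
(
    `data` JSON(
        max_dynamic_paths = 1024,  -- cap dynamic paths well below 10k
        user_id UInt64,            -- type hints for known subcolumns
        ts DateTime,
        SKIP debug.trace           -- drop values we never query
    )
)
ENGINE = MergeTree
ORDER BY tuple()
```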
Advanced features {#advanced-features}
- JSON columns can be used in primary keys like any other columns. Codecs cannot be specified for a subcolumn.
- They support introspection via functions like JSONAllPathsWithTypes() and JSONDynamicPaths().
- You can read nested sub-objects using the .^ syntax.
- Query syntax may differ from standard SQL and may require special casting or operators for nested fields.
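As a sketch (the table `t` and its JSON column `data` are assumed, not from this guide), introspection and the `.^` syntax look like:

```sql
-- List the dynamic paths stored in a JSON column named `data`
SELECT DISTINCT arrayJoin(JSONDynamicPaths(data)) FROM t;

-- Read a nested sub-object as JSON rather than as individual paths
SELECT data.^user FROM t LIMIT 1;
```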
For additional guidance, see
ClickHouse JSON documentation
or explore our blog post
A New Powerful JSON Data Type for ClickHouse
.
Examples {#examples}
Consider the following JSON sample, representing a row from the
Python PyPI dataset
:
```json
{
    "date": "2022-11-15",
    "country_code": "ES",
    "project": "clickhouse-connect",
    "type": "bdist_wheel",
    "installer": "pip",
    "python_minor": "3.9",
    "system": "Linux",
    "version": "0.3.0"
}
```
Let's assume this schema is static and the types can be well defined. Even if the data is in NDJSON format (a JSON row per line), there is no need to use the JSON type for such a schema. Simply define the schema with classic types.
```sql
CREATE TABLE pypi (
    `date` Date,
    `country_code` String,
    `project` String,
    `type` String,
    `installer` String,
    `python_minor` String,
    `system` String,
    `version` String
)
ENGINE = MergeTree
ORDER BY (project, date)
```
and insert JSON rows:
```sql
INSERT INTO pypi FORMAT JSONEachRow
{"date":"2022-11-15","country_code":"ES","project":"clickhouse-connect","type":"bdist_wheel","installer":"pip","python_minor":"3.9","system":"Linux","version":"0.3.0"}
```
Consider the
arXiv dataset
containing 2.5m scholarly papers. Each row in this dataset, distributed as NDJSON, represents a published academic paper. An example row is shown below:
```json
{
    "id": "2101.11408",
    "submitter": "Daniel Lemire",
    "authors": "Daniel Lemire",
    "title": "Number Parsing at a Gigabyte per Second",
    "comments": "Software at https://github.com/fastfloat/fast_float and\n https://github.com/lemire/simple_fastfloat_benchmark/",
    "journal-ref": "Software: Practice and Experience 51 (8), 2021",
    "doi": "10.1002/spe.2984",
    "report-no": null,
    "categories": "cs.DS cs.MS",
    "license": "http://creativecommons.org/licenses/by/4.0/",
    "abstract": "With disks and networks providing gigabytes per second ....\n",
    "versions": [
        {
            "created": "Mon, 11 Jan 2021 20:31:27 GMT",
            "version": "v1"
        },
        {
            "created": "Sat, 30 Jan 2021 23:57:29 GMT",
            "version": "v2"
        }
    ],
    "update_date": "2022-11-07",
    "authors_parsed": [
        [
            "Lemire",
            "Daniel",
            ""
        ]
    ]
}
```
While the JSON here is complex, with nested structures, it is predictable. The number and type of the fields will not change. While we could use the JSON type for this example, we can also just define the structure explicitly using
Tuples
and
Nested
types:
sql
CREATE TABLE arxiv
(
`id` String,
`submitter` String,
`authors` String,
`title` String,
`comments` String,
`journal-ref` String,
`doi` String,
`report-no` String,
`categories` String,
`license` String,
`abstract` String,
`versions` Array(Tuple(created String, version String)),
`update_date` Date,
`authors_parsed` Array(Array(String))
)
ENGINE = MergeTree
ORDER BY update_date
Again we can insert the data as JSON:
sql
INSERT INTO arxiv FORMAT JSONEachRow
{"id":"2101.11408","submitter":"Daniel Lemire","authors":"Daniel Lemire","title":"Number Parsing at a Gigabyte per Second","comments":"Software at https://github.com/fastfloat/fast_float and\n https://github.com/lemire/simple_fastfloat_benchmark/","journal-ref":"Software: Practice and Experience 51 (8), 2021","doi":"10.1002/spe.2984","report-no":null,"categories":"cs.DS cs.MS","license":"http://creativecommons.org/licenses/by/4.0/","abstract":"With disks and networks providing gigabytes per second ....\n","versions":[{"created":"Mon, 11 Jan 2021 20:31:27 GMT","version":"v1"},{"created":"Sat, 30 Jan 2021 23:57:29 GMT","version":"v2"}],"update_date":"2022-11-07","authors_parsed":[["Lemire","Daniel",""]]}
Suppose another column called
tags
is added. If this were simply a list of strings, we could model this as an
Array(String)
, but let's assume users can add arbitrary tag structures with mixed types (notice that
score
is either a string or an integer). Our modified JSON document:
json
{
"id": "2101.11408",
"submitter": "Daniel Lemire",
"authors": "Daniel Lemire",
"title": "Number Parsing at a Gigabyte per Second",
"comments": "Software at https://github.com/fastfloat/fast_float and\n https://github.com/lemire/simple_fastfloat_benchmark/",
"journal-ref": "Software: Practice and Experience 51 (8), 2021",
"doi": "10.1002/spe.2984",
"report-no": null,
"categories": "cs.DS cs.MS",
"license": "http://creativecommons.org/licenses/by/4.0/",
"abstract": "With disks and networks providing gigabytes per second ....\n",
"versions": [
{
"created": "Mon, 11 Jan 2021 20:31:27 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Jan 2021 23:57:29 GMT",
"version": "v2"
}
],
"update_date": "2022-11-07",
"authors_parsed": [
[
"Lemire",
"Daniel",
""
]
],
"tags": {
"tag_1": {
"name": "ClickHouse user",
"score": "A+",
"comment": "A good read, applicable to ClickHouse"
},
"28_03_2025": {
"name": "professor X",
"score": 10,
"comment": "Didn't learn much",
"updates": [
{
"name": "professor X",
"comment": "Wolverine found more interesting"
}
]
}
}
}
In this case, we could model the arXiv documents as either all JSON or simply add a JSON
tags
column. We provide both examples below:
sql
CREATE TABLE arxiv
(
`doc` JSON(update_date Date)
)
ENGINE = MergeTree
ORDER BY doc.update_date
:::note
We provide a type hint for the
update_date
column in the JSON definition, as we use it in the ordering/primary key. This helps ClickHouse know that this column won't be null and tells it which
update_date
subcolumn to use (a path can have one subcolumn per inferred type, so it would otherwise be ambiguous).
:::
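Because of the hint, doc.update_date behaves as a regular Date column in queries; for example (an illustrative query):

```sql
-- The hinted subcolumn needs no cast and can drive date filters directly.
SELECT count()
FROM arxiv
WHERE doc.update_date >= '2022-01-01'
```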
We can insert into this table and view the subsequently inferred schema using the
JSONAllPathsWithTypes
function and
PrettyJSONEachRow
output format:
sql
INSERT INTO arxiv FORMAT JSONAsObject
{"id":"2101.11408","submitter":"Daniel Lemire","authors":"Daniel Lemire","title":"Number Parsing at a Gigabyte per Second","comments":"Software at https://github.com/fastfloat/fast_float and\n https://github.com/lemire/simple_fastfloat_benchmark/","journal-ref":"Software: Practice and Experience 51 (8), 2021","doi":"10.1002/spe.2984","report-no":null,"categories":"cs.DS cs.MS","license":"http://creativecommons.org/licenses/by/4.0/","abstract":"With disks and networks providing gigabytes per second ....\n","versions":[{"created":"Mon, 11 Jan 2021 20:31:27 GMT","version":"v1"},{"created":"Sat, 30 Jan 2021 23:57:29 GMT","version":"v2"}],"update_date":"2022-11-07","authors_parsed":[["Lemire","Daniel",""]],"tags":{"tag_1":{"name":"ClickHouse user","score":"A+","comment":"A good read, applicable to ClickHouse"},"28_03_2025":{"name":"professor X","score":10,"comment":"Didn't learn much","updates":[{"name":"professor X","comment":"Wolverine found more interesting"}]}}}
```sql
SELECT JSONAllPathsWithTypes(doc)
FROM arxiv
FORMAT PrettyJSONEachRow
{
"JSONAllPathsWithTypes(doc)": {
"abstract": "String",
"authors": "String",
"authors_parsed": "Array(Array(Nullable(String)))",
"categories": "String",
"comments": "String",
"doi": "String",
"id": "String",
"journal-ref": "String",
"license": "String",
"submitter": "String",
"tags.28_03_2025.comment": "String",
"tags.28_03_2025.name": "String",
"tags.28_03_2025.score": "Int64",
"tags.28_03_2025.updates": "Array(JSON(max_dynamic_types=16, max_dynamic_paths=256))",
"tags.tag_1.comment": "String",
"tags.tag_1.name": "String",
"tags.tag_1.score": "String",
"title": "String",
"update_date": "Date",
"versions": "Array(JSON(max_dynamic_types=16, max_dynamic_paths=256))"
}
}
1 row in set. Elapsed: 0.003 sec.
```
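The paths listed above can be read with dot notation; for example (a sketch, untyped paths are returned as Dynamic values):

```sql
-- Access JSON paths as subcolumns of `doc`.
SELECT
    doc.title,
    doc.update_date,
    doc.tags.tag_1.score
FROM arxiv
```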
Alternatively, we could model this using our earlier schema and a JSON
tags
column. This is generally preferred, minimizing the inference required by ClickHouse:
sql
CREATE TABLE arxiv
(
`id` String,
`submitter` String,
`authors` String,
`title` String,
`comments` String,
`journal-ref` String,
`doi` String,
`report-no` String,
`categories` String,
`license` String,
`abstract` String,
`versions` Array(Tuple(created String, version String)),
`update_date` Date,
`authors_parsed` Array(Array(String)),
`tags` JSON()
)
ENGINE = MergeTree
ORDER BY update_date
sql
INSERT INTO arxiv FORMAT JSONEachRow
{"id":"2101.11408","submitter":"Daniel Lemire","authors":"Daniel Lemire","title":"Number Parsing at a Gigabyte per Second","comments":"Software at https://github.com/fastfloat/fast_float and\n https://github.com/lemire/simple_fastfloat_benchmark/","journal-ref":"Software: Practice and Experience 51 (8), 2021","doi":"10.1002/spe.2984","report-no":null,"categories":"cs.DS cs.MS","license":"http://creativecommons.org/licenses/by/4.0/","abstract":"With disks and networks providing gigabytes per second ....\n","versions":[{"created":"Mon, 11 Jan 2021 20:31:27 GMT","version":"v1"},{"created":"Sat, 30 Jan 2021 23:57:29 GMT","version":"v2"}],"update_date":"2022-11-07","authors_parsed":[["Lemire","Daniel",""]],"tags":{"tag_1":{"name":"ClickHouse user","score":"A+","comment":"A good read, applicable to ClickHouse"},"28_03_2025":{"name":"professor X","score":10,"comment":"Didn't learn much","updates":[{"name":"professor X","comment":"Wolverine found more interesting"}]}}}
We can now inspect the inferred types of the
tags
subcolumns.
```sql
SELECT JSONAllPathsWithTypes(tags)
FROM arxiv
FORMAT PrettyJSONEachRow
{
"JSONAllPathsWithTypes(tags)": {
"28_03_2025.comment": "String",
"28_03_2025.name": "String",
"28_03_2025.score": "Int64",
"28_03_2025.updates": "Array(JSON(max_dynamic_types=16, max_dynamic_paths=256))",
"tag_1.comment": "String",
"tag_1.name": "String",
"tag_1.score": "String"
}
}
1 row in set. Elapsed: 0.002 sec.
```
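Individual tag paths can then be selected like ordinary columns; for example (a sketch, with backticks quoting the non-standard key and a CAST where a concrete type is wanted):

```sql
SELECT
    tags.tag_1.name,
    CAST(tags.`28_03_2025`.score AS Int64) AS score
FROM arxiv
```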
slug: /best-practices/avoid-optimize-final
sidebar_position: 10
sidebar_label: 'Avoid optimize final'
title: 'Avoid OPTIMIZE FINAL'
description: 'Page describing why you should avoid the OPTIMIZE FINAL clause in ClickHouse'
keywords: ['avoid OPTIMIZE FINAL', 'background merges']
hide_title: true
doc_type: 'guide'
Avoid
OPTIMIZE FINAL
import Content from '@site/docs/best-practices/_snippets/_avoid_optimize_final.md';
slug: /best-practices/avoid-mutations
sidebar_position: 10
sidebar_label: 'Avoid mutations'
title: 'Avoid mutations'
description: 'Page describing why to avoid mutations in ClickHouse'
keywords: ['mutations']
doc_type: 'guide'
import Content from '@site/docs/best-practices/_snippets/_avoid_mutations.md';
slug: /best-practices/choosing-a-primary-key
sidebar_position: 10
sidebar_label: 'Choosing a primary key'
title: 'Choosing a Primary Key'
description: 'Page describing how to choose a primary key in ClickHouse'
keywords: ['primary key']
show_related_blogs: true
doc_type: 'guide'
import Image from '@theme/IdealImage';
import create_primary_key from '@site/static/images/bestpractices/create_primary_key.gif';
import primary_key from '@site/static/images/bestpractices/primary_key.gif';
We interchangeably use the term "ordering key" to refer to the "primary key" on this page. Strictly,
these differ in ClickHouse
, but for the purposes of this document, readers can use them interchangeably, with the ordering key referring to the columns specified in the table
ORDER BY
.
Note that a ClickHouse primary key works
very differently
to those familiar with similar terms in OLTP databases such as Postgres.
Choosing an effective primary key in ClickHouse is crucial for query performance and storage efficiency. ClickHouse organizes data into parts, each containing its own sparse primary index. This index significantly speeds up queries by reducing the volume of data scanned. Additionally, because the primary key determines the physical order of data on disk, it directly impacts compression efficiency. Optimally ordered data compresses more effectively, which further enhances performance by reducing I/O.
When selecting an ordering key, prioritize columns frequently used in query filters (i.e. the
WHERE
clause), especially those that exclude large numbers of rows.
Columns highly correlated with other data in the table are also beneficial, as contiguous storage improves compression ratios and memory efficiency during
GROUP BY
and
ORDER BY
operations.
Some simple rules can be applied to help choose an ordering key. The following can sometimes be in conflict, so consider these in order.
Users can identify a number of keys from this process, with 4-5 typically sufficient:
:::note Important
Ordering keys must be defined at table creation and cannot be changed afterwards. Additional orderings can be added to a table after (or before) data insertion through a feature known as projections. Be aware these result in data duplication. Further details
here
.
:::
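As a sketch of the projection approach mentioned in the note above (the table and column names are illustrative, based on the Stack Overflow schema used elsewhere on this page):

```sql
-- Add an alternative ordering after creation via a projection.
-- Note: the projected data is stored (duplicated) alongside the table.
ALTER TABLE posts ADD PROJECTION posts_by_owner
(
    SELECT * ORDER BY OwnerUserId
);
-- Build the projection for parts that already exist.
ALTER TABLE posts MATERIALIZE PROJECTION posts_by_owner;
```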
Example {#example}
Consider the following
posts_unordered
table. This contains a row per Stack Overflow post.
This table has no primary key - as indicated by
ORDER BY tuple()
.
sql
CREATE TABLE posts_unordered
(
`Id` Int32,
`PostTypeId` Enum('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4,
'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
`AcceptedAnswerId` UInt32,
`CreationDate` DateTime,
`Score` Int32,
`ViewCount` UInt32,
`Body` String,
`OwnerUserId` Int32,
`OwnerDisplayName` String,
`LastEditorUserId` Int32,
`LastEditorDisplayName` String,
`LastEditDate` DateTime,
`LastActivityDate` DateTime,
`Title` String,
`Tags` String,
`AnswerCount` UInt16,
`CommentCount` UInt8,
`FavoriteCount` UInt8,
`ContentLicense` LowCardinality(String),
`ParentId` String,
`CommunityOwnedDate` DateTime,
`ClosedDate` DateTime
)
ENGINE = MergeTree
ORDER BY tuple()
Suppose a user wishes to compute the number of questions submitted after 2024, with this representing their most common access pattern.
```sql
SELECT count()
FROM stackoverflow.posts_unordered
WHERE (CreationDate >= '2024-01-01') AND (PostTypeId = 'Question')
┌─count()─┐
│ 192611 │
└─────────┘
--highlight-next-line
1 row in set. Elapsed: 0.055 sec. Processed 59.82 million rows, 361.34 MB (1.09 billion rows/s., 6.61 GB/s.)
```
Note the number of rows and bytes read by this query. Without a primary key, queries must scan the entire dataset.
Using
EXPLAIN indexes=1
confirms a full table scan due to lack of indexing.
```sql
EXPLAIN indexes = 1
SELECT count()
FROM stackoverflow.posts_unordered
WHERE (CreationDate >= '2024-01-01') AND (PostTypeId = 'Question')
┌─explain───────────────────────────────────────────────────┐
│ Expression ((Project names + Projection)) │
│ Aggregating │
│ Expression (Before GROUP BY) │
│ Expression │
│ ReadFromMergeTree (stackoverflow.posts_unordered) │
└───────────────────────────────────────────────────────────┘
5 rows in set. Elapsed: 0.003 sec.
```
Assume a table
posts_ordered
, containing the same data, is defined with an
ORDER BY
defined as
(PostTypeId, toDate(CreationDate))
i.e.
sql
CREATE TABLE posts_ordered
(
`Id` Int32,
`PostTypeId` Enum('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6,
'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
...
)
ENGINE = MergeTree
ORDER BY (PostTypeId, toDate(CreationDate))
PostTypeId
has a cardinality of 8 and represents the logical choice for the first entry in our ordering key. Recognizing that date-granularity filtering is likely to be sufficient (it will still benefit datetime filters), we use
toDate(CreationDate)
as the second component of our key. This will also produce a smaller index, as a date can be represented in 16 bits, speeding up filtering.
The following animation shows how an optimized sparse primary index is created for the Stack Overflow posts table. Instead of indexing individual rows, the index targets blocks of rows:
If the same query is repeated on a table with this ordering key:
```sql
SELECT count()
FROM stackoverflow.posts_ordered
WHERE (CreationDate >= '2024-01-01') AND (PostTypeId = 'Question')
┌─count()─┐
│ 192611 │
└─────────┘
--highlight-next-line
1 row in set. Elapsed: 0.013 sec. Processed 196.53 thousand rows, 1.77 MB (14.64 million rows/s., 131.78 MB/s.)
```
This query now leverages sparse indexing, significantly reducing the amount of data read and speeding up the execution time by 4x - note the reduction of rows and bytes read.
The use of the index can be confirmed with an
EXPLAIN indexes=1
.
```sql
EXPLAIN indexes = 1
SELECT count()
FROM stackoverflow.posts_ordered
WHERE (CreationDate >= '2024-01-01') AND (PostTypeId = 'Question')
┌─explain─────────────────────────────────────────────────────────────────────────────────────┐
│ Expression ((Project names + Projection)) │
│ Aggregating │
│ Expression (Before GROUP BY) │
│ Expression │
│ ReadFromMergeTree (stackoverflow.posts_ordered) │
│ Indexes: │
│ PrimaryKey │
│ Keys: │
│ PostTypeId │
│ toDate(CreationDate) │
│ Condition: and((PostTypeId in [1, 1]), (toDate(CreationDate) in [19723, +Inf))) │
│ Parts: 14/14 │
│ Granules: 39/7578 │
└─────────────────────────────────────────────────────────────────────────────────────────────┘
13 rows in set. Elapsed: 0.004 sec.
```
Additionally, we visualize how the sparse index prunes all row blocks that can't possibly contain matches for our example query:
:::note
All columns in a table will be sorted based on the value of the specified ordering key, regardless of whether they are included in the key itself. For instance, if
CreationDate
is used as the key, the order of values in all other columns will correspond to the order of values in the
CreationDate
column. Multiple ordering keys can be specified - this will order with the same semantics as an
ORDER BY
clause in a
SELECT
query.
:::
A complete advanced guide on choosing primary keys can be found
here
.
For deeper insights into how ordering keys improve compression and further optimize storage, explore the official guides on
Compression in ClickHouse
and
Column Compression Codecs
.
slug: /best-practices/selecting-an-insert-strategy
sidebar_position: 10
sidebar_label: 'Selecting an insert strategy'
title: 'Selecting an insert strategy'
description: 'Page describing how to choose an insert strategy in ClickHouse'
keywords: ['INSERT', 'asynchronous inserts', 'compression', 'batch inserts']
show_related_blogs: true
doc_type: 'guide'
import Image from '@theme/IdealImage';
import insert_process from '@site/static/images/bestpractices/insert_process.png';
import async_inserts from '@site/static/images/bestpractices/async_inserts.png';
import AsyncInserts from '@site/docs/best-practices/_snippets/_async_inserts.md';
import BulkInserts from '@site/docs/best-practices/_snippets/_bulk_inserts.md';
Efficient data ingestion forms the basis of high-performance ClickHouse deployments. Selecting the right insert strategy can dramatically impact throughput, cost, and reliability. This section outlines best practices, tradeoffs, and configuration options to help you make the right decision for your workload.
:::note
The following assumes you are pushing data to ClickHouse via a client. If you are pulling data into ClickHouse, e.g. using built-in table functions such as
s3
and
gcs
, we recommend our guide
"Optimizing for S3 Insert and Read Performance"
.
:::
Synchronous inserts by default {#synchronous-inserts-by-default}
By default, inserts into ClickHouse are synchronous. Each insert query immediately creates a storage part on disk, including metadata and indexes.
:::note Use synchronous inserts if you can batch the data client side
If not, see
Asynchronous inserts
below.
:::
We briefly review ClickHouse's MergeTree insert mechanics below:
Client-side steps {#client-side-steps}
For optimal performance, data must be ①
batched
, making batch size the
first decision
.
ClickHouse stores inserted data on disk,
ordered
by the table's primary key column(s). The
second decision
is whether to ② pre-sort the data before transmission to the server. If a batch arrives pre-sorted by primary key column(s), ClickHouse can
skip
the ⑩ sorting step, speeding up ingestion.
If the data to be ingested has no predefined format, the
key decision
is choosing a format. ClickHouse supports inserting data in
over 70 formats
. However, when using the ClickHouse command-line client or programming language clients, this choice is often handled automatically. If needed, this automatic selection can also be overridden explicitly.
The next
major decision
is ④ whether to compress data before transmission to the ClickHouse server. Compression reduces transfer size and improves network efficiency, leading to faster data transfers and lower bandwidth usage, especially for large datasets.
The data is ⑤ transmitted to a ClickHouse network interface—either the
native
or
HTTP
interface (which we
compare
later in this post).
Server-side steps {#server-side-steps}
After ⑥ receiving the data, ClickHouse ⑦ decompresses it if compression was used, then ⑧ parses it from the originally sent format.
Using the values from that formatted data and the target table's
DDL
statement, ClickHouse ⑨ builds an in-memory
block
in the MergeTree format, ⑩
sorts
rows by the primary key columns if they are not already pre-sorted, ⑪ creates a
sparse primary index
, ⑫ applies
per-column compression
, and ⑬ writes the data as a new ⑭
data part
to disk.
Batch inserts if synchronous {#batch-inserts-if-synchronous}
Ensure idempotent retries {#ensure-idempotent-retries}
Synchronous inserts are also
idempotent
. When using MergeTree engines, ClickHouse will deduplicate inserts by default. This protects against ambiguous failure cases, such as:
The insert succeeded but the client never received an acknowledgment due to a network interruption.
The insert failed server-side and timed out.
In both cases, it's safe to
retry the insert
— as long as the batch contents and order remain identical. For this reason, it's critical that clients retry consistently, without modifying or reordering data.
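As a sketch of what deduplication means for retries (assuming a replicated table, where insert deduplication is enabled by default; the table and values are illustrative):

```sql
-- First attempt: the block is written and its hash recorded.
INSERT INTO events VALUES (1, 'a'), (2, 'b');

-- Retry after an ambiguous failure: a byte-identical block in the same
-- order, so the server recognizes the hash and silently drops the duplicate.
INSERT INTO events VALUES (1, 'a'), (2, 'b');
```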
Choose the right insert target {#choose-the-right-insert-target}
For sharded clusters, you have two options:
Insert directly into a
MergeTree
or
ReplicatedMergeTree
table. This is the most efficient option when the client can perform load balancing across shards. With
internal_replication = true
, ClickHouse handles replication transparently.
Insert into a
Distributed table
. This allows clients to send data to any node and let ClickHouse forward it to the correct shard. This is simpler but slightly less performant due to the extra forwarding step.
internal_replication = true
is still recommended.
In ClickHouse Cloud all nodes read and write to the same single shard. Inserts are automatically balanced across nodes. Users can simply send inserts to the exposed endpoint.
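The Distributed-table option above can be sketched as follows (the cluster, database, and table names are illustrative):

```sql
-- Local shard table, replicated within each shard.
CREATE TABLE events_local ON CLUSTER my_cluster
(
    ts      DateTime,
    payload String
)
ENGINE = ReplicatedMergeTree
ORDER BY ts;

-- Distributed table: clients can insert via any node; rows are
-- forwarded to the owning shard using the sharding key.
CREATE TABLE events_dist ON CLUSTER my_cluster AS events_local
ENGINE = Distributed(my_cluster, default, events_local, rand());

INSERT INTO events_dist VALUES (now(), 'hello');
```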
Choose the right format {#choose-the-right-format}
Choosing the right input format is crucial for efficient data ingestion in ClickHouse. With over 70 supported formats, selecting the most performant option can significantly impact insert speed, CPU and memory usage, and overall system efficiency.
While flexibility is useful for data engineering and file-based imports,
applications should prioritize performance-oriented formats
:
Native format
(recommended): Most efficient. Column-oriented, minimal parsing required server-side. Used by default in Go and Python clients.
RowBinary
: Efficient row-based format, ideal if columnar transformation is hard client-side. Used by the Java client.
JSONEachRow
: Easy to use but expensive to parse. Suitable for low-volume use cases or quick integrations.
Use compression {#use-compression}
Compression plays a critical role in reducing network overhead, speeding up inserts, and lowering storage costs in ClickHouse. Used effectively, it enhances ingestion performance without requiring changes to data format or schema.
Compressing insert data reduces the size of the payload sent over the network, minimizing bandwidth usage and accelerating transmission.
For inserts, compression is especially effective when used with the Native format, which already matches ClickHouse's internal columnar storage model. In this setup, the server can efficiently decompress and directly store the data with minimal transformation.
Use LZ4 for speed, ZSTD for compression ratio {#use-lz4-for-speed-zstd-for-compression-ratio}
ClickHouse supports several compression codecs during data transmission. Two common options are:
LZ4
: Fast and lightweight. It reduces data size significantly with minimal CPU overhead, making it ideal for high-throughput inserts and default in most ClickHouse clients.
ZSTD
: Higher compression ratio but more CPU-intensive. It's useful when network transfer costs are high—such as in cross-region or cloud provider scenarios—though it increases client-side compute and server-side decompression time slightly.
Best practice: Use LZ4 unless you have constrained bandwidth or incur data egress costs — then consider ZSTD.
:::note
In tests from the
FastFormats benchmark
, LZ4-compressed Native inserts reduced data size by more than 50%, cutting ingestion time from 150s to 131s for a 5.6 GiB dataset. Switching to ZSTD compressed the same dataset down to 1.69 GiB, but increased server-side processing time slightly.
:::
Compression reduces resource usage {#compression-reduces-resource-usage}
Compression not only reduces network traffic—it also improves CPU and memory efficiency on the server. With compressed data, ClickHouse receives fewer bytes and spends less time parsing large inputs. This benefit is especially important when ingesting from multiple concurrent clients, such as in observability scenarios.
The impact of compression on CPU and memory is modest for LZ4, and moderate for ZSTD. Even under load, server-side efficiency improves due to the reduced data volume.
Combining compression with batching and an efficient input format (like Native) yields the best ingestion performance.
When using the native interface (e.g.
clickhouse-client
), LZ4 compression is enabled by default. You can optionally switch to ZSTD via settings.
With the
HTTP interface
, use the Content-Encoding header to apply compression (e.g. Content-Encoding: lz4). The entire payload must be compressed before sending.
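A minimal sketch of whole-payload compression for the HTTP interface (the server URL, port, and table name are assumptions; the curl step is shown commented out so the example stands alone):

```shell
# Compress the entire NDJSON payload up front, as the HTTP interface requires.
printf '%s\n' '{"id":1,"msg":"hi"}' '{"id":2,"msg":"ok"}' | gzip > payload.gz

# Then send it with the matching Content-Encoding header, e.g.:
# curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20JSONEachRow' \
#      -H 'Content-Encoding: gzip' --data-binary @payload.gz

# The compressed file round-trips to the original two rows.
gzip -dc payload.gz
```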
Pre-sort if low cost {#pre-sort-if-low-cost}
Pre-sorting data by primary key before insertion can improve ingestion efficiency in ClickHouse, particularly for large batches.
When data arrives pre-sorted, ClickHouse can skip or simplify the internal sorting step during part creation, reducing CPU usage and accelerating the insert process. Pre-sorting also improves compression efficiency, since similar values are grouped together—enabling codecs like LZ4 or ZSTD to achieve a better compression ratio. This is especially beneficial when combined with large batch inserts and compression, as it reduces both the processing overhead and the amount of data transferred.
That said, pre-sorting is an optional optimization—not a requirement.
ClickHouse sorts data highly efficiently using parallel processing, and in many cases, server-side sorting is faster or more convenient than pre-sorting client-side.
We recommend pre-sorting only if the data is already nearly ordered or if client-side resources (CPU, memory) are sufficient and underutilized.
In latency-sensitive or high-throughput use cases, such as observability, where data arrives out of order or from many agents, it's often better to skip pre-sorting and rely on ClickHouse's built-in performance.
Asynchronous inserts {#asynchronous-inserts}
Choose an interface—HTTP or native {#choose-an-interface}
Native {#choose-an-interface-native}
ClickHouse offers two main interfaces for data ingestion: the
native interface
and the
HTTP interface
—each with trade-offs between performance and flexibility. The native interface, used by
clickhouse-client
and select language clients like Go and C++, is purpose-built for performance. It always transmits data in ClickHouse's highly efficient Native format, supports block-wise compression with LZ4 or ZSTD, and minimizes server-side processing by offloading work such as parsing and format conversion to the client.
It even enables client-side computation of MATERIALIZED and DEFAULT column values, allowing the server to skip these steps entirely. This makes the native interface ideal for high-throughput ingestion scenarios where efficiency is critical.
HTTP {#choose-an-interface-http}
Unlike many traditional databases, ClickHouse also supports an HTTP interface, which by contrast prioritizes compatibility and flexibility. It allows data to be sent in any supported format — including JSON, CSV, Parquet, and others — and is widely supported across most ClickHouse clients, including Python, Java, JavaScript, and Rust.
This is often preferable to ClickHouse's native protocol, as it allows traffic to be easily switched with load balancers. We expect only small differences in insert performance compared with the native protocol, which incurs slightly less overhead.
However, it lacks the native protocol's deeper integration and cannot perform client-side optimizations like materialized value computation or automatic conversion to Native format. While HTTP inserts can still be compressed using standard HTTP headers (e.g. `Content-Encoding: lz4`), the compression is applied to the entire payload rather than individual data blocks. This interface is often preferred in environments where protocol simplicity, load balancing, or broad format compatibility is more important than raw performance.
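As a rough illustration of payload-level compression over HTTP, the following Python sketch builds a gzip-compressed CSV body and the header that would signal it. No request is sent; gzip is used because it is in the standard library (ClickHouse's HTTP interface also accepts other encodings such as lz4 and zstd), and the header set shown is illustrative, not a complete request:

```python
import gzip

# The entire payload is compressed at once -- unlike the native protocol's
# block-wise compression.
rows = "1,alice\n2,bob\n3,carol\n"
body = gzip.compress(rows.encode("utf-8"))

headers = {
    "Content-Encoding": "gzip",  # tells the server how the whole body is encoded
}

# Round-trip to show the payload survives compression intact.
assert gzip.decompress(body).decode("utf-8") == rows
print(len(rows.encode("utf-8")), len(body))
```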
For a more detailed description of these interfaces, see here.
slug: /best-practices/use-data-skipping-indices-where-appropriate
sidebar_position: 10
sidebar_label: 'Data skipping indices'
title: 'Use data skipping indices where appropriate'
description: 'Page describing how and when to use data skipping indices'
keywords: ['data skipping index', 'skip index']
show_related_blogs: true
doc_type: 'guide'
import Image from '@theme/IdealImage';
import building_skipping_indices from '@site/static/images/bestpractices/building_skipping_indices.gif';
import using_skipping_indices from '@site/static/images/bestpractices/using_skipping_indices.gif';
Data skipping indices should be considered when previous best practices have been followed, i.e. types are optimized, a good primary key has been selected, and materialized views have been exploited. If you're new to skipping indices, this guide is a good place to start.
These types of indices can be used to accelerate query performance if used carefully with an understanding of how they work.
ClickHouse provides a powerful mechanism called data skipping indices that can dramatically reduce the amount of data scanned during query execution — particularly when the primary key isn't helpful for a specific filter condition. Unlike traditional databases that rely on row-based secondary indexes (like B-trees), ClickHouse is a column-store and doesn't store row locations in a way that supports such structures. Instead, it uses skip indexes, which help it avoid reading blocks of data guaranteed not to match a query's filtering conditions.
Skip indexes work by storing metadata about blocks of data — such as min/max values, value sets, or Bloom filter representations — and using this metadata during query execution to determine which data blocks can be skipped entirely. They apply only to the MergeTree family of table engines and are defined using an expression, an index type, a name, and a granularity that defines the size of each indexed block. These indexes are stored alongside the table data and are consulted when the query filter matches the index expression.
There are several types of data skipping indexes, each suited to different types of queries and data distributions:
- minmax: Tracks the minimum and maximum value of an expression per block. Ideal for range queries on loosely sorted data.
- set(N): Tracks a set of values up to a specified size N for each block. Effective on columns with low cardinality per block.
- bloom_filter: Probabilistically determines if a value exists in a block, allowing fast approximate filtering for set membership. Effective for optimizing queries looking for the "needle in a haystack", where a positive match is needed.
- tokenbf_v1 / ngrambf_v1: Specialized Bloom filter variants designed for searching tokens or character sequences in strings — particularly useful for log data or text search use cases.
While powerful, skip indexes must be used with care. They only provide benefit when they eliminate a meaningful number of data blocks, and can actually introduce overhead if the query or data structure doesn't align. If even a single matching value exists in a block, that entire block must still be read.
Effective skip index usage often depends on a strong correlation between the indexed column and the table's primary key, or inserting data in a way that groups similar values together.
In general, data skipping indices are best applied after ensuring proper primary key design and type optimization. They are particularly useful for:
- Columns with high overall cardinality but low cardinality within a block.
- Rare values that are critical for search (e.g. error codes, specific IDs).
- Cases where filtering occurs on non-primary key columns with localized distribution.
Always:
- Test skip indexes on real data with realistic queries. Try different index types and granularity values.
- Evaluate their impact using tools like send_logs_level='trace' and `EXPLAIN indexes=1` to view index effectiveness.
Always evaluate the size of an index and how it is impacted by granularity. Reducing the granularity will often improve performance up to a point, as more granules can be filtered out, leaving fewer to be scanned. However, as index size grows with lower granularity, performance can also degrade. Measure the performance and index size for various granularity values. This is particularly pertinent for bloom filter indexes.
When used appropriately, skip indexes can provide a substantial performance boost — when used blindly, they can add unnecessary cost.
For a more detailed guide on Data Skipping Indices, see here.
Example {#example}
Consider the following optimized table. This contains Stack Overflow data with a row per post.
```sql
CREATE TABLE stackoverflow.posts
(
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
    `PostTypeId` Enum8('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime64(3, 'UTC'),
    `Score` Int32,
    `ViewCount` UInt32 CODEC(Delta(4), ZSTD(1)),
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime64(3, 'UTC') CODEC(Delta(8), ZSTD(1)),
    `LastActivityDate` DateTime64(3, 'UTC'),
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16 CODEC(Delta(2), ZSTD(1)),
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime64(3, 'UTC'),
    `ClosedDate` DateTime64(3, 'UTC')
)
ENGINE = MergeTree
PARTITION BY toYear(CreationDate)
ORDER BY (PostTypeId, toDate(CreationDate))
```
This table is optimized for queries which filter and aggregate by post type and date. Suppose we wished to count the number of posts with over 10,000,000 views published after 2009.
```sql
SELECT count()
FROM stackoverflow.posts
WHERE (CreationDate > '2009-01-01') AND (ViewCount > 10000000)
┌─count()─┐
│ 5 │
└─────────┘
1 row in set. Elapsed: 0.720 sec. Processed 59.55 million rows, 230.23 MB (82.66 million rows/s., 319.56 MB/s.)
```
This query is able to exclude some of the rows (and granules) using the primary index. However, the majority of rows still need to be read, as indicated by the above response and the following `EXPLAIN indexes = 1`:
```sql
EXPLAIN indexes = 1
SELECT count()
FROM stackoverflow.posts
WHERE (CreationDate > '2009-01-01') AND (ViewCount > 10000000)
LIMIT 1
┌─explain──────────────────────────────────────────────────────────┐
│ Expression ((Project names + Projection)) │
│ Limit (preliminary LIMIT (without OFFSET)) │
│ Aggregating │
│ Expression (Before GROUP BY) │
│ Expression │
│ ReadFromMergeTree (stackoverflow.posts) │
│ Indexes: │
│ MinMax │
│ Keys: │
│ CreationDate │
│ Condition: (CreationDate in ('1230768000', +Inf)) │
│ Parts: 123/128 │
│ Granules: 8513/8545 │
│ Partition │
│ Keys: │
│ toYear(CreationDate) │
│ Condition: (toYear(CreationDate) in [2009, +Inf)) │
│ Parts: 123/123 │
│ Granules: 8513/8513 │
│ PrimaryKey │
│ Keys: │
│ toDate(CreationDate) │
│ Condition: (toDate(CreationDate) in [14245, +Inf)) │
│ Parts: 123/123 │
│ Granules: 8513/8513 │
└──────────────────────────────────────────────────────────────────┘
25 rows in set. Elapsed: 0.070 sec.
```
A simple analysis shows that ViewCount is correlated with the CreationDate (a primary key) as one might expect — the longer a post exists, the more time it has to be viewed.
```sql
SELECT toDate(CreationDate) AS day, avg(ViewCount) AS view_count FROM stackoverflow.posts WHERE day > '2009-01-01' GROUP BY day
```
This therefore makes a logical choice for a data skipping index. Given the numeric type, a minmax index makes sense. We add an index using the following `ALTER TABLE` commands — first adding it, then "materializing it".
```sql
ALTER TABLE stackoverflow.posts
(ADD INDEX view_count_idx ViewCount TYPE minmax GRANULARITY 1);
ALTER TABLE stackoverflow.posts MATERIALIZE INDEX view_count_idx;
```
This index could have also been added during initial table creation. The schema with the minmax index defined as part of the DDL:
```sql
CREATE TABLE stackoverflow.posts
(
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
    `PostTypeId` Enum8('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime64(3, 'UTC'),
    `Score` Int32,
    `ViewCount` UInt32 CODEC(Delta(4), ZSTD(1)),
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime64(3, 'UTC') CODEC(Delta(8), ZSTD(1)),
    `LastActivityDate` DateTime64(3, 'UTC'),
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16 CODEC(Delta(2), ZSTD(1)),
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime64(3, 'UTC'),
    `ClosedDate` DateTime64(3, 'UTC'),
    INDEX view_count_idx ViewCount TYPE minmax GRANULARITY 1 --index here
)
ENGINE = MergeTree
PARTITION BY toYear(CreationDate)
ORDER BY (PostTypeId, toDate(CreationDate))
```
The following animation illustrates how our minmax skipping index is built for the example table, tracking the minimum and maximum ViewCount values for each block of rows (granule) in the table:
Repeating our earlier query shows significant performance improvements. Notice the reduced number of rows scanned:
```sql
SELECT count()
FROM stackoverflow.posts
WHERE (CreationDate > '2009-01-01') AND (ViewCount > 10000000)
┌─count()─┐
│ 5 │
└─────────┘
1 row in set. Elapsed: 0.012 sec. Processed 39.11 thousand rows, 321.39 KB (3.40 million rows/s., 27.93 MB/s.)
```
An `EXPLAIN indexes = 1` confirms use of the index.
```sql
EXPLAIN indexes = 1
SELECT count()
FROM stackoverflow.posts
WHERE (CreationDate > '2009-01-01') AND (ViewCount > 10000000)
┌─explain────────────────────────────────────────────────────────────┐
│ Expression ((Project names + Projection)) │
│ Aggregating │
│ Expression (Before GROUP BY) │
│ Expression │
│ ReadFromMergeTree (stackoverflow.posts) │
│ Indexes: │
│ MinMax │
│ Keys: │
│ CreationDate │
│ Condition: (CreationDate in ('1230768000', +Inf)) │
│ Parts: 123/128 │
│ Granules: 8513/8545 │
│ Partition │
│ Keys: │
│ toYear(CreationDate) │
│ Condition: (toYear(CreationDate) in [2009, +Inf)) │
│ Parts: 123/123 │
│ Granules: 8513/8513 │
│ PrimaryKey │
│ Keys: │
│ toDate(CreationDate) │
│ Condition: (toDate(CreationDate) in [14245, +Inf)) │
│ Parts: 123/123 │
│ Granules: 8513/8513 │
│ Skip │
│ Name: view_count_idx │
│ Description: minmax GRANULARITY 1 │
│ Parts: 5/123 │
│ Granules: 23/8513 │
└────────────────────────────────────────────────────────────────────┘
29 rows in set. Elapsed: 0.211 sec.
```
We also show an animation of how the minmax skipping index prunes all row blocks that cannot possibly contain matches for the ViewCount > 10,000,000 predicate in our example query:
Related docs {#related-docs}
- Data skipping indices guide
- Data skipping index examples
- Manipulating data skipping indices
- System table information
slug: /best-practices/minimize-optimize-joins
sidebar_position: 10
sidebar_label: 'Minimize and optimize JOINs'
title: 'Minimize and optimize JOINs'
description: 'Page describing best practices for JOINs'
keywords: ['JOIN', 'Parallel Hash JOIN']
show_related_blogs: true
doc_type: 'guide'
import Image from '@theme/IdealImage';
import joins from '@site/static/images/bestpractices/joins-speed-memory.png';
ClickHouse supports a wide variety of JOIN types and algorithms, and JOIN performance has improved significantly in recent releases. However, JOINs are inherently more expensive than querying from a single, denormalized table. Denormalization shifts computational work from query time to insert or pre-processing time, which often results in significantly lower latency at runtime. For real-time or latency-sensitive analytical queries, denormalization is strongly recommended.
In general, denormalize when:
- Tables change infrequently or when batch refreshes are acceptable.
- Relationships are not many-to-many or not excessively high in cardinality.
- Only a limited subset of the columns will be queried, i.e. certain columns can be excluded from denormalization.
- You have the capability to shift processing out of ClickHouse into upstream systems like Flink, where real-time enrichment or flattening can be managed.
Not all data needs to be denormalized — focus on the attributes that are frequently queried. Also consider materialized views to incrementally compute aggregates instead of duplicating entire sub-tables. When schema updates are rare and latency is critical, denormalization offers the best performance trade-off.
For a full guide on denormalizing data in ClickHouse, see here.
When JOINs are required {#when-joins-are-required}
When JOINs are required, ensure you're using at least version 24.12 and preferably the latest version, as JOIN performance continues to improve with each new release. As of ClickHouse 24.12, the query planner now automatically places the smaller table on the right side of the join for optimal performance — a task that previously had to be done manually. Even more enhancements are coming soon, including more aggressive filter pushdown and automatic re-ordering of multiple joins.
Follow these best practices to improve JOIN performance:
Avoid Cartesian products: If a value on the left-hand side matches multiple values on the right-hand side, the JOIN will return multiple rows — the so-called Cartesian product. If your use case doesn't need all matches from the right-hand side but just any single match, you can use ANY JOINs (e.g. LEFT ANY JOIN). They are faster and use less memory than regular JOINs.
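The semantics can be sketched in Python. This is a conceptual model of LEFT ANY JOIN, not ClickHouse's implementation, and it arbitrarily keeps the first right-side match per key:

```python
left = [(1, "q1"), (2, "q2"), (3, "q3")]
right = [(1, "a1"), (1, "a2"), (2, "a3")]  # key 1 has two matches

def left_any_join(left_rows, right_rows):
    # Keep a single match per key instead of all of them.
    first_match = {}
    for key, val in right_rows:
        first_match.setdefault(key, val)
    # Unmatched left rows get None (NULL) as in a LEFT join.
    return [(k, lv, first_match.get(k)) for k, lv in left_rows]

result = left_any_join(left, right)
assert result == [(1, "q1", "a1"), (2, "q2", "a3"), (3, "q3", None)]
assert len(result) == len(left)  # row count never exceeds the left side
```

A regular LEFT JOIN would instead emit two rows for key 1, which is exactly the multiplication ANY JOINs avoid.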
Reduce the sizes of JOINed tables: The runtime and memory consumption of JOINs grows proportionally with the sizes of the left and right tables. To reduce the amount of data processed by the JOIN, add additional filter conditions in the WHERE or JOIN ON clauses of the query. ClickHouse pushes filter conditions as deep as possible down in the query plan, usually before JOINs. If the filters are not pushed down automatically (for any reason), rewrite one side of the JOIN as a sub-query to force pushdown.
Use direct JOINs via dictionaries if appropriate: Standard JOINs in ClickHouse are executed in two phases: a build phase which iterates the right-hand side to build a hash table, followed by a probe phase which iterates the left-hand side to find matching join partners via hash table lookups. If the right-hand side is a dictionary or another table engine with key-value characteristics (e.g. EmbeddedRocksDB or the Join table engine), then ClickHouse can use the "direct" join algorithm, which effectively removes the need to build a hash table, speeding up query processing. This works for INNER and LEFT OUTER JOINs and is preferred for real-time analytical workloads.
Utilize table sorting for JOINs: Each table in ClickHouse is sorted by the table's primary key columns. It is possible to exploit the table's sorting by using so-called sort-merge JOIN algorithms like full_sorting_merge and partial_merge. Unlike standard JOIN algorithms based on hash tables (see below, parallel_hash, hash, grace_hash), sort-merge JOIN algorithms first sort and then merge both tables. If the query JOINs both tables by their respective primary key columns, then sort-merge has an optimization which omits the sort step, saving processing time and overhead.
Avoid disk-spilling JOINs: Intermediate states of JOINs (e.g. hash tables) can become so big that they no longer fit into main memory. In this situation, ClickHouse will return an out-of-memory error by default. Some join algorithms (see below), for example grace_hash, partial_merge and full_sorting_merge, are able to spill intermediate states to disk and continue query execution. These join algorithms should nevertheless be used with care, as disk access can significantly slow down join processing. We instead recommend optimizing the JOIN query in other ways to reduce the size of intermediate states.
Default values as no-match markers in outer JOINs: Left/right/full outer joins include all values from the left/right/both tables. If no join partner is found in the other table for some value, ClickHouse replaces the join partner by a special marker. The SQL standard mandates that databases use NULL as such a marker. In ClickHouse, this requires wrapping the result column in Nullable, creating an additional memory and performance overhead. As an alternative, you can configure the setting join_use_nulls = 0 and use the default value of the result column data type as the marker.
:::note Use dictionaries carefully
When using dictionaries for JOINs in ClickHouse, it's important to understand that dictionaries, by design, do not allow duplicate keys. During data loading, any duplicate keys are silently deduplicated—only the last loaded value for a given key is retained. This behavior makes dictionaries ideal for one-to-one or many-to-one relationships where only the latest or authoritative value is needed. However, using a dictionary for a one-to-many or many-to-many relationship (e.g. joining roles to actors where an actor can have multiple roles) will result in silent data loss, as all but one of the matching rows will be discarded. As a result, dictionaries are not suitable for scenarios requiring full relational fidelity across multiple matches.
:::
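A plain Python dict exhibits the same last-value-wins behaviour described in the note above, which makes it a convenient model of the data loss:

```python
# One-to-many data: actor_1 plays two roles.
roles = [("actor_1", "Hamlet"), ("actor_1", "Macbeth"), ("actor_2", "Ophelia")]

dictionary = {}
for actor, role in roles:
    dictionary[actor] = role  # duplicate key: the earlier value is overwritten

assert dictionary == {"actor_1": "Macbeth", "actor_2": "Ophelia"}
assert len(dictionary) < len(roles)  # one role for actor_1 was silently dropped
```

Any join driven by such a keyed structure can return at most one match per key, which is why dictionaries fit one-to-one or many-to-one relationships only.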
Choosing the correct JOIN Algorithm {#choosing-the-right-join-algorithm}
ClickHouse supports several JOIN algorithms that trade off between speed and memory:
- Parallel Hash JOIN (default): Fast for small-to-medium right-hand tables that fit in memory.
- Direct JOIN: Ideal when using dictionaries (or other table engines with key-value characteristics) with INNER or LEFT ANY JOIN — the fastest method for point lookups as it eliminates the need to build a hash table.
- Full Sorting Merge JOIN: Efficient when both tables are sorted on the join key.
- Partial Merge JOIN: Minimizes memory but is slower — best for joining large tables with limited memory.
- Grace Hash JOIN: Flexible and memory-tunable, good for large datasets with adjustable performance characteristics.
:::note
Each algorithm has varying support for JOIN types. A full list of supported join types for each algorithm can be found here.
:::
You can let ClickHouse choose the best algorithm by setting join_algorithm = 'auto' (the default), or explicitly control it based on your workload. If you need to select a join algorithm to optimize for performance or memory overhead, we recommend this guide.
For optimal performance:
- Keep JOINs to a minimum in high-performance workloads.
- Avoid more than 3–4 joins per query.
- Benchmark different algorithms on real data — performance varies based on JOIN key distribution and data size.
For more on JOIN optimization strategies, JOIN algorithms, and how to tune them, refer to the ClickHouse documentation and this blog series.
slug: /best-practices
keywords: ['Cloud', 'Primary key', 'Ordering key', 'Materialized Views', 'Best Practices', 'Bulk Inserts', 'Asynchronous Inserts', 'Avoid Mutations', 'Avoid nullable Columns', 'Avoid Optimize Final', 'Partitioning Key']
title: 'Overview'
hide_title: true
description: 'Landing page for Best Practices section in ClickHouse'
doc_type: 'landing-page'
import TableOfContents from '@site/docs/best-practices/_snippets/_table_of_contents.md';
Best Practices in ClickHouse {#best-practices-in-clickhouse}
This section provides the best practices you will want to follow to get the most out of ClickHouse.
slug: /best-practices/select-data-types
sidebar_position: 10
sidebar_label: 'Selecting data types'
title: 'Selecting data types'
description: 'Page describing how to choose data types in ClickHouse'
keywords: ['data types']
doc_type: 'reference'
import NullableColumns from '@site/docs/best-practices/_snippets/_avoid_nullable_columns.md';
One of the core reasons for ClickHouse's query performance is its efficient data compression. Less data on disk results in faster queries and inserts by minimizing I/O overhead. ClickHouse's column-oriented architecture naturally arranges similar data adjacently, enabling compression algorithms and codecs to reduce data size dramatically. To maximize these compression benefits, it's essential to carefully choose appropriate data types.
Compression efficiency in ClickHouse depends mainly on three factors: the ordering key, data types, and codecs, all defined through the table schema. Choosing optimal data types yields immediate improvements in both storage and query performance.
Some straightforward guidelines can significantly enhance the schema:
- Use Strict Types: Always select the correct data type for columns. Numeric and date fields should use appropriate numeric and date types rather than general-purpose String types. This ensures correct semantics for filtering and aggregations.
- Avoid nullable Columns: Nullable columns introduce additional overhead by maintaining separate columns for tracking null values. Only use Nullable if explicitly required to distinguish between empty and null states. Otherwise, default or zero-equivalent values typically suffice. For further information on why this type should be avoided unless needed, see Avoid nullable Columns.
- Minimize Numeric Precision: Select numeric types with minimal bit-width that still accommodate the expected data range. For instance, prefer UInt16 over Int32 if negative values aren't needed and the range fits within 0–65535.
- Optimize Date and Time Precision: Choose the most coarse-grained date or datetime type that meets query requirements. Use Date or Date32 for date-only fields, and prefer DateTime over DateTime64 unless millisecond or finer precision is essential.
- Leverage LowCardinality and Specialized Types: For columns with fewer than approximately 10,000 unique values, use LowCardinality types to significantly reduce storage through dictionary encoding. Similarly, use FixedString only when the column values are strictly fixed-length strings (e.g., country or currency codes), and prefer Enum types for columns with a finite set of possible values to enable efficient storage and built-in data validation.
Enums for data validation: The Enum type can be used to efficiently encode enumerated types. Enums can either be 8 or 16 bits, depending on the number of unique values they are required to store. Consider using this if you need either the associated validation at insert time (undeclared values will be rejected) or wish to perform queries which exploit a natural ordering in the Enum values, e.g. imagine a feedback column containing user responses Enum(':(' = 1, ':|' = 2, ':)' = 3).
Example {#example}
ClickHouse offers built-in tools to streamline type optimization. For example, schema inference can automatically identify initial types. Consider the Stack Overflow dataset, publicly available in Parquet format. Running a simple schema inference via the `DESCRIBE` command provides an initial non-optimized schema.
:::note
By default, ClickHouse maps these to equivalent Nullable types. This is preferred as the schema is based on a sample of the rows only.
:::
```sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/*.parquet')
SETTINGS describe_compact_output = 1
┌─name───────────────────────┬─type──────────────────────────────┐
│ Id │ Nullable(Int64) │
│ PostTypeId │ Nullable(Int64) │
│ AcceptedAnswerId │ Nullable(Int64) │
│ CreationDate │ Nullable(DateTime64(3, 'UTC')) │
│ Score │ Nullable(Int64) │
│ ViewCount │ Nullable(Int64) │
│ Body │ Nullable(String) │
│ OwnerUserId │ Nullable(Int64) │
│ OwnerDisplayName │ Nullable(String) │
│ LastEditorUserId │ Nullable(Int64) │
│ LastEditorDisplayName │ Nullable(String) │
│ LastEditDate │ Nullable(DateTime64(3, 'UTC')) │
│ LastActivityDate │ Nullable(DateTime64(3, 'UTC')) │
│ Title │ Nullable(String) │
│ Tags │ Nullable(String) │
│ AnswerCount │ Nullable(Int64) │
│ CommentCount │ Nullable(Int64) │
│ FavoriteCount │ Nullable(Int64) │
│ ContentLicense │ Nullable(String) │
│ ParentId │ Nullable(String) │
│ CommunityOwnedDate │ Nullable(DateTime64(3, 'UTC')) │
│ ClosedDate │ Nullable(DateTime64(3, 'UTC')) │
└────────────────────────────┴───────────────────────────────────┘
22 rows in set. Elapsed: 0.130 sec.
```
:::note
Note below we use the glob pattern *.parquet to read all files in the stackoverflow/parquet/posts folder.
:::
By applying these simple rules to our posts table, we can identify an optimal type for each column:
| Column | Is Numeric | Min, Max | Unique Values | Nulls | Comment | Optimized Type |
|---|---|---|---|---|---|---|
| PostTypeId | Yes | 1, 8 | 8 | No | | Enum('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8) |
| AcceptedAnswerId | Yes | 0, 78285170 | 12282094 | Yes | Differentiate Null from the 0 value | UInt32 |
| CreationDate | No | 2008-07-31 21:42:52.667000000, 2024-03-31 23:59:17.697000000 | - | No | Millisecond granularity is not required; use DateTime | DateTime |
| Score | Yes | -217, 34970 | 3236 | No | | Int32 |
| ViewCount | Yes | 2, 13962748 | 170867 | No | | UInt32 |
| Body | No | - | - | No | | String |
| OwnerUserId | Yes | -1, 4056915 | 6256237 | Yes | | Int32 |
| OwnerDisplayName | No | - | 181251 | Yes | Consider Null to be an empty string | String |
| LastEditorUserId | Yes | -1, 9999993 | 1104694 | Yes | 0 is an unused value that can be used for Nulls | Int32 |
| LastEditorDisplayName | No | - | 70952 | Yes | Consider Null to be an empty string; LowCardinality was tested with no benefit | String |
| LastEditDate | No | 2008-08-01 13:24:35.051000000, 2024-04-06 21:01:22.697000000 | - | No | Millisecond granularity is not required; use DateTime | DateTime |
| LastActivityDate | No | 2008-08-01 12:19:17.417000000, 2024-04-06 21:01:22.697000000 | - | No | Millisecond granularity is not required; use DateTime | DateTime |
| Title | No | - | - | No | Consider Null to be an empty string | String |
| Tags | No | - | - | No | Consider Null to be an empty string | String |
| AnswerCount | Yes | 0, 518 | 216 | No | Consider Null and 0 to be the same | UInt16 |
| CommentCount | Yes | 0, 135 | 100 | No | Consider Null and 0 to be the same | UInt8 |
| FavoriteCount | Yes | 0, 225 | 6 | Yes | Consider Null and 0 to be the same | UInt8 |
| ContentLicense | No | - | 3 | No | LowCardinality outperforms FixedString | LowCardinality(String) |
| ParentId | No | - | 20696028 | Yes | Consider Null to be an empty string | String |
| CommunityOwnedDate | No | 2008-08-12 04:59:35.017000000, 2024-04-01 05:36:41.380000000 | - | Yes | Consider a default of 1970-01-01 for Nulls; millisecond granularity is not required, use DateTime | DateTime |
| ClosedDate | No | 2008-09-04 20:56:44, 2024-04-06 18:49:25.393000000 | - | Yes | Consider a default of 1970-01-01 for Nulls; millisecond granularity is not required, use DateTime | DateTime |
:::note Tip
Identifying the type for a column relies on understanding its numeric range and number of unique values. To find the range and the number of distinct values of all columns, users can use the simple query `SELECT * APPLY min, * APPLY max, * APPLY uniq FROM table FORMAT Vertical`. We recommend performing this over a smaller subset of the data, as it can be expensive.
:::
This results in the following optimized schema (with respect to types):
```sql
CREATE TABLE posts
(
    Id Int32,
    PostTypeId Enum('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5,
        'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    AcceptedAnswerId UInt32,
    CreationDate DateTime,
    Score Int32,
    ViewCount UInt32,
    Body String,
    OwnerUserId Int32,
    OwnerDisplayName String,
    LastEditorUserId Int32,
    LastEditorDisplayName String,
    LastEditDate DateTime,
    LastActivityDate DateTime,
    Title String,
    Tags String,
    AnswerCount UInt16,
    CommentCount UInt8,
    FavoriteCount UInt8,
    ContentLicense LowCardinality(String),
    ParentId String,
    CommunityOwnedDate DateTime,
    ClosedDate DateTime
)
ENGINE = MergeTree
ORDER BY tuple()
```
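Once the table is populated, the effect of these narrower types can be sanity-checked by reading per-column compressed and uncompressed sizes from `system.columns`. A minimal sketch, assuming the `posts` table above has been created and loaded:

```sql
SELECT
    name,
    formatReadableSize(data_compressed_bytes) AS compressed,
    formatReadableSize(data_uncompressed_bytes) AS uncompressed
FROM system.columns
WHERE table = 'posts'
ORDER BY data_compressed_bytes DESC
```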
Avoid nullable columns {#avoid-nullable-columns} | {"source_file": "select_data_type.md"} | [
slug: /best-practices/use-materialized-views
sidebar_position: 10
sidebar_label: 'Use materialized views'
title: 'Use Materialized Views'
description: 'Page describing Materialized Views'
keywords: ['materialized views', 'medallion architecture']
show_related_blogs: true
doc_type: 'guide'
import Image from '@theme/IdealImage';
import incremental_materialized_view from '@site/static/images/bestpractices/incremental_materialized_view.gif';
import refreshable_materialized_view from '@site/static/images/bestpractices/refreshable_materialized_view.gif';
ClickHouse supports two types of materialized views: incremental and refreshable. While both are designed to accelerate queries by pre-computing and storing results, they differ significantly in how and when the underlying queries are executed, what workloads they are suited for, and how data freshness is handled.

Users should consider materialized views for specific query patterns which need to be accelerated, assuming previous best practices regarding type and primary key optimization have been performed.

Incremental materialized views are updated in real-time. As new data is inserted into the source table, ClickHouse automatically applies the materialized view's query to the new data block and writes the results to a separate target table. Over time, ClickHouse merges these partial results to produce a complete, up-to-date view. This approach is highly efficient because it shifts the computational cost to insert time and only processes new data. As a result, SELECT queries against the target table are fast and lightweight. Incremental views support all aggregation functions and scale well, even to petabytes of data, because each query operates on a small, recent subset of the dataset being inserted.

Refreshable materialized views, by contrast, are updated on a schedule. These views periodically re-execute their full query and overwrite the result in the target table. This is similar to materialized views in traditional OLTP databases like Postgres.

The choice between incremental and refreshable materialized views depends largely on the nature of the query, how frequently data changes, and whether updates to the view must reflect every row as it is inserted, or if a periodic refresh is acceptable. Understanding these trade-offs is key to designing performant, scalable materialized views in ClickHouse.
When to use incremental materialized views {#when-to-use-incremental-materialized-views} | {"source_file": "use_materialized_views.md"} | [
Incremental materialized views are generally preferred, as they update automatically in real-time whenever the source tables receive new data. They support all aggregation functions and are particularly effective for aggregations over a single table. By computing results incrementally at insert-time, queries run against significantly smaller data subsets, allowing these views to scale effortlessly even to petabytes of data. In most cases they will have no appreciable impact on overall cluster performance.
Use incremental materialized views when:

- You require real-time query results updated with every insert.
- You're aggregating or filtering large volumes of data frequently.
- Your queries involve straightforward transformations or aggregations on single tables.

For examples of incremental materialized views, see here.
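As a minimal sketch of this pattern (the table and view names are illustrative, reusing the hypothetical posts table from earlier), an incremental view maintaining per-day post counts might look like:

```sql
-- Target table holding the pre-aggregated results
CREATE TABLE posts_per_day
(
    Day Date,
    PostCount UInt64
)
ENGINE = SummingMergeTree
ORDER BY Day;

-- Triggered on every insert into posts; writes partial aggregates to posts_per_day
CREATE MATERIALIZED VIEW posts_per_day_mv TO posts_per_day AS
SELECT
    toDate(CreationDate) AS Day,
    count() AS PostCount
FROM posts
GROUP BY Day;
```

Queries then read the small `posts_per_day` table instead of scanning `posts`.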
When to use refreshable materialized views {#when-to-use-refreshable-materialized-views}
Refreshable materialized views execute their queries periodically rather than incrementally, storing the query result set for rapid retrieval.
They are most useful when query performance is critical (e.g. sub-millisecond latency) and slightly stale results are acceptable. Since the query is re-run in full, refreshable views are best suited to queries that are either relatively fast to compute or which can be computed at infrequent intervals (e.g. hourly), such as caching “top N” results or lookup tables.
Execution frequency should be tuned carefully to avoid excessive load on the system. Extremely complex queries which consume significant resources should be scheduled cautiously — these can cause overall cluster performance to degrade by impacting caches and consuming CPU and memory. The query should run relatively quickly compared to the refresh interval to avoid overloading your cluster. For example, do not schedule a view to be updated every 10 seconds if the query itself takes at least 10 seconds to compute.
Summary {#summary}
In summary, use refreshable materialized views when:

- You need cached query results available instantly, and minor delays in freshness are acceptable.
- You need the top N for a query result set.
- The size of the result set does not grow unbounded over time (unbounded growth will cause performance of the target view to degrade).
- You're performing complex joins or denormalization involving multiple tables, requiring updates whenever any source table changes.
- You're building batch workflows, denormalization tasks, or creating view dependencies similar to DBT DAGs.

For examples of refreshable materialized views, see here.
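A hedged sketch of a refreshable view caching a "top N" result (names are illustrative, again assuming the hypothetical posts table):

```sql
-- Re-executes the full query every hour; by default the result
-- replaces the previous contents of the target table
CREATE MATERIALIZED VIEW top_tags
REFRESH EVERY 1 HOUR
ENGINE = MergeTree
ORDER BY Views
AS
SELECT
    Tags,
    sum(ViewCount) AS Views
FROM posts
GROUP BY Tags
ORDER BY Views DESC
LIMIT 10;
```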
APPEND vs REPLACE mode {#append-vs-replace-mode}
Refreshable materialized views support two modes for writing data to the target table: APPEND and REPLACE. These modes define how the result of the view's query is written when the view is refreshed.
REPLACE is the default behavior. Each time the view is refreshed, the previous contents of the target table are completely overwritten with the latest query result. This is suitable for use cases where the view should always reflect the latest state, such as caching a result set.

APPEND, by contrast, allows new rows to be added to the end of the target table instead of replacing its contents. This enables additional use cases, such as capturing periodic snapshots. APPEND is particularly useful when each refresh represents a distinct point in time or when historical accumulation of results is desired.

Choose APPEND mode when:

- You want to keep a history of past refreshes.
- You're building periodic snapshots or reports.
- You need to incrementally collect refreshed results over time.
Choose REPLACE mode when:

- You only need the most recent result.
- Stale data should be discarded entirely.
- The view represents a current state or lookup.
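The APPEND pattern can be sketched as follows (table and view names are illustrative); each hourly refresh adds one snapshot row rather than overwriting the table:

```sql
CREATE TABLE row_count_history
(
    SnapshotTime DateTime,
    Rows UInt64
)
ENGINE = MergeTree
ORDER BY SnapshotTime;

-- APPEND: each refresh appends its result to the target table,
-- accumulating one snapshot per refresh
CREATE MATERIALIZED VIEW row_count_snapshot_mv
REFRESH EVERY 1 HOUR APPEND TO row_count_history
AS
SELECT now() AS SnapshotTime, count() AS Rows
FROM posts;
```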
Users can find an application of the APPEND functionality when building a Medallion architecture.
slug: /guides/sizing-and-hardware-recommendations
sidebar_label: 'Sizing and hardware recommendations'
sidebar_position: 4
title: 'Sizing and hardware recommendations'
description: 'This guide discusses our general recommendations regarding hardware, compute, memory, and disk configurations for open-source users.'
doc_type: 'guide'
keywords: ['sizing', 'hardware', 'capacity planning', 'best practices', 'performance']
Sizing and hardware recommendations
This guide discusses our general recommendations regarding hardware, compute, memory, and disk configurations for open-source users. If you would like to simplify your setup, we recommend using ClickHouse Cloud as it automatically scales and adapts to your workloads while minimizing costs pertaining to infrastructure management.

The configuration of your ClickHouse cluster is highly dependent on your application's use case and workload patterns. When planning your architecture, you must consider the following factors:

- Concurrency (requests per second)
- Throughput (rows processed per second)
- Data volume
- Data retention policy
- Hardware costs
- Maintenance costs
Disk {#disk}
The type(s) of disks you should use with ClickHouse depends on data volume, latency, or throughput requirements.
Optimizing for performance {#optimizing-for-performance}
To maximize performance, we recommend directly attaching provisioned IOPS SSD volumes from AWS, or the equivalent offering from your cloud provider, which optimizes for IO.
Optimizing for storage costs {#optimizing-for-storage-costs}
For lower costs, you can use general purpose SSD EBS volumes.
You can also implement tiered storage using SSDs and HDDs in a hot/warm/cold architecture. Alternatively, AWS S3 can be used for storage to separate compute and storage. Please see our guide for using open-source ClickHouse with separation of compute and storage here. Separation of compute and storage is available by default in ClickHouse Cloud.
CPU {#cpu}
Which CPU should I use? {#which-cpu-should-i-use}
The type of CPU you should use depends on your usage pattern. In general, however, applications with many frequent concurrent queries, that process more data, or that use compute-intensive UDFs will require more CPU cores.
Low latency or customer-facing applications
For latency requirements in the 10s of milliseconds, such as for customer-facing workloads, we recommend the EC2 i3 or i4i line from AWS, or the equivalent IO-optimized offerings from your cloud provider.
High concurrency applications
For workloads that need to optimize for concurrency (100+ queries per second), we recommend the compute-optimized C series from AWS, or the equivalent offering from your cloud provider.
Data warehousing use case | {"source_file": "sizing-and-hardware-recommendations.md"} | [
For data warehousing workloads and ad-hoc analytical queries, we recommend the R-type series from AWS, or the equivalent offering from your cloud provider, as they are memory optimized.
What should CPU utilization be? {#what-should-cpu-utilization-be}
There is no standard CPU utilization target for ClickHouse. Use a tool such as iostat to measure average CPU usage, and size your servers accordingly so they can absorb unexpected traffic spikes. For analytical or data warehousing use cases with ad-hoc queries, however, you should target 10-20% CPU utilization.
How many CPU cores should I use? {#how-many-cpu-cores-should-i-use}
The number of CPUs you should use depends on your workload. However, we generally recommend the following memory-to-CPU-core ratios based on your CPU type:

- M-type (general purpose use cases): 4 GB:1 memory-to-CPU-core ratio
- R-type (data warehousing use cases): 8 GB:1 memory-to-CPU-core ratio
- C-type (compute-optimized use cases): 2 GB:1 memory-to-CPU-core ratio

As an example, when using M-type CPUs, we recommend provisioning 100GB of memory per 25 CPU cores. To determine the amount of memory appropriate for your application, profiling your memory usage is necessary. You can read this guide on debugging memory issues or use the built-in observability dashboard to monitor ClickHouse.
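As a quick complement to profiling (metric availability can vary between ClickHouse versions), the memory currently tracked by the server can also be inspected from SQL:

```sql
SELECT formatReadableSize(value) AS memory_in_use
FROM system.metrics
WHERE metric = 'MemoryTracking'
```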
Memory {#memory}
Like your choice of CPU, your choice of memory-to-storage ratio and memory-to-CPU ratio is dependent on your use case.

The required volume of RAM generally depends on:

- The complexity of queries.
- The amount of data that is processed in queries.

In general, however, the more memory you have, the faster your queries will run.

If your use case is sensitive to price, lower amounts of memory will work, as it is possible to enable settings (max_bytes_before_external_group_by and max_bytes_before_external_sort) that allow spilling data to disk, but note that this may significantly affect query performance.
What should the memory-to-storage ratio be? {#what-should-the-memory-to-storage-ratio-be}
For low data volumes, a 1:1 memory-to-storage ratio is acceptable but total memory should not be below 8GB.
For use cases with long retention periods for your data or with high data volumes, we recommend a 1:100 to 1:130 memory-to-storage ratio. For example, 100GB of RAM per replica if you are storing 10TB of data.
For use cases with frequent access such as for customer-facing workloads, we recommend using more memory at a 1:30 to 1:50 memory-to-storage ratio.
Replicas {#replicas}
We recommend having at least three replicas per shard (or two replicas with Amazon EBS). Additionally, we suggest vertically scaling all replicas prior to adding additional replicas (horizontal scaling).
ClickHouse does not automatically shard, and re-sharding your dataset will require significant compute resources. Therefore, we generally recommend using the largest server available to prevent having to re-shard your data in the future.

Consider using ClickHouse Cloud, which scales automatically and allows you to easily control the number of replicas for your use case.
Example configurations for large workloads {#example-configurations-for-large-workloads}
ClickHouse configurations are highly dependent on your specific application's requirements. Please contact sales if you would like us to help optimize your architecture for cost and performance.

To provide guidance (not recommendations), the following are example configurations of ClickHouse users in production:
Fortune 500 B2B SaaS {#fortune-500-b2b-saas}
Storage

- Monthly new data volume: 30TB
- Total storage (compressed): 540TB
- Data retention: 18 months
- Disk per node: 25TB

CPU

- Concurrency: 200+ concurrent queries
- Number of replicas (including HA pair): 44
- vCPU per node: 62
- Total vCPU: 2700

Memory

- Total RAM: 11TB
- RAM per replica: 256GB
- RAM-to-vCPU ratio: 4 GB:1
- RAM-to-disk ratio: 1:50
Fortune 500 Telecom Operator for a logging use case {#fortune-500-telecom-operator-for-a-logging-use-case}
Storage

- Monthly log data volume: 4860TB
- Total storage (compressed): 608TB
- Data retention: 30 days
- Disk per node: 13TB

CPU

- Number of replicas (including HA pair): 38
- vCPU per node: 42
- Total vCPU: 1600

Memory

- Total RAM: 10TB
- RAM per replica: 256GB
- RAM-to-vCPU ratio: 6 GB:1
- RAM-to-disk ratio: 1:60
Further reading {#further-reading}
Below are published blog posts on architectures from companies using open-source ClickHouse:

- Cloudflare
- eBay
- GitLab
- Lyft
- MessageBird
- Microsoft
- Uber
- Zomato
description: 'Documentation for the HTTP interface in ClickHouse, which provides REST API access to ClickHouse from any platform and programming language'
sidebar_label: 'HTTP Interface'
sidebar_position: 15
slug: /interfaces/http
title: 'HTTP Interface'
doc_type: 'reference'
import PlayUI from '@site/static/images/play.png';
import Image from '@theme/IdealImage';
HTTP Interface
Prerequisites {#prerequisites}
For the examples in this article you will need:

- a running instance of ClickHouse server
- curl installed. On Ubuntu or Debian, run sudo apt install curl, or refer to the documentation for installation instructions.
Overview {#overview}
The HTTP interface lets you use ClickHouse on any platform from any programming language in the form of a REST API. The HTTP interface is more limited than the native interface, but it has better language support.
By default, clickhouse-server listens on the following ports:

- port 8123 for HTTP
- port 8443 for HTTPS (can be enabled)

If you make a GET / request without any parameters, a 200 response code is returned along with the string "Ok.":

```bash
$ curl 'http://localhost:8123/'
Ok.
```

"Ok." is the default value defined in http_server_default_response and can be changed if desired.
Also see: HTTP response codes caveats.
Web user interface {#web-ui}
ClickHouse includes a web user interface, which can be accessed from the following address:

```text
http://localhost:8123/play
```

The web UI supports displaying progress during query runtime, query cancellation, and result streaming.
It has a secret feature for displaying charts and graphs for query pipelines.
The web UI is designed for professionals like you.

In health-check scripts, use the GET /ping request. This handler always returns "Ok." (with a line feed at the end). Available from version 18.12.13. See also /replicas_status to check a replica's delay.

```bash
$ curl 'http://localhost:8123/ping'
Ok.
$ curl 'http://localhost:8123/replicas_status'
Ok.
```
Querying over HTTP/HTTPS {#querying}
To query over HTTP/HTTPS there are three options:

- Send the request as a URL 'query' parameter
- Use the POST method
- Send the beginning of the query in the 'query' parameter, and the rest using POST
:::note
The size of the URL is limited to 1 MiB by default; this can be changed with the http_max_uri_size setting.
:::
If successful, you receive the 200 response code and the result in the response body.
If an error occurs, you receive the 500 response code and an error description text in the response body.
Requests using GET are 'readonly'. This means that for queries that modify data, you can only use the POST method.
You can send the query itself either in the POST body or in the URL parameter. Let's look at some examples.
In the example below, curl is used to send the query SELECT 1. Note the use of URL encoding for the space: %20.
```bash title="command"
curl 'http://localhost:8123/?query=SELECT%201'
```

```response title="Response"
1
```

In this example, wget is used with the -nv (non-verbose) and -O- parameters to output the result to the terminal.
In this case it is not necessary to use URL encoding for the space:

```bash title="command"
wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
```

```response title="Response"
1
```
In this example we pipe a raw HTTP request into netcat:

```bash title="command"
echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
```
```response title="response"
HTTP/1.0 200 OK
X-ClickHouse-Summary: {"read_rows":"1","read_bytes":"1","written_rows":"0","written_bytes":"0","total_rows_to_read":"1","result_rows":"0","result_bytes":"0","elapsed_ns":"4505959","memory_usage":"1111711"}
Date: Tue, 11 Nov 2025 18:16:01 GMT
Connection: Close
Content-Type: text/tab-separated-values; charset=UTF-8
Access-Control-Expose-Headers: X-ClickHouse-Query-Id,X-ClickHouse-Summary,X-ClickHouse-Server-Display-Name,X-ClickHouse-Format,X-ClickHouse-Timezone,X-ClickHouse-Exception-Code,X-ClickHouse-Exception-Tag
X-ClickHouse-Server-Display-Name: MacBook-Pro.local
X-ClickHouse-Query-Id: ec0d8ec6-efc4-4e1d-a14f-b748e01f5294
X-ClickHouse-Format: TabSeparated
X-ClickHouse-Timezone: Europe/London
X-ClickHouse-Exception-Tag: dngjzjnxkvlwkeua
1
```
As you can see, the curl command is somewhat inconvenient in that spaces must be URL escaped.
Although wget escapes everything itself, we do not recommend using it because it does not work well over HTTP 1.1 when using keep-alive and Transfer-Encoding: chunked.
```bash
$ echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @-
1
$ echo 'SELECT 1' | curl 'http://localhost:8123/?query=' --data-binary @-
1
$ echo '1' | curl 'http://localhost:8123/?query=SELECT' --data-binary @-
1
```
If part of the query is sent in the parameter, and part in the POST, a line feed is inserted between these two data parts.
For example, this won't work:

```bash
$ echo 'ECT 1' | curl 'http://localhost:8123/?query=SEL' --data-binary @-
Code: 59, e.displayText() = DB::Exception: Syntax error: failed at position 0: SEL
ECT 1
, expected One of: SHOW TABLES, SHOW DATABASES, SELECT, INSERT, CREATE, ATTACH, RENAME, DROP, DETACH, USE, SET, OPTIMIZE., e.what() = DB::Exception
```

By default, data is returned in the TabSeparated format.
The FORMAT clause is used in the query to request any other format. For example:

```bash title="command"
wget -nv -O- 'http://localhost:8123/?query=SELECT 1, 2, 3 FORMAT JSON'
```
```response title="Response"
{
"meta":
[
{
"name": "1",
"type": "UInt8"
},
{
"name": "2",
"type": "UInt8"
},
{
"name": "3",
"type": "UInt8"
}
],
"data":
[
{
"1": 1,
"2": 2,
"3": 3
}
],
"rows": 1,
"statistics":
{
"elapsed": 0.000515,
"rows_read": 1,
"bytes_read": 1
}
}
```
You can use the default_format URL parameter or the X-ClickHouse-Format header to specify a default format other than TabSeparated.

```bash
$ echo 'SELECT 1 FORMAT Pretty' | curl 'http://localhost:8123/?' --data-binary @-
┏━━━┓
┃ 1 ┃
┡━━━┩
│ 1 │
└───┘
```

You can use the POST method with parameterized queries. Parameters are specified using curly braces with the parameter name and type, like {name:Type}. The parameter values are passed as param_name fields:
```bash
$ curl -X POST -F 'query=select {p1:UInt8} + {p2:UInt8}' -F "param_p1=3" -F "param_p2=4" 'http://localhost:8123/'
7
```
Insert queries over HTTP/HTTPS {#insert-queries}
The POST method of transmitting data is necessary for INSERT queries. In this case, you can write the beginning of the query in the URL parameter, and use POST to pass the data to insert. The data to insert could be, for example, a tab-separated dump from MySQL. In this way, the INSERT query replaces LOAD DATA LOCAL INFILE from MySQL.
Examples {#examples}
To create a table:

```bash
$ echo 'CREATE TABLE t (a UInt8) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @-
```

To use the familiar INSERT query for data insertion:

```bash
$ echo 'INSERT INTO t VALUES (1),(2),(3)' | curl 'http://localhost:8123/' --data-binary @-
```

To send data separately from the query:

```bash
$ echo '(4),(5),(6)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20VALUES' --data-binary @-
```

Any data format can be specified. For example, the 'Values' format, the same format used when writing INSERT INTO t VALUES, can be specified:

```bash
$ echo '(7),(8),(9)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20Values' --data-binary @-
```

To insert data from a tab-separated dump, specify the corresponding format:

```bash
$ echo -ne '10\n11\n12\n' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20TabSeparated' --data-binary @-
```

To read the table contents:

```bash
$ curl 'http://localhost:8123/?query=SELECT%20a%20FROM%20t'
7
8
9
10
11
12
1
2
3
4
5
6
```

:::note
Data is output in a random order due to parallel query processing.
:::

To delete the table:

```bash
$ echo 'DROP TABLE t' | curl 'http://localhost:8123/' --data-binary @-
```
For successful requests that do not return a data table, an empty response body is returned.
Compression {#compression}
Compression can be used to reduce network traffic when transmitting a large amount of data, or for creating dumps that are immediately compressed.
You can use the internal ClickHouse compression format when transmitting data. The compressed data has a non-standard format, and you need the clickhouse-compressor program to work with it. It is installed by default with the clickhouse-client package.
41e93be3-32c3-473d-ac0e-5016aa753523 | To increase the efficiency of data insertion, disable server-side checksum verification by using the
http_native_compression_disable_checksumming_on_decompress
setting.
If you specify
compress=1
in the URL, the server will compress the data it sends to you. If you specify
decompress=1
in the URL, the server will decompress the data which you pass in the
POST
method.
You can also choose to use
HTTP compression
. ClickHouse supports the following
compression methods
:
gzip
br
deflate
xz
zstd
lz4
bz2
snappy
To send a compressed
POST
request, append the request header
Content-Encoding: compression_method
.
In order for ClickHouse to compress the response, append the
Accept-Encoding: compression_method
header to the request.
You can configure the data compression level using the
http_zlib_compression_level
setting for all compression methods.
:::info
Some HTTP clients might decompress data from the server by default (with
gzip
and
deflate
) and you might get decompressed data even if you use the compression settings correctly.
:::
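On the client side, compressing the request body before sending it with `Content-Encoding: gzip` can be sketched as follows (Python standard library only; the `compress_query` helper is illustrative, not a ClickHouse API):

```python
import gzip

def compress_query(sql: str) -> bytes:
    # Compress the POST body; the server decompresses it when the
    # request carries the 'Content-Encoding: gzip' header.
    return gzip.compress(sql.encode("utf-8"))

body = compress_query("SELECT 1")
# The server would decompress this transparently; locally we can
# at least verify the round trip:
assert gzip.decompress(body) == b"SELECT 1"
```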
Examples {#examples-compression}
To send compressed data to the server:
bash
echo "SELECT 1" | gzip -c | \
curl -sS --data-binary @- -H 'Content-Encoding: gzip' 'http://localhost:8123/'
To receive the compressed data archive from the server:
```bash
curl -vsS "http://localhost:8123/?enable_http_compression=1" \
-H 'Accept-Encoding: gzip' --output result.gz -d 'SELECT number FROM system.numbers LIMIT 3'
zcat result.gz
0
1
2
```
To receive compressed data from the server, using gunzip to receive decompressed data:
bash
curl -sS "http://localhost:8123/?enable_http_compression=1" \
-H 'Accept-Encoding: gzip' -d 'SELECT number FROM system.numbers LIMIT 3' | gunzip -
0
1
2
Default database {#default-database}
You can use the
database
URL parameter or the
X-ClickHouse-Database
header to specify the default database.
bash
echo 'SELECT number FROM numbers LIMIT 10' | curl 'http://localhost:8123/?database=system' --data-binary @-
0
1
2
3
4
5
6
7
8
9
By default, the database that is registered in the server settings is used as the default database. Out of the box, this is the database called
default
. Alternatively, you can always specify the database using a dot before the table name.
Authentication {#authentication}
The username and password can be indicated in one of three ways:
Using HTTP Basic Authentication.
For example:
bash
echo 'SELECT 1' | curl 'http://user:password@localhost:8123/' -d @-
In the
user
and
password
URL parameters
:::warning
We do not recommend using this method as the parameter might be logged by web proxy and cached in the browser
:::
For example:
bash
echo 'SELECT 1' | curl 'http://localhost:8123/?user=user&password=password' -d @-
Using the 'X-ClickHouse-User' and 'X-ClickHouse-Key' headers
For example:
bash
echo 'SELECT 1' | curl -H 'X-ClickHouse-User: user' -H 'X-ClickHouse-Key: password' 'http://localhost:8123/' -d @-
If the user name is not specified, then the
default
name is used. If the password is not specified, then an empty password is used.
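The first and third methods can be sketched as header constructions in Python (illustrative only; `auth_headers` is not a ClickHouse API):

```python
import base64

def auth_headers(user: str, password: str) -> dict:
    # HTTP Basic auth and the ClickHouse-specific headers are equivalent
    # ways to pass the same credentials.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "basic": {"Authorization": f"Basic {token}"},
        "headers": {"X-ClickHouse-User": user, "X-ClickHouse-Key": password},
    }

h = auth_headers("user", "password")
print(h["basic"]["Authorization"])  # Basic dXNlcjpwYXNzd29yZA==
```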
You can also use the URL parameters to specify any settings for processing a single query or entire profiles of settings.
For example:
text
http://localhost:8123/?profile=web&max_rows_to_read=1000000000&query=SELECT+1
bash
$ echo 'SELECT number FROM system.numbers LIMIT 10' | curl 'http://localhost:8123/?' --data-binary @-
0
1
2
3
4
5
6
7
8
9
For more information see:
-
Settings
-
SET
Using ClickHouse sessions in the HTTP protocol {#using-clickhouse-sessions-in-the-http-protocol}
You can also use ClickHouse sessions in the HTTP protocol. To do this, you need to add the
session_id
GET
parameter to the request. You can use any string as the session ID.
By default, the session is terminated after 60 seconds of inactivity. To change this timeout (in seconds), modify the
default_session_timeout
setting in the server configuration, or add the
session_timeout
GET
parameter to the request.
To check the session status, use the
session_check=1
parameter. Only one query at a time can be executed within a single session.
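Building a session-scoped request URL from these parameters can be sketched as (the `session_url` helper is ours, for illustration):

```python
from urllib.parse import urlencode

def session_url(base, session_id, query, timeout=None):
    # session_id ties consecutive queries to one server-side session;
    # session_timeout overrides the default 60-second inactivity limit.
    params = {"session_id": session_id, "query": query}
    if timeout is not None:
        params["session_timeout"] = timeout
    return f"{base}/?{urlencode(params)}"

url = session_url("http://localhost:8123", "my-session", "SET max_threads = 4", timeout=300)
```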
You can receive information about the progress of a query in the
X-ClickHouse-Progress
response headers. To do this, enable
send_progress_in_http_headers
.
Below is an example of the header sequence:
text
X-ClickHouse-Progress: {"read_rows":"261636","read_bytes":"2093088","total_rows_to_read":"1000000","elapsed_ns":"14050417","memory_usage":"22205975"}
X-ClickHouse-Progress: {"read_rows":"654090","read_bytes":"5232720","total_rows_to_read":"1000000","elapsed_ns":"27948667","memory_usage":"83400279"}
X-ClickHouse-Progress: {"read_rows":"1000000","read_bytes":"8000000","total_rows_to_read":"1000000","elapsed_ns":"38002417","memory_usage":"80715679"}
The possible header fields are:
| Header field | Description |
|----------------------|------------------------------------|
| `read_rows` | Number of rows read. |
| `read_bytes` | Volume of data read in bytes. |
| `total_rows_to_read` | Total number of rows to be read. |
| `written_rows` | Number of rows written. |
| `written_bytes` | Volume of data written in bytes. |
| `elapsed_ns` | Query runtime in nanoseconds. |
| `memory_usage` | Memory in bytes used by the query. |
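Note that every value in these headers arrives as a JSON string, so a client must convert before doing arithmetic. A minimal Python sketch (the `progress_pct` helper is ours):

```python
import json

def progress_pct(header_value: str) -> float:
    # Parse one X-ClickHouse-Progress header value and return the
    # completion percentage; all fields are JSON strings.
    p = json.loads(header_value)
    total = int(p.get("total_rows_to_read", "0"))
    return 100.0 * int(p["read_rows"]) / total if total else 0.0

h = '{"read_rows":"654090","read_bytes":"5232720","total_rows_to_read":"1000000","elapsed_ns":"27948667","memory_usage":"83400279"}'
print(round(progress_pct(h), 2))  # 65.41
```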
Running requests do not stop automatically if the HTTP connection is lost. Parsing and data formatting are performed on the server-side, and using the network might be ineffective.
The following optional parameters exist:
| Parameters | Description |
|------------------------|-------------------------------------------|
| `query_id` (optional) | Can be passed as the query ID (any string). See replace_running_query. |
| `quota_key` (optional) | Can be passed as the quota key (any string). See "Quotas". |
The HTTP interface allows passing external data (external temporary tables) for querying. For more information, see
"External data for query processing"
.
Response buffering {#response-buffering}
Response buffering can be enabled on the server-side. The following URL parameters are provided for this purpose:
-
buffer_size
-
wait_end_of_query
The following settings can be used:
-
http_response_buffer_size
-
http_wait_end_of_query
buffer_size
determines the number of bytes in the result to buffer in the server memory. If a result body is larger than this threshold, the buffer is written to the HTTP channel, and the remaining data is sent directly to the HTTP channel.
To ensure that the entire response is buffered, set
wait_end_of_query=1
. In this case, the data that is not stored in memory will be buffered in a temporary server file.
For example:
bash
curl -sS 'http://localhost:8123/?max_result_bytes=4000000&buffer_size=3000000&wait_end_of_query=1' -d 'SELECT toUInt8(number) FROM system.numbers LIMIT 9000000 FORMAT RowBinary'
:::tip
Use buffering to avoid situations where a query processing error occurred after the response code and HTTP headers were sent to the client. In this situation, an error message is written at the end of the response body, and on the client-side, the error can only be detected at the parsing stage.
:::
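The interplay of the two parameters can be sketched as pseudo-logic (an illustration of the behavior described above, not actual server code):

```python
def deliver(result: bytes, buffer_size: int, wait_end_of_query: bool):
    # Illustration of the buffering rules described above.
    if wait_end_of_query:
        return ["buffered-in-full"]   # spills to a temporary server file if needed
    if len(result) <= buffer_size:
        return ["buffered-in-full"]
    # Buffer overflows: flush what is buffered, stream the rest directly.
    return ["flushed", "streamed"]

print(deliver(b"x" * 10, buffer_size=4, wait_end_of_query=False))  # ['flushed', 'streamed']
```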
Setting a role with query parameters {#setting-role-with-query-parameters}
This feature was added in ClickHouse 24.4.
In specific scenarios, setting the granted role first might be required before executing the statement itself.
However, it is not possible to send
SET ROLE
and the statement together, as multi-statements are not allowed:
bash
curl -sS "http://localhost:8123" --data-binary "SET ROLE my_role;SELECT * FROM my_table;"
The command above results in an error:
sql
Code: 62. DB::Exception: Syntax error (Multi-statements are not allowed)
To overcome this limitation, use the
role
query parameter instead:
bash
curl -sS "http://localhost:8123?role=my_role" --data-binary "SELECT * FROM my_table;"
This is the equivalent of executing
SET ROLE my_role
before the statement.
Additionally, it is possible to specify multiple
role
query parameters:
bash
curl -sS "http://localhost:8123?role=my_role&role=my_other_role" --data-binary "SELECT * FROM my_table;"
In this case,
?role=my_role&role=my_other_role
works similarly to executing
SET ROLE my_role, my_other_role
before the statement.
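Building such a URL programmatically requires repeating the `role` key; with Python's `urlencode` that means passing a sequence of pairs rather than a dict (illustrative sketch):

```python
from urllib.parse import urlencode

# Repeating the `role` key requires a sequence of pairs, not a dict.
params = [("role", "my_role"), ("role", "my_other_role")]
qs = urlencode(params)
print(qs)  # role=my_role&role=my_other_role
```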
HTTP response codes caveats {#http_response_codes_caveats}
Because of limitations of the HTTP protocol, an HTTP 200 response code does not guarantee that a query was successful.
Here is an example:
bash
curl -v -Ss "http://localhost:8123/?max_block_size=1&query=select+sleepEachRow(0.001),throwIf(number=2)from+numbers(5)"
* Trying 127.0.0.1:8123...
...
< HTTP/1.1 200 OK
...
Code: 395. DB::Exception: Value passed to 'throwIf' function is non-zero: while executing 'FUNCTION throwIf(equals(number, 2) :: 1) -> throwIf(equals(number, 2))
The reason for this behavior is the nature of the HTTP protocol. The HTTP header is sent first with an HTTP code of 200, followed by the HTTP body, and then the error is injected into the body as plain text.
This behavior is independent of the format used, whether it's
Native
,
TSV
, or
JSON
; the error message will always be in the middle of the response stream.
You can mitigate this problem by enabling
wait_end_of_query=1
(
Response Buffering
. In this case, sending of the HTTP header is delayed until the entire query has been resolved. This, however, does not completely solve the problem because the result must still fit within the
http_response_buffer_size
, and other settings like
send_progress_in_http_headers
can interfere with the delay of the header.
:::tip
The only way to catch all errors is to analyze the HTTP body before parsing it using the required format.
:::
Such exceptions in ClickHouse have a consistent format, shown below, irrespective of which output format is used (e.g.
Native
,
TSV
,
JSON
, etc.) when
http_write_exception_in_output_format=0
(the default). This makes it easy to parse and extract error messages on the client side.
```text
__exception__\r\n
<TAG>\r\n
<error message>\r\n
<message_length> <TAG>\r\n
__exception__\r\n
```
Where
<TAG>
is a 16-byte random tag, which is the same tag sent in the
X-ClickHouse-Exception-Tag
response header.
The
<error message>
is the actual exception message (exact length can be found in
<message_length>
). The whole exception block described above can be up to 16 KiB.
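A client-side sketch of extracting the message, assuming exactly the block layout shown above (the `extract_exception` helper and the sample body are ours, not a ClickHouse API):

```python
def extract_exception(body: bytes, tag: bytes):
    # `tag` is the value of the X-ClickHouse-Exception-Tag response header.
    head = b"__exception__\r\n" + tag + b"\r\n"
    start = body.find(head)
    if start == -1:
        return None  # no exception block in this response
    msg_start = start + len(head)
    # The block ends with "<message_length> <TAG>\r\n__exception__".
    tail = body.find(b" " + tag + b"\r\n__exception__", msg_start)
    if tail == -1:
        return None
    # Step back over the "\r\n<message_length>" that follows the message.
    line_start = body.rfind(b"\r\n", msg_start, tail)
    return body[msg_start:line_start].decode("utf-8", "replace")

sample = (b"0,0\r\n0,0\r\n__exception__\r\nrumfyutuqkncbgau\r\n"
          b"Code: 395. DB::Exception: boom\r\n"
          b"30 rumfyutuqkncbgau\r\n__exception__\r\n")
print(extract_exception(sample, b"rumfyutuqkncbgau"))
# Code: 395. DB::Exception: boom
```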
Here is an example in
JSON
format:
```bash
$ curl -v -Ss "http://localhost:8123/?max_block_size=1&query=select+sleepEachRow(0.001),throwIf(number=2)from+numbers(5)+FORMAT+JSON"
...
{
"meta":
[
{
"name": "sleepEachRow(0.001)",
"type": "UInt8"
},
{
"name": "throwIf(equals(number, 2))",
"type": "UInt8"
}
],
"data":
[
{
"sleepEachRow(0.001)": 0,
"throwIf(equals(number, 2))": 0
},
{
"sleepEachRow(0.001)": 0,
"throwIf(equals(number, 2))": 0
}
__exception__
dmrdfnujjqvszhav
Code: 395. DB::Exception: Value passed to 'throwIf' function is non-zero: while executing 'FUNCTION throwIf(equals(__table1.number, 2_UInt8) :: 1) -> throwIf(equals(__table1.number, 2_UInt8)) UInt8 : 0'. (FUNCTION_THROW_IF_VALUE_IS_NON_ZERO) (version 25.11.1.1)
262 dmrdfnujjqvszhav
__exception__
```
Here is a similar example in
CSV
format:
```bash
$ curl -v -Ss "http://localhost:8123/?max_block_size=1&query=select+sleepEachRow(0.001),throwIf(number=2)from+numbers(5)+FORMAT+CSV"
...
<
0,0
0,0
__exception__
rumfyutuqkncbgau
Code: 395. DB::Exception: Value passed to 'throwIf' function is non-zero: while executing 'FUNCTION throwIf(equals(__table1.number, 2_UInt8) :: 1) -> throwIf(equals(__table1.number, 2_UInt8)) UInt8 : 0'. (FUNCTION_THROW_IF_VALUE_IS_NON_ZERO) (version 25.11.1.1)
262 rumfyutuqkncbgau
__exception__
```
Queries with parameters {#cli-queries-with-parameters}
You can create a query with parameters and pass values for them from the corresponding HTTP request parameters. For more information, see
Queries with Parameters for CLI
.
Example {#example-3}
bash
$ curl -sS "<address>?param_id=2&param_phrase=test" -d "SELECT * FROM table WHERE int_column = {id:UInt8} and string_column = {phrase:String}"
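Generating the `param_*` URL arguments from a dict of parameter values can be sketched as (the `param_args` helper is ours, for illustration):

```python
from urllib.parse import urlencode

def param_args(params: dict) -> str:
    # Each query parameter {name:Type} is supplied as param_<name> in the URL.
    return urlencode({f"param_{k}": v for k, v in params.items()})

print(param_args({"id": 2, "phrase": "test"}))  # param_id=2&param_phrase=test
```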
Tabs in URL Parameters {#tabs-in-url-parameters}
Query parameters are parsed from the "escaped" format. This has some benefits, such as the possibility to unambiguously parse nulls as
\N
. This means the tab character should be encoded as
\t
(or
\
and a tab). For example, the following contains an actual tab between
abc
and
123
and the input string is split into two values:
bash
curl -sS "http://localhost:8123" -d "SELECT splitByChar('\t', 'abc 123')"
response
['abc','123']
However, if you try to encode an actual tab using
%09
in a URL parameter, it won't get parsed properly:
bash
curl -sS "http://localhost:8123?param_arg1=abc%09123" -d "SELECT splitByChar('\t', {arg1:String})"
Code: 457. DB::Exception: Value abc 123 cannot be parsed as String for query parameter 'arg1' because it isn't parsed completely: only 3 of 7 bytes was parsed: abc. (BAD_QUERY_PARAMETER) (version 23.4.1.869 (official build))
If you are using URL parameters, you will need to encode the
\t
as
%5C%09
. For example:
bash
curl -sS "http://localhost:8123?param_arg1=abc%5C%09123" -d "SELECT splitByChar('\t', {arg1:String})"
response
['abc','123']
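The double encoding (the "escaped" format first, then URL percent-encoding) can be sketched in Python (the `escape_param` helper is illustrative; it emits the literal `\t` form, which the rule above also accepts):

```python
from urllib.parse import quote

def escape_param(value: str) -> str:
    # First apply ClickHouse's "escaped" format (backslash escapes),
    # then percent-encode the result for use in a URL parameter.
    escaped = (value.replace("\\", "\\\\")
                    .replace("\t", "\\t")
                    .replace("\n", "\\n"))
    return quote(escaped, safe="")

print(escape_param("abc\t123"))  # abc%5Ct123
```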
Predefined HTTP Interface {#predefined_http_interface}
ClickHouse supports specific queries through the HTTP interface. For example, you can write data to a table as follows:
bash
$ echo '(4),(5),(6)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20VALUES' --data-binary @-
ClickHouse also supports a Predefined HTTP Interface which can help you more easily integrate with third-party tools like
Prometheus exporter
. Let's look at an example.
First of all, add this section to your server configuration file.
http_handlers
is configured to contain multiple
rule
. ClickHouse will match the HTTP requests received to the predefined type in
rule
and the first rule matched runs the handler. Then ClickHouse will execute the corresponding predefined query if the match is successful.
xml title="config.xml"
<http_handlers>
<rule>
<url>/predefined_query</url>
<methods>POST,GET</methods>
<handler>
<type>predefined_query_handler</type>
<query>SELECT * FROM system.metrics LIMIT 5 FORMAT Template SETTINGS format_template_resultset = 'prometheus_template_output_format_resultset', format_template_row = 'prometheus_template_output_format_row', format_template_rows_between_delimiter = '\n'</query>
</handler>
</rule>
<rule>...</rule>
<rule>...</rule>
</http_handlers>
You can now request the URL directly for data in the Prometheus format:
```bash
$ curl -v 'http://localhost:8123/predefined_query'
* Trying ::1...
* Connected to localhost (::1) port 8123 (#0)
GET /predefined_query HTTP/1.1
Host: localhost:8123
User-Agent: curl/7.47.0
Accept: */*
< HTTP/1.1 200 OK
< Date: Tue, 28 Apr 2020 08:52:56 GMT
< Connection: Keep-Alive
< Content-Type: text/plain; charset=UTF-8
< X-ClickHouse-Server-Display-Name: i-mloy5trc
< Transfer-Encoding: chunked
< X-ClickHouse-Query-Id: 96fe0052-01e6-43ce-b12a-6b7370de6e8a
< X-ClickHouse-Format: Template
< X-ClickHouse-Timezone: Asia/Shanghai
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","memory_usage":"8451671"}
<
# HELP "Query" "Number of executing queries"
# TYPE "Query" counter
"Query" 1
# HELP "Merge" "Number of executing background merges"
# TYPE "Merge" counter
"Merge" 0
# HELP "PartMutation" "Number of mutations (ALTER DELETE/UPDATE)"
# TYPE "PartMutation" counter
"PartMutation" 0
# HELP "ReplicatedFetch" "Number of data parts being fetched from replica"
# TYPE "ReplicatedFetch" counter
"ReplicatedFetch" 0
# HELP "ReplicatedSend" "Number of data parts being sent to replicas"
# TYPE "ReplicatedSend" counter
"ReplicatedSend" 0
Connection #0 to host localhost left intact
```
Configuration options for
http_handlers
work as follows.
rule
can configure the following parameters:
-
method
-
headers
-
url
-
full_url
-
handler
Each of these are discussed below:
method
is responsible for matching the method part of the HTTP request.
method
fully conforms to the definition of [method](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods) in the HTTP protocol. It is an optional configuration. If it is not defined in the configuration file, it does not match the method portion of the HTTP request.
url
is responsible for matching the URL part (path and query string) of the HTTP request.
If the
url
prefixed with
regex:
it expects
RE2
's regular expressions.
It is an optional configuration. If it is not defined in the configuration file, it does not match the URL portion of the HTTP request.
full_url
is the same as
url
, but includes the complete URL, i.e.
schema://host:port/path?query_string
.
Note, ClickHouse does not support "virtual hosts", so the
host
is an IP address (and not the value of
Host
header).
empty_query_string
- ensures that there is no query string (
?query_string
) in the request
headers
are responsible for matching the header part of the HTTP request. It is compatible with RE2's regular expressions. It is an optional
configuration. If it is not defined in the configuration file, it does not match the header portion of the HTTP request.
handler
contains the main processing part.
It can have the following
type
:
-
predefined_query_handler
-
dynamic_query_handler
-
static
-
redirect
And the following parameters:
-
query
— use with
predefined_query_handler
type, executes query when the handler is called.
-
query_param_name
— use with
dynamic_query_handler
type, extracts and executes the value corresponding to the
query_param_name
value in
HTTP request parameters.
-
status
— use with
static
type, response status code.
-
content_type
— use with any type, response
content-type
.
-
http_response_headers
— use with any type, response headers map. Could be used to set content type as well.
-
response_content
— use with
static
type, response content sent to the client; when using the prefix 'file://' or 'config://', the content is read from the file or configuration and sent to the client.
-
user
- user to execute the query from (default user is
default
).
Note: you do not need to specify a password for this user.
The configuration methods for different
type
s are discussed next.
predefined_query_handler {#predefined_query_handler}
predefined_query_handler
supports setting
Settings
and
query_params
values. You can configure
query
in the type of
predefined_query_handler
.
query
value is a predefined query of
predefined_query_handler
, which is executed by ClickHouse when an HTTP request is matched and the result of the query is returned. It is a required configuration.
The following example defines the values of
max_threads
and
max_final_threads
settings, then queries the system table to check whether these settings were set successfully.
:::note
To keep the default
handlers
such as
query
,
play
,
ping
, add the
<defaults/>
rule.
:::
For example:
xml
<http_handlers>
<rule>
<url><![CDATA[regex:/query_param_with_url/(?P<name_1>[^/]+)]]></url>
<methods>GET</methods>
<headers>
<XXX>TEST_HEADER_VALUE</XXX>
<PARAMS_XXX><![CDATA[regex:(?P<name_2>[^/]+)]]></PARAMS_XXX>
</headers>
<handler>
<type>predefined_query_handler</type>
<query>
SELECT name, value FROM system.settings
WHERE name IN ({name_1:String}, {name_2:String})
</query>
</handler>
</rule>
<defaults/>
</http_handlers>
bash
curl -H 'XXX:TEST_HEADER_VALUE' -H 'PARAMS_XXX:max_final_threads' 'http://localhost:8123/query_param_with_url/max_threads?max_threads=1&max_final_threads=2'
max_final_threads 2
max_threads 1
:::note
In one
predefined_query_handler
only one
query
is supported.
:::
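The named capture groups in the rule above (`name_1` in the URL, `name_2` in the header) become query parameters. ClickHouse matches these patterns with RE2; Python's `re` engine accepts the same named-group syntax for this pattern, so the URL extraction can be sketched as:

```python
import re

# Same pattern as in the <url> rule above (RE2 and Python `re` agree here).
pattern = re.compile(r"/query_param_with_url/(?P<name_1>[^/]+)")
m = pattern.match("/query_param_with_url/max_threads")
print(m.group("name_1"))  # max_threads
```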
dynamic_query_handler {#dynamic_query_handler}
In
dynamic_query_handler
, the query is written in the form of parameter of the HTTP request. The difference is that in
predefined_query_handler
, the query is written in the configuration file.
query_param_name
can be configured in
dynamic_query_handler
.
ClickHouse extracts and executes the value corresponding to the
query_param_name
value in the URL of the HTTP request. The default value of
query_param_name
is
/query
. It is an optional configuration. If there is no definition in the configuration file, the parameter is not passed in.
To experiment with this functionality, the following example defines the values of
max_threads
and
max_final_threads
and
queries
whether the settings were set successfully.
Example:
xml
<http_handlers>
<rule>
<headers>
<XXX>TEST_HEADER_VALUE_DYNAMIC</XXX>
</headers>
<handler>
<type>dynamic_query_handler</type>
<query_param_name>query_param</query_param_name>
</handler>
</rule>
<defaults/>
</http_handlers>
bash
curl -H 'XXX:TEST_HEADER_VALUE_DYNAMIC' 'http://localhost:8123/own?max_threads=1&max_final_threads=2&param_name_1=max_threads&param_name_2=max_final_threads&query_param=SELECT%20name,value%20FROM%20system.settings%20where%20name%20=%20%7Bname_1:String%7D%20OR%20name%20=%20%7Bname_2:String%7D'
max_threads 1
max_final_threads 2
static {#static}
static
can return
content_type
,
status
and
response_content
.
response_content
can return the specified content.
For example, to return a message "Say Hi!":
xml
<http_handlers>
<rule>
<methods>GET</methods>
<headers><XXX>xxx</XXX></headers>
<url>/hi</url>
<handler>
<type>static</type>
<status>402</status>
<content_type>text/html; charset=UTF-8</content_type>
<http_response_headers>
<Content-Language>en</Content-Language>
<X-My-Custom-Header>43</X-My-Custom-Header>
</http_response_headers>
#highlight-next-line
<response_content>Say Hi!</response_content>
</handler>
</rule>
<defaults/>
</http_handlers>
http_response_headers
could be used to set the content type instead of
content_type
.
xml
<http_handlers>
<rule>
<methods>GET</methods>
<headers><XXX>xxx</XXX></headers>
<url>/hi</url>
<handler>
<type>static</type>
<status>402</status>
#begin-highlight
<http_response_headers>
<Content-Type>text/html; charset=UTF-8</Content-Type>
<Content-Language>en</Content-Language>
<X-My-Custom-Header>43</X-My-Custom-Header>
</http_response_headers>
#end-highlight
<response_content>Say Hi!</response_content>
</handler>
</rule>
<defaults/>
</http_handlers>
```bash
curl -vv -H 'XXX:xxx' 'http://localhost:8123/hi'
* Trying ::1...
* Connected to localhost (::1) port 8123 (#0)
GET /hi HTTP/1.1
Host: localhost:8123
User-Agent: curl/7.47.0
Accept: */*
XXX:xxx
< HTTP/1.1 402 Payment Required
< Date: Wed, 29 Apr 2020 03:51:26 GMT
< Connection: Keep-Alive
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","memory_usage":"8451671"}
<
* Connection #0 to host localhost left intact
Say Hi!%
```
To find the content from the configuration and send it to the client:

```xml
<get_config_static_handler><![CDATA[<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>]]></get_config_static_handler>

<http_handlers>
    <rule>
        <methods>GET</methods>
        <headers><XXX>xxx</XXX></headers>
        <url>/get_config_static_handler</url>
        <handler>
            <type>static</type>
            <response_content>config://get_config_static_handler</response_content>
        </handler>
    </rule>
</http_handlers>
```
```bash
$ curl -v -H 'XXX:xxx' 'http://localhost:8123/get_config_static_handler'
* Trying ::1...
* Connected to localhost (::1) port 8123 (#0)
GET /get_config_static_handler HTTP/1.1
Host: localhost:8123
User-Agent: curl/7.47.0
Accept: */*
XXX:xxx
< HTTP/1.1 200 OK
< Date: Wed, 29 Apr 2020 04:01:24 GMT
< Connection: Keep-Alive
< Content-Type: text/plain; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","memory_usage":"8451671"}
<
* Connection #0 to host localhost left intact
%
```
To find the content from the file and send it to the client:
xml
<http_handlers>
<rule>
<methods>GET</methods>
<headers><XXX>xxx</XXX></headers>
<url>/get_absolute_path_static_handler</url>
<handler>
<type>static</type>
<content_type>text/html; charset=UTF-8</content_type>
<http_response_headers>
<ETag>737060cd8c284d8af7ad3082f209582d</ETag>
</http_response_headers>
<response_content>file:///absolute_path_file.html</response_content>
</handler>
</rule>
<rule>
<methods>GET</methods>
<headers><XXX>xxx</XXX></headers>
<url>/get_relative_path_static_handler</url>
<handler>
<type>static</type>
<content_type>text/html; charset=UTF-8</content_type>
<http_response_headers>
<ETag>737060cd8c284d8af7ad3082f209582d</ETag>
</http_response_headers>
<response_content>file://./relative_path_file.html</response_content>
</handler>
</rule>
</http_handlers>
```bash
$ user_files_path='/var/lib/clickhouse/user_files'
$ sudo echo "<html><body>Relative Path File</body></html>" > $user_files_path/relative_path_file.html
$ sudo echo "<html><body>Absolute Path File</body></html>" > $user_files_path/absolute_path_file.html
$ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_absolute_path_static_handler'
* Trying ::1...
* Connected to localhost (::1) port 8123 (#0)
GET /get_absolute_path_static_handler HTTP/1.1
Host: localhost:8123
User-Agent: curl/7.47.0
Accept: */*
XXX:xxx
< HTTP/1.1 200 OK
< Date: Wed, 29 Apr 2020 04:18:16 GMT
< Connection: Keep-Alive
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","memory_usage":"8451671"}
<
<html><body>Absolute Path File</body></html>
Connection #0 to host localhost left intact
$ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'
Trying ::1...
Connected to localhost (::1) port 8123 (#0)
GET /get_relative_path_static_handler HTTP/1.1
Host: localhost:8123
User-Agent: curl/7.47.0
Accept: */*
XXX:xxx