id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
0cfee71c-914e-4d99-bff1-d3b105021b26 | description: 'The engine allows importing and exporting data to SQLite and supports queries
to SQLite tables directly from ClickHouse.'
sidebar_label: 'SQLite'
sidebar_position: 185
slug: /engines/table-engines/integrations/sqlite
title: 'SQLite table engine'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
SQLite table engine
The engine allows importing and exporting data to SQLite and supports queries to SQLite tables directly from ClickHouse.
Creating a table {#creating-a-table}
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name
(
    name1 [type1],
    name2 [type2], ...
) ENGINE = SQLite('db_path', 'table')
```
Engine Parameters
- `db_path` — Path to the SQLite file with a database.
- `table` — Name of a table in the SQLite database.
Usage example {#usage-example}
Shows a query creating the SQLite table:
```sql
SHOW CREATE TABLE sqlite_db.table2;
```

```text
CREATE TABLE SQLite.table2
(
    `col1` Nullable(Int32),
    `col2` Nullable(String)
)
ENGINE = SQLite('sqlite.db','table2');
```
Returns the data from the table:
```sql
SELECT * FROM sqlite_db.table2 ORDER BY col1;
```

```text
┌─col1─┬─col2──┐
│    1 │ text1 │
│    2 │ text2 │
│    3 │ text3 │
└──────┴───────┘
```
See Also
- SQLite engine
- sqlite table function | {"source_file": "sqlite.md"} | [
-0.021504055708646774,
-0.06914684921503067,
-0.049359280616045,
0.06589653342962265,
0.0030910130590200424,
-0.0006787538877688348,
0.0652373656630516,
0.012992089614272118,
-0.10843457281589508,
0.025094307959079742,
0.016777245327830315,
-0.04054002836346626,
0.077693872153759,
-0.11722... |
3aa551cc-8ecd-4f10-abfd-75b669f86bff | description: 'This engine provides integration with the Amazon S3 ecosystem and allows
streaming imports. Similar to the Kafka and RabbitMQ engines, but provides S3-specific
features.'
sidebar_label: 'S3Queue'
sidebar_position: 181
slug: /engines/table-engines/integrations/s3queue
title: 'S3Queue table engine'
doc_type: 'reference'
import ScalePlanFeatureBadge from '@theme/badges/ScalePlanFeatureBadge'
S3Queue table engine
This engine provides integration with the Amazon S3 ecosystem and allows streaming imports. It is similar to the Kafka and RabbitMQ engines, but provides S3-specific features.
It is important to understand this note from the original PR for the S3Queue implementation: when the MATERIALIZED VIEW joins the engine, the S3Queue table engine starts collecting data in the background.
Create table {#creating-a-table}
```sql
CREATE TABLE s3_queue_engine_table (name String, value UInt32)
    ENGINE = S3Queue(path, [NOSIGN, | aws_access_key_id, aws_secret_access_key,] format, [compression], [headers])
    [SETTINGS]
    [mode = '',]
    [after_processing = 'keep',]
    [keeper_path = '',]
    [loading_retries = 0,]
    [processing_threads_num = 16,]
    [parallel_inserts = false,]
    [enable_logging_to_queue_log = true,]
    [last_processed_path = '',]
    [tracked_files_limit = 1000,]
    [tracked_file_ttl_sec = 0,]
    [polling_min_timeout_ms = 1000,]
    [polling_max_timeout_ms = 10000,]
    [polling_backoff_ms = 0,]
    [cleanup_interval_min_ms = 10000,]
    [cleanup_interval_max_ms = 30000,]
    [buckets = 0,]
    [list_objects_batch_size = 1000,]
    [enable_hash_ring_filtering = 0,]
    [max_processed_files_before_commit = 100,]
    [max_processed_rows_before_commit = 0,]
    [max_processed_bytes_before_commit = 0,]
    [max_processing_time_sec_before_commit = 0]
```
:::warning
Before 24.7, the `s3queue_` prefix was required for all settings apart from `mode`, `after_processing`, and `keeper_path`.
:::
Engine parameters
S3Queue parameters are the same as those supported by the S3 table engine. See the parameters section here.
Example
```sql
CREATE TABLE s3queue_engine_table (name String, value UInt32)
ENGINE = S3Queue('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/*', 'CSV', 'gzip')
SETTINGS
    mode = 'unordered';
```
Using named collections:
```xml
<clickhouse>
    <named_collections>
        <s3queue_conf>
            <url>https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/*</url>
            <access_key_id>test</access_key_id>
            <secret_access_key>test</secret_access_key>
        </s3queue_conf>
    </named_collections>
</clickhouse>
```
```sql
CREATE TABLE s3queue_engine_table (name String, value UInt32)
ENGINE = S3Queue(s3queue_conf, format = 'CSV', compression_method = 'gzip')
SETTINGS
    mode = 'ordered';
```
Settings {#settings}
To get a list of settings configured for the table, use the system.s3_queue_settings table. Available from 24.10.
Mode {#mode} | {"source_file": "s3queue.md"} | [
-0.0647626593708992,
-0.06535153836011887,
-0.1438402682542801,
-0.005026510916650295,
0.03606662526726723,
0.05763305351138115,
0.001641182810999453,
-0.04637071117758751,
0.028710801154375076,
0.00011805837857536972,
-0.004590210039168596,
-0.03814777731895447,
0.04688384011387825,
-0.11... |
c6da609b-4fdb-4cb5-9732-a56af5b7d4eb | Settings {#settings}
To get a list of settings configured for the table, use the system.s3_queue_settings table. Available from 24.10.
Mode {#mode}
Possible values:
- unordered — In unordered mode, the set of all already-processed files is tracked with persistent nodes in ZooKeeper.
- ordered — In ordered mode, files are processed in lexicographic order. This means that if a file named 'BBB' was processed at some point and a file named 'AA' is later added to the bucket, it will be ignored. Only the maximum name (in the lexicographic sense) of a successfully consumed file, and the names of files that will be retried after an unsuccessful loading attempt, are stored in ZooKeeper.
Default value: ordered in versions before 24.6. Starting with 24.6 there is no default value; the setting must be specified manually. For tables created on earlier versions the default value remains Ordered for compatibility.
after_processing {#after_processing}
Delete or keep the file after successful processing.
Possible values:
- keep.
- delete.
Default value: keep.
keeper_path {#keeper_path}
The path in ZooKeeper can be specified as a table engine setting, or a default path can be formed from the global configuration-provided path and the table UUID.
Possible values:
- String.
Default value: /.
s3queue_loading_retries {#loading_retries}
Retry file loading up to the specified number of times. By default, there are no retries.
Possible values:
- Positive integer.
Default value: 0.
s3queue_processing_threads_num {#processing_threads_num}
Number of threads to perform processing. Applies only to Unordered mode.
Default value: the number of CPUs, or 16.
s3queue_parallel_inserts {#parallel_inserts}
By default, processing_threads_num produces one INSERT, so it only downloads files and parses them in multiple threads.
This limits parallelism, so for better throughput use parallel_inserts=true; this allows inserting data in parallel (but keep in mind that it results in a higher number of generated data parts for the MergeTree family).
:::note
INSERTs will be spawned with respect to the max_process*_before_commit settings.
:::
Default value: false.
s3queue_enable_logging_to_s3queue_log {#enable_logging_to_s3queue_log}
Enable logging to system.s3queue_log.
Default value: 0.
s3queue_polling_min_timeout_ms {#polling_min_timeout_ms}
Specifies the minimum time, in milliseconds, that ClickHouse waits before making the next polling attempt.
Possible values:
- Positive integer.
Default value: 1000.
s3queue_polling_max_timeout_ms {#polling_max_timeout_ms}
Defines the maximum time, in milliseconds, that ClickHouse waits before initiating the next polling attempt.
Possible values:
- Positive integer.
Default value: 10000.
s3queue_polling_backoff_ms {#polling_backoff_ms} | {"source_file": "s3queue.md"} | [
-0.03640495985746384,
-0.01474732905626297,
-0.06831547617912292,
0.02169528789818287,
0.016957037150859833,
-0.026255793869495392,
0.020888429135084152,
-0.016732024028897285,
-0.005434567108750343,
0.07444333285093307,
0.010719073005020618,
0.02541220374405384,
0.021676180884242058,
-0.0... |
76d1bc32-22b6-4e75-8089-6a84d2ffb02b | Possible values:
- Positive integer.
Default value: 10000.
s3queue_polling_backoff_ms {#polling_backoff_ms}
Determines the additional wait time added to the previous polling interval when no new files are found. The next poll occurs after the sum of the previous interval and this backoff value, or the maximum interval, whichever is lower.
Possible values:
- Positive integer.
Default value: 0.
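The interaction between the three polling settings can be sketched as follows. This is an illustrative model of the documented behavior, not the engine's actual implementation; the function name and defaults are hypothetical:

```python
def next_polling_interval_ms(previous_ms: int,
                             backoff_ms: int = 0,
                             min_timeout_ms: int = 1000,
                             max_timeout_ms: int = 10000) -> int:
    """Model of the documented backoff: when no new files are found, the
    next poll waits previous + backoff, clamped to [min, max] timeouts."""
    return max(min_timeout_ms, min(previous_ms + backoff_ms, max_timeout_ms))

# With backoff_ms = 500 the idle polling interval grows by 500 ms per
# empty poll until it is capped at polling_max_timeout_ms.
interval = 1000
for _ in range(25):
    interval = next_polling_interval_ms(interval, backoff_ms=500)
print(interval)  # capped at 10000
```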
s3queue_tracked_files_limit {#tracked_files_limit}
Allows limiting the number of ZooKeeper nodes if the 'unordered' mode is used; does nothing for 'ordered' mode.
If the limit is reached, the oldest processed files are deleted from the ZooKeeper node and processed again.
Possible values:
- Positive integer.
Default value: 1000.
s3queue_tracked_file_ttl_sec {#tracked_file_ttl_sec}
Maximum number of seconds to store processed files in the ZooKeeper node (stored forever by default) for 'unordered' mode; does nothing for 'ordered' mode.
After the specified number of seconds, the file will be re-imported.
Possible values:
- Positive integer.
Default value: 0.
s3queue_cleanup_interval_min_ms {#cleanup_interval_min_ms}
For 'Ordered' mode. Defines the minimum boundary of the reschedule interval for the background task that maintains the tracked file TTL and the maximum set of tracked files.
Default value: 10000.
s3queue_cleanup_interval_max_ms {#cleanup_interval_max_ms}
For 'Ordered' mode. Defines the maximum boundary of the reschedule interval for the background task that maintains the tracked file TTL and the maximum set of tracked files.
Default value: 30000.
s3queue_buckets {#buckets}
For 'Ordered' mode. Available since 24.6. If there are several replicas of an S3Queue table, each working with the same metadata directory in Keeper, the value of s3queue_buckets needs to be at least the number of replicas. If the s3queue_processing_threads setting is used as well, it makes sense to increase the value of s3queue_buckets even further, as it defines the actual parallelism of S3Queue processing.
use_persistent_processing_nodes {#use_persistent_processing_nodes}
By default, an S3Queue table uses ephemeral processing nodes, which can lead to duplicated data if the ZooKeeper session expires after S3Queue has started processing files but before it commits them in ZooKeeper. This setting forces the server to eliminate the possibility of duplicates in case of an expired Keeper session.
persistent_processing_nodes_ttl_seconds {#persistent_processing_nodes_ttl_seconds}
In case of a non-graceful server termination, it is possible that, if use_persistent_processing_nodes is enabled, some processing nodes are not removed. This setting defines the period of time after which these processing nodes can safely be cleaned up.
Default value: 3600 (1 hour).
S3-related settings {#s3-settings} | {"source_file": "s3queue.md"} | [
-0.0344192273914814,
-0.00923642423003912,
-0.03732464089989662,
0.024212148040533066,
0.08762483298778534,
-0.010617339052259922,
-0.00941411592066288,
-0.026574861258268356,
0.0134138697758317,
0.07020572572946548,
-0.011302643455564976,
0.03579762578010559,
0.019650425761938095,
-0.0237... |
abd53124-cf06-4202-8154-cebccef5ae95 | Default value: 3600 (1 hour).
S3-related settings {#s3-settings}
The engine supports all S3-related settings. For more information about S3 settings, see here.
S3 role-based access {#s3-role-based-access}
The S3Queue table engine supports role-based access.
Refer to the documentation here for steps to configure a role to access your bucket.
Once the role is configured, a roleARN can be passed via an extra_credentials parameter as shown below:
```sql
CREATE TABLE s3_table
(
    ts DateTime,
    value UInt64
)
ENGINE = S3Queue(
    'https://<your_bucket>/*.csv',
    extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/<your_role>'),
    'CSV')
SETTINGS
    ...
```
S3Queue ordered mode {#ordered-mode}
The S3Queue ordered processing mode allows storing less metadata in ZooKeeper, but has the limitation that files added later must have alphanumerically bigger names.
The S3Queue ordered mode, like unordered, supports the (s3queue_)processing_threads_num setting (the s3queue_ prefix is optional), which controls the number of threads that process S3 files locally on the server.
In addition, ordered mode introduces another setting called (s3queue_)buckets, which means "logical threads". In a distributed scenario, when there are several servers with S3Queue table replicas, this setting defines the number of processing units. Each processing thread on each S3Queue replica tries to lock a certain bucket for processing; each bucket is attributed to certain files by a hash of the file name. Therefore, in a distributed scenario it is highly recommended to set (s3queue_)buckets to at least the number of replicas, or bigger. It is fine to have the number of buckets bigger than the number of replicas. The most optimal scenario would be for the (s3queue_)buckets setting to equal the product of number_of_replicas and (s3queue_)processing_threads_num.
The setting (s3queue_)processing_threads_num is not recommended for use before version 24.6.
The setting (s3queue_)buckets is available starting with version 24.6.
Description {#description}
SELECT is not particularly useful for streaming import (except for debugging), because each file can be imported only once. It is more practical to create real-time threads using materialized views. To do this:
1. Use the engine to create a table for consuming from a specified path in S3 and consider it a data stream.
2. Create a table with the desired structure.
3. Create a materialized view that converts data from the engine and puts it into the previously created table.
When the MATERIALIZED VIEW joins the engine, it starts collecting data in the background.
Example: | {"source_file": "s3queue.md"} | [
-0.011091593652963638,
0.009104701690375805,
-0.09878760576248169,
0.009035538882017136,
-0.010798158124089241,
0.000001564923650221317,
0.02706380933523178,
-0.015997184440493584,
0.009013943374156952,
0.04353807121515274,
-0.048054177314043045,
-0.060295093804597855,
0.06785530596971512,
... |
30cb7d06-95a7-4f10-a489-df00f663affc | When the MATERIALIZED VIEW joins the engine, it starts collecting data in the background.
Example:
```sql
CREATE TABLE s3queue_engine_table (name String, value UInt32)
ENGINE=S3Queue('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/*', 'CSV', 'gzip')
SETTINGS
mode = 'unordered';
CREATE TABLE stats (name String, value UInt32)
ENGINE = MergeTree() ORDER BY name;
CREATE MATERIALIZED VIEW consumer TO stats
AS SELECT name, value FROM s3queue_engine_table;
SELECT * FROM stats ORDER BY name;
```
Virtual columns {#virtual-columns}
- _path — Path to the file.
- _file — Name of the file.
- _size — Size of the file.
- _time — Time of the file creation.
For more information about virtual columns see here.
Wildcards in path {#wildcards-in-path}
The path argument can specify multiple files using bash-like wildcards. To be processed, a file should exist and match the whole path pattern. The listing of files is determined during SELECT (not at CREATE moment).
- * — Substitutes any number of any characters except /, including the empty string.
- ** — Substitutes any number of any characters including /, including the empty string.
- ? — Substitutes any single character.
- {some_string,another_string,yet_another_one} — Substitutes any of the strings 'some_string', 'another_string', 'yet_another_one'.
- {N..M} — Substitutes any number in the range from N to M, including both borders. N and M can have leading zeroes, e.g. 000..078.
Constructions with {} are similar to the remote table function.
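A rough model of these wildcard rules, for intuition only. The matching is done server-side; this helper is hypothetical and omits the numeric {N..M} form:

```python
import re

def path_pattern_to_regex(pattern: str) -> "re.Pattern":
    """Translate the documented wildcards (*, **, ?, {a,b,c}) into a regex."""
    out = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if pattern.startswith("**", i):
            out.append("[^\\0]*")   # any characters, including '/'
            i += 2
        elif c == "*":
            out.append("[^/]*")     # any characters except '/'
            i += 1
        elif c == "?":
            out.append(".")         # any single character
            i += 1
        elif c == "{":
            j = pattern.index("}", i)
            alts = pattern[i + 1:j].split(",")
            out.append("(" + "|".join(map(re.escape, alts)) + ")")
            i = j + 1
        else:
            out.append(re.escape(c))
            i += 1
    return re.compile("^" + "".join(out) + "$")

rx = path_pattern_to_regex("data/*/{a,b}-?.csv")
print(bool(rx.match("data/2024/a-1.csv")))     # True
print(bool(rx.match("data/2024/05/a-1.csv")))  # False: '*' does not cross '/'
```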
Limitations {#limitations}
1. Duplicated rows can be a result of:
- an exception happening during parsing in the middle of file processing, when retries are enabled via s3queue_loading_retries;
- S3Queue being configured on multiple servers pointing to the same path in ZooKeeper, when the Keeper session expires before one server managed to commit a processed file, which could lead to another server taking over processing of a file that was partially or fully processed by the first server (however, this is no longer true since version 25.8 if use_persistent_processing_nodes = 1);
- abnormal server termination.
2. If S3Queue is configured on multiple servers pointing to the same path in ZooKeeper and Ordered mode is used, then s3queue_loading_retries will not work. This will be fixed soon.
Introspection {#introspection}
For introspection use the system.s3queue stateless table and the system.s3queue_log persistent table.
system.s3queue. This table is not persistent and shows the in-memory state of S3Queue: which files are currently being processed, and which files are processed or failed. | {"source_file": "s3queue.md"} | [
-0.001963173970580101,
-0.004216316621750593,
-0.13314373791217804,
0.01842648535966873,
0.00029134558280929923,
-0.025435835123062134,
-0.036073457449674606,
-0.0015300950035452843,
0.049902718514204025,
0.04963621124625206,
0.01885521039366722,
-0.03899578005075455,
0.040151841938495636,
... |
12200850-071e-4f92-ad3d-d5460d4ba8ed | system.s3queue. This table is not persistent and shows the in-memory state of S3Queue: which files are currently being processed, and which files are processed or failed.
```sql
┌─statement──────────────────────────────────────────────────────────────────────────────────┐
│ CREATE TABLE system.s3queue
(
    `database` String,
    `table` String,
    `file_name` String,
    `rows_processed` UInt64,
    `status` String,
    `processing_start_time` Nullable(DateTime),
    `processing_end_time` Nullable(DateTime),
    `ProfileEvents` Map(String, UInt64),
    `exception` String
)
ENGINE = SystemS3Queue
COMMENT 'Contains in-memory state of S3Queue metadata and currently processed rows per file.' │
└────────────────────────────────────────────────────────────────────────────────────────────┘
```
Example:
```sql
SELECT *
FROM system.s3queue
Row 1:
──────
zookeeper_path: /clickhouse/s3queue/25ea5621-ae8c-40c7-96d0-cec959c5ab88/3b3f66a1-9866-4c2e-ba78-b6bfa154207e
file_name: wikistat/original/pageviews-20150501-030000.gz
rows_processed: 5068534
status: Processed
processing_start_time: 2023-10-13 13:09:48
processing_end_time: 2023-10-13 13:10:31
ProfileEvents: {'ZooKeeperTransactions':3,'ZooKeeperGet':2,'ZooKeeperMulti':1,'SelectedRows':5068534,'SelectedBytes':198132283,'ContextLock':1,'S3QueueSetFileProcessingMicroseconds':2480,'S3QueueSetFileProcessedMicroseconds':9985,'S3QueuePullMicroseconds':273776,'LogTest':17}
exception:
```
system.s3queue_log. Persistent table. Has the same information as system.s3queue, but for processed and failed files.
The table has the following structure:
```sql
SHOW CREATE TABLE system.s3queue_log
Query id: 0ad619c3-0f2a-4ee4-8b40-c73d86e04314 | {"source_file": "s3queue.md"} | [
-0.012538651935756207,
-0.06656055152416229,
-0.17724764347076416,
0.02067599445581436,
0.05343388020992279,
-0.053660281002521515,
0.08404847234487534,
0.02030664123594761,
0.03953664377331734,
0.07338303327560425,
0.00447538448497653,
0.007806263864040375,
0.07118147611618042,
-0.0396567... |
0cdc0a96-82b1-46a5-bc3d-f6cb5074d89c | The table has the following structure:
```sql
SHOW CREATE TABLE system.s3queue_log
Query id: 0ad619c3-0f2a-4ee4-8b40-c73d86e04314
┌─statement──────────────────────────────────────────────────────────────────────────────────┐
│ CREATE TABLE system.s3queue_log
(
    `event_date` Date,
    `event_time` DateTime,
    `table_uuid` String,
    `file_name` String,
    `rows_processed` UInt64,
    `status` Enum8('Processed' = 0, 'Failed' = 1),
    `processing_start_time` Nullable(DateTime),
    `processing_end_time` Nullable(DateTime),
    `ProfileEvents` Map(String, UInt64),
    `exception` String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, event_time)
SETTINGS index_granularity = 8192 │
└────────────────────────────────────────────────────────────────────────────────────────────┘
```
In order to use system.s3queue_log define its configuration in the server config file:
```xml
<s3queue_log>
    <database>system</database>
    <table>s3queue_log</table>
</s3queue_log>
```
Example:
```sql
SELECT *
FROM system.s3queue_log
Row 1:
──────
event_date: 2023-10-13
event_time: 2023-10-13 13:10:12
table_uuid:
file_name: wikistat/original/pageviews-20150501-020000.gz
rows_processed: 5112621
status: Processed
processing_start_time: 2023-10-13 13:09:48
processing_end_time: 2023-10-13 13:10:12
ProfileEvents: {'ZooKeeperTransactions':3,'ZooKeeperGet':2,'ZooKeeperMulti':1,'SelectedRows':5112621,'SelectedBytes':198577687,'ContextLock':1,'S3QueueSetFileProcessingMicroseconds':1934,'S3QueueSetFileProcessedMicroseconds':17063,'S3QueuePullMicroseconds':5841972,'LogTest':17}
exception:
``` | {"source_file": "s3queue.md"} | [
0.056875236332416534,
-0.0522296316921711,
-0.07458774000406265,
-0.018800269812345505,
0.004429057706147432,
-0.03451450541615486,
0.015478807501494884,
0.0517779104411602,
0.004615903832018375,
0.03619205206632614,
0.04272833466529846,
-0.05887195095419884,
0.04390609264373779,
-0.098454... |
e00c292d-ce4f-47a2-b127-f942177d8994 | description: 'This engine provides an integration with the Azure Blob Storage ecosystem,
allowing streaming data import.'
sidebar_label: 'AzureQueue'
sidebar_position: 181
slug: /engines/table-engines/integrations/azure-queue
title: 'AzureQueue table engine'
doc_type: 'reference'
AzureQueue table engine
This engine provides an integration with the Azure Blob Storage ecosystem, allowing streaming data import.
Create table {#creating-a-table}
```sql
CREATE TABLE test (name String, value UInt32)
    ENGINE = AzureQueue(...)
    [SETTINGS]
    [mode = '',]
    [after_processing = 'keep',]
    [keeper_path = '',]
    ...
```
Engine parameters
AzureQueue parameters are the same as those supported by the AzureBlobStorage table engine. See the parameters section here.
Similar to the AzureBlobStorage table engine, users can use the Azurite emulator for local Azure Storage development. Further details here.
Example
```sql
CREATE TABLE azure_queue_engine_table
(
    `key` UInt64,
    `data` String
)
ENGINE = AzureQueue('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;', 'testcontainer', '*', 'CSV')
SETTINGS mode = 'unordered'
```
Settings {#settings}
The set of supported settings is the same as for the S3Queue table engine, but without the s3queue_ prefix. See the full list of settings.
To get a list of settings configured for the table, use the system.azure_queue_settings table. Available from 24.10.
Description {#description}
SELECT is not particularly useful for streaming import (except for debugging), because each file can be imported only once. It is more practical to create real-time threads using materialized views. To do this:
1. Use the engine to create a table for consuming from a specified path in Azure Blob Storage and consider it a data stream.
2. Create a table with the desired structure.
3. Create a materialized view that converts data from the engine and puts it into the previously created table.
When the MATERIALIZED VIEW joins the engine, it starts collecting data in the background.
Example:
```sql
CREATE TABLE azure_queue_engine_table (key UInt64, data String)
ENGINE = AzureQueue('<connection_string>', 'CSV', 'gzip')
SETTINGS
mode = 'unordered';
CREATE TABLE stats (key UInt64, data String)
ENGINE = MergeTree() ORDER BY key;
CREATE MATERIALIZED VIEW consumer TO stats
AS SELECT key, data FROM azure_queue_engine_table;
SELECT * FROM stats ORDER BY key;
```
Virtual columns {#virtual-columns}
- _path — Path to the file.
- _file — Name of the file.
For more information about virtual columns see here.
Introspection {#introspection}
Enable logging for the table via the table setting enable_logging_to_queue_log=1.
Introspection capabilities are the same as the S3Queue table engine, with several distinct differences: | {"source_file": "azure-queue.md"} | [
-0.005620880983769894,
-0.04055175930261612,
-0.12461699545383453,
0.04792756959795952,
-0.08114388585090637,
0.02583911456167698,
0.0793556347489357,
-0.023136908188462257,
-0.042929183691740036,
0.09906850755214691,
0.028205377981066704,
-0.05078817903995514,
0.13543164730072021,
-0.0245... |
5c385597-d1bd-4179-a9e4-9f3ca0eb679a | Enable logging for the table via the table setting enable_logging_to_queue_log=1.
Introspection capabilities are the same as the S3Queue table engine, with several distinct differences:
1. Use system.azure_queue for the in-memory state of the queue for server versions >= 25.1. For older versions use system.s3queue (it contains information for azure tables as well).
2. Enable system.azure_queue_log via the main ClickHouse configuration, e.g.:
```xml
<azure_queue_log>
    <database>system</database>
    <table>azure_queue_log</table>
</azure_queue_log>
```
This persistent table has the same information as system.s3queue, but for processed and failed files.
The table has the following structure:
```sql
CREATE TABLE system.azure_queue_log
(
    `hostname` LowCardinality(String) COMMENT 'Hostname',
    `event_date` Date COMMENT 'Event date of writing this log row',
    `event_time` DateTime COMMENT 'Event time of writing this log row',
    `database` String COMMENT 'The name of a database where current S3Queue table lives.',
    `table` String COMMENT 'The name of S3Queue table.',
    `uuid` String COMMENT 'The UUID of S3Queue table',
    `file_name` String COMMENT 'File name of the processing file',
    `rows_processed` UInt64 COMMENT 'Number of processed rows',
    `status` Enum8('Processed' = 0, 'Failed' = 1) COMMENT 'Status of the processing file',
    `processing_start_time` Nullable(DateTime) COMMENT 'Time of the start of processing the file',
    `processing_end_time` Nullable(DateTime) COMMENT 'Time of the end of processing the file',
    `exception` String COMMENT 'Exception message if happened'
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, event_time)
SETTINGS index_granularity = 8192
COMMENT 'Contains logging entries with the information files processes by S3Queue engine.'
```
Example:
```sql
SELECT *
FROM system.azure_queue_log
LIMIT 1
FORMAT Vertical
Row 1:
──────
hostname: clickhouse
event_date: 2024-12-16
event_time: 2024-12-16 13:42:47
database: default
table: azure_queue_engine_table
uuid: 1bc52858-00c0-420d-8d03-ac3f189f27c8
file_name: test_1.csv
rows_processed: 3
status: Processed
processing_start_time: 2024-12-16 13:42:47
processing_end_time: 2024-12-16 13:42:47
exception:
1 row in set. Elapsed: 0.002 sec.
``` | {"source_file": "azure-queue.md"} | [
0.036954112350940704,
-0.0782916247844696,
-0.11521261930465698,
0.07083135843276978,
0.0001187500893138349,
-0.061593104153871536,
0.08877497166395187,
-0.09480472654104233,
0.08388996869325638,
0.13989397883415222,
0.009013615548610687,
-0.018392981961369514,
0.08127555251121521,
-0.0214... |
c19c9f76-24f8-461d-95aa-49d06d82dfb9 | description: 'A table engine storing time series, i.e. a set of values associated
with timestamps and tags (or labels).'
sidebar_label: 'TimeSeries'
sidebar_position: 60
slug: /engines/table-engines/special/time_series
title: 'TimeSeries table engine'
doc_type: 'reference'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
TimeSeries table engine
A table engine storing time series, i.e. a set of values associated with timestamps and tags (or labels):
```sql
metric_name1[tag1=value1, tag2=value2, ...] = {timestamp1: value1, timestamp2: value2, ...}
metric_name2[...] = ...
```
:::info
This is an experimental feature that may change in backwards-incompatible ways in future releases.
Enable usage of the TimeSeries table engine with the allow_experimental_time_series_table setting.
Input the command SET allow_experimental_time_series_table = 1.
:::
Syntax {#syntax}
```sql
CREATE TABLE name [(columns)] ENGINE=TimeSeries
[SETTINGS var1=value1, ...]
[DATA db.data_table_name | DATA ENGINE data_table_engine(arguments)]
[TAGS db.tags_table_name | TAGS ENGINE tags_table_engine(arguments)]
[METRICS db.metrics_table_name | METRICS ENGINE metrics_table_engine(arguments)]
```
Usage {#usage}
It's easier to start with everything set by default (it's allowed to create a TimeSeries table without specifying a list of columns):
```sql
CREATE TABLE my_table ENGINE=TimeSeries
```
Then this table can be used with the following protocols (a port must be assigned in the server configuration):
- prometheus remote-write
- prometheus remote-read
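As a sketch of the server-side configuration for those protocols (element names should be verified against the Prometheus protocols documentation for your version; the port and table name below are assumptions):

```xml
<prometheus>
    <port>9363</port>
    <handlers>
        <my_rule_1>
            <url>/write</url>
            <handler>
                <type>remote_write</type>
                <table>default.my_table</table>
            </handler>
        </my_rule_1>
        <my_rule_2>
            <url>/read</url>
            <handler>
                <type>remote_read</type>
                <table>default.my_table</table>
            </handler>
        </my_rule_2>
    </handlers>
</prometheus>
```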
Target tables {#target-tables}
A TimeSeries table doesn't have its own data; everything is stored in its target tables.
This is similar to how a materialized view works, with the difference that a materialized view has one target table, whereas a TimeSeries table has three target tables named data, tags, and metrics.
The target tables can either be specified explicitly in the CREATE TABLE query, or the TimeSeries table engine can generate inner target tables automatically.
The target tables are the following:
Data table {#data-table}
The data table contains time series associated with some identifier.
The data table must have columns:
| Name | Mandatory? | Default type | Possible types | Description |
|---|---|---|---|---|
| `id` | [x] | `UUID` | any | Identifies a combination of a metric name and tags |
| `timestamp` | [x] | `DateTime64(3)` | `DateTime64(X)` | A time point |
| `value` | [x] | `Float64` | `Float32` or `Float64` | A value associated with the `timestamp` |
Tags table {#tags-table}
The tags table contains identifiers calculated for each combination of a metric name and tags.
The tags table must have columns: | {"source_file": "time-series.md"} | [
-0.03243141621351242,
0.029479295015335083,
-0.06021224707365036,
0.03513089567422867,
-0.02900911122560501,
-0.006223399192094803,
0.04306876286864281,
0.017696402966976166,
-0.005061548203229904,
-0.018200896680355072,
0.0004408766108099371,
-0.08307220786809921,
0.03568360209465027,
-0.... |
fc3d9ad5-4e20-475a-9e69-9668339f1209 | Tags table {#tags-table}
The tags table contains identifiers calculated for each combination of a metric name and tags.
The tags table must have columns:
| Name | Mandatory? | Default type | Possible types | Description |
|---|---|---|---|---|
| `id` | [x] | `UUID` | any (must match the type of `id` in the data table) | An `id` identifies a combination of a metric name and tags. The DEFAULT expression specifies how to calculate such an identifier |
| `metric_name` | [x] | `LowCardinality(String)` | `String` or `LowCardinality(String)` | The name of a metric |
| `<tag_value_column>` | [ ] | `String` | `String` or `LowCardinality(String)` or `LowCardinality(Nullable(String))` | The value of a specific tag; the tag's name and the name of a corresponding column are specified in the tags_to_columns setting |
| `tags` | [x] | `Map(LowCardinality(String), String)` | `Map(String, String)` or `Map(LowCardinality(String), String)` or `Map(LowCardinality(String), LowCardinality(String))` | Map of tags excluding the tag `__name__` containing the name of a metric and excluding tags with names enumerated in the tags_to_columns setting |
| `all_tags` | [ ] | `Map(String, String)` | `Map(String, String)` or `Map(LowCardinality(String), String)` or `Map(LowCardinality(String), LowCardinality(String))` | Ephemeral column; each row is a map of all the tags excluding only the tag `__name__` containing the name of a metric. The only purpose of that column is to be used while calculating `id` |
| `min_time` | [ ] | `Nullable(DateTime64(3))` | `DateTime64(X)` or `Nullable(DateTime64(X))` | Minimum timestamp of time series with that `id`. The column is created if store_min_time_and_max_time is true |
| `max_time` | [ ] | `Nullable(DateTime64(3))` | `DateTime64(X)` or `Nullable(DateTime64(X))` | Maximum timestamp of time series with that `id`. The column is created if store_min_time_and_max_time is true |
Metrics table {#metrics-table}
The
metrics
table contains some information about the metrics being collected, the types of those metrics, and their descriptions.
The
metrics
table must have columns:
| Name | Mandatory? | Default type | Possible types | Description |
|---|---|---|---|---|
|
metric_family_name
| [x] |
String
|
String
or
LowCardinality(String)
| The name of a metric family |
|
type
| [x] |
String
|
String
or
LowCardinality(String)
| The type of a metric family, one of "counter", "gauge", "summary", "stateset", "histogram", "gaugehistogram" |
|
unit
| [x] |
String
|
String
or
LowCardinality(String)
| The unit used in a metric |
|
help
| [x] |
String
|
String
or
LowCardinality(String)
| The description of a metric |
Any row inserted into a
TimeSeries
table will in fact be stored in those three target tables.
A
TimeSeries
table contains all those columns from the
data
,
tags
,
metrics
tables.
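The three-way split described above can be sketched in Python. This is illustrative only: the hash standing in for `sipHash128`, and the metric metadata fields, are assumptions rather than the engine's actual code.

```python
import hashlib
import uuid

def split_time_series_row(metric_name, tags, timestamp, value):
    # Illustrative only: the real engine derives `id` with
    # sipHash128(metric_name, all_tags); blake2b stands in here.
    all_tags = dict(sorted(tags.items()))        # tags excluding __name__
    digest = hashlib.blake2b(
        (metric_name + repr(all_tags)).encode(), digest_size=16
    ).digest()
    series_id = uuid.UUID(bytes=digest)
    data_row = {"id": series_id, "timestamp": timestamp, "value": value}
    tags_row = {"id": series_id, "metric_name": metric_name, "tags": all_tags}
    metrics_row = {"metric_family_name": metric_name, "type": "counter",
                   "unit": "", "help": ""}       # hypothetical metadata values
    return data_row, tags_row, metrics_row

d, t, m = split_time_series_row("http_requests_total", {"job": "api"}, 1700000000.0, 42.0)
assert d["id"] == t["id"]   # data and tags rows share the same series id
```

The key point the sketch shows is that the `data` and `tags` rows are linked only through the computed `id`.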
Creation {#creation}
There are multiple ways to create a table with the
TimeSeries
table engine.
The simplest statement
sql
CREATE TABLE my_table ENGINE=TimeSeries
will actually create the following table (you can see that by executing
SHOW CREATE TABLE my_table
):
sql
CREATE TABLE my_table
(
`id` UUID DEFAULT reinterpretAsUUID(sipHash128(metric_name, all_tags)),
`timestamp` DateTime64(3),
`value` Float64,
`metric_name` LowCardinality(String),
`tags` Map(LowCardinality(String), String),
`all_tags` Map(String, String),
`min_time` Nullable(DateTime64(3)),
`max_time` Nullable(DateTime64(3)),
`metric_family_name` String,
`type` String,
`unit` String,
`help` String
)
ENGINE = TimeSeries
DATA ENGINE = MergeTree ORDER BY (id, timestamp)
DATA INNER UUID '01234567-89ab-cdef-0123-456789abcdef'
TAGS ENGINE = AggregatingMergeTree PRIMARY KEY metric_name ORDER BY (metric_name, id)
TAGS INNER UUID '01234567-89ab-cdef-0123-456789abcdef'
METRICS ENGINE = ReplacingMergeTree ORDER BY metric_family_name
METRICS INNER UUID '01234567-89ab-cdef-0123-456789abcdef'
So the columns were generated automatically, and there are also three inner UUIDs in this statement -
one for each inner target table that was created.
(Inner UUIDs are not shown normally until setting
show_table_uuid_in_table_create_query_if_not_nil
is set.)
Inner target tables have names like
.inner_id.data.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
,
.inner_id.tags.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
,
.inner_id.metrics.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
and each target table has columns which are a subset of the columns of the main
TimeSeries
table:
sql
CREATE TABLE default.`.inner_id.data.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`
(
`id` UUID,
`timestamp` DateTime64(3),
`value` Float64
)
ENGINE = MergeTree
ORDER BY (id, timestamp)
sql
CREATE TABLE default.`.inner_id.tags.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`
(
`id` UUID DEFAULT reinterpretAsUUID(sipHash128(metric_name, all_tags)),
`metric_name` LowCardinality(String),
`tags` Map(LowCardinality(String), String),
`all_tags` Map(String, String) EPHEMERAL,
`min_time` SimpleAggregateFunction(min, Nullable(DateTime64(3))),
`max_time` SimpleAggregateFunction(max, Nullable(DateTime64(3)))
)
ENGINE = AggregatingMergeTree
PRIMARY KEY metric_name
ORDER BY (metric_name, id)
sql
CREATE TABLE default.`.inner_id.metrics.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`
(
`metric_family_name` String,
`type` String,
`unit` String,
`help` String
)
ENGINE = ReplacingMergeTree
ORDER BY metric_family_name
Adjusting types of columns {#adjusting-column-types}
You can adjust the types of almost any column of the inner target tables by specifying them explicitly
while defining the main table. For example,
sql
CREATE TABLE my_table
(
timestamp DateTime64(6)
) ENGINE=TimeSeries
will make the inner
data
table store timestamp in microseconds instead of milliseconds:
sql
CREATE TABLE default.`.inner_id.data.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`
(
`id` UUID,
`timestamp` DateTime64(6),
`value` Float64
)
ENGINE = MergeTree
ORDER BY (id, timestamp)
The
id
column {#id-column}
The
id
column contains identifiers, every identifier is calculated for a combination of a metric name and tags.
The DEFAULT expression for the
id
column is an expression which will be used to calculate such identifiers.
Both the type of the
id
column and that expression can be adjusted by specifying them explicitly:
sql
CREATE TABLE my_table
(
id UInt64 DEFAULT sipHash64(metric_name, all_tags)
)
ENGINE=TimeSeries
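Whatever expression is chosen, the useful property is that the identifier is a deterministic function of the metric name and tags: re-inserting the same combination always yields the same `id`. A hedged Python sketch of that property (sha256 stands in for `sipHash64` here; it is not the engine's hash):

```python
import hashlib

def series_id(metric_name: str, all_tags: dict) -> int:
    # Stand-in for sipHash64(metric_name, all_tags); yields a UInt64-like value.
    payload = metric_name + "\x00" + "\x00".join(
        f"{k}={v}" for k, v in sorted(all_tags.items())
    )
    return int.from_bytes(hashlib.sha256(payload.encode()).digest()[:8], "little")

a = series_id("up", {"job": "api", "instance": "host1"})
b = series_id("up", {"instance": "host1", "job": "api"})  # same combination, different order
c = series_id("up", {"job": "api", "instance": "host2"})
assert a == b and a != c
```

Sorting the tags before hashing is what makes the id independent of tag order.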
The
tags
and
all_tags
columns {#tags-and-all-tags}
There are two columns containing maps of tags -
tags
and
all_tags
. In this example they mean the same, however they can be different
if setting
tags_to_columns
is used. This setting allows specifying that a specific tag should be stored in a separate column instead of being stored
in a map inside the
tags
column:
sql
CREATE TABLE my_table
ENGINE = TimeSeries
SETTINGS tags_to_columns = {'instance': 'instance', 'job': 'job'}
This statement will add columns:
sql
`instance` String,
`job` String
to the definition of both
my_table
and its inner
tags
target table. In this case the
tags
column will not contain tags
instance
and
job
,
but the
all_tags
column will contain them. The
all_tags
column is ephemeral and its only purpose is to be used in the DEFAULT expression
for the
id
column.
The types of columns can be adjusted by specifying them explicitly:
sql
CREATE TABLE my_table (
instance LowCardinality(String),
job LowCardinality(Nullable(String))
)
ENGINE=TimeSeries
SETTINGS tags_to_columns = {'instance': 'instance', 'job': 'job'}
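The effect of `tags_to_columns` on where each tag ends up can be sketched as follows. This is a client-side illustration of the setting's semantics, not engine code:

```python
def split_tags(all_tags: dict, tags_to_columns: dict) -> tuple:
    """Split a tag map per tags_to_columns: listed tags go to their own
    columns, everything else stays in the `tags` map. `__name__` is assumed
    to have been removed already."""
    columns = {col: all_tags.get(tag, "") for tag, col in tags_to_columns.items()}
    rest = {k: v for k, v in all_tags.items() if k not in tags_to_columns}
    return columns, rest

cols, rest = split_tags(
    {"instance": "host1:9100", "job": "node", "env": "prod"},
    {"instance": "instance", "job": "job"},
)
assert cols == {"instance": "host1:9100", "job": "node"}
assert rest == {"env": "prod"}
```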
Table engines of inner target tables {#inner-table-engines}
By default inner target tables use the following table engines:
- the
data
table uses
MergeTree
;
- the
tags
table uses
AggregatingMergeTree
because the same data is often inserted into this table multiple times, so we need a way
to remove duplicates, and also because aggregation is required for the columns
min_time
and
max_time
;
- the
metrics
table uses
ReplacingMergeTree
because the same data is often inserted into this table multiple times, so we need a way
to remove duplicates.
Other table engines can also be used for the inner target tables if specified explicitly:
sql
CREATE TABLE my_table ENGINE=TimeSeries
DATA ENGINE=ReplicatedMergeTree
TAGS ENGINE=ReplicatedAggregatingMergeTree
METRICS ENGINE=ReplicatedReplacingMergeTree
External target tables {#external-target-tables}
It's possible to make a
TimeSeries
table use a manually created table:
```sql
CREATE TABLE data_for_my_table
(
    `id` UUID,
    `timestamp` DateTime64(3),
    `value` Float64
)
ENGINE = MergeTree
ORDER BY (id, timestamp);

CREATE TABLE tags_for_my_table ...

CREATE TABLE metrics_for_my_table ...

CREATE TABLE my_table ENGINE=TimeSeries DATA data_for_my_table TAGS tags_for_my_table METRICS metrics_for_my_table;
```
Settings {#settings}
Here is a list of settings which can be specified while defining a
TimeSeries
table:
| Name | Type | Default | Description |
|---|---|---|---|
|
tags_to_columns
| Map | {} | Map specifying which tags should be put to separate columns in the
tags
table. Syntax:
{'tag1': 'column1', 'tag2': 'column2', ...}
|
|
use_all_tags_column_to_generate_id
| Bool | true | When generating an expression to calculate an identifier of a time series, this flag enables using the
all_tags
column in that calculation |
|
store_min_time_and_max_time
| Bool | true | If set to true then the table will store
min_time
and
max_time
for each time series |
|
aggregate_min_time_and_max_time
| Bool | true | When creating an inner target
tags
table, this flag enables using
SimpleAggregateFunction(min, Nullable(DateTime64(3)))
instead of just
Nullable(DateTime64(3))
as the type of the
min_time
column, and the same for the
max_time
column |
|
filter_by_min_time_and_max_time
| Bool | true | If set to true then the table will use the
min_time
and
max_time
columns for filtering time series |
Functions {#functions}
Here is a list of functions supporting a
TimeSeries
table as an argument:
-
timeSeriesData
-
timeSeriesTags
-
timeSeriesMetrics
description: 'MongoDB engine is a read-only table engine which allows reading data from
a remote collection.'
sidebar_label: 'MongoDB'
sidebar_position: 135
slug: /engines/table-engines/integrations/mongodb
title: 'MongoDB table engine'
doc_type: 'reference'
MongoDB table engine
The MongoDB engine is a read-only table engine which allows reading data from a remote
MongoDB
collection.
Only MongoDB v3.6+ servers are supported.
Seed list(
mongodb+srv
)
is not yet supported.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name
(
name1 [type1],
name2 [type2],
...
) ENGINE = MongoDB(host:port, database, collection, user, password[, options[, oid_columns]]);
Engine Parameters
| Parameter | Description |
|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
host:port
| MongoDB server address. |
|
database
| Remote database name. |
|
collection
| Remote collection name. |
|
user
| MongoDB user. |
|
password
| User password. |
|
options
| Optional. MongoDB connection string
options
as a URL formatted string. e.g.
'authSource=admin&ssl=true'
|
|
oid_columns
| Comma-separated list of columns that should be treated as
oid
in the WHERE clause.
_id
by default. |
:::tip
If you are using the MongoDB Atlas cloud offering, the connection URL can be obtained from the 'Atlas SQL' option.
Seed list(
mongodb+srv
) is not yet supported, but will be added in future releases.
:::
Alternatively, you can pass a URI:
sql
ENGINE = MongoDB(uri, collection[, oid_columns]);
Engine Parameters
| Parameter | Description |
|---------------|--------------------------------------------------------------------------------------------------------|
|
uri
| MongoDB server's connection URI. |
|
collection
| Remote collection name. |
|
oid_columns
| Comma-separated list of columns that should be treated as
oid
in the WHERE clause.
_id
by default. |
Types mappings {#types-mappings}
| MongoDB | ClickHouse |
|-------------------------|-----------------------------------------------------------------------|
| bool, int32, int64 |
any numeric type except Decimals
, Boolean, String |
| double | Float64, String |
| date | Date, Date32, DateTime, DateTime64, String |
| string | String,
any numeric type(except Decimals) if formatted correctly
|
| document | String(as JSON) |
| array | Array, String(as JSON) |
| oid | String |
| binary | String if in column, base64 encoded string if in an array or document |
| uuid (binary subtype 4) | UUID |
|
any other
| String |
If a key is not found in the MongoDB document (for example, the column name doesn't match), the default value or
NULL
(if the column is nullable) will be inserted.
OID {#oid}
If you want a
String
to be treated as
oid
in the WHERE clause, just put the column's name in the last argument of the table engine.
This may be necessary when querying a record by the
_id
column, which by default has
oid
type in MongoDB.
If the
_id
field in the table has another type, for example
uuid
, you need to specify empty
oid_columns
, otherwise the default value for this parameter
_id
is used.
```javascript
db.sample_oid.insertMany([
{"another_oid_column": ObjectId()},
]);
db.sample_oid.find();
[
{
"_id": {"$oid": "67bf6cc44ebc466d33d42fb2"},
"another_oid_column": {"$oid": "67bf6cc40000000000ea41b1"}
}
]
```
By default, only
_id
is treated as
oid
column.
```sql
CREATE TABLE sample_oid
(
_id String,
another_oid_column String
) ENGINE = MongoDB('mongodb://user:pass@host/db', 'sample_oid');
SELECT count() FROM sample_oid WHERE _id = '67bf6cc44ebc466d33d42fb2'; --will output 1.
SELECT count() FROM sample_oid WHERE another_oid_column = '67bf6cc40000000000ea41b1'; --will output 0
```
In this case the output will be
0
, because ClickHouse doesn't know that
another_oid_column
has
oid
type, so let's fix it:
```sql
CREATE TABLE sample_oid
(
_id String,
another_oid_column String
) ENGINE = MongoDB('mongodb://user:pass@host/db', 'sample_oid', '_id,another_oid_column');
-- or
CREATE TABLE sample_oid
(
_id String,
another_oid_column String
) ENGINE = MongoDB('host', 'db', 'sample_oid', 'user', 'pass', '', '_id,another_oid_column');
SELECT count() FROM sample_oid WHERE another_oid_column = '67bf6cc40000000000ea41b1'; -- will output 1 now
```
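Conceptually, listing a column in `oid_columns` tells ClickHouse to convert string literals for that column into ObjectId values before sending the filter to MongoDB. A hedged Python sketch of that idea — the `ObjectId` class below is a minimal stand-in for the BSON type, not the real driver API:

```python
class ObjectId:
    """Minimal stand-in for bson.ObjectId, for illustration only."""
    def __init__(self, hex_str: str):
        self.hex = hex_str.lower()
    def __eq__(self, other):
        return isinstance(other, ObjectId) and self.hex == other.hex

def build_filter(column: str, value: str, oid_columns=("_id",)):
    # Columns listed in oid_columns get their string literals wrapped as oid.
    return {column: ObjectId(value) if column in oid_columns else value}

stored = {"another_oid_column": ObjectId("67bf6cc40000000000ea41b1")}
# Default configuration: only _id is converted, so the raw string never matches.
miss = build_filter("another_oid_column", "67bf6cc40000000000ea41b1")
# With the column listed explicitly, the value is converted and matches.
hit = build_filter("another_oid_column", "67bf6cc40000000000ea41b1",
                   oid_columns=("_id", "another_oid_column"))
assert stored["another_oid_column"] != miss["another_oid_column"]
assert stored["another_oid_column"] == hit["another_oid_column"]
```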
Supported clauses {#supported-clauses}
Only queries with simple expressions are supported (for example,
WHERE field = <constant> ORDER BY field2 LIMIT <constant>
).
Such expressions are translated to MongoDB query language and executed on the server side.
You can disable all these restrictions using
mongodb_throw_on_unsupported_query
.
In that case ClickHouse tries to convert the query on a best-effort basis, but this can lead to a full table scan and processing on the ClickHouse side.
:::note
It's always better to explicitly set the type of a literal because Mongo requires strictly typed filters.\
For example, if you want to filter by
Date
:
sql
SELECT * FROM mongo_table WHERE date = '2024-01-01'
This will not work because Mongo will not cast the string to
Date
, so you need to cast it manually:
sql
SELECT * FROM mongo_table WHERE date = '2024-01-01'::Date OR date = toDate('2024-01-01')
This applies to
Date
,
Date32
,
DateTime
,
Bool
,
UUID
.
:::
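The note above comes down to BSON typing: a filter must carry the right native type, since MongoDB will not coerce a string into a date. A small stdlib-only illustration of why the string filter matches nothing (the dicts mirror the shape of a driver filter document; no MongoDB driver is required):

```python
from datetime import datetime

# MongoDB matches filters by BSON type, so a string never equals a stored date.
string_filter = {"date": "2024-01-01"}           # matches nothing if `date` is a BSON date
typed_filter = {"date": datetime(2024, 1, 1)}    # matches documents with that date

stored = {"date": datetime(2024, 1, 1)}
assert stored["date"] != string_filter["date"]
assert stored["date"] == typed_filter["date"]
```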
Usage example {#usage-example}
Assuming MongoDB has
sample_mflix
dataset loaded
Create a table in ClickHouse which allows reading data from the MongoDB collection:
sql
CREATE TABLE sample_mflix_table
(
_id String,
title String,
plot String,
genres Array(String),
directors Array(String),
writers Array(String),
released Date,
imdb String,
year String
) ENGINE = MongoDB('mongodb://<USERNAME>:<PASSWORD>@atlas-sql-6634be87cefd3876070caf96-98lxs.a.query.mongodb.net/sample_mflix?ssl=true&authSource=admin', 'movies');
Query:
sql
SELECT count() FROM sample_mflix_table
text
ββcount()ββ
1. β 21349 β
βββββββββββ
```sql
-- JSONExtractString cannot be pushed down to MongoDB
SET mongodb_throw_on_unsupported_query = 0;
-- Find all 'Back to the Future' sequels with rating > 7.5
SELECT title, plot, genres, directors, released FROM sample_mflix_table
WHERE title IN ('Back to the Future', 'Back to the Future Part II', 'Back to the Future Part III')
AND toFloat32(JSONExtractString(imdb, 'rating')) > 7.5
ORDER BY year
FORMAT Vertical;
```
```text
Row 1:
ββββββ
title: Back to the Future
plot: A young man is accidentally sent 30 years into the past in a time-traveling DeLorean invented by his friend, Dr. Emmett Brown, and must make sure his high-school-age parents unite in order to save his own existence.
genres: ['Adventure','Comedy','Sci-Fi']
directors: ['Robert Zemeckis']
released: 1985-07-03
Row 2:
ββββββ
title: Back to the Future Part II
plot: After visiting 2015, Marty McFly must repeat his visit to 1955 to prevent disastrous changes to 1985... without interfering with his first trip.
genres: ['Action','Adventure','Comedy']
directors: ['Robert Zemeckis']
released: 1989-11-22
```
sql
-- Find top 3 movies based on Cormac McCarthy's books
SELECT title, toFloat32(JSONExtractString(imdb, 'rating')) AS rating
FROM sample_mflix_table
WHERE arrayExists(x -> x LIKE 'Cormac McCarthy%', writers)
ORDER BY rating DESC
LIMIT 3;
text
ββtitleβββββββββββββββββββ¬βratingββ
1. β No Country for Old Men β 8.1 β
2. β The Sunset Limited β 7.4 β
3. β The Road β 7.3 β
ββββββββββββββββββββββββββ΄βββββββββ
Troubleshooting {#troubleshooting}
You can see the generated MongoDB query in DEBUG level logs.
Implementation details can be found in
mongocxx
and
mongoc
documentation.
description: 'Allows for quick writing of object states that are continually changing,
and deleting old object states in the background.'
sidebar_label: 'VersionedCollapsingMergeTree'
sidebar_position: 80
slug: /engines/table-engines/mergetree-family/versionedcollapsingmergetree
title: 'VersionedCollapsingMergeTree table engine'
doc_type: 'reference'
VersionedCollapsingMergeTree table engine
This engine:
Allows quick writing of object states that are continually changing.
Deletes old object states in the background. This significantly reduces the volume of storage.
See the section
Collapsing
for details.
The engine inherits from
MergeTree
and adds the logic for collapsing rows to the algorithm for merging data parts.
VersionedCollapsingMergeTree
serves the same purpose as
CollapsingMergeTree
but uses a different collapsing algorithm that allows inserting the data in any order with multiple threads. In particular, the
Version
column helps to collapse the rows properly even if they are inserted in the wrong order. In contrast,
CollapsingMergeTree
allows only strictly consecutive insertion.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE = VersionedCollapsingMergeTree(sign, version)
[PARTITION BY expr]
[ORDER BY expr]
[SAMPLE BY expr]
[SETTINGS name=value, ...]
For a description of query parameters, see the
query description
.
Engine parameters {#engine-parameters}
sql
VersionedCollapsingMergeTree(sign, version)
| Parameter | Description | Type |
|-----------|----------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
sign
| Name of the column with the type of row:
1
is a "state" row,
-1
is a "cancel" row. |
Int8
|
|
version
| Name of the column with the version of the object state. |
Int*
,
UInt*
,
Date
,
Date32
,
DateTime
or
DateTime64
|
Query clauses {#query-clauses}
When creating a
VersionedCollapsingMergeTree
table, the same
clauses
are required as when creating a
MergeTree
table.
Deprecated Method for Creating a Table
:::note
Do not use this method in new projects. If possible, switch old projects to the method described above.
:::
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE [=] VersionedCollapsingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, sign, version)
```
All of the parameters except `sign` and `version` have the same meaning as in `MergeTree`.
- `sign` β Name of the column with the type of row: `1` is a "state" row, `-1` is a "cancel" row.
Column Data Type β `Int8`.
- `version` β Name of the column with the version of the object state.
The column data type should be `UInt*`.
Collapsing {#table_engines_versionedcollapsingmergetree}
Data {#data}
Consider a situation where you need to save continually changing data for some object. It is reasonable to have one row for an object and update the row whenever there are changes. However, the update operation is expensive and slow for a DBMS because it requires rewriting the data in the storage. Update is not acceptable if you need to write data quickly, but you can write the changes to an object sequentially as follows.
Use the
Sign
column when writing the row. If
Sign = 1
it means that the row is a state of an object (let's call it the "state" row). If
Sign = -1
it indicates the cancellation of the state of an object with the same attributes (let's call it the "cancel" row). Also use the
Version
column, which should identify each state of an object with a separate number.
For example, we want to calculate how many pages users visited on some site and how long they were there. At some point in time we write the following row with the state of user activity:
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ¬βVersionββ
β 4324182021466249494 β 5 β 146 β 1 β 1 |
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ΄ββββββββββ
At some point later we register the change of user activity and write it with the following two rows.
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ¬βVersionββ
β 4324182021466249494 β 5 β 146 β -1 β 1 |
β 4324182021466249494 β 6 β 185 β 1 β 2 |
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ΄ββββββββββ
The first row cancels the previous state of the object (user). It should copy all of the fields of the canceled state except
Sign
.
The second row contains the current state.
Because we need only the last state of user activity, the rows
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ¬βVersionββ
β 4324182021466249494 β 5 β 146 β 1 β 1 |
β 4324182021466249494 β 5 β 146 β -1 β 1 |
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ΄ββββββββββ
can be deleted, collapsing the invalid (old) state of the object.
VersionedCollapsingMergeTree
does this while merging the data parts.
To find out why we need two rows for each change, see
Algorithm
.
Notes on Usage
The program that writes the data should remember the state of an object in order to be able to cancel it. The "cancel" row should contain copies of the primary key fields and the version of the "state" row, and the opposite
Sign
. This increases the initial size of storage but allows writing the data quickly.
Long growing arrays in columns reduce the efficiency of the engine due to the increased write load. The more straightforward the data, the better the efficiency.
SELECT
results depend strongly on the consistency of the history of object changes. Be accurate when preparing data for inserting. You can get unpredictable results with inconsistent data, such as negative values for non-negative metrics like session depth.
Algorithm {#table_engines-versionedcollapsingmergetree-algorithm}
When ClickHouse merges data parts, it deletes each pair of rows that have the same primary key and version and different
Sign
. The order of rows does not matter.
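The merge-time rule can be sketched in Python: pairs with the same key and version and opposite `Sign` cancel each other, regardless of row order. Payload columns are omitted for brevity, so this is a sketch of the cancellation rule only, not of the full merge:

```python
from collections import defaultdict

def collapse(rows):
    """Cancel matching state/cancel pairs: same key and version, opposite Sign.
    The order of input rows does not matter, mirroring the merge rule."""
    groups = defaultdict(list)
    for key, version, sign in rows:
        groups[(key, version)].append(sign)
    survivors = []
    for (key, version), signs in groups.items():
        net = sum(signs)                      # each +1/-1 pair cancels out
        if net != 0:
            survivors.extend([(key, version, 1 if net > 0 else -1)] * abs(net))
    return survivors

rows = [(4324182021466249494, 1, 1),
        (4324182021466249494, 1, -1),   # cancels the version-1 state
        (4324182021466249494, 2, 1)]
assert collapse(rows) == [(4324182021466249494, 2, 1)]
```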
When ClickHouse inserts data, it orders rows by the primary key. If the
Version
column is not in the primary key, ClickHouse adds it to the primary key implicitly as the last field and uses it for ordering.
Selecting data {#selecting-data}
ClickHouse does not guarantee that all of the rows with the same primary key will be in the same resulting data part or even on the same physical server. This is true both for writing the data and for subsequent merging of the data parts. In addition, ClickHouse processes
SELECT
queries with multiple threads, and it cannot predict the order of rows in the result. This means that aggregation is required if there is a need to get completely "collapsed" data from a
VersionedCollapsingMergeTree
table.
To finalize collapsing, write a query with a
GROUP BY
clause and aggregate functions that account for the sign. For example, to calculate quantity, use
sum(Sign)
instead of
count()
. To calculate the sum of something, use
sum(Sign * x)
instead of
sum(x)
, and add
HAVING sum(Sign) > 0
.
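The same finalization pattern can be mimicked outside the database. This sketch mirrors the `GROUP BY` / `HAVING sum(Sign) > 0` query shape for rows shaped like the UserID/PageViews/Duration example in this document:

```python
from collections import defaultdict

def finalize(rows):
    """GROUP BY (UserID, Version) with sum(Sign * x), HAVING sum(Sign) > 0."""
    acc = defaultdict(lambda: [0, 0, 0])  # [pageviews, duration, net sign]
    for user_id, page_views, duration, sign, version in rows:
        bucket = acc[(user_id, version)]
        bucket[0] += sign * page_views
        bucket[1] += sign * duration
        bucket[2] += sign
    return {key: (pv, dur) for key, (pv, dur, s) in acc.items() if s > 0}

rows = [
    (4324182021466249494, 5, 146, 1, 1),
    (4324182021466249494, 5, 146, -1, 1),   # cancels the version-1 state
    (4324182021466249494, 6, 185, 1, 2),
]
assert finalize(rows) == {(4324182021466249494, 2): (6, 185)}
```

The cancelled version-1 pair nets out to zero and is dropped by the `HAVING`-style filter, leaving only the version-2 state.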
The aggregates
count
,
sum
and
avg
can be calculated this way. The aggregate
uniq
can be calculated if an object has at least one non-collapsed state. The aggregates
min
and
max
can't be calculated because
VersionedCollapsingMergeTree
does not save the history of values of collapsed states.
If you need to extract the data with "collapsing" but without aggregation (for example, to check whether rows are present whose newest values match certain conditions), you can use the
FINAL
modifier for the
FROM
clause. This approach is inefficient and should not be used with large tables.
Example of use {#example-of-use}
Example data:
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ¬βVersionββ
β 4324182021466249494 β 5 β 146 β 1 β 1 |
β 4324182021466249494 β 5 β 146 β -1 β 1 |
β 4324182021466249494 β 6 β 185 β 1 β 2 |
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ΄ββββββββββ
Creating the table:
sql
CREATE TABLE UAct
(
UserID UInt64,
PageViews UInt8,
Duration UInt8,
Sign Int8,
Version UInt8
)
ENGINE = VersionedCollapsingMergeTree(Sign, Version)
ORDER BY UserID
Inserting the data:
sql
INSERT INTO UAct VALUES (4324182021466249494, 5, 146, 1, 1)
sql
INSERT INTO UAct VALUES (4324182021466249494, 5, 146, -1, 1),(4324182021466249494, 6, 185, 1, 2)
We use two
INSERT
queries to create two different data parts. If we insert the data with a single query, ClickHouse creates one data part and will never perform any merge.
Getting the data:
sql
SELECT * FROM UAct
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ¬βVersionββ
β 4324182021466249494 β 5 β 146 β 1 β 1 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ΄ββββββββββ
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ¬βVersionββ
β 4324182021466249494 β 5 β 146 β -1 β 1 β
β 4324182021466249494 β 6 β 185 β 1 β 2 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ΄ββββββββββ
What do we see here and where are the collapsed parts?
We created two data parts using two
INSERT
queries. The
SELECT
query was performed in two threads, and the result is a random order of rows.
Collapsing did not occur because the data parts have not been merged yet. ClickHouse merges data parts at an unknown point in time which we cannot predict.
This is why we need aggregation:
sql
SELECT
UserID,
sum(PageViews * Sign) AS PageViews,
sum(Duration * Sign) AS Duration,
Version
FROM UAct
GROUP BY UserID, Version
HAVING sum(Sign) > 0
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βVersionββ
β 4324182021466249494 β 6 β 185 β 2 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄ββββββββββ
If we do not need aggregation and want to force collapsing, we can use the
FINAL
modifier for the
FROM
clause.
sql
SELECT * FROM UAct FINAL
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ¬βVersionββ
β 4324182021466249494 β 6 β 185 β 1 β 2 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ΄ββββββββββ
This is a very inefficient way to select data. Don't use it for large tables.
description: 'Overview of data replication with the Replicated
family of table engines in ClickHouse'
sidebar_label: 'Replicated*'
sidebar_position: 20
slug: /engines/table-engines/mergetree-family/replication
title: 'Replicated* table engines'
doc_type: 'reference'
Replicated* table engines
:::note
In ClickHouse Cloud replication is managed for you. Please create your tables without adding arguments. For example, in the text below you would replace:
sql
ENGINE = ReplicatedMergeTree(
'/clickhouse/tables/{shard}/table_name',
'{replica}'
)
with:
sql
ENGINE = ReplicatedMergeTree
:::
Replication is only supported for tables in the MergeTree family:
ReplicatedMergeTree
ReplicatedSummingMergeTree
ReplicatedReplacingMergeTree
ReplicatedAggregatingMergeTree
ReplicatedCollapsingMergeTree
ReplicatedVersionedCollapsingMergeTree
ReplicatedGraphiteMergeTree
Replication works at the level of an individual table, not the entire server. A server can store both replicated and non-replicated tables at the same time.
Replication does not depend on sharding. Each shard has its own independent replication.
Compressed data for
INSERT
and
ALTER
queries is replicated (for more information, see the documentation for
ALTER
).
CREATE
,
DROP
,
ATTACH
,
DETACH
and
RENAME
queries are executed on a single server and are not replicated:
The
CREATE TABLE
query creates a new replicatable table on the server where the query is run. If this table already exists on other servers, it adds a new replica.
The
DROP TABLE
query deletes the replica located on the server where the query is run.
The
RENAME
query renames the table on one of the replicas. In other words, replicated tables can have different names on different replicas.
ClickHouse uses
ClickHouse Keeper
for storing replicas meta information. It is possible to use ZooKeeper version 3.4.5 or newer, but ClickHouse Keeper is recommended.
To use replication, set parameters in the
zookeeper
server configuration section.
:::note
Don't neglect the security setting. ClickHouse supports the
digest
ACL scheme
of the ZooKeeper security subsystem.
:::
Example of setting the addresses of the ClickHouse Keeper cluster:
xml
<zookeeper>
<node>
<host>example1</host>
<port>2181</port>
</node>
<node>
<host>example2</host>
<port>2181</port>
</node>
<node>
<host>example3</host>
<port>2181</port>
</node>
</zookeeper>
ClickHouse also supports storing replicas meta information in an auxiliary ZooKeeper cluster. Do this by providing the ZooKeeper cluster name and path as engine arguments.
In other words, it supports storing the metadata of different tables in different ZooKeeper clusters.
Example of setting the addresses of the auxiliary ZooKeeper cluster:
xml
<auxiliary_zookeepers>
<zookeeper2>
<node>
<host>example_2_1</host>
<port>2181</port>
</node>
<node>
<host>example_2_2</host>
<port>2181</port>
</node>
<node>
<host>example_2_3</host>
<port>2181</port>
</node>
</zookeeper2>
<zookeeper3>
<node>
<host>example_3_1</host>
<port>2181</port>
</node>
</zookeeper3>
</auxiliary_zookeepers>
To store table metadata in an auxiliary ZooKeeper cluster instead of the default ZooKeeper cluster, we can use SQL to create the table with
the ReplicatedMergeTree engine as follows:
sql
CREATE TABLE table_name ( ... ) ENGINE = ReplicatedMergeTree('zookeeper_name_configured_in_auxiliary_zookeepers:path', 'replica_name') ...
You can specify any existing ZooKeeper cluster and the system will use a directory on it for its own data (the directory is specified when creating a replicatable table).
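For instance, with the zookeeper2 cluster from the configuration above, a table could be created like this (a sketch; the path and replica name are hypothetical):

```sql
CREATE TABLE table_name
(
    x UInt32
) ENGINE = ReplicatedMergeTree('zookeeper2:/clickhouse/tables/01/table_name', 'replica1')
ORDER BY x;
```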
If ZooKeeper is not set in the config file, you can't create replicated tables, and any existing replicated tables will be read-only.
ZooKeeper is not used in
SELECT
queries because replication does not affect the performance of
SELECT
and queries run just as fast as they do for non-replicated tables. When querying distributed replicated tables, ClickHouse behavior is controlled by the settings
max_replica_delay_for_distributed_queries
and
fallback_to_stale_replicas_for_distributed_queries
.
For each
INSERT
query, approximately ten entries are added to ZooKeeper through several transactions. (To be more precise, this is for each inserted block of data; an INSERT query contains one block or one block per
max_insert_block_size = 1048576
rows.) This leads to slightly longer latencies for
INSERT
compared to non-replicated tables. But if you follow the recommendations to insert data in batches of no more than one
INSERT
per second, it does not create any problems. The entire ClickHouse cluster used for coordinating one ZooKeeper cluster has a total of several hundred
INSERTs
per second. The throughput on data inserts (the number of rows per second) is just as high as for non-replicated data.
For very large clusters, you can use different ZooKeeper clusters for different shards. However, from our experience this has not proven necessary based on production clusters with approximately 300 servers.
Replication is asynchronous and multi-master.
INSERT
queries (as well as
ALTER
) can be sent to any available server. Data is inserted on the server where the query is run, and then it is copied to the other servers. Because it is asynchronous, recently inserted data appears on the other replicas with some latency. If part of the replicas are not available, the data is written when they become available. If a replica is available, the latency is the amount of time it takes to transfer the block of compressed data over the network. The number of threads performing background tasks for replicated tables can be set by
background_schedule_pool_size
setting.
ReplicatedMergeTree
engine uses a separate thread pool for replicated fetches. The size of the pool is limited by the
background_fetches_pool_size
setting which can be tuned with a server restart.
By default, an INSERT query waits for confirmation of writing the data from only one replica. If the data was successfully written to only one replica and the server with this replica ceases to exist, the stored data will be lost. To enable getting confirmation of data writes from multiple replicas, use the
insert_quorum
option.
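A sketch of such an insert (the table and values are hypothetical), requiring acknowledgement from two replicas:

```sql
INSERT INTO table_name
SETTINGS insert_quorum = 2
VALUES (1, 'a');
```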
Each block of data is written atomically. The INSERT query is divided into blocks up to
max_insert_block_size = 1048576
rows. In other words, if the
INSERT
query has fewer than 1048576 rows, it is inserted atomically.
Data blocks are deduplicated. For multiple writes of the same data block (data blocks of the same size containing the same rows in the same order), the block is only written once. The reason for this is in case of network failures when the client application does not know if the data was written to the DB, so the
INSERT
query can simply be repeated. It does not matter which replica INSERTs were sent to with identical data.
INSERTs
are idempotent. Deduplication parameters are controlled by
merge_tree
server settings.
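As a sketch (the table and values are hypothetical), a retried insert of an identical block is written only once:

```sql
-- First attempt: the block is written and its checksum is recorded
INSERT INTO table_name VALUES (1, 'a'), (2, 'b');
-- Retry after a network error: the identical block is recognized and skipped
INSERT INTO table_name VALUES (1, 'a'), (2, 'b');
```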
During replication, only the source data to insert is transferred over the network. Further data transformation (merging) is coordinated and performed on all the replicas in the same way. This minimizes network usage, which means that replication works well when replicas reside in different datacenters. (Note that duplicating data in different datacenters is the main goal of replication.)
You can have any number of replicas of the same data. Based on our experiences, a relatively reliable and convenient solution could use double replication in production, with each server using RAID-5 or RAID-6 (and RAID-10 in some cases).
The system monitors data synchronicity on replicas and is able to recover after a failure. Failover is automatic (for small differences in data) or semi-automatic (when data differs too much, which may indicate a configuration error).
Creating replicated tables {#creating-replicated-tables}
:::note
In ClickHouse Cloud, replication is handled automatically.
Create tables using
MergeTree
without replication arguments. The system internally rewrites
MergeTree
to
SharedMergeTree
for replication and data distribution.
Avoid using
ReplicatedMergeTree
or specifying replication parameters, as replication is managed by the platform.
:::
Replicated*MergeTree parameters {#replicatedmergetree-parameters}
| Parameter | Description |
|-----------------|------------------------------------------------------------------------------|
|
zoo_path
| The path to the table in ClickHouse Keeper. |
|
replica_name
| The replica name in ClickHouse Keeper. |
|
other_parameters
| Parameters of an engine used for creating the replicated version, for example, version in
ReplacingMergeTree
. |
Example:
sql
CREATE TABLE table_name
(
EventDate DateTime,
CounterID UInt32,
UserID UInt32,
ver UInt16
) ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{layer}-{shard}/table_name', '{replica}', ver)
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate, intHash32(UserID))
SAMPLE BY intHash32(UserID);
Example in deprecated syntax
```sql
CREATE TABLE table_name
(
EventDate DateTime,
CounterID UInt32,
UserID UInt32
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/table_name', '{replica}', EventDate, intHash32(UserID), (CounterID, EventDate, intHash32(UserID), EventTime), 8192);
```
As the example shows, these parameters can contain substitutions in curly brackets. The substituted values are taken from the
macros
section of the configuration file.
Example:
xml
<macros>
<shard>02</shard>
<replica>example05-02-1</replica>
</macros>
The path to the table in ClickHouse Keeper should be unique for each replicated table. Tables on different shards should have different paths.
In this case, the path consists of the following parts:
/clickhouse/tables/
is the common prefix. We recommend using exactly this one.
{shard}
will be expanded to the shard identifier.
table_name
is the name of the node for the table in ClickHouse Keeper. It is a good idea to make it the same as the table name. It is defined explicitly, because in contrast to the table name, it does not change after a RENAME query.
HINT
: you could add a database name in front of
table_name
as well. E.g.
db_name.table_name
The two built-in substitutions
{database}
and
{table}
can be used; they expand into the database name and the table name, respectively (unless these macros are defined in the
macros
section). So the zookeeper path can be specified as
'/clickhouse/tables/{shard}/{database}/{table}'
.
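For example, a table created with these built-in substitutions (a minimal sketch; the database, table, and column names are hypothetical):

```sql
CREATE TABLE db_name.table_name
(
    x UInt32
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/{database}/{table}', '{replica}')
ORDER BY x;
```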
Be careful with table renames when using these built-in substitutions. The path in ClickHouse Keeper cannot be changed, and when the table is renamed, the macros will expand into a different path, the table will refer to a path that does not exist in ClickHouse Keeper, and will go into read-only mode.
The replica name identifies different replicas of the same table. You can use the server name for this, as in the example. The name only needs to be unique within each shard.
You can define the parameters explicitly instead of using substitutions. This might be convenient for testing and for configuring small clusters. However, you can't use distributed DDL queries (
ON CLUSTER
) in this case.
When working with large clusters, we recommend using substitutions because they reduce the probability of error.
You can specify default arguments for
Replicated
table engine in the server configuration file. For instance:
xml
<default_replica_path>/clickhouse/tables/{shard}/{database}/{table}</default_replica_path>
<default_replica_name>{replica}</default_replica_name>
In this case, you can omit arguments when creating tables:
sql
CREATE TABLE table_name (
x UInt32
) ENGINE = ReplicatedMergeTree
ORDER BY x;
It is equivalent to:
sql
CREATE TABLE table_name (
x UInt32
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/{database}/table_name', '{replica}')
ORDER BY x;
Run the
CREATE TABLE
query on each replica. This query creates a new replicated table, or adds a new replica to an existing one.
If you add a new replica after the table already contains some data on other replicas, the data will be copied from the other replicas to the new one after running the query. In other words, the new replica syncs itself with the others.
To delete a replica, run
DROP TABLE
. However, only one replica is deleted: the one that resides on the server where you run the query.
Recovery after failures {#recovery-after-failures}
If ClickHouse Keeper is unavailable when a server starts, replicated tables switch to read-only mode. The system periodically attempts to connect to ClickHouse Keeper.
If ClickHouse Keeper is unavailable during an
INSERT
, or an error occurs when interacting with ClickHouse Keeper, an exception is thrown.
After connecting to ClickHouse Keeper, the system checks whether the set of data in the local file system matches the expected set of data (ClickHouse Keeper stores this information). If there are minor inconsistencies, the system resolves them by syncing data with the replicas.
If the system detects broken data parts (with the wrong size of files) or unrecognized parts (parts written to the file system but not recorded in ClickHouse Keeper), it moves them to the
detached
subdirectory (they are not deleted). Any missing parts are copied from the replicas.
Note that ClickHouse does not perform any destructive actions such as automatically deleting a large amount of data.
When the server starts (or establishes a new session with ClickHouse Keeper), it only checks the quantity and sizes of all files. If the file sizes match but bytes have been changed somewhere in the middle, this is not detected immediately, but only when attempting to read the data for a
SELECT
query. The query throws an exception about a non-matching checksum or size of a compressed block. In this case, data parts are added to the verification queue and copied from the replicas if necessary.
If the local set of data differs too much from the expected one, a safety mechanism is triggered. The server enters this in the log and refuses to launch. The reason for this is that this case may indicate a configuration error, such as if a replica on a shard was accidentally configured like a replica on a different shard. However, the thresholds for this mechanism are set fairly low, and this situation might occur during normal failure recovery. In this case, data is restored semi-automatically - by "pushing a button".
To start recovery, create the node
/path_to_table/replica_name/flags/force_restore_data
in ClickHouse Keeper with any content, or run the command to restore all replicated tables:
bash
sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
Then restart the server. On start, the server deletes these flags and starts recovery.
Recovery after complete data loss {#recovery-after-complete-data-loss}
If all data and metadata disappeared from one of the servers, follow these steps for recovery:
Install ClickHouse on the server. Define substitutions correctly in the config file that contains the shard identifier and replicas, if you use them.
If you had unreplicated tables that must be manually duplicated on the servers, copy their data from a replica (in the directory
/var/lib/clickhouse/data/db_name/table_name/
).
Copy table definitions located in
/var/lib/clickhouse/metadata/
from a replica. If a shard or replica identifier is defined explicitly in the table definitions, correct it so that it corresponds to this replica. (Alternatively, start the server and make all the
ATTACH TABLE
queries that should have been in the .sql files in
/var/lib/clickhouse/metadata/
.)
To start recovery, create the ClickHouse Keeper node
/path_to_table/replica_name/flags/force_restore_data
with any content, or run the command to restore all replicated tables:
sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
Then start the server (restart, if it is already running). Data will be downloaded from replicas.
An alternative recovery option is to delete information about the lost replica from ClickHouse Keeper (
/path_to_table/replica_name
), then create the replica again as described in "
Creating replicated tables
".
There is no restriction on network bandwidth during recovery. Keep this in mind if you are restoring many replicas at once.
Converting from MergeTree to ReplicatedMergeTree {#converting-from-mergetree-to-replicatedmergetree}
We use the term
MergeTree
to refer to all table engines in the
MergeTree family
, the same as for
ReplicatedMergeTree
.
If you had a
MergeTree
table that was manually replicated, you can convert it to a replicated table. You might need to do this if you have already collected a large amount of data in a
MergeTree
table and now you want to enable replication.
ATTACH TABLE ... AS REPLICATED
statement allows attaching a detached
MergeTree
table as
ReplicatedMergeTree
.
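A sketch of the conversion (the database and table names are hypothetical; check whether your version requires the table to be detached permanently):

```sql
DETACH TABLE db_name.table_name;
ATTACH TABLE db_name.table_name AS REPLICATED;
```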
MergeTree
table can be automatically converted on server restart if
convert_to_replicated
flag is set at the table's data directory (
/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/
for
Atomic
database).
Create an empty
convert_to_replicated
file and the table will be loaded as replicated on the next server restart.
The following query can be used to get the table's data path. If the table has multiple data paths, use the first one.
sql
SELECT data_paths FROM system.tables WHERE table = 'table_name' AND database = 'database_name';
Note that the ReplicatedMergeTree table will be created with the values of the
default_replica_path
and
default_replica_name
settings.
To create a converted table on other replicas, you will need to explicitly specify its path in the first argument of the
ReplicatedMergeTree
engine. The following query can be used to get its path.
sql
SELECT zookeeper_path FROM system.replicas WHERE table = 'table_name';
There is also a manual way to do this.
If the data differs on various replicas, first sync it, or delete this data on all the replicas except one.
Rename the existing MergeTree table, then create a
ReplicatedMergeTree
table with the old name.
Move the data from the old table to the
detached
subdirectory inside the directory with the new table data (
/var/lib/clickhouse/data/db_name/table_name/
).
Then run
ALTER TABLE ATTACH PARTITION
on one of the replicas to add these data parts to the working set.
Converting from ReplicatedMergeTree to MergeTree {#converting-from-replicatedmergetree-to-mergetree}
Use
ATTACH TABLE ... AS NOT REPLICATED
statement to attach detached
ReplicatedMergeTree
table as
MergeTree
on a single server.
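A sketch of this reverse conversion (the database and table names are hypothetical):

```sql
DETACH TABLE db_name.table_name;
ATTACH TABLE db_name.table_name AS NOT REPLICATED;
```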
Another way to do this involves server restart. Create a MergeTree table with a different name. Move all the data from the directory with the
ReplicatedMergeTree
table data to the new table's data directory. Then delete the
ReplicatedMergeTree
table and restart the server.
If you want to get rid of a
ReplicatedMergeTree
table without launching the server:
Delete the corresponding
.sql
file in the metadata directory (
/var/lib/clickhouse/metadata/
).
Delete the corresponding path in ClickHouse Keeper (
/path_to_table/replica_name
).
After this, you can launch the server, create a
MergeTree
table, move the data to its directory, and then restart the server.
Recovery when metadata in the ClickHouse Keeper cluster is lost or damaged {#recovery-when-metadata-in-the-zookeeper-cluster-is-lost-or-damaged}
If the data in ClickHouse Keeper was lost or damaged, you can save data by moving it to an unreplicated table as described above.
See Also
background_schedule_pool_size
background_fetches_pool_size
execute_merges_on_single_replica_time_threshold
max_replicated_fetches_network_bandwidth
max_replicated_sends_network_bandwidth
description: '
MergeTree
-family table engines are designed for high data ingest rates
and huge data volumes.'
sidebar_label: 'MergeTree'
sidebar_position: 11
slug: /engines/table-engines/mergetree-family/mergetree
title: 'MergeTree table engine'
doc_type: 'reference'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
MergeTree table engine
The
MergeTree
engine and other engines of the
MergeTree
family (e.g.
ReplacingMergeTree
,
AggregatingMergeTree
) are the most commonly used and most robust table engines in ClickHouse.
MergeTree
-family table engines are designed for high data ingest rates and huge data volumes.
Insert operations create table parts which are merged by a background process with other table parts.
Main features of
MergeTree
-family table engines.
The table's primary key determines the sort order within each table part (clustered index). The primary key also does not reference individual rows but blocks of 8192 rows called granules. This makes primary keys of huge data sets small enough to remain loaded in main memory, while still providing fast access to on-disk data.
Tables can be partitioned using an arbitrary partition expression. Partition pruning ensures partitions are omitted from reading when the query allows it.
Data can be replicated across multiple cluster nodes for high availability, failover, and zero downtime upgrades. See
Data replication
.
MergeTree
table engines support various statistics kinds and sampling methods to help query optimization.
:::note
Despite a similar name, the
Merge
engine is different from
*MergeTree
engines.
:::
Creating tables {#table_engine-mergetree-creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [[NOT] NULL] [DEFAULT|MATERIALIZED|ALIAS|EPHEMERAL expr1] [COMMENT ...] [CODEC(codec1)] [STATISTICS(stat1)] [TTL expr1] [PRIMARY KEY] [SETTINGS (name = value, ...)],
name2 [type2] [[NOT] NULL] [DEFAULT|MATERIALIZED|ALIAS|EPHEMERAL expr2] [COMMENT ...] [CODEC(codec2)] [STATISTICS(stat2)] [TTL expr2] [PRIMARY KEY] [SETTINGS (name = value, ...)],
...
INDEX index_name1 expr1 TYPE type1(...) [GRANULARITY value1],
INDEX index_name2 expr2 TYPE type2(...) [GRANULARITY value2],
...
PROJECTION projection_name_1 (SELECT <COLUMN LIST EXPR> [GROUP BY] [ORDER BY]),
PROJECTION projection_name_2 (SELECT <COLUMN LIST EXPR> [GROUP BY] [ORDER BY])
) ENGINE = MergeTree()
ORDER BY expr
[PARTITION BY expr]
[PRIMARY KEY expr]
[SAMPLE BY expr]
[TTL expr
[DELETE|TO DISK 'xxx'|TO VOLUME 'xxx' [, ...] ]
[WHERE conditions]
[GROUP BY key_expr [SET v1 = aggr_func(v1) [, v2 = aggr_func(v2) ...]] ] ]
[SETTINGS name = value, ...]
For a detailed description of the parameters, see the
CREATE TABLE
statement
Query clauses {#mergetree-query-clauses}
ENGINE {#engine}
ENGINE
— Name and parameters of the engine.
ENGINE = MergeTree()
. The
MergeTree
engine has no parameters.
ORDER BY {#order_by}
ORDER BY
— The sorting key.
A tuple of column names or arbitrary expressions. Example:
ORDER BY (CounterID + 1, EventDate)
.
If no primary key is defined (i.e.
PRIMARY KEY
was not specified), ClickHouse uses the sorting key as the primary key.
If no sorting is required, you can use the syntax
ORDER BY tuple()
.
Alternatively, if setting
create_table_empty_primary_key_by_default
is enabled,
ORDER BY ()
is implicitly added to
CREATE TABLE
statements. See
Selecting a Primary Key
.
PARTITION BY {#partition-by}
PARTITION BY
— The
partitioning key
. Optional. In most cases, you don't need a partition key, and if you do need to partition, generally you do not need a partition key more granular than by month. Partitioning does not speed up queries (in contrast to the ORDER BY expression). You should never use too granular partitioning. Don't partition your data by client identifiers or names (instead, make client identifier or name the first column in the ORDER BY expression).
For partitioning by month, use the
toYYYYMM(date_column)
expression, where
date_column
is a column with a date of the type
Date
. The partition names here have the
"YYYYMM"
format.
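A minimal sketch of monthly partitioning (the table and column names are hypothetical):

```sql
CREATE TABLE visits
(
    EventDate Date,
    CounterID UInt32
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate);
```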
PRIMARY KEY {#primary-key}
PRIMARY KEY
— The primary key if it
differs from the sorting key
. Optional.
Specifying a sorting key (using
ORDER BY
clause) implicitly specifies a primary key.
It is usually not necessary to specify the primary key in addition to the sorting key.
SAMPLE BY {#sample-by}
SAMPLE BY
— A sampling expression. Optional.
If specified, it must be contained in the primary key.
The sampling expression must result in an unsigned integer.
Example:
SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))
.
TTL {#ttl}
TTL
— A list of rules that specify the storage duration of rows and the logic of automatic parts movement
between disks and volumes
. Optional.
Expression must result in a
Date
or
DateTime
, e.g.
TTL date + INTERVAL 1 DAY
.
Type of the rule
DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'|GROUP BY
specifies an action to be done with the part if the expression is satisfied (reaches current time): removal of expired rows, moving a part (if expression is satisfied for all rows in a part) to specified disk (
TO DISK 'xxx'
) or to volume (
TO VOLUME 'xxx'
), or aggregating values in expired rows. Default type of the rule is removal (
DELETE
). List of multiple rules can be specified, but there should be no more than one
DELETE
rule.
For more details, see
TTL for columns and tables
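A sketch combining both rule types; it assumes a storage policy with a volume named 'cold' has been configured, and the table, column, and policy names are hypothetical:

```sql
CREATE TABLE events
(
    EventDate Date,
    Value UInt64
) ENGINE = MergeTree()
ORDER BY EventDate
TTL EventDate + INTERVAL 7 DAY TO VOLUME 'cold',
    EventDate + INTERVAL 30 DAY DELETE
SETTINGS storage_policy = 'tiered';
```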
SETTINGS {#settings}
See
MergeTree Settings
.
Example of Sections Setting
sql
ENGINE MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity=8192
In the example, we set partitioning by month.
We also set an expression for sampling as a hash by the user ID. This allows you to pseudorandomize the data in the table for each
CounterID
and
EventDate
. If you define a
SAMPLE
clause when selecting the data, ClickHouse will return an evenly pseudorandom data sample for a subset of users.
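For instance (a sketch, assuming a table created with the sampling expression above; the table name is hypothetical):

```sql
-- Read an approximate 10% sample of users
SELECT count() FROM table_name SAMPLE 0.1;
```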
The
index_granularity
setting can be omitted because 8192 is the default value.
Deprecated Method for Creating a Table
:::note
Do not use this method in new projects. If possible, switch old projects to the method described above.
:::
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE [=] MergeTree(date-column [, sampling_expression], (primary, key), index_granularity)
```
**MergeTree() Parameters**
- `date-column` — The name of a column of the [Date](/sql-reference/data-types/date.md) type. ClickHouse automatically creates partitions by month based on this column. The partition names are in the `"YYYYMM"` format.
- `sampling_expression` — An expression for sampling.
- `(primary, key)` — Primary key. Type: [Tuple()](/sql-reference/data-types/tuple.md)
- `index_granularity` — The granularity of an index. The number of data rows between the "marks" of an index. The value 8192 is appropriate for most tasks.
**Example**
```sql
MergeTree(EventDate, intHash32(UserID), (CounterID, EventDate, intHash32(UserID)), 8192)
```
The `MergeTree` engine is configured in the same way as in the example above for the main engine configuration method.
Data storage {#mergetree-data-storage}
A table consists of data parts sorted by primary key.
When data is inserted in a table, separate data parts are created and each of them is lexicographically sorted by primary key. For example, if the primary key is
(CounterID, Date)
, the data in the part is sorted by
CounterID
, and within each
CounterID
, it is ordered by
Date
.
Data belonging to different partitions are separated into different parts. In the background, ClickHouse merges data parts for more efficient storage. Parts belonging to different partitions are not merged. The merge mechanism does not guarantee that all rows with the same primary key will be in the same data part.
Data parts can be stored in
Wide
or
Compact
format. In
Wide
format each column is stored in a separate file in a filesystem, in
Compact
format all columns are stored in one file.
Compact
format can be used to increase performance of small and frequent inserts. | {"source_file": "mergetree.md"} | [
Data storing format is controlled by the `min_bytes_for_wide_part` and `min_rows_for_wide_part` settings of the table engine. If the number of bytes or rows in a data part is less than the corresponding setting's value, the part is stored in `Compact` format. Otherwise it is stored in `Wide` format. If none of these settings is set, data parts are stored in `Wide` format.
Each data part is logically divided into granules. A granule is the smallest indivisible data set that ClickHouse reads when selecting data. ClickHouse does not split rows or values, so each granule always contains an integer number of rows. The first row of a granule is marked with the value of the primary key for the row. For each data part, ClickHouse creates an index file that stores the marks. For each column, whether it's in the primary key or not, ClickHouse also stores the same marks. These marks let you find data directly in column files.
The granule size is restricted by the `index_granularity` and `index_granularity_bytes` settings of the table engine. The number of rows in a granule lies in the `[1, index_granularity]` range, depending on the size of the rows. The size of a granule can exceed `index_granularity_bytes` if the size of a single row is greater than the value of the setting. In this case, the size of the granule equals the size of the row.
Primary Keys and Indexes in Queries {#primary-keys-and-indexes-in-queries}
Take the `(CounterID, Date)` primary key as an example. In this case, the sorting and index can be illustrated as follows:

```text
Whole data:     [---------------------------------------------]
CounterID:      [aaaaaaaaaaaaaaaaaabbbbcdeeeeeeeeeeeeefgggggggghhhhhhhhhiiiiiiiiikllllllll]
Date:           [1111111222222233331233211111222222333211111112122222223111112223311122333]
Marks:           |      |      |      |      |      |      |      |      |      |      |
                a,1    a,2    a,3    b,3    e,2    e,3    g,1    h,2    i,1    i,3    l,3
Marks numbers:   0      1      2      3      4      5      6      7      8      9      10
```

If the data query specifies:

- `CounterID IN ('a', 'h')`, the server reads the data in the ranges of marks `[0, 3)` and `[6, 8)`.
- `CounterID IN ('a', 'h') AND Date = 3`, the server reads the data in the ranges of marks `[1, 3)` and `[7, 8)`.
- `Date = 3`, the server reads the data in the range of marks `[1, 10]`.
The examples above show that it is always more effective to use an index than a full scan.
A sparse index allows extra data to be read. When reading a single range of the primary key, up to `index_granularity * 2` extra rows in each data block can be read.
Sparse indexes allow you to work with a very large number of table rows, because in most cases, such indexes fit in the computer's RAM.
ClickHouse does not require a unique primary key. You can insert multiple rows with the same primary key.
You can use `Nullable`-typed expressions in the `PRIMARY KEY` and `ORDER BY` clauses but it is strongly discouraged. To allow this feature, turn on the `allow_nullable_key` setting. The `NULLS_LAST` principle applies for `NULL` values in the `ORDER BY` clause.
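A minimal sketch of the setting described above (table and column names are illustrative):

```sql
-- A Nullable column in the sorting key requires allow_nullable_key.
CREATE TABLE events_nullable_key
(
    id UInt64,
    category Nullable(String)
)
ENGINE = MergeTree
ORDER BY (id, category)
SETTINGS allow_nullable_key = 1;
```

Without the setting, this `CREATE TABLE` statement is rejected; with it, `NULL` key values sort last, following the `NULLS_LAST` principle.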
Selecting a primary key {#selecting-a-primary-key}
The number of columns in the primary key is not explicitly limited. Depending on the data structure, you can include more or fewer columns in the primary key. This may:

- Improve the performance of an index.

  If the primary key is `(a, b)`, then adding another column `c` will improve the performance if the following conditions are met:

  - There are queries with a condition on column `c`.
  - Long data ranges (several times longer than the `index_granularity`) with identical values for `(a, b)` are common. In other words, when adding another column allows you to skip quite long data ranges.

- Improve data compression.

  ClickHouse sorts data by primary key, so the higher the consistency, the better the compression.

- Provide additional logic when merging data parts in the `CollapsingMergeTree` and `SummingMergeTree` engines.

  In this case it makes sense to specify the *sorting key* that is different from the primary key.

A long primary key will negatively affect the insert performance and memory consumption, but extra columns in the primary key do not affect ClickHouse performance during `SELECT` queries.

You can create a table without a primary key using the `ORDER BY tuple()` syntax. In this case, ClickHouse stores data in the order of inserting. If you want to preserve data order when inserting data by `INSERT ... SELECT` queries, set `max_insert_threads = 1`.

To select data in the initial order, use single-threaded `SELECT` queries.
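A sketch of the no-primary-key pattern above (names are illustrative):

```sql
CREATE TABLE events_unordered
(
    ts DateTime,
    message String
)
ENGINE = MergeTree
ORDER BY tuple();

-- Preserve the source order of an INSERT ... SELECT:
INSERT INTO events_unordered
SELECT now() - number, toString(number)
FROM numbers(10)
SETTINGS max_insert_threads = 1;

-- Read back in stored order with a single-threaded SELECT:
SELECT * FROM events_unordered SETTINGS max_threads = 1;
```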
Choosing a primary key that differs from the sorting key {#choosing-a-primary-key-that-differs-from-the-sorting-key}
It is possible to specify a primary key (an expression with values that are written in the index file for each mark) that is different from the sorting key (an expression for sorting the rows in data parts). In this case the primary key expression tuple must be a prefix of the sorting key expression tuple.
This feature is helpful when using the `SummingMergeTree` and `AggregatingMergeTree` table engines. In a common case when using these engines, the table has two types of columns: *dimensions* and *measures*. Typical queries aggregate values of measure columns with arbitrary `GROUP BY` and filtering by dimensions. Because SummingMergeTree and AggregatingMergeTree aggregate rows with the same value of the sorting key, it is natural to add all dimensions to it. As a result, the key expression consists of a long list of columns and this list must be frequently updated with newly added dimensions.
In this case it makes sense to leave only a few columns in the primary key that will provide efficient range scans and add the remaining dimension columns to the sorting key tuple.

`ALTER` of the sorting key is a lightweight operation because when a new column is simultaneously added to the table and to the sorting key, existing data parts do not need to be changed. Since the old sorting key is a prefix of the new sorting key and there is no data in the newly added column, the data is sorted by both the old and new sorting keys at the moment of table modification.
Use of indexes and partitions in queries {#use-of-indexes-and-partitions-in-queries}
For `SELECT` queries, ClickHouse analyzes whether an index can be used. An index can be used if the `WHERE/PREWHERE` clause has an expression (as one of the conjunction elements, or entirely) that represents an equality or inequality comparison operation, or if it has `IN` or `LIKE` with a fixed prefix on columns or expressions that are in the primary key or partitioning key, or on certain partially repetitive functions of these columns, or logical relationships of these expressions.

Thus, it is possible to quickly run queries on one or many ranges of the primary key. In this example, queries will be fast when run for a specific tracking tag, for a specific tag and date range, for a specific tag and date, for multiple tags with a date range, and so on.

Let's look at the engine configured as follows:

```sql
ENGINE MergeTree()
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate)
SETTINGS index_granularity=8192
```

In this case, in queries:
```sql
SELECT count() FROM table
WHERE EventDate = toDate(now())
AND CounterID = 34
SELECT count() FROM table
WHERE EventDate = toDate(now())
AND (CounterID = 34 OR CounterID = 42)
SELECT count() FROM table
WHERE ((EventDate >= toDate('2014-01-01')
AND EventDate <= toDate('2014-01-31')) OR EventDate = toDate('2014-05-01'))
AND CounterID IN (101500, 731962, 160656)
AND (CounterID = 101500 OR EventDate != toDate('2014-05-01'))
```
ClickHouse will use the primary key index to trim improper data and the monthly partitioning key to trim partitions that are in improper date ranges.
The queries above show that the index is used even for complex expressions. Reading from the table is organized so that using the index can't be slower than a full scan.
In the example below, the index can't be used.

```sql
SELECT count() FROM table WHERE CounterID = 34 OR URL LIKE '%upyachka%'
```

To check whether ClickHouse can use the index when running a query, use the settings `force_index_by_date` and `force_primary_key`.
The key for partitioning by month allows reading only those data blocks which contain dates from the proper range. In this case, the data block may contain data for many dates (up to an entire month). Within a block, data is sorted by primary key, which might not contain the date as the first column. Because of this, using a query with only a date condition that does not specify the primary key prefix will cause more data to be read than for a single date.
Use of index for partially-monotonic primary keys {#use-of-index-for-partially-monotonic-primary-keys}
Consider, for example, the days of the month. They form a monotonic sequence for one month, but not monotonic for more extended periods. This is a partially-monotonic sequence. If a user creates the table with a partially-monotonic primary key, ClickHouse creates a sparse index as usual. When a user selects data from this kind of table, ClickHouse analyzes the query conditions. If the user wants to get data between two marks of the index and both these marks fall within one month, ClickHouse can use the index in this particular case because it can calculate the distance between the parameters of a query and index marks.

ClickHouse cannot use an index if the values of the primary key in the query parameter range do not represent a monotonic sequence. In this case, ClickHouse uses the full scan method.

ClickHouse uses this logic not only for days of the month sequences, but for any primary key that represents a partially-monotonic sequence.
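A small sketch of a partially-monotonic key (names and dates are illustrative):

```sql
-- toDayOfMonth(d) is monotonic within one month, but not across months.
CREATE TABLE daily_events
(
    d Date,
    value UInt64
)
ENGINE = MergeTree
ORDER BY toDayOfMonth(d);

-- Both bounds fall inside one month, so the key expression is monotonic
-- on this range and ClickHouse can use the sparse index:
SELECT count() FROM daily_events
WHERE d BETWEEN toDate('2024-05-05') AND toDate('2024-05-10');

-- This range crosses a month boundary; the key expression is not
-- monotonic on it, so ClickHouse falls back to a full scan:
SELECT count() FROM daily_events
WHERE d BETWEEN toDate('2024-05-25') AND toDate('2024-06-05');
```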
Data skipping indexes {#table_engine-mergetree-data_skipping-indexes}
The index declaration is in the columns section of the `CREATE` query.

```sql
INDEX index_name expr TYPE type(...) [GRANULARITY granularity_value]
```

For tables from the `*MergeTree` family, data skipping indices can be specified.

These indices aggregate some information about the specified expression on blocks, which consist of `granularity_value` granules (the size of the granule is specified using the `index_granularity` setting in the table engine). Then these aggregates are used in `SELECT` queries for reducing the amount of data to read from the disk by skipping big blocks of data where the `WHERE` clause cannot be satisfied.

The `GRANULARITY` clause can be omitted; the default value of `granularity_value` is 1.
Example
```sql
CREATE TABLE table_name
(
    u64 UInt64,
    i32 Int32,
    s String,
    ...
    INDEX idx1 u64 TYPE bloom_filter GRANULARITY 3,
    INDEX idx2 u64 * i32 TYPE minmax GRANULARITY 3,
    INDEX idx3 u64 * length(s) TYPE set(1000) GRANULARITY 4
) ENGINE = MergeTree()
...
```
Indices from the example can be used by ClickHouse to reduce the amount of data to read from disk in the following queries:
```sql
SELECT count() FROM table WHERE u64 == 10;
SELECT count() FROM table WHERE u64 * i32 >= 1234
SELECT count() FROM table WHERE u64 * length(s) == 1234
```
Data skipping indexes can also be created on composite columns:
```sql
-- on columns of type Map:
INDEX map_key_index mapKeys(map_column) TYPE bloom_filter
INDEX map_value_index mapValues(map_column) TYPE bloom_filter
-- on columns of type Tuple:
INDEX tuple_1_index tuple_column.1 TYPE bloom_filter
INDEX tuple_2_index tuple_column.2 TYPE bloom_filter
-- on columns of type Nested:
INDEX nested_1_index col.nested_col1 TYPE bloom_filter
INDEX nested_2_index col.nested_col2 TYPE bloom_filter
```
Skip Index Types {#skip-index-types}
The `MergeTree` table engine supports the following types of skip indexes.
For more information on how skip indexes can be used for performance optimization see "Understanding ClickHouse data skipping indexes".

- MinMax index
- Set index
- `bloom_filter` index
- `ngrambf_v1` index
- `tokenbf_v1` index
MinMax skip index {#minmax}
For each index granule, the minimum and maximum values of an expression are stored.
(If the expression is of type `tuple`, it stores the minimum and maximum for each tuple element.)

```text title="Syntax"
minmax
```

Set {#set}
For each index granule at most `max_rows` many unique values of the specified expression are stored. `max_rows = 0` means "store all unique values".

```text title="Syntax"
set(max_rows)
```
Bloom filter {#bloom-filter}
For each index granule, a bloom filter for the specified columns is stored.

```text title="Syntax"
bloom_filter([false_positive_rate])
```

The `false_positive_rate` parameter can take on a value between 0 and 1 (by default: `0.025`) and specifies the probability of generating a false positive (which increases the amount of data to be read).

The following data types are supported:

- `(U)Int*`
- `Float*`
- `Enum`
- `Date`
- `DateTime`
- `String`
- `FixedString`
- `Array`
- `LowCardinality`
- `Nullable`
- `UUID`
- `Map`

:::note Map data type: specifying index creation with keys or values
For the `Map` data type, the client can specify if the index should be created for keys or for values using the `mapKeys` or `mapValues` functions.
:::
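A sketch of the note above (table and column names are illustrative):

```sql
CREATE TABLE logs_with_attrs
(
    ts DateTime,
    attrs Map(String, String),
    INDEX attrs_key_idx mapKeys(attrs) TYPE bloom_filter GRANULARITY 1,
    INDEX attrs_val_idx mapValues(attrs) TYPE bloom_filter GRANULARITY 1
)
ENGINE = MergeTree
ORDER BY ts;

-- Filters on map keys or values can then skip granules:
SELECT count() FROM logs_with_attrs WHERE mapContains(attrs, 'region');
SELECT count() FROM logs_with_attrs WHERE has(mapValues(attrs), 'eu-west');
```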
N-gram bloom filter {#n-gram-bloom-filter}
For each index granule, a bloom filter for the n-grams of the specified columns is stored.

```text title="Syntax"
ngrambf_v1(n, size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)
```
| Parameter | Description |
|-----------|-------------|
| `n` | ngram size |
| `size_of_bloom_filter_in_bytes` | Bloom filter size in bytes. You can use a large value here, for example, 256 or 512, because it can be compressed well. |
| `number_of_hash_functions` | The number of hash functions used in the bloom filter. |
| `random_seed` | Seed for the bloom filter hash functions. |

This index only works with the following data types:

- `String`
- `FixedString`
- `Map`

To estimate the parameters of `ngrambf_v1`, you can use the following User Defined Functions (UDFs).
```sql title="UDFs for ngrambf_v1"
CREATE FUNCTION bfEstimateFunctions [ON CLUSTER cluster]
AS
(total_number_of_all_grams, size_of_bloom_filter_in_bits) -> round((size_of_bloom_filter_in_bits / total_number_of_all_grams) * log(2));
CREATE FUNCTION bfEstimateBmSize [ON CLUSTER cluster]
AS
(total_number_of_all_grams, probability_of_false_positives) -> ceil((total_number_of_all_grams * log(probability_of_false_positives)) / log(1 / pow(2, log(2))));
CREATE FUNCTION bfEstimateFalsePositive [ON CLUSTER cluster]
AS
(total_number_of_all_grams, number_of_hash_functions, size_of_bloom_filter_in_bytes) -> pow(1 - exp(-number_of_hash_functions/ (size_of_bloom_filter_in_bytes / total_number_of_all_grams)), number_of_hash_functions);
CREATE FUNCTION bfEstimateGramNumber [ON CLUSTER cluster]
AS
(number_of_hash_functions, probability_of_false_positives, size_of_bloom_filter_in_bytes) -> ceil(size_of_bloom_filter_in_bytes / (-number_of_hash_functions / log(1 - exp(log(probability_of_false_positives) / number_of_hash_functions))))
```
To use these functions, you need to specify at least two parameters:

- `total_number_of_all_grams`
- `probability_of_false_positives`

For example, there are `4300` ngrams in the granule and you expect false positives to be less than `0.0001`. The other parameters can then be estimated by executing the following queries:
```sql
--- estimate number of bits in the filter
SELECT bfEstimateBmSize(4300, 0.0001) / 8 AS size_of_bloom_filter_in_bytes;

┌─size_of_bloom_filter_in_bytes─┐
│                         10304 │
└───────────────────────────────┘

--- estimate number of hash functions
SELECT bfEstimateFunctions(4300, bfEstimateBmSize(4300, 0.0001)) as number_of_hash_functions

┌─number_of_hash_functions─┐
│                       13 │
└──────────────────────────┘
```
Of course, you can also use those functions to estimate parameters for other conditions.
The functions above refer to the bloom filter calculator here.

Token bloom filter {#token-bloom-filter}
The token bloom filter is the same as `ngrambf_v1`, but stores tokens (sequences separated by non-alphanumeric characters) instead of ngrams.

```text title="Syntax"
tokenbf_v1(size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)
```
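A minimal sketch of a token bloom filter in use (names and parameter values are illustrative, not prescriptive):

```sql
CREATE TABLE app_logs
(
    ts DateTime,
    message String,
    INDEX message_tokens message TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4
)
ENGINE = MergeTree
ORDER BY ts;

-- hasToken can consult the index to skip granules that cannot
-- contain the token:
SELECT count() FROM app_logs WHERE hasToken(message, 'timeout');
```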
Vector similarity {#vector-similarity}
Supports approximate nearest neighbor search, see here for details.

Text (experimental) {#text}
Supports full-text search, see here for details.

Functions support {#functions-support}
Conditions in the `WHERE` clause contain calls of the functions that operate with columns. If the column is a part of an index, ClickHouse tries to use this index when performing the functions. ClickHouse supports different subsets of functions for using indexes.

Indexes of type `set` can be utilized by all functions. The other index types are supported as follows:
| Function (operator) / Index | primary key | minmax | ngrambf_v1 | tokenbf_v1 | bloom_filter | text |
|-----------------------------|-------------|--------|------------|------------|--------------|------|
| equals (=, ==) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| notEquals(!=, <>) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| like | ✔ | ✔ | ✔ | ✔ | ✗ | ✔ |
| notLike | ✔ | ✔ | ✔ | ✔ | ✗ | ✔ |
| match | ✗ | ✗ | ✔ | ✔ | ✗ | ✔ |
| startsWith | ✔ | ✔ | ✔ | ✔ | ✗ | ✔ |
| endsWith | ✗ | ✗ | ✔ | ✔ | ✗ | ✔ |
| multiSearchAny | ✗ | ✗ | ✔ | ✗ | ✗ | ✔ |
| in | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| notIn | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| less (<) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| greater (>) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| lessOrEquals (<=) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| greaterOrEquals (>=) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| empty | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| notEmpty | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| has | ✗ | ✗ | ✔ | ✔ | ✔ | ✔ |
| hasAny | ✗ | ✗ | ✔ | ✔ | ✔ | ✗ |
| hasAll | ✗ | ✗ | ✔ | ✔ | ✔ | ✗ |
| hasToken | ✗ | ✗ | ✗ | ✔ | ✗ | ✔ |
| hasTokenOrNull | ✗ | ✗ | ✗ | ✔ | ✗ | ✔ |
| hasTokenCaseInsensitive (*) | ✗ | ✗ | ✗ | ✔ | ✗ | ✗ |
| hasTokenCaseInsensitiveOrNull (*) | ✗ | ✗ | ✗ | ✔ | ✗ | ✗ |
| hasAnyTokens | ✗ | ✗ | ✗ | ✗ | ✗ | ✔ |
| hasAllTokens | ✗ | ✗ | ✗ | ✗ | ✗ | ✔ |
| mapContains | ✗ | ✗ | ✗ | ✗ | ✗ | ✔ |
Functions with a constant argument that is less than ngram size can't be used by `ngrambf_v1` for query optimization.

(*) For `hasTokenCaseInsensitive` and `hasTokenCaseInsensitiveOrNull` to be effective, the `tokenbf_v1` index must be created on lowercased data, for example `INDEX idx (lower(str_col)) TYPE tokenbf_v1(512, 3, 0)`.

:::note
Bloom filters can have false positive matches, so the `ngrambf_v1`, `tokenbf_v1`, and `bloom_filter` indexes cannot be used for optimizing queries where the result of a function is expected to be false.

For example:

Can be optimized:

- `s LIKE '%test%'`
- `NOT s NOT LIKE '%test%'`
- `s = 1`
- `NOT s != 1`
- `startsWith(s, 'test')`

Can not be optimized:

- `NOT s LIKE '%test%'`
- `s NOT LIKE '%test%'`
- `NOT s = 1`
- `s != 1`
- `NOT startsWith(s, 'test')`
:::
Projections {#projections}
Projections are like materialized views but defined at the part level. They provide consistency guarantees along with automatic usage in queries.

:::note
When you are implementing projections you should also consider the `force_optimize_projection` setting.
:::

Projections are not supported in `SELECT` statements with the `FINAL` modifier.

Projection query {#projection-query}
A projection query is what defines a projection. It implicitly selects data from the parent table.

Syntax

```sql
SELECT <column list expr> [GROUP BY] <group keys expr> [ORDER BY] <expr>
```

Projections can be modified or dropped with the `ALTER` statement.

Projection storage {#projection-storage}
Projections are stored inside the part directory. It's similar to an index but contains a subdirectory that stores an anonymous `MergeTree` table's part. The table is induced by the definition query of the projection. If there is a `GROUP BY` clause, the underlying storage engine becomes `AggregatingMergeTree`, and all aggregate functions are converted to `AggregateFunction`. If there is an `ORDER BY` clause, the `MergeTree` table uses it as its primary key expression. During the merge process the projection part is merged via its storage's merge routine. The checksum of the parent table's part is combined with the projection's part. Other maintenance jobs are similar to skip indices.

Query analysis {#projection-query-analysis}

1. Check if the projection can be used to answer the given query, that is, it generates the same answer as querying the base table.
2. Select the best feasible match, which contains the least granules to read.
3. The query pipeline which uses projections will be different from the one that uses the original parts. If the projection is absent in some parts, we can add the pipeline to "project" it on the fly.
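A brief sketch of defining and materializing a projection (the table and projection names are illustrative):

```sql
ALTER TABLE visits
    ADD PROJECTION daily_totals
    (
        SELECT toDate(ts) AS day, count()
        GROUP BY day
    );

-- Build the projection for parts written before it was added:
ALTER TABLE visits MATERIALIZE PROJECTION daily_totals;
```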
Concurrent data access {#concurrent-data-access}
For concurrent table access, we use multi-versioning. In other words, when a table is simultaneously read and updated, data is read from a set of parts that is current at the time of the query. There are no lengthy locks. Inserts do not get in the way of read operations.
Reading from a table is automatically parallelized.
TTL for columns and tables {#table_engine-mergetree-ttl}
Determines the lifetime of values.

The `TTL` clause can be set for the whole table and for each individual column. Table-level `TTL` can also specify the logic of automatic moving data between disks and volumes, or recompressing parts where all the data has been expired.

Expressions must evaluate to `Date`, `Date32`, `DateTime` or `DateTime64` data type.

Syntax

Setting time-to-live for a column:

```sql
TTL time_column
TTL time_column + interval
```

To define `interval`, use time interval operators, for example:

```sql
TTL date_time + INTERVAL 1 MONTH
TTL date_time + INTERVAL 15 HOUR
```
Column TTL {#mergetree-column-ttl}
When the values in the column expire, ClickHouse replaces them with the default values for the column data type. If all the column values in the data part expire, ClickHouse deletes this column from the data part in a filesystem.

The `TTL` clause can't be used for key columns.

Examples

Creating a table with `TTL`: {#creating-a-table-with-ttl}

```sql
CREATE TABLE tab
(
    d DateTime,
    a Int TTL d + INTERVAL 1 MONTH,
    b Int TTL d + INTERVAL 1 MONTH,
    c String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(d)
ORDER BY d;
```

Adding TTL to a column of an existing table {#adding-ttl-to-a-column-of-an-existing-table}

```sql
ALTER TABLE tab
    MODIFY COLUMN
    c String TTL d + INTERVAL 1 DAY;
```

Altering TTL of the column {#altering-ttl-of-the-column}

```sql
ALTER TABLE tab
    MODIFY COLUMN
    c String TTL d + INTERVAL 1 MONTH;
```
Table TTL {#mergetree-table-ttl}
Table can have an expression for removal of expired rows, and multiple expressions for automatic move of parts between disks or volumes. When rows in the table expire, ClickHouse deletes all corresponding rows. For parts moving or recompressing, all rows of a part must satisfy the `TTL` expression criteria.

```sql
TTL expr
    [DELETE|RECOMPRESS codec_name1|TO DISK 'xxx'|TO VOLUME 'xxx'][, DELETE|RECOMPRESS codec_name2|TO DISK 'aaa'|TO VOLUME 'bbb'] ...
    [WHERE conditions]
    [GROUP BY key_expr [SET v1 = aggr_func(v1) [, v2 = aggr_func(v2) ...]] ]
```

Type of TTL rule may follow each TTL expression. It affects an action which is to be done once the expression is satisfied (reaches current time):

- `DELETE` - delete expired rows (default action);
- `RECOMPRESS codec_name` - recompress data part with the `codec_name`;
- `TO DISK 'aaa'` - move part to the disk `aaa`;
- `TO VOLUME 'bbb'` - move part to the volume `bbb`;
- `GROUP BY` - aggregate expired rows.

The `DELETE` action can be used together with the `WHERE` clause to delete only some of the expired rows based on a filtering condition:

```sql
TTL time_column + INTERVAL 1 MONTH DELETE WHERE column = 'value'
```

The `GROUP BY` expression must be a prefix of the table primary key.
If a column is not part of the `GROUP BY` expression and is not set explicitly in the `SET` clause, in the result row it contains an occasional value from the grouped rows (as if the aggregate function `any` is applied to it).

Examples

Creating a table with `TTL`: {#creating-a-table-with-ttl-1}
```sql
CREATE TABLE tab
(
    d DateTime,
    a Int
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(d)
ORDER BY d
TTL d + INTERVAL 1 MONTH DELETE,
    d + INTERVAL 1 WEEK TO VOLUME 'aaa',
    d + INTERVAL 2 WEEK TO DISK 'bbb';
```

Altering `TTL` of the table: {#altering-ttl-of-the-table}

```sql
ALTER TABLE tab
    MODIFY TTL d + INTERVAL 1 DAY;
```

Creating a table, where the rows are expired after one month. The expired rows where dates are Mondays are deleted:

```sql
CREATE TABLE table_with_where
(
    d DateTime,
    a Int
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(d)
ORDER BY d
TTL d + INTERVAL 1 MONTH DELETE WHERE toDayOfWeek(d) = 1;
```

Creating a table, where expired rows are recompressed: {#creating-a-table-where-expired-rows-are-recompressed}

```sql
CREATE TABLE table_for_recompression
(
    d DateTime,
    key UInt64,
    value String
) ENGINE MergeTree()
ORDER BY tuple()
PARTITION BY key
TTL d + INTERVAL 1 MONTH RECOMPRESS CODEC(ZSTD(17)), d + INTERVAL 1 YEAR RECOMPRESS CODEC(LZ4HC(10))
SETTINGS min_rows_for_wide_part = 0, min_bytes_for_wide_part = 0;
```

Creating a table, where expired rows are aggregated. In result rows `x` contains the maximum value across the grouped rows, `y` — the minimum value, and `d` — any occasional value from grouped rows.

```sql
CREATE TABLE table_for_aggregation
(
    d DateTime,
    k1 Int,
    k2 Int,
    x Int,
    y Int
)
ENGINE = MergeTree
ORDER BY (k1, k2)
TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y);
```
Removing expired data {#mergetree-removing-expired-data}
Data with an expired `TTL` is removed when ClickHouse merges data parts.

When ClickHouse detects that data is expired, it performs an off-schedule merge. To control the frequency of such merges, you can set `merge_with_ttl_timeout`. If the value is too low, it will perform many off-schedule merges that may consume a lot of resources.

If you perform the `SELECT` query between merges, you may get expired data. To avoid it, use the `OPTIMIZE` query before `SELECT`.

See Also

- `ttl_only_drop_parts` setting
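Using the `tab` table from the TTL examples above, the merge-then-read pattern looks like this:

```sql
-- Force an unscheduled merge so expired rows are dropped now:
OPTIMIZE TABLE tab FINAL;

-- The subsequent read no longer sees expired data:
SELECT count() FROM tab;
```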
Disk types {#disk-types}
In addition to local block devices, ClickHouse supports these storage types:

- `s3` for S3 and MinIO
- `gcs` for GCS
- `blob_storage_disk` for Azure Blob Storage
- `hdfs` for HDFS
- `web` for read-only from web
- `cache` for local caching
- `s3_plain` for backups to S3
- `s3_plain_rewritable` for immutable, non-replicated tables in S3

Using multiple block devices for data storage {#table_engine-mergetree-multiple-volumes}

Introduction {#introduction}
`MergeTree` family table engines can store data on multiple block devices. For example, it can be useful when the data of a certain table are implicitly split into "hot" and "cold". The most recent data is regularly requested but requires only a small amount of space. On the contrary, the fat-tailed historical data is requested rarely. If several disks are available, the "hot" data may be located on fast disks (for example, NVMe SSDs or in memory), while the "cold" data - on relatively slow ones (for example, HDD).

Data part is the minimum movable unit for `MergeTree`-engine tables. The data belonging to one part are stored on one disk. Data parts can be moved between disks in the background (according to user settings) as well as by means of the `ALTER` queries.
Terms {#terms}

- Disk β Block device mounted to the filesystem.
- Default disk β Disk that stores the path specified in the `path` server setting.
- Volume β Ordered set of equal disks (similar to JBOD).
- Storage policy β Set of volumes and the rules for moving data between them.

The names given to the described entities can be found in the system tables `system.storage_policies` and `system.disks`. To apply one of the configured storage policies to a table, use the `storage_policy` setting of `MergeTree`-engine family tables.
Configuration {#table_engine-mergetree-multiple-volumes_configure}

Disks, volumes and storage policies should be declared inside the `<storage_configuration>` tag, either in the main server configuration file or in a distinct file in the `config.d` directory.

:::tip
Disks can also be declared in the `SETTINGS` section of a query. This is useful for ad-hoc analysis to temporarily attach a disk that is, for example, hosted at a URL. See dynamic storage for more details.
:::

Configuration structure:
Configuration structure:
```xml
<storage_configuration>
    <disks>
        <disk_name_1> <!-- disk name -->
            <path>/mnt/fast_ssd/clickhouse/</path>
        </disk_name_1>
        <disk_name_2>
            <path>/mnt/hdd1/clickhouse/</path>
            <keep_free_space_bytes>10485760</keep_free_space_bytes>
        </disk_name_2>
        <disk_name_3>
            <path>/mnt/hdd2/clickhouse/</path>
            <keep_free_space_bytes>10485760</keep_free_space_bytes>
        </disk_name_3>
        ...
    </disks>
    ...
</storage_configuration>
```
Tags:

- `<disk_name_N>` β Disk name. Names must be different for all disks.
- `path` β path under which a server will store data (`data` and `shadow` folders); should be terminated with '/'.
- `keep_free_space_bytes` β the amount of free disk space to be reserved.

The order of the disk definition is not important.
Storage policies configuration markup:

```xml
<storage_configuration>
    ...
    <policies>
        <policy_name_1>
            <volumes>
                <volume_name_1>
                    <disk>disk_name_from_disks_configuration</disk>
                    <max_data_part_size_bytes>1073741824</max_data_part_size_bytes>
                    <load_balancing>round_robin</load_balancing>
                </volume_name_1>
                <volume_name_2>
                    ...
                </volume_name_2>
            </volumes>
            <move_factor>0.2</move_factor>
        </policy_name_1>
        <!-- more policies -->
    </policies>
    ...
</storage_configuration>
```
Tags:

- `policy_name_N` β Policy name. Policy names must be unique.
- `volume_name_N` β Volume name. Volume names must be unique.
- `disk` β a disk within a volume.
- `max_data_part_size_bytes` β the maximum size of a part that can be stored on any of the volume's disks. If the estimated size of a merged part is bigger than `max_data_part_size_bytes`, the part will be written to the next volume. Basically, this feature allows keeping new/small parts on a hot (SSD) volume and moving them to a cold (HDD) volume once they reach a large size. Do not use this setting if your policy has only one volume.
- `move_factor` β when the amount of available space gets lower than this factor, data automatically starts to move to the next volume, if any (by default, 0.1). ClickHouse sorts existing parts by size from largest to smallest (in descending order) and selects parts with a total size sufficient to meet the `move_factor` condition. If the total size of all parts is insufficient, all parts will be moved.
- `perform_ttl_move_on_insert` β disables TTL move on data part INSERT. By default (if enabled), if we insert a data part that has already expired by the TTL move rule, it immediately goes to the volume/disk declared in the move rule. This can significantly slow down inserts when the destination volume/disk is slow (e.g. S3). If disabled, the already expired data part is written to the default volume and then immediately moved to the TTL volume.
- `load_balancing` β policy for disk balancing, `round_robin` or `least_used`.
- `least_used_ttl_ms` β timeout (in milliseconds) for updating the available space on all disks (`0` β update always, `-1` β never update, default is `60000`). Note, if the disk can be used by ClickHouse only and is not subject to an online filesystem resize/shrink, you can use `-1`; in all other cases it is not recommended, since eventually it will lead to incorrect space distribution.
- `prefer_not_to_merge` β you should not use this setting. It disables merging of data parts on this volume, which is harmful and leads to performance degradation.
- `volume_priority` β defines the priority (order) in which volumes are filled. A lower value means a higher priority. The parameter values should be natural numbers and collectively cover the range from 1 to N (the lowest priority given) without skipping any numbers.

If all volumes are tagged, they are prioritized in the given order.

If only some volumes are tagged, those without the tag have the lowest priority, and they are prioritized in the order they are defined in the config.
If no volumes are tagged, their priority is set according to the order in which they are declared in the configuration.

Two volumes cannot have the same priority value.

Configuration examples:
```xml
<storage_configuration>
    ...
    <policies>
        <hdd_in_order> <!-- policy name -->
            <volumes>
                <single> <!-- volume name -->
                    <disk>disk1</disk>
                    <disk>disk2</disk>
                </single>
            </volumes>
        </hdd_in_order>
        <moving_from_ssd_to_hdd>
            <volumes>
                <hot>
                    <disk>fast_ssd</disk>
                    <max_data_part_size_bytes>1073741824</max_data_part_size_bytes>
                </hot>
                <cold>
                    <disk>disk1</disk>
                </cold>
            </volumes>
            <move_factor>0.2</move_factor>
        </moving_from_ssd_to_hdd>
        <small_jbod_with_external_no_merges>
            <volumes>
                <main>
                    <disk>jbod1</disk>
                </main>
                <external>
                    <disk>external</disk>
                </external>
            </volumes>
        </small_jbod_with_external_no_merges>
    </policies>
    ...
</storage_configuration>
```
In the given example, the `hdd_in_order` policy implements the round-robin approach. This policy defines only one volume (`single`), and the data parts are stored on all its disks in circular order. Such a policy can be quite useful if several similar disks are mounted to the system, but RAID is not configured. Keep in mind that each individual disk drive is not reliable and you might want to compensate for that with a replication factor of 3 or more.

If there are different kinds of disks available in the system, the `moving_from_ssd_to_hdd` policy can be used instead. The volume `hot` consists of an SSD disk (`fast_ssd`), and the maximum size of a part that can be stored on this volume is 1GB. All parts larger than 1GB will be stored directly on the `cold` volume, which contains an HDD disk `disk1`. Also, once the disk `fast_ssd` gets filled by more than 80%, data will be transferred to `disk1` by a background process.

The order of volume enumeration within a storage policy is important if at least one of the listed volumes has no explicit `volume_priority` parameter. Once a volume is overfilled, data is moved to the next one. The order of disk enumeration is important as well because data is stored on disks in turns.
When creating a table, one can apply one of the configured storage policies to it:

```sql
CREATE TABLE table_with_non_default_policy (
    EventDate Date,
    OrderID UInt64,
    BannerID UInt64,
    SearchPhrase String
) ENGINE = MergeTree
ORDER BY (OrderID, BannerID)
PARTITION BY toYYYYMM(EventDate)
SETTINGS storage_policy = 'moving_from_ssd_to_hdd'
```

The `default` storage policy implies using only one volume, which consists of only one disk given in `<path>`.
You can change the storage policy after table creation with the `ALTER TABLE ... MODIFY SETTING` query; the new policy should include all old disks and volumes with the same names.
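For example (the new policy name here is illustrative; it must contain all of the old policy's disks and volumes under the same names):

```sql
ALTER TABLE table_with_non_default_policy
    MODIFY SETTING storage_policy = 'moving_from_ssd_to_hdd_v2';
```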
The number of threads performing background moves of data parts can be changed by the `background_move_pool_size` setting.
Details {#details}

In the case of `MergeTree` tables, data is getting to disk in different ways:

- As a result of an insert (`INSERT` query).
- During background merges and mutations.
- When downloading from another replica.
- As a result of partition freezing (`ALTER TABLE ... FREEZE PARTITION`).

In all these cases except for mutations and partition freezing, a part is stored on a volume and a disk according to the given storage policy:

1. The first volume (in the order of definition) that has enough disk space for storing a part (`unreserved_space > current_part_size`) and allows for storing parts of a given size (`max_data_part_size_bytes > current_part_size`) is chosen.
2. Within this volume, the disk that follows the one used for storing the previous chunk of data, and that has more free space than the part size (`unreserved_space - keep_free_space_bytes > current_part_size`), is chosen.

Under the hood, mutations and partition freezing make use of hard links. Hard links between different disks are not supported, therefore in such cases the resulting parts are stored on the same disks as the initial ones.
In the background, parts are moved between volumes based on the amount of free space (`move_factor` parameter) according to the order the volumes are declared in the configuration file. Data is never transferred from the last volume into the first one. One may use the system tables `system.part_log` (field `type = MOVE_PART`) and `system.parts` (fields `path` and `disk`) to monitor background moves. Also, detailed information can be found in server logs.
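As a sketch, a monitoring query over `system.part_log` might look as follows (the event-type literal and column names are as described above; adjust them to your server version if needed):

```sql
-- Recent background part moves, newest first
SELECT event_time, table, part_name, path_on_disk
FROM system.part_log
WHERE type = 'MOVE_PART'
ORDER BY event_time DESC
LIMIT 10;
```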
The user can force moving a part or a partition from one volume to another using the query `ALTER TABLE ... MOVE PART|PARTITION ... TO VOLUME|DISK ...`; all the restrictions for background operations are taken into account. The query initiates a move on its own and does not wait for background operations to be completed. The user will get an error message if not enough free space is available or if any of the required conditions are not met.
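For example, using the table and policy from the examples above (the part name and partition value are illustrative for the `toYYYYMM` partition key):

```sql
-- Move a single part to a specific disk
ALTER TABLE table_with_non_default_policy MOVE PART '202501_1_1_0' TO DISK 'fast_ssd';

-- Move a whole partition to a specific volume
ALTER TABLE table_with_non_default_policy MOVE PARTITION 202501 TO VOLUME 'cold';
```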
Moving data does not interfere with data replication. Therefore, different storage policies can be specified for the same table on different replicas.
After the completion of background merges and mutations, old parts are removed only after a certain amount of time (`old_parts_lifetime`). During this time, they are not moved to other volumes or disks. Therefore, until the parts are finally removed, they are still taken into account for evaluation of the occupied disk space.

The user can assign new big parts to different disks of a JBOD volume in a balanced way using the `min_bytes_to_rebalance_partition_over_jbod` setting.
Using external storage for data storage {#table_engine-mergetree-s3}
`MergeTree` family table engines can store data to S3, AzureBlobStorage, or HDFS using a disk with type `s3`, `azure_blob_storage`, or `hdfs` accordingly. See configuring external storage options for more details.

Example for S3 as external storage using a disk with type `s3`.

Configuration markup:
```xml
<storage_configuration>
    ...
    <disks>
        <s3>
            <type>s3</type>
            <support_batch_delete>true</support_batch_delete>
            <endpoint>https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/root-path/</endpoint>
            <access_key_id>your_access_key_id</access_key_id>
            <secret_access_key>your_secret_access_key</secret_access_key>
            <region></region>
            <header>Authorization: Bearer SOME-TOKEN</header>
            <server_side_encryption_customer_key_base64>your_base64_encoded_customer_key</server_side_encryption_customer_key_base64>
            <server_side_encryption_kms_key_id>your_kms_key_id</server_side_encryption_kms_key_id>
            <server_side_encryption_kms_encryption_context>your_kms_encryption_context</server_side_encryption_kms_encryption_context>
            <server_side_encryption_kms_bucket_key_enabled>true</server_side_encryption_kms_bucket_key_enabled>
            <proxy>
                <uri>http://proxy1</uri>
                <uri>http://proxy2</uri>
            </proxy>
            <connect_timeout_ms>10000</connect_timeout_ms>
            <request_timeout_ms>5000</request_timeout_ms>
            <retry_attempts>10</retry_attempts>
            <single_read_retries>4</single_read_retries>
            <min_bytes_for_seek>1000</min_bytes_for_seek>
            <metadata_path>/var/lib/clickhouse/disks/s3/</metadata_path>
            <skip_access_check>false</skip_access_check>
        </s3>
        <s3_cache>
            <type>cache</type>
            <disk>s3</disk>
            <path>/var/lib/clickhouse/disks/s3_cache/</path>
            <max_size>10Gi</max_size>
        </s3_cache>
    </disks>
    ...
</storage_configuration>
```
Also see configuring external storage options.

:::note cache configuration
ClickHouse versions 22.3 through 22.7 use a different cache configuration, see using local cache if you are using one of those versions.
:::
Virtual columns {#virtual-columns}

- `_part` β Name of a part.
- `_part_index` β Sequential index of the part in the query result.
- `_part_starting_offset` β Cumulative starting row of the part in the query result.
- `_part_offset` β Number of the row in the part.
- `_part_granule_offset` β Number of the granule in the part.
- `_partition_id` β Name of a partition.
- `_part_uuid` β Unique part identifier (if the MergeTree setting `assign_part_uuids` is enabled).
- `_part_data_version` β Data version of the part (either the minimum block number or the mutation version).
- `_partition_value` β Values (a tuple) of a `partition by` expression.
- `_sample_factor` β Sample factor (from the query).
- `_block_number` β Original number of the block the row was assigned at insert, persisted on merges when the setting `enable_block_number_column` is enabled.
- `_block_offset` β Original number of the row in the block assigned at insert, persisted on merges when the setting `enable_block_offset_column` is enabled.
- `_disk_name` β Disk name used for the storage.
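For instance, a sketch of how the virtual columns can be used to see where each part of a table lives (table name taken from the storage-policy example above):

```sql
SELECT _part, _partition_id, _disk_name, count() AS rows
FROM table_with_non_default_policy
GROUP BY _part, _partition_id, _disk_name
ORDER BY _part;
```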
Column statistics {#column-statistics}

The statistics declaration is in the columns section of the `CREATE` query for tables from the `*MergeTree*` family when we enable `set allow_experimental_statistics = 1`.
```sql
CREATE TABLE tab
(
    a Int64 STATISTICS(TDigest, Uniq),
    b Float64
)
ENGINE = MergeTree
ORDER BY a
```
We can also manipulate statistics with `ALTER` statements.

```sql
ALTER TABLE tab ADD STATISTICS b TYPE TDigest, Uniq;
ALTER TABLE tab DROP STATISTICS a;
```
These lightweight statistics aggregate information about the distribution of values in columns. Statistics are stored in every part and updated on every insert. They can be used for prewhere optimization only if we enable `set allow_statistics_optimize = 1`.
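Put together, a minimal sketch of the workflow looks like this (the filter values are illustrative):

```sql
SET allow_experimental_statistics = 1;
SET allow_statistics_optimize = 1;

-- The optimizer can use the TDigest/Uniq statistics declared on column a
-- to order PREWHERE conditions by estimated selectivity
SELECT count()
FROM tab
WHERE a < 100 AND b > 0.5;
```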
Available types of column statistics {#available-types-of-column-statistics}

MinMax

The minimum and maximum column value, which allows estimating the selectivity of range filters on numeric columns.

Syntax: `minmax`

TDigest

TDigest sketches, which allow computing approximate percentiles (e.g. the 90th percentile) for numeric columns.

Syntax: `tdigest`

Uniq

HyperLogLog sketches, which provide an estimation of how many distinct values a column contains.

Syntax: `uniq`

CountMin

CountMin sketches, which provide an approximate count of the frequency of each value in a column.

Syntax: `countmin`
Supported data types {#supported-data-types}

|          | (U)Int*, Float*, Decimal(*), Date*, Boolean, Enum* | String or FixedString |
|----------|----------------------------------------------------|-----------------------|
| CountMin | β | β |
| MinMax | β | β |
| TDigest | β | β |
| Uniq | β | β |
Supported operations {#supported-operations}
|          | Equality filters (==) | Range filters (>, >=, <, <=) |
|----------|-----------------------|------------------------------|
| CountMin | β | β |
| MinMax | β | β |
| TDigest | β | β |
| Uniq | β | β |
Column-level settings {#column-level-settings}

Certain MergeTree settings can be overridden at column level:

- `max_compress_block_size` β Maximum size of blocks of uncompressed data before compressing for writing to a table.
- `min_compress_block_size` β Minimum size of blocks of uncompressed data required for compression when writing the next mark.

Example:
```sql
CREATE TABLE tab
(
    id Int64,
    document String SETTINGS (min_compress_block_size = 16777216, max_compress_block_size = 16777216)
)
ENGINE = MergeTree
ORDER BY id
```
Column-level settings can be modified or removed using `ALTER MODIFY COLUMN`, for example:

Remove `SETTINGS` from the column declaration:

```sql
ALTER TABLE tab MODIFY COLUMN document REMOVE SETTINGS;
```

Modify a setting:

```sql
ALTER TABLE tab MODIFY COLUMN document MODIFY SETTING min_compress_block_size = 8192;
```

Reset one or more settings; this also removes the setting declaration in the column expression of the table's CREATE query:

```sql
ALTER TABLE tab MODIFY COLUMN document RESET SETTING min_compress_block_size;
```
description: 'Documentation for Exact and Approximate Vector Search'
keywords: ['vector similarity search', 'ann', 'knn', 'hnsw', 'indices', 'index', 'nearest neighbor', 'vector search']
sidebar_label: 'Exact and Approximate Vector Search'
slug: /engines/table-engines/mergetree-family/annindexes
title: 'Exact and Approximate Vector Search'
doc_type: 'guide'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
Exact and approximate vector search
The problem of finding the N closest points in a multi-dimensional (vector) space for a given point is known as nearest neighbor search, or, in short: vector search.
Two general approaches exist for solving vector search:
- Exact vector search calculates the distance between the given point and all points in the vector space. This ensures the best possible accuracy, i.e. the returned points are guaranteed to be the actual nearest neighbors. Since the vector space is explored exhaustively, exact vector search can be too slow for real-world use.
- Approximate vector search refers to a group of techniques (e.g., special data structures like graphs and random forests) which compute results much faster than exact vector search. The result accuracy is typically "good enough" for practical use. Many approximate techniques provide parameters to tune the trade-off between the result accuracy and the search time.
A vector search (exact or approximate) can be written in SQL as follows:
```sql
WITH [...] AS reference_vector
SELECT [...]
FROM table
WHERE [...] -- a WHERE clause is optional
ORDER BY <DistanceFunction>(vectors, reference_vector)
LIMIT <N>
```
The points in the vector space are stored in a column `vectors` of array type, e.g. `Array(Float64)`, `Array(Float32)`, or `Array(BFloat16)`. The reference vector is a constant array and given as a common table expression. `<DistanceFunction>` computes the distance between the reference point and all stored points. Any of the available distance functions can be used for that. `<N>` specifies how many neighbors should be returned.
Exact vector search {#exact-nearest-neighbor-search}

An exact vector search can be performed using the above SELECT query as is. The runtime of such queries is generally proportional to the number of stored vectors and their dimension, i.e. the number of array elements. Also, since ClickHouse performs a brute-force scan of all vectors, the runtime also depends on the number of threads used by the query (see setting `max_threads`).
Example {#exact-nearest-neighbor-search-example}
```sql
CREATE TABLE tab(id Int32, vec Array(Float32)) ENGINE = MergeTree ORDER BY id;
INSERT INTO tab VALUES (0, [1.0, 0.0]), (1, [1.1, 0.0]), (2, [1.2, 0.0]), (3, [1.3, 0.0]), (4, [1.4, 0.0]), (5, [1.5, 0.0]), (6, [0.0, 2.0]), (7, [0.0, 2.1]), (8, [0.0, 2.2]), (9, [0.0, 2.3]), (10, [0.0, 2.4]), (11, [0.0, 2.5]);
WITH [0., 2.] AS reference_vec
SELECT id, vec
FROM tab
ORDER BY L2Distance(vec, reference_vec) ASC
LIMIT 3;
```
returns
```text
   ββidββ¬βvecββββββ
1. β  6 β [0,2]   β
2. β  7 β [0,2.1] β
3. β  8 β [0,2.2] β
   ββββββ΄ββββββββββ
```
Approximate vector search {#approximate-nearest-neighbor-search}
Vector Similarity Indexes {#vector-similarity-index}
ClickHouse provides a special "vector similarity" index to perform approximate vector search.
:::note
Vector similarity indexes are available in ClickHouse version 25.8 and higher. If you run into problems, kindly open an issue in the ClickHouse repository.
:::
Creating a Vector Similarity Index {#creating-a-vector-similarity-index}
A vector similarity index can be created on a new table like this:
```sql
CREATE TABLE table
(
    [...],
    vectors Array(Float*),
    INDEX <index_name> vectors TYPE vector_similarity(<type>, <distance_function>, <dimensions>) [GRANULARITY <N>]
)
ENGINE = MergeTree
ORDER BY [...]
```
Alternatively, to add a vector similarity index to an existing table:
```sql
ALTER TABLE table ADD INDEX <index_name> vectors TYPE vector_similarity(<type>, <distance_function>, <dimensions>) [GRANULARITY <N>];
```
Vector similarity indexes are special kinds of skipping indexes (see here and here). Accordingly, the above `ALTER TABLE` statement only causes the index to be built for future new data inserted into the table. To build the index for existing data as well, you need to materialize it:

```sql
ALTER TABLE table MATERIALIZE INDEX <index_name> SETTINGS mutations_sync = 2;
```
Function `<distance_function>` must be

- `L2Distance`, the Euclidean distance, representing the length of a line between two points in Euclidean space, or
- `cosineDistance`, the cosine distance, representing the angle between two non-zero vectors.

For normalized data, `L2Distance` is usually the best choice, otherwise `cosineDistance` is recommended to compensate for scale.
`<dimensions>` specifies the array cardinality (number of elements) in the underlying column. If ClickHouse finds an array with a different cardinality during index creation, the index is discarded and an error is returned.

The optional GRANULARITY parameter `<N>` refers to the size of the index granules (see here). The default value of 100 million should work reasonably well for most use cases, but it can also be tuned. We recommend tuning only for advanced users who understand the implications of what they are doing (see below).

Vector similarity indexes are generic in the sense that they can accommodate different approximate search methods. The actually used method is specified by the parameter `<type>`. As of now, the only available method is HNSW (academic paper), a popular and state-of-the-art technique for approximate vector search based on hierarchical proximity graphs. If HNSW is used as the type, users may optionally specify further HNSW-specific parameters:
```sql
CREATE TABLE table
(
    [...],
    vectors Array(Float*),
    INDEX index_name vectors TYPE vector_similarity('hnsw', <distance_function>, <dimensions>[, <quantization>, <hnsw_max_connections_per_layer>, <hnsw_candidate_list_size_for_construction>]) [GRANULARITY N]
)
ENGINE = MergeTree
ORDER BY [...]
```
These HNSW-specific parameters are available:

- `<quantization>` controls the quantization of the vectors in the proximity graph. Possible values are `f64`, `f32`, `f16`, `bf16`, `i8`, or `b1`. The default value is `bf16`. Note that this parameter does not affect the representation of the vectors in the underlying column.
- `<hnsw_max_connections_per_layer>` controls the number of neighbors per graph node, also known as HNSW hyperparameter `M`. The default value is `32`. Value `0` means using the default value.
- `<hnsw_candidate_list_size_for_construction>` controls the size of the dynamic candidate list during construction of the HNSW graph, also known as HNSW hyperparameter `ef_construction`. The default value is `128`. Value `0` means using the default value.

The default values of all HNSW-specific parameters work reasonably well in the majority of use cases. We therefore do not recommend customizing the HNSW-specific parameters.
Further restrictions apply:

- Vector similarity indexes can only be built on columns of type `Array(Float32)`, `Array(Float64)`, or `Array(BFloat16)`. Arrays of nullable and low-cardinality floats such as `Array(Nullable(Float32))` and `Array(LowCardinality(Float32))` are not allowed.
- Vector similarity indexes must be built on single columns.
- Vector similarity indexes may be built on calculated expressions (e.g., `INDEX index_name arraySort(vectors) TYPE vector_similarity([...])`) but such indexes cannot be used for approximate neighbor search later on.
- Vector similarity indexes require that all arrays in the underlying column have `<dimension>`-many elements - this is checked during index creation. To detect violations of this requirement as early as possible, users can add a constraint for the vector column, e.g., `CONSTRAINT same_length CHECK length(vectors) = 256`.
- Likewise, array values in the underlying column must not be empty (`[]`) or have a default value (also `[]`).
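Putting these restrictions together, a table definition might look like this (the table name, index name, and the dimension 256 are illustrative):

```sql
CREATE TABLE tab_vec
(
    id Int64,
    vectors Array(Float32),
    -- Rejects wrong-cardinality INSERTs early, before index creation fails
    CONSTRAINT same_length CHECK length(vectors) = 256,
    INDEX idx vectors TYPE vector_similarity('hnsw', 'L2Distance', 256)
)
ENGINE = MergeTree
ORDER BY id;
```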
Estimating storage and memory consumption

A vector generated for use with a typical AI model (e.g. a Large Language Model, LLM) consists of hundreds or thousands of floating-point values. Thus, a single vector value can have a memory consumption of multiple kilobytes. Users who would like to estimate the storage required for the underlying vector column in the table, as well as the main memory needed for the vector similarity index, can use the two formulas below:

Storage consumption of the vector column in the table (uncompressed):

```text
Storage consumption = Number of vectors * Dimension * Size of column data type
```

Example for the dbpedia dataset:

```text
Storage consumption = 1 million * 1536 * 4 (for Float32) = 6.1 GB
```
The vector similarity index must be fully loaded from disk into main memory to perform searches. Similarly, the vector index is also constructed fully in memory and then saved to disk.

Memory consumption required to load a vector index:

```text
Memory for vectors in the index (mv) = Number of vectors * Dimension * Size of quantized data type
Memory for in-memory graph (mg) = Number of vectors * hnsw_max_connections_per_layer * Bytes_per_node_id (= 4) * Layer_node_repetition_factor (= 2)

Memory consumption: mv + mg
```

Example for the dbpedia dataset:

```text
Memory for vectors in the index (mv) = 1 million * 1536 * 2 (for BFloat16) = 3072 MB
Memory for in-memory graph (mg) = 1 million * 64 * 2 * 4 = 512 MB

Memory consumption = 3072 + 512 = 3584 MB
```

The above formulas do not account for additional memory required by vector similarity indexes to allocate runtime data structures like pre-allocated buffers and caches.
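The arithmetic of the dbpedia example can be reproduced directly in SQL (the numbers mirror the example above, including its value of 64 for `hnsw_max_connections_per_layer`, with MB taken as 10^6 bytes):

```sql
SELECT
    1000000 * 1536 * 2 / 1000000 AS mv_mb,    -- vectors in the index, BFloat16
    1000000 * 64 * 2 * 4 / 1000000 AS mg_mb,  -- in-memory graph
    mv_mb + mg_mb AS total_mb                 -- 3072 + 512 = 3584
```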
Using a Vector Similarity Index {#using-a-vector-similarity-index}

:::note
To use vector similarity indexes, setting compatibility has to be `''` (the default value), or `'25.1'` or newer.
:::

Vector similarity indexes support SELECT queries of this form:

```sql
WITH [...] AS reference_vector
SELECT [...]
FROM table
WHERE [...] -- a WHERE clause is optional
ORDER BY <DistanceFunction>(vectors, reference_vector)
LIMIT <N>
```
ClickHouse's query optimizer tries to match the above query template and make use of available vector similarity indexes. A query can only use a vector similarity index if the distance function in the SELECT query is the same as the distance function in the index definition.

Advanced users may provide a custom value for the setting `hnsw_candidate_list_size_for_search` (also known as HNSW hyperparameter `ef_search`) to tune the size of the candidate list during search (e.g. `SELECT [...] SETTINGS hnsw_candidate_list_size_for_search = <value>`). The default value of the setting, 256, works well in the majority of use cases. Higher setting values mean better accuracy at the cost of slower performance.
If the query can use a vector similarity index, ClickHouse checks that the LIMIT `<N>` provided in SELECT queries is within reasonable bounds. More specifically, an error is returned if `<N>` is bigger than the value of setting `max_limit_for_vector_search_queries` with default value 100. Too large LIMIT values can slow down searches and usually indicate a usage error.

To check if a SELECT query uses a vector similarity index, you can prefix the query with `EXPLAIN indexes = 1`.
As an example, query

```sql
EXPLAIN indexes = 1
WITH [0.462, 0.084, ..., -0.110] AS reference_vec
SELECT id, vec
FROM tab
ORDER BY L2Distance(vec, reference_vec) ASC
LIMIT 10;
```

may return
result
ββexplainββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
1. β Expression (Project names) β
2. β Limit (preliminary LIMIT (without OFFSET)) β
3. β Sorting (Sorting for ORDER BY) β
4. β Expression ((Before ORDER BY + (Projection + Change column names to column identifiers))) β
5. β ReadFromMergeTree (default.tab) β
6. β Indexes: β
7. β PrimaryKey β
8. β Condition: true β
9. β Parts: 1/1 β
10. β Granules: 575/575 β
11. β Skip β
12. β Name: idx β
13. β Description: vector_similarity GRANULARITY 100000000 β
14. β Parts: 1/1 β
15. β Granules: 10/575 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
In this example, 1 million vectors in the
dbpedia dataset
, each with dimension 1536, are stored in 575 granules, i.e. 1.7k rows per granule.
The query asks for 10 neighbours and the vector similarity index finds these 10 neighbours in 10 separate granules.
These 10 granules will be read during query execution.
Vector similarity indexes are used if the output contains
Skip
and the name and type of the vector index (in the example,
idx
and
vector_similarity
).
In this case, the vector similarity index dropped 565 of 575 granules, i.e. over 98% of the data.
The more granules can be dropped, the more effective index usage becomes.
:::tip
To enforce index usage, you can run the SELECT query with setting
force_data_skipping_indexes
(provide the index name as setting value).
:::
Post-filtering and Pre-filtering
Users may optionally specify a
WHERE
clause with additional filter conditions for the SELECT query.
ClickHouse will evaluate these filter conditions using post-filtering or pre-filtering strategy.
In short, both strategies determine the order in which the filters are evaluated:
- Post-filtering means that the vector similarity index is evaluated first, afterwards ClickHouse evaluates the additional filter(s) specified in the
WHERE
clause.
- Pre-filtering means that the filter evaluation order is the other way round.
The strategies have different trade-offs:
- Post-filtering has the general problem that it may return fewer than the number of rows requested in the
LIMIT <N>
clause. This situation happens when one or more result rows returned by the vector similarity index fail to satisfy the additional filters.
- Pre-filtering is generally an unsolved problem. Certain specialized vector databases provide pre-filtering algorithms but most relational databases (including ClickHouse) will fall back to exact neighbor search, i.e., a brute-force scan without index.
What strategy is used depends on the filter condition.
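Both strategies can be sketched in a few lines of Python. The `knn` helper below is a brute-force stand-in for the vector similarity index; all names and data are invented for illustration:

```python
def knn(rows, ref, n):
    # Stand-in for the vector index: the n nearest rows by squared L2 distance.
    return sorted(rows, key=lambda r: sum((x - y) ** 2 for x, y in zip(r["vec"], ref)))[:n]

def post_filter(rows, ref, pred, n):
    # Index first, WHERE clause afterwards: may return fewer than n rows.
    return [r for r in knn(rows, ref, n) if pred(r)]

def pre_filter(rows, ref, pred, n):
    # WHERE clause first, then exact search over the surviving rows.
    return knn([r for r in rows if pred(r)], ref, n)

rows = [{"id": i, "vec": [float(i), 0.0], "year": 2024 + i % 2} for i in range(10)]
pred = lambda r: r["year"] == 2025
ref = [0.0, 0.0]
print(len(post_filter(rows, ref, pred, 4)))  # -> 2
print(len(pre_filter(rows, ref, pred, 4)))   # -> 4
```

With post-filtering, only 2 of the 4 requested rows survive the filter; pre-filtering always fills the limit as long as enough rows match.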
Additional filters are part of the partition key
If the additional filter condition is part of the partition key, then ClickHouse will apply partition pruning.
As an example, a table is range-partitioned by column
year
and the following query is run:
sql
WITH [0., 2.] AS reference_vec
SELECT id, vec
FROM tab
WHERE year = 2025
ORDER BY L2Distance(vec, reference_vec) ASC
LIMIT 3;
ClickHouse will prune all partitions except the 2025 one.
Additional filters cannot be evaluated using indexes
If additional filter conditions cannot be evaluated using indexes (primary key index, skipping index), ClickHouse will apply post-filtering.
Additional filters can be evaluated using the primary key index
If additional filter conditions can be evaluated using the
primary key
(i.e., they form a prefix of the primary key) and
- the filter condition eliminates at least one row within a part, then ClickHouse will fall back to pre-filtering for the "surviving" ranges within the part,
- the filter condition eliminates no rows within a part, then ClickHouse will perform post-filtering for the part.
In practical use cases, the latter case is rather unlikely.
Additional filters can be evaluated using skipping index
If additional filter conditions can be evaluated using
skipping indexes
(minmax index, set index, etc.), ClickHouse performs post-filtering.
In such cases, the vector similarity index is evaluated first as it is expected to remove the most rows relative to other skipping indexes.
For finer control over post-filtering vs. pre-filtering, two settings can be used:
Setting
vector_search_filter_strategy
(default:
auto
which implements above heuristics) may be set to
prefilter
.
This is useful to force pre-filtering in cases where the additional filter conditions are extremely selective.
As an example, the following query may benefit from pre-filtering:
sql
SELECT bookid, author, title
FROM books
WHERE price < 2.00
ORDER BY cosineDistance(book_vector, getEmbedding('Books on ancient Asian empires'))
LIMIT 10
Assuming that only a very small number of books cost less than 2 dollars, post-filtering may return zero rows because the top 10 matches returned by the vector index could all be priced above 2 dollars.
By forcing pre-filtering (add
SETTINGS vector_search_filter_strategy = 'prefilter'
to the query), ClickHouse first finds all books with a price of less than 2 dollars and then executes a brute-force vector search over the found books.
As an alternative approach to resolve the above issue, setting
vector_search_index_fetch_multiplier
(default:
1.0
, maximum:
1000.0
) may be configured to a value >
1.0
(for example,
2.0
).
The number of nearest neighbors fetched from the vector index is multiplied by the setting value, and the additional filter is then applied to those rows to return LIMIT-many rows.
As an example, we can query again but with multiplier
3.0
:
sql
SELECT bookid, author, title
FROM books
WHERE price < 2.00
ORDER BY cosineDistance(book_vector, getEmbedding('Books on ancient Asian empires'))
LIMIT 10
SETTINGS vector_search_index_fetch_multiplier = 3.0;
ClickHouse will fetch 3.0 x 10 = 30 nearest neighbors from the vector index in each part and afterwards evaluate the additional filters.
Only the ten closest neighbors will be returned.
We note that setting
vector_search_index_fetch_multiplier
can mitigate the problem, but in extreme cases (a very selective WHERE condition), it is still possible that fewer than the N requested rows are returned.
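The fetch-then-filter arithmetic behind the multiplier can be sketched as follows (hypothetical data; `fetch_with_multiplier` is not a ClickHouse API, only an illustration of the order of operations):

```python
def fetch_with_multiplier(candidates, pred, limit, multiplier):
    # Fetch multiplier * limit nearest rows from the index (candidates are
    # assumed to be pre-sorted by distance), then apply the WHERE predicate.
    fetched = candidates[: int(multiplier * limit)]
    return [c for c in fetched if pred(c)][:limit]

# Hypothetical prices of the 30 nearest books, sorted by distance.
prices = [3.5, 1.0, 4.0, 5.0, 1.5] * 6
cheap = lambda p: p < 2.00
print(len(fetch_with_multiplier(prices, cheap, 10, 1.0)))  # -> 4 (fewer than LIMIT)
print(len(fetch_with_multiplier(prices, cheap, 10, 3.0)))  # -> 10
```

A larger multiplier widens the candidate pool so the filter is more likely to leave LIMIT-many rows.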
Rescoring
Skip indexes in ClickHouse generally filter at the granule level, i.e. a lookup in a skip index (internally) returns a list of potentially matching granules, which reduces the amount of data read in the subsequent scan.
This works well for skip indexes in general but in the case of vector similarity indexes, it creates a "granularity mismatch".
In more detail, the vector similarity index determines the row numbers of the N most similar vectors for a given reference vector, but it then needs to extrapolate these row numbers to granule numbers.
ClickHouse will then load these granules from disk, and repeat the distance calculation for all vectors in these granules.
This step is called rescoring, and while it can theoretically improve accuracy (remember that the vector similarity index returns only an
approximate
result), it is obviously not optimal in terms of performance.
ClickHouse therefore provides an optimization which disables rescoring and returns the most similar vectors and their distances directly from the index.
The optimization is enabled by default, see setting
vector_search_with_rescoring
.
The way it works at a high level is that ClickHouse makes the most similar vectors and their distances available as a virtual column
_distance
.
To see this, run a vector search query with
EXPLAIN header = 1
:
sql
EXPLAIN header = 1
WITH [0., 2.] AS reference_vec
SELECT id
FROM tab
ORDER BY L2Distance(vec, reference_vec) ASC
LIMIT 3
SETTINGS vector_search_with_rescoring = 0
```result
Query id: a2a9d0c8-a525-45c1-96ca-c5a11fa66f47
ββexplainββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Expression (Project names) β
β Header: id Int32 β
β Limit (preliminary LIMIT (without OFFSET)) β
β Header: L2Distance(__table1.vec, _CAST([0., 2.]_Array(Float64), 'Array(Float64)'_String)) Float64 β
β __table1.id Int32 β
β Sorting (Sorting for ORDER BY) β
β Header: L2Distance(__table1.vec, _CAST([0., 2.]_Array(Float64), 'Array(Float64)'_String)) Float64 β
β __table1.id Int32 β
β Expression ((Before ORDER BY + (Projection + Change column names to column identifiers))) β
β Header: L2Distance(__table1.vec, _CAST([0., 2.]_Array(Float64), 'Array(Float64)'_String)) Float64 β
β __table1.id Int32 β
β ReadFromMergeTree (default.tab) β
β Header: id Int32 β
β _distance Float32 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
:::note
A query run without rescoring (
vector_search_with_rescoring = 0
) and with parallel replicas enabled may fall back to rescoring.
:::
Performance tuning {#performance-tuning}
Tuning compression
In virtually all use cases, the vectors in the underlying column are dense and do not compress well.
As a result,
compression
slows down inserts and reads into/from the vector column.
We therefore recommend disabling compression.
To do that, specify
CODEC(NONE)
for the vector column like this:
sql
CREATE TABLE tab(id Int32, vec Array(Float32) CODEC(NONE), INDEX idx vec TYPE vector_similarity('hnsw', 'L2Distance', 2)) ENGINE = MergeTree ORDER BY id;
Tuning index creation
The life cycle of vector similarity indexes is tied to the life cycle of parts.
In other words, whenever a new part with a defined vector similarity index is created, the index is created as well.
This typically happens when data is
inserted
or during
merges
.
Unfortunately, HNSW is known for long index creation times which can significantly slow down inserts and merges.
Vector similarity indexes are ideally only used if the data is immutable or rarely changed.
To speed up index creation, the following techniques can be used:
First, index creation can be parallelized.
The maximum number of index creation threads can be configured using server setting
max_build_vector_similarity_index_thread_pool_size
.
For optimal performance, the setting value should be configured to the number of CPU cores.
Second, to speed up INSERT statements, users may disable the creation of skipping indexes on newly inserted parts using session setting
materialize_skip_indexes_on_insert
.
SELECT queries on such parts will fall back to exact search.
Since inserted parts tend to be small compared to the total table size, the performance impact of that is expected to be negligible.
Third, to speed up merges, users may disable the creation of skipping indexes on merged parts using session setting
materialize_skip_indexes_on_merge
.
This, in conjunction with statement
ALTER TABLE [...] MATERIALIZE INDEX [...]
, provides explicit control over the life cycle of vector similarity indexes.
For example, index creation can be deferred until all data has been ingested or until a period of low system load such as the weekend.
Tuning index usage
SELECT queries need to load vector similarity indexes into main memory to use them.
To avoid that the same vector similarity index is loaded repeatedly into main memory, ClickHouse provides a dedicated in-memory cache for such indexes.
The bigger this cache is, the fewer unnecessary loads will happen.
The maximum cache size can be configured using server setting
vector_similarity_index_cache_size
.
By default, the cache can grow up to 5 GB in size.
:::note
The vector similarity index cache stores vector index granules.
If individual vector index granules are bigger than the cache size, they will not be cached.
Therefore, please make sure to calculate the vector index size (based on the formula in "Estimating storage and memory consumption" or
system.data_skipping_indices
) and size the cache correspondingly.
:::
The current size of the vector similarity index cache is shown in
system.metrics
:
sql
SELECT metric, value
FROM system.metrics
WHERE metric = 'VectorSimilarityIndexCacheBytes'
The cache hits and misses for a query with some query id can be obtained from
system.query_log
:
```sql
SYSTEM FLUSH LOGS query_log;
SELECT ProfileEvents['VectorSimilarityIndexCacheHits'], ProfileEvents['VectorSimilarityIndexCacheMisses']
FROM system.query_log
WHERE type = 'QueryFinish' AND query_id = '<...>'
ORDER BY event_time_microseconds;
```
For production use cases, we recommend that the cache is sized large enough so that all vector indexes remain in memory at all times.
Tuning quantization
Quantization
is a technique to reduce the memory footprint of vectors and the computational costs of building and traversing vector indexes.
ClickHouse vector indexes support the following quantization options:
| Quantization | Name | Storage per dimension |
|----------------|------------------------------|---------------------- |
| f32 | Single precision | 4 bytes |
| f16 | Half precision | 2 bytes |
| bf16 (default) | Half precision (brain float) | 2 bytes |
| i8 | Quarter precision | 1 byte |
| b1 | Binary | 1 bit |
Quantization reduces the precision of vector searches compared to searching the original full-precision floating-point values (
f32
).
However, on most datasets, half-precision brain float quantization (
bf16
) results in a negligible precision loss, therefore vector similarity indexes use this quantization technique by default.
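For intuition, bf16 can be emulated by truncating the low 16 bits of a float32 bit pattern. This is only a sketch of the idea, not ClickHouse's implementation:

```python
import struct

def to_bf16(x: float) -> float:
    # bf16 keeps the sign, the full 8-bit exponent and the top 7 mantissa
    # bits of a float32; zeroing the low 16 bits emulates it.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

vec = [0.123456789, -1.23456789, 3.14159265]
quantized = [to_bf16(x) for x in vec]
print(quantized)
# The relative error stays below roughly 2**-7, which is why bf16 is a
# safe default for most embedding datasets.
print(max(abs(a - b) / abs(a) for a, b in zip(vec, quantized)))
```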
Quarter precision (
i8
) and binary (
b1
) quantization causes appreciable precision loss in vector searches.
We recommend both quantizations only if the size of the vector similarity index is significantly larger than the available DRAM size.
In this case, we also suggest enabling rescoring (
vector_search_index_fetch_multiplier
,
vector_search_with_rescoring
) to improve accuracy.
Binary quantization is only recommended for 1) normalized embeddings (i.e. vector length = 1; OpenAI embeddings are usually normalized), and 2) if cosine distance is used as the distance function.
Binary quantization internally uses the Hamming distance to construct and search the proximity graph.
The rescoring step uses the original full-precision vectors stored in the table to identify the nearest neighbours via cosine distance.
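A minimal sketch of this idea (not ClickHouse's implementation): quantize each vector to one bit per dimension and compare vectors with the Hamming distance:

```python
def binarize(vec):
    # b1 quantization: one bit per dimension (the sign of the component),
    # packed into a Python int.
    bits = 0
    for i, x in enumerate(vec):
        if x > 0:
            bits |= 1 << i
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits; a cheap proxy for angular distance
    # between normalized vectors.
    return bin(a ^ b).count("1")

q = binarize([0.9, -0.1, 0.4, -0.7])
d = binarize([0.8, -0.2, 0.5, -0.6])   # similar direction
e = binarize([-0.9, 0.1, -0.4, 0.7])   # opposite direction
print(hamming(q, d), hamming(q, e))  # -> 0 4
```

Hamming distances can be computed with bitwise XOR and popcount, which is far cheaper than floating-point arithmetic; rescoring then restores precision for the final ranking.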
Tuning data transfer
The reference vector in a vector search query is provided by the user and generally retrieved by making a call to a Large Language Model (LLM).
Typical Python code which runs a vector search in ClickHouse might look like this:
```python
search_v = openai_client.embeddings.create(input = "[Good Books]", model='text-embedding-3-large', dimensions=1536).data[0].embedding
params = {'search_v': search_v}
result = chclient.query(
"SELECT id FROM items
ORDER BY cosineDistance(vector, %(search_v)s)
LIMIT 10",
parameters = params)
```
Embedding vectors (
search_v
in above snippet) could have a very large dimension.
For example, OpenAI provides models that generate embeddings vectors with 1536 or even 3072 dimensions.
In the above code, the ClickHouse Python driver substitutes the embedding vector with a human-readable string and subsequently sends the SELECT query entirely as a string.
Assuming the embedding vector consists of 1536 single-precision floating point values, the sent string reaches a length of 20 kB.
This creates high CPU usage for tokenizing, parsing and performing thousands of string-to-float conversions.
Also, significant space is required in the ClickHouse server log file, causing bloat in
system.query_log
as well.
Note that most LLM models return an embedding vector as a list or NumPy array of native floats.
We therefore recommend that Python applications bind the reference vector parameter in binary form, using the following style:
```python
search_v = openai_client.embeddings.create(input = "[Good Books]", model='text-embedding-3-large', dimensions=1536).data[0].embedding
params = {'$search_v_binary$': np.array(search_v, dtype=np.float32).tobytes()}
result = chclient.query(
"SELECT id FROM items
ORDER BY cosineDistance(vector, (SELECT reinterpret($search_v_binary$, 'Array(Float32)')))
LIMIT 10",
parameters = params)
```
In the example, the reference vector is sent as-is in binary form and reinterpreted as an array of floats on the server.
This saves CPU time on the server side, and avoids bloat in the server logs and
system.query_log
.
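The size difference between the two binding styles is easy to verify; the exact text length depends on float formatting, so treat the ~30 kB figure as approximate:

```python
import struct
import random

# A made-up 1536-dimensional embedding vector.
vec = [random.random() for _ in range(1536)]

as_text = str(vec)                              # what naive string binding sends
as_binary = struct.pack(f"<{len(vec)}f", *vec)  # what binary binding sends

# Text form: roughly 30 kB; binary form: exactly 1536 * 4 = 6144 bytes.
print(len(as_text), len(as_binary))
```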
Administration and monitoring {#administration}
The on-disk size of vector similarity indexes can be obtained from
system.data_skipping_indices
:
sql
SELECT database, table, name, formatReadableSize(data_compressed_bytes)
FROM system.data_skipping_indices
WHERE type = 'vector_similarity';
Example output:
result
┌─database─┬─table─┬─name─┬─formatReadableSize(data_compressed_bytes)─┐
│ default  │ tab   │ idx  │ 348.00 MB                                 │
└──────────┴───────┴──────┴───────────────────────────────────────────┘
Differences to regular skipping indexes {#differences-to-regular-skipping-indexes}
As all regular
skipping indexes
, vector similarity indexes are constructed over granules and each indexed block consists of
GRANULARITY = [N]
-many granules (
[N]
= 1 by default for normal skipping indexes).
For example, if the primary index granularity of the table is 8192 (setting
index_granularity = 8192
) and
GRANULARITY = 2
, then each indexed block will contain 16384 rows.
However, data structures and algorithms for approximate neighbor search are inherently row-oriented.
They store a compact representation of a set of rows and also return rows for vector search queries.
This causes some rather unintuitive differences in the way vector similarity indexes behave compared to normal skipping indexes.
When a user defines a vector similarity index on a column, ClickHouse internally creates a vector similarity "sub-index" for each index block.
The sub-index is "local" in the sense that it only knows about the rows of its containing index block.
In the previous example and assuming that a column has 65536 rows, we obtain four index blocks (spanning eight granules) and a vector similarity sub-index for each index block.
A sub-index is theoretically able to return the rows with the N closest points within its index block directly.
However, since ClickHouse loads data from disk to memory at the granularity of granules, sub-indexes extrapolate matching rows to granule granularity.
This is different from regular skipping indexes which skip data at the granularity of index blocks.
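The arithmetic of the example above can be sketched as follows (a simple illustration, not ClickHouse code):

```python
def vector_index_layout(total_rows, index_granularity, granularity):
    # Granules per part, sub-indexes per part, and rows per index block,
    # following the worked example in the text.
    granules = -(-total_rows // index_granularity)   # ceiling division
    sub_indexes = -(-granules // granularity)
    rows_per_block = index_granularity * granularity
    return granules, sub_indexes, rows_per_block

# 65536 rows, index_granularity = 8192, GRANULARITY = 2:
print(vector_index_layout(65536, 8192, 2))  # -> (8, 4, 16384)
```

With a GRANULARITY large enough to cover the whole part, `sub_indexes` becomes 1 and the index gains a global view of all rows.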
The
GRANULARITY
parameter determines how many vector similarity sub-indexes are created.
Bigger
GRANULARITY
values mean fewer but larger vector similarity sub-indexes, up to the point where a column (or a column's data part) has only a single sub-index.
In that case, the sub-index has a "global" view of all column rows and can directly return all granules of the column (part) with relevant rows (there are at most
LIMIT [N]
-many such granules).
In a second step, ClickHouse will load these granules and identify the actually best rows by performing a brute-force distance calculation over all rows of the granules.
With a small
GRANULARITY
value, each of the sub-indexes returns up to
LIMIT N
-many granules.
As a result, more granules need to be loaded and post-filtered.
Note that the search accuracy is equally good in both cases; only the processing performance differs.
It is generally recommended to use a large
GRANULARITY
for vector similarity indexes and fall back to a smaller
GRANULARITY
value only in case of problems like excessive memory consumption of the vector similarity structures.
If no
GRANULARITY
was specified for vector similarity indexes, the default value is 100 million.
Example {#approximate-nearest-neighbor-search-example}
```sql
CREATE TABLE tab(id Int32, vec Array(Float32), INDEX idx vec TYPE vector_similarity('hnsw', 'L2Distance', 2)) ENGINE = MergeTree ORDER BY id;
INSERT INTO tab VALUES (0, [1.0, 0.0]), (1, [1.1, 0.0]), (2, [1.2, 0.0]), (3, [1.3, 0.0]), (4, [1.4, 0.0]), (5, [1.5, 0.0]), (6, [0.0, 2.0]), (7, [0.0, 2.1]), (8, [0.0, 2.2]), (9, [0.0, 2.3]), (10, [0.0, 2.4]), (11, [0.0, 2.5]);
WITH [0., 2.] AS reference_vec
SELECT id, vec
FROM tab
ORDER BY L2Distance(vec, reference_vec) ASC
LIMIT 3;
```
returns
result
   ┌─id─┬─vec─────┐
1. │  6 │ [0,2]   │
2. │  7 │ [0,2.1] │
3. │  8 │ [0,2.2] │
   └────┴─────────┘
Further example datasets that use approximate vector search:
-
LAION-400M
-
LAION-5B
-
dbpedia
-
hackernews
Quantized Bit (QBit) {#approximate-nearest-neighbor-search-qbit}
One common approach to speed up exact vector search is to use a lower-precision
float data type
.
For example, if vectors are stored as
Array(BFloat16)
instead of
Array(Float32)
, the data size is reduced by half, and query runtimes are expected to decrease proportionally.
This method is known as quantization. While it speeds up computation, it may reduce result accuracy despite performing an exhaustive scan of all vectors.
With traditional quantization, we lose precision both during search and when storing the data. In the example above, we would store
BFloat16
instead of
Float32
, meaning we can never perform a more accurate search later, even if desired. One alternative approach is to store two copies of the data: quantized and full-precision. While this works, it requires redundant storage. Consider a scenario where we have
Float64
as original data and want to run searches with different precision (16-bit, 32-bit, or full 64-bit). We would need to store three separate copies of the data.
ClickHouse offers the Quantized Bit (
QBit
) data type that addresses these limitations by:
1. Storing the original full-precision data.
2. Allowing quantization precision to be specified at query time.
This is achieved by storing data in a bit-grouped format (meaning all i-th bits of all vectors are stored together), enabling reads at only the requested precision level. You get the speed benefits of reduced I/O and computation from quantization while keeping all original data available when needed. When maximum precision is selected, the search becomes exact.
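The bit-grouped layout can be illustrated with small integers instead of floats (a sketch of the storage idea only, not the actual QBit implementation):

```python
def bit_planes(values, width=8):
    # Group the i-th bit of every value together ("bit-transposed" storage):
    # plane 0 holds the most significant bit of each value, plane 1 the next, ...
    planes = []
    for i in range(width - 1, -1, -1):
        planes.append([(v >> i) & 1 for v in values])
    return planes

def read_top_planes(planes, precision, width=8):
    # Reconstruct values from only the first `precision` planes;
    # the unread low bits are simply treated as zero.
    out = [0] * len(planes[0])
    for p, plane in enumerate(planes[:precision]):
        shift = width - 1 - p
        for j, bit in enumerate(plane):
            out[j] |= bit << shift
    return out

vals = [200, 130, 7, 255]
planes = bit_planes(vals)
print(read_top_planes(planes, 8))  # -> [200, 130, 7, 255] (exact)
print(read_top_planes(planes, 4))  # -> [192, 128, 0, 240] (approximate)
```

Reading fewer planes means reading fewer bytes, which is where the I/O savings come from; reading all planes recovers the original values exactly.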
:::note
The
QBit
data type and its associated distance functions are currently experimental. To enable them, run
SET allow_experimental_qbit_type = 1
.
If you encounter problems, please open an issue in the
ClickHouse repository
.
:::
To declare a column of
QBit
type, use the following syntax:
sql
column_name QBit(element_type, dimension)
Where:
*
element_type
- the type of each vector element. Supported types are
BFloat16
,
Float32
, and
Float64
*
dimension
- the number of elements in each vector
Creating a
QBit
Table and Adding Data {#qbit-create}
```sql
CREATE TABLE fruit_animal (
word String,
vec QBit(Float64, 5)
) ENGINE = MergeTree
ORDER BY word;
INSERT INTO fruit_animal VALUES
('apple', [-0.99105519, 1.28887844, -0.43526649, -0.98520696, 0.66154391]),
('banana', [-0.69372815, 0.25587061, -0.88226235, -2.54593015, 0.05300475]),
('orange', [0.93338752, 2.06571317, -0.54612565, -1.51625717, 0.69775337]),
('dog', [0.72138876, 1.55757105, 2.10953259, -0.33961248, -0.62217325]),
('cat', [-0.56611276, 0.52267331, 1.27839863, -0.59809804, -1.26721048]),
('horse', [-0.61435682, 0.48542571, 1.21091247, -0.62530446, -1.33082533]);
```
Vector Search with
QBit
{#qbit-search}
Let's find the nearest neighbors to a vector representing word 'lemon' using L2 distance. The third parameter in the distance function specifies the precision in bits - higher values provide more accuracy but require more computation.
You can find all available distance functions for
QBit
here
.
Full precision search (64-bit):
sql
SELECT
word,
L2DistanceTransposed(vec, [-0.88693672, 1.31532824, -0.51182908, -0.99652702, 0.59907770], 64) AS distance
FROM fruit_animal
ORDER BY distance;
text
   ┌─word───┬────────────distance─┐
1. │ apple  │ 0.14639757188169716 │
2. │ banana │   1.998961369007679 │
3. │ orange │   2.039041552613732 │
4. │ cat    │   2.752802631487914 │
5. │ horse  │  2.7555776805484813 │
6. │ dog    │   3.382295083120104 │
   └────────┴─────────────────────┘
Reduced precision search:
sql
SELECT
word,
L2DistanceTransposed(vec, [-0.88693672, 1.31532824, -0.51182908, -0.99652702, 0.59907770], 12) AS distance
FROM fruit_animal
ORDER BY distance;
text
   ┌─word───┬───────────distance─┐
1. │ apple  │  0.757668703053566 │
2. │ orange │ 1.5499475034938677 │
3. │ banana │ 1.6168396735102937 │
4. │ cat    │  2.429752230904804 │
5. │ horse  │  2.524650475528617 │
6. │ dog    │   3.17766975527459 │
   └────────┴────────────────────┘
Notice that with 12-bit quantization, we get a good approximation of the distances with faster query execution. The relative ordering remains largely consistent, with 'apple' still being the closest match.
:::note
In the current state, the speed-up is due to reduced I/O as we read less data. If the original data was wide, like
Float64
, choosing a lower precision will still result in distance calculation on data of the same width - just with less precision.
:::
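The precision-at-query-time behavior can be imitated by masking the low bits of a 64-bit float pattern. This is a loose illustration only (the real QBit layout stores bit-planes transposed across vectors); it reuses the 'apple' vector and the reference vector from the example above:

```python
import struct

def truncate_bits(x: float, precision: int) -> float:
    # Keep only the top `precision` bits of the 64-bit float pattern,
    # roughly what reading `precision` bit-planes of a QBit column yields.
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    mask = ((1 << precision) - 1) << (64 - precision)
    return struct.unpack("<d", struct.pack("<Q", bits & mask))[0]

def l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

apple = [-0.99105519, 1.28887844, -0.43526649, -0.98520696, 0.66154391]
lemon = [-0.88693672, 1.31532824, -0.51182908, -0.99652702, 0.59907770]

exact = l2(apple, lemon)
approx = l2([truncate_bits(x, 12) for x in apple],
            [truncate_bits(x, 12) for x in lemon])
print(exact, approx)  # the 12-bit result is only a coarse approximation
```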
Performance Considerations {#qbit-performance}
The performance benefit of
QBit
comes from reduced I/O operations, as less data needs to be read from storage when using lower precision. Moreover, when the
QBit
contains
Float32
data, if the precision parameter is 16 or below, there will be additional benefits from reduced computation. The precision parameter directly controls the trade-off between accuracy and speed:
Higher precision
(closer to the original data width): More accurate results, slower queries
Lower precision
: Faster queries with approximate results, reduced memory usage
References {#references}
Blogs:
-
Vector Search with ClickHouse - Part 1
-
Vector Search with ClickHouse - Part 2
description: 'CoalescingMergeTree inherits from the MergeTree engine. Its key feature
is the ability to automatically store the last non-null value of each column during part merges.'
sidebar_label: 'CoalescingMergeTree'
sidebar_position: 50
slug: /engines/table-engines/mergetree-family/coalescingmergetree
title: 'CoalescingMergeTree table engine'
keywords: ['CoalescingMergeTree']
show_related_blogs: true
doc_type: 'reference'
CoalescingMergeTree table engine
:::note Available from version 25.6
This table engine is available from version 25.6 and higher in both OSS and Cloud.
:::
This engine inherits from
MergeTree
. The key difference is in how data parts are merged: for
CoalescingMergeTree
tables, ClickHouse replaces all rows with the same primary key (or more precisely, the same
sorting key
) with a single row that contains the latest non-NULL values for each column.
This enables column-level upserts, meaning you can update only specific columns rather than entire rows.
CoalescingMergeTree
is intended for use with Nullable types in non-key columns. If the columns are not Nullable, the behavior is the same as with
ReplacingMergeTree
.
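The merge semantics can be sketched in a few lines of Python (an illustration of the behavior, not ClickHouse code); insertion order stands in for part order:

```python
def coalesce_rows(rows):
    # Emulates a CoalescingMergeTree merge: for each sorting key, keep the
    # latest non-None value per column.
    merged = {}
    for key, values in rows:
        acc = merged.setdefault(key, dict.fromkeys(values))
        for col, val in values.items():
            if val is not None:
                acc[col] = val
    return merged

rows = [
    (1, {"value_int": 10, "value_string": None}),
    (1, {"value_int": None, "value_string": "hello"}),
    (1, {"value_int": 20, "value_string": None}),
]
print(coalesce_rows(rows))  # -> {1: {'value_int': 20, 'value_string': 'hello'}}
```

This is what makes column-level upserts possible: each insert only needs to carry the columns it actually changes.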
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE = CoalescingMergeTree([columns])
[PARTITION BY expr]
[ORDER BY expr]
[SAMPLE BY expr]
[SETTINGS name=value, ...]
For a description of request parameters, see
request description
.
Parameters of CoalescingMergeTree {#parameters-of-coalescingmergetree}
Columns {#columns}
columns
- a tuple with the names of columns where values will be united. Optional parameter.
The columns must be of a numeric type and must not be in the partition or sorting key.
If
columns
is not specified, ClickHouse unites the values in all columns that are not in the sorting key.
Query clauses {#query-clauses}
When creating a
CoalescingMergeTree
table the same
clauses
are required, as when creating a
MergeTree
table.
Deprecated Method for Creating a Table
:::note
Do not use this method in new projects and, if possible, switch the old projects to the method described above.
:::
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE [=] CoalescingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, [columns])
```
All of the parameters excepting `columns` have the same meaning as in `MergeTree`.
- `columns` β tuple with the names of columns whose values will be coalesced. Optional parameter. For a description, see the text above.
Usage example {#usage-example}
Consider the following table:
sql
CREATE TABLE test_table
(
key UInt64,
value_int Nullable(UInt32),
value_string Nullable(String),
value_date Nullable(Date)
)
ENGINE = CoalescingMergeTree()
ORDER BY key
Insert data to it:
sql
INSERT INTO test_table VALUES(1, NULL, NULL, '2025-01-01'), (2, 10, 'test', NULL);
INSERT INTO test_table VALUES(1, 42, 'win', '2025-02-01');
INSERT INTO test_table(key, value_date) VALUES(2, '2025-02-01');
The result will look like this:
sql
SELECT * FROM test_table ORDER BY key;
text
ββkeyββ¬βvalue_intββ¬βvalue_stringββ¬βvalue_dateββ
β 1 β 42 β win β 2025-02-01 β
β 1 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β 2025-01-01 β
β 2 β α΄Ία΅α΄Έα΄Έ β α΄Ία΅α΄Έα΄Έ β 2025-02-01 β
β 2 β 10 β test β α΄Ία΅α΄Έα΄Έ β
βββββββ΄ββββββββββββ΄βββββββββββββββ΄βββββββββββββ
Recommended query for correct and final result:
sql
SELECT * FROM test_table FINAL ORDER BY key;
text
ββkeyββ¬βvalue_intββ¬βvalue_stringββ¬βvalue_dateββ
β 1 β 42 β win β 2025-02-01 β
β 2 β 10 β test β 2025-02-01 β
βββββββ΄ββββββββββββ΄βββββββββββββββ΄βββββββββββββ
Using the
FINAL
modifier forces ClickHouse to apply merge logic at query time, ensuring you get the correct, coalesced "latest" value for each column. This is the safest and most accurate method when querying from a CoalescingMergeTree table.
:::note
An approach with
GROUP BY
may return incorrect results if the underlying parts have not been fully merged.
sql
SELECT key, last_value(value_int), last_value(value_string), last_value(value_date) FROM test_table GROUP BY key; -- Not recommended.
:::
description: 'Inherits from MergeTree but adds logic for collapsing rows during the merge process.'
keywords: ['updates', 'collapsing']
sidebar_label: 'CollapsingMergeTree'
sidebar_position: 70
slug: /engines/table-engines/mergetree-family/collapsingmergetree
title: 'CollapsingMergeTree table engine'
doc_type: 'guide'
CollapsingMergeTree table engine
Description {#description}
The
CollapsingMergeTree
engine inherits from
MergeTree
and adds logic for collapsing rows during the merge process.
The
CollapsingMergeTree
table engine asynchronously deletes (collapses)
pairs of rows if all the fields in a sorting key (
ORDER BY
) are equivalent except for the special field
Sign
,
which can have values of either
1
or
-1
.
Rows without a pair of opposite valued
Sign
are kept.
For more details, see the
Collapsing
section of the document.
:::note
This engine may significantly reduce the volume of storage,
increasing the efficiency of
SELECT
queries as a consequence.
:::
Parameters {#parameters}
All parameters of this table engine, with the exception of the
Sign
parameter,
have the same meaning as in
MergeTree
.
Sign
β The name given to a column with the type of row where
1
is a "state" row and
-1
is a "cancel" row. Type:
Int8
.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
)
ENGINE = CollapsingMergeTree(Sign)
[PARTITION BY expr]
[ORDER BY expr]
[SAMPLE BY expr]
[SETTINGS name=value, ...]
Deprecated Method for Creating a Table
:::note
The method below is not recommended for use in new projects.
We advise, if possible, to update old projects to use the new method.
:::
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
)
ENGINE [=] CollapsingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, Sign)
```
`Sign` β The name given to a column with the type of row where `1` is a "state" row and `-1` is a "cancel" row. [Int8](/sql-reference/data-types/int-uint).
For a description of query parameters, see
query description
.
When creating a
CollapsingMergeTree
table, the same
query clauses
are required, as when creating a
MergeTree
table.
Collapsing {#table_engine-collapsingmergetree-collapsing}
Data {#data} | {"source_file": "collapsingmergetree.md"} | [
Consider the situation where you need to save continually changing data for some given object.
It may sound logical to have one row per object and update it anytime something changes,
however, update operations are expensive and slow for the DBMS because they require rewriting the data in storage.
If we need to write data quickly, performing large numbers of updates is not an acceptable approach,
but we can always write the changes of an object sequentially.
To do so, we make use of the special column
Sign
.
If
Sign
=
1
it means that the row is a "state" row:
a row containing fields which represent a current valid state
.
If
Sign
=
-1
it means that the row is a "cancel" row:
a row used for the cancellation of state of an object with the same attributes
.
For example, we want to calculate how many pages users checked on some website and how long they visited them for.
At some given moment in time, we write the following row with the state of user activity:
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ
β 4324182021466249494 β 5 β 146 β 1 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ
At a later moment in time, we register the change of user activity and write it with the following two rows:
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ
β 4324182021466249494 β 5 β 146 β -1 β
β 4324182021466249494 β 6 β 185 β 1 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ
The first row cancels the previous state of the object (representing a user in this case).
It should copy all the sorting key fields for the "canceled" row except for
Sign
.
The second row above contains the current state.
As we need only the last state of user activity, the original "state" row and the "cancel"
row that we inserted can be deleted as shown below, collapsing the invalid (old) state of an object:
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ
β 4324182021466249494 β 5 β 146 β 1 β -- old "state" row can be deleted
β 4324182021466249494 β 5 β 146 β -1 β -- "cancel" row can be deleted
β 4324182021466249494 β 6 β 185 β 1 β -- new "state" row remains
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ
CollapsingMergeTree
carries out precisely this
collapsing
behavior while merging of the data parts takes place.
:::note
The reason for why two rows are needed for each change
is further discussed in the
Algorithm
paragraph.
:::
The peculiarities of such an approach
The program that writes the data should remember the state of an object to be able to cancel it. The "cancel" row should contain copies of sorting key fields of the "state" and the opposite
Sign
. This increases the initial size of storage but allows us to write the data quickly.
Long growing arrays in columns reduce the efficiency of the engine due to the increased load for writing. The more straightforward the data, the higher the efficiency.
The
SELECT
results depend strongly on the consistency of the object change history. Be accurate when preparing data for inserting. You can get unpredictable results with inconsistent data. For example, negative values for non-negative metrics such as session depth.
Algorithm {#table_engine-collapsingmergetree-collapsing-algorithm}
When ClickHouse merges data
parts
,
each group of consecutive rows with the same sorting key (
ORDER BY
) is reduced to no more than two rows,
the "state" row with
Sign
=
1
and the "cancel" row with
Sign
=
-1
.
In other words, in ClickHouse entries collapse.
For each resulting data part ClickHouse saves:
| | |
|--|-------------------------------------------------------------------------------------------------------------------------------------|
|1.| The first "cancel" and the last "state" rows, if the number of "state" and "cancel" rows matches and the last row is a "state" row. |
|2.| The last "state" row, if there are more "state" rows than "cancel" rows. |
|3.| The first "cancel" row, if there are more "cancel" rows than "state" rows. |
|4.| None of the rows, in all other cases. |
Additionally, when there are at least two more "state" rows than "cancel"
rows, or at least two more "cancel" rows than "state" rows, the merge continues.
ClickHouse, however, treats this situation as a logical error and records it in the server log.
This error can occur if the same data is inserted more than once.
Thus, collapsing should not change the results of calculating statistics.
Changes are gradually collapsed so that in the end only the last state of almost every object is left.
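The survival rules in the table above can be sketched as follows (a simplified Python illustration of one merge over a single sorting-key group, not ClickHouse's actual code):

```python
# Apply the four collapsing rules to one group of consecutive rows that share
# a sorting key. Each row is (data, sign) with sign = 1 ("state") or -1 ("cancel").
def collapse(rows):
    states = [r for r in rows if r[1] == 1]
    cancels = [r for r in rows if r[1] == -1]
    if len(states) == len(cancels):
        if rows and rows[-1][1] == 1:
            # Rule 1: the first "cancel" and the last "state" row survive.
            return [cancels[0], states[-1]]
        return []  # Rule 4: nothing survives.
    if len(states) > len(cancels):
        return [states[-1]]  # Rule 2: the last "state" row.
    return [cancels[0]]      # Rule 3: the first "cancel" row.

group = [((5, 146), 1), ((5, 146), -1), ((6, 185), 1)]
print(collapse(group))  # [((6, 185), 1)] -- only the new "state" row remains
```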
The
Sign
column is required because the merging algorithm does not guarantee
that all the rows with the same sorting key will be in the same resulting data part and even on the same physical server.
ClickHouse processes
SELECT
queries with multiple threads, and it cannot predict the order of rows in the result.
Aggregation is required if there is a need to get completely "collapsed" data from the
CollapsingMergeTree
table.
To finalize collapsing, write a query with the
GROUP BY
clause and aggregate functions that account for the sign.
For example, to calculate quantity, use
sum(Sign)
instead of
count()
.
To calculate the sum of something, use
sum(Sign * x)
together with
HAVING sum(Sign) > 0
instead of
sum(x)
as in the
example
below.
The aggregates
count
,
sum
and
avg
could be calculated this way.
The aggregate
uniq
could be calculated if an object has at least one non-collapsed state.
The aggregates
min
and
max
could not be calculated
because
CollapsingMergeTree
does not save the history of the collapsed states.
:::note
If you need to extract data without aggregation
(for example, to check whether rows whose newest values match certain conditions are present),
you can use the
FINAL
modifier for the
FROM
clause. It will merge the data before returning the result.
For CollapsingMergeTree, only the latest state row for each key is returned.
:::
Examples {#examples}
Example of use {#example-of-use}
Given the following example data:
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ
β 4324182021466249494 β 5 β 146 β 1 β
β 4324182021466249494 β 5 β 146 β -1 β
β 4324182021466249494 β 6 β 185 β 1 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ
Let's create a table
UAct
using the
CollapsingMergeTree
:
sql
CREATE TABLE UAct
(
UserID UInt64,
PageViews UInt8,
Duration UInt8,
Sign Int8
)
ENGINE = CollapsingMergeTree(Sign)
ORDER BY UserID
Next we will insert some data:
sql
INSERT INTO UAct VALUES (4324182021466249494, 5, 146, 1)
sql
INSERT INTO UAct VALUES (4324182021466249494, 5, 146, -1),(4324182021466249494, 6, 185, 1)
We use two
INSERT
queries to create two different data parts.
:::note
If we insert the data with a single query, ClickHouse creates only one data part and will not perform any merge ever.
:::
We can select the data using:
sql
SELECT * FROM UAct
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ
β 4324182021466249494 β 5 β 146 β -1 β
β 4324182021466249494 β 6 β 185 β 1 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ
β 4324182021466249494 β 5 β 146 β 1 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ
Let's take a look at the returned data above and see if collapsing occurred...
With two
INSERT
queries, we created two data parts.
The
SELECT
query was performed in two threads, and we got a random order of rows.
However, collapsing
did not occur
because there was no merge of the data parts yet
and ClickHouse merges data parts in the background at an unknown moment which we cannot predict.
We therefore need an aggregation
which we perform with the
sum
aggregate function and the
HAVING
clause:
sql
SELECT
UserID,
sum(PageViews * Sign) AS PageViews,
sum(Duration * Sign) AS Duration
FROM UAct
GROUP BY UserID
HAVING sum(Sign) > 0
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ
β 4324182021466249494 β 6 β 185 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ
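The same arithmetic can be checked outside ClickHouse (a toy Python illustration, not the engine itself): "cancel" rows subtract exactly what their "state" rows added, so the sign-weighted sums recover the latest state even before any merge.

```python
# (PageViews, Duration, Sign) rows for one UserID, as inserted above.
rows = [(5, 146, 1), (5, 146, -1), (6, 185, 1)]

page_views = sum(pv * s for pv, _, s in rows)  # sum(PageViews * Sign)
duration = sum(d * s for _, d, s in rows)      # sum(Duration * Sign)
sign_sum = sum(s for _, _, s in rows)          # the HAVING sum(Sign) > 0 filter
assert sign_sum > 0
print(page_views, duration)  # 6 185
```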
If we do not need aggregation and want to force collapsing, we can also use the
FINAL
modifier for
FROM
clause.
sql
SELECT * FROM UAct FINAL
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ
β 4324182021466249494 β 6 β 185 β 1 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ
:::note
This way of selecting the data is less efficient and is not recommended for use with large amounts of scanned data (millions of rows).
:::
Example of another approach {#example-of-another-approach}
The idea with this approach is that merges take into account only key fields.
In the "cancel" row, we can therefore specify negative values
that equalize the previous version of the row when summing without using the
Sign
column.
For this example, we will make use of the sample data below:
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ
β 4324182021466249494 β 5 β 146 β 1 β
β 4324182021466249494 β -5 β -146 β -1 β
β 4324182021466249494 β 6 β 185 β 1 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ
For this approach, it is necessary to change the data types of
PageViews
and
Duration
to store negative values.
We therefore change the values of these columns from
UInt8
to
Int16
when we create our table
UAct
using the
collapsingMergeTree
:
sql
CREATE TABLE UAct
(
UserID UInt64,
PageViews Int16,
Duration Int16,
Sign Int8
)
ENGINE = CollapsingMergeTree(Sign)
ORDER BY UserID
Let's test the approach by inserting data into our table.
Inserting rows one at a time like this is inefficient; for examples or small tables, it is, however, acceptable:
```sql
INSERT INTO UAct VALUES(4324182021466249494, 5, 146, 1);
INSERT INTO UAct VALUES(4324182021466249494, -5, -146, -1);
INSERT INTO UAct VALUES(4324182021466249494, 6, 185, 1);
SELECT * FROM UAct FINAL;
```
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ
β 4324182021466249494 β 6 β 185 β 1 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ
sql
SELECT
UserID,
sum(PageViews) AS PageViews,
sum(Duration) AS Duration
FROM UAct
GROUP BY UserID
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ
β 4324182021466249494 β 6 β 185 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ
sql
SELECT COUNT() FROM UAct
text
ββcount()ββ
β 3 β
βββββββββββ
```sql
OPTIMIZE TABLE UAct FINAL;
SELECT * FROM UAct
```
text
βββββββββββββββUserIDββ¬βPageViewsββ¬βDurationββ¬βSignββ
β 4324182021466249494 β 6 β 185 β 1 β
βββββββββββββββββββββββ΄ββββββββββββ΄βββββββββββ΄βββββββ
description: 'Learn how to add a custom partitioning key to MergeTree tables.'
sidebar_label: 'Custom Partitioning Key'
sidebar_position: 30
slug: /engines/table-engines/mergetree-family/custom-partitioning-key
title: 'Custom Partitioning Key'
doc_type: 'guide'
Custom partitioning key
:::note
In most cases you do not need a partition key, and in most other cases you do not need a partition key more granular than by month, unless targeting an observability use case where partitioning by day is common.
You should never use too granular of partitioning. Don't partition your data by client identifiers or names. Instead, make a client identifier or name the first column in the ORDER BY expression.
:::
Partitioning is available for the
MergeTree family tables
, including
replicated tables
and
materialized views
.
A partition is a logical combination of records in a table by a specified criterion. You can set a partition by an arbitrary criterion, such as by month, by day, or by event type. Each partition is stored separately to simplify manipulations of this data. When accessing the data, ClickHouse uses the smallest subset of partitions possible. Partitions improve performance for queries containing a partitioning key because ClickHouse will filter for that partition before selecting the parts and granules within the partition.
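A toy illustration of the pruning idea (a Python stand-in, with to_yyyymm playing the role of toYYYYMM; not how ClickHouse stores parts): grouping rows by the partition key lets a filter on that key skip whole partitions before any data is read.

```python
from datetime import date

def to_yyyymm(d):
    # Stand-in for ClickHouse's toYYYYMM(): 2019-02-03 -> 201902.
    return d.year * 100 + d.month

parts = {}  # partition id -> rows stored in that partition
for d in [date(2019, 1, 5), date(2019, 1, 20), date(2019, 2, 3)]:
    parts.setdefault(to_yyyymm(d), []).append(d)

# Query: WHERE VisitDate >= '2019-02-01' -- only partition 201902 is scanned.
scanned = {pid: rows_ for pid, rows_ in parts.items() if pid >= 201902}
print(sorted(scanned))  # [201902]
```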
The partition is specified in the
PARTITION BY expr
clause when
creating a table
. The partition key can be any expression from the table columns. For example, to specify partitioning by month, use the expression
toYYYYMM(date_column)
:
sql
CREATE TABLE visits
(
VisitDate Date,
Hour UInt8,
ClientID UUID
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(VisitDate)
ORDER BY Hour;
The partition key can also be a tuple of expressions (similar to the
primary key
). For example:
sql
ENGINE = ReplicatedCollapsingMergeTree('/clickhouse/tables/name', 'replica1', Sign)
PARTITION BY (toMonday(StartDate), EventType)
ORDER BY (CounterID, StartDate, intHash32(UserID));
In this example, we set partitioning by the event types that occurred during the current week.
By default, a floating-point partition key is not supported. To use it, enable the setting
allow_floating_point_partition_key
.
When inserting new data to a table, this data is stored as a separate part (chunk) sorted by the primary key. Within 10-15 minutes after inserting, the parts of the same partition are merged into the entire part.
:::info
A merge only works for data parts that have the same value for the partitioning expression. This means
you shouldn't make overly granular partitions
(more than about a thousand partitions). Otherwise, the
SELECT
query performs poorly because of an unreasonably large number of files in the file system and open file descriptors.
:::
Use the
system.parts
table to view the table parts and partitions. For example, let's assume that we have a
visits
table with partitioning by month. Let's perform the
SELECT
query for the
system.parts
table:
sql
SELECT
partition,
name,
active
FROM system.parts
WHERE table = 'visits'
text
ββpartitionββ¬βnameβββββββββββββββ¬βactiveββ
β 201901 β 201901_1_3_1 β 0 β
β 201901 β 201901_1_9_2_11 β 1 β
β 201901 β 201901_8_8_0 β 0 β
β 201901 β 201901_9_9_0 β 0 β
β 201902 β 201902_4_6_1_11 β 1 β
β 201902 β 201902_10_10_0_11 β 1 β
β 201902 β 201902_11_11_0_11 β 1 β
βββββββββββββ΄ββββββββββββββββββββ΄βββββββββ
The
partition
column contains the names of the partitions. There are two partitions in this example:
201901
and
201902
. You can use this column value to specify the partition name in
ALTER ... PARTITION
queries.
The
name
column contains the names of the partition data parts. You can use this column to specify the name of the part in the
ALTER ATTACH PART
query.
Let's break down the name of the part:
201901_1_9_2_11
:
201901
is the partition name.
1
is the minimum number of the data block.
9
is the maximum number of the data block.
2
is the chunk level (the depth of the merge tree it is formed from).
11
is the mutation version (if a part mutated)
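Following that breakdown, a small helper can pull a part name apart (illustrative only; partition IDs produced by expressions other than simple month keys may themselves contain underscores, which this sketch does not handle):

```python
def parse_part_name(name):
    # Split a new-style part name such as 201901_1_9_2_11 into its components.
    fields = name.split("_")
    keys = ["partition", "min_block", "max_block", "level", "mutation"]
    return dict(zip(keys, fields))  # "mutation" is absent for unmutated parts

print(parse_part_name("201901_1_9_2_11"))
# {'partition': '201901', 'min_block': '1', 'max_block': '9', 'level': '2', 'mutation': '11'}
```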
:::info
The parts of old-type tables have the name:
20190117_20190123_2_2_0
(minimum date - maximum date - minimum block number - maximum block number - level).
:::
The
active
column shows the status of the part.
1
is active;
0
is inactive. The inactive parts are, for example, source parts remaining after merging to a larger part. The corrupted data parts are also indicated as inactive.
As you can see in the example, there are several separated parts of the same partition (for example,
201901_1_3_1
and
201901_1_9_2
). This means that these parts are not merged yet. ClickHouse merges the inserted parts of data periodically, approximately 15 minutes after inserting. In addition, you can perform a non-scheduled merge using the
OPTIMIZE
query. Example:
sql
OPTIMIZE TABLE visits PARTITION 201902;
text
ββpartitionββ¬βnameββββββββββββββ¬βactiveββ
β 201901 β 201901_1_3_1 β 0 β
β 201901 β 201901_1_9_2_11 β 1 β
β 201901 β 201901_8_8_0 β 0 β
β 201901 β 201901_9_9_0 β 0 β
β 201902 β 201902_4_6_1 β 0 β
β 201902 β 201902_4_11_2_11 β 1 β
β 201902 β 201902_10_10_0 β 0 β
β 201902 β 201902_11_11_0 β 0 β
βββββββββββββ΄βββββββββββββββββββ΄βββββββββ
Inactive parts will be deleted approximately 10 minutes after merging.
Another way to view a set of parts and partitions is to go into the directory of the table:
/var/lib/clickhouse/data/<database>/<table>/
. For example:
bash
/var/lib/clickhouse/data/default/visits$ ls -l
total 40
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 1 16:48 201901_1_3_1
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 5 16:17 201901_1_9_2_11
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 5 15:52 201901_8_8_0
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 5 15:52 201901_9_9_0
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 5 16:17 201902_10_10_0
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 5 16:17 201902_11_11_0
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 5 16:19 201902_4_11_2_11
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 5 12:09 201902_4_6_1
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 1 16:48 detached
The folders '201901_1_3_1', '201901_1_9_2_11' and so on are the directories of the parts. Each part relates to a corresponding partition and contains data just for a certain month (the table in this example has partitioning by month).
The
detached
directory contains parts that were detached from the table using the
DETACH
query. The corrupted parts are also moved to this directory, instead of being deleted. The server does not use the parts from the
detached
directory. You can add, delete, or modify the data in this directory at any time β the server will not know about this until you run the
ATTACH
query.
Note that on the operating server, you cannot manually change the set of parts or their data on the file system, since the server will not know about it. For non-replicated tables, you can do this when the server is stopped, but it isn't recommended. For replicated tables, the set of parts cannot be changed in any case.
ClickHouse allows you to perform operations with the partitions: delete them, copy from one table to another, or create a backup. See the list of all operations in the section
Manipulations With Partitions and Parts
.
Group By optimisation using partition key {#group-by-optimisation-using-partition-key}
For some combinations of a table's partition key and a query's GROUP BY key, it may be possible to execute aggregation for each partition independently.
We then don't have to merge partially aggregated data from all execution threads at the end,
because each GROUP BY key value is guaranteed not to appear in the working sets of two different threads.
The typical example is:
```sql
CREATE TABLE session_log
(
UserID UInt64,
SessionID UUID
)
ENGINE = MergeTree
PARTITION BY sipHash64(UserID) % 16
ORDER BY tuple();
SELECT
UserID,
COUNT()
FROM session_log
GROUP BY UserID;
```
:::note
Performance of such a query heavily depends on the table layout. Because of that the optimisation is not enabled by default.
:::
The key factors for good performance:
- the number of partitions involved in the query should be sufficiently large (more than max_threads / 2), otherwise the query will under-utilize the machine
- partitions shouldn't be too small, so that batch processing doesn't degenerate into row-by-row processing
- partitions should be comparable in size, so all threads do roughly the same amount of work
:::info
It's recommended to apply some hash function to columns in
partition by
clause in order to distribute data evenly between partitions.
:::
Relevant settings are:
allow_aggregate_partitions_independently
- controls if the use of optimisation is enabled
force_aggregate_partitions_independently
- forces its use when it is applicable from the correctness standpoint, even if the internal logic that estimates its expediency would disable it
max_number_of_partitions_for_independent_aggregation
- hard limit on the maximal number of partitions a table can have
description: 'Quickly find search terms in text.'
keywords: ['full-text search', 'text index', 'index', 'indices']
sidebar_label: 'Full-text Search using Text Indexes'
slug: /engines/table-engines/mergetree-family/invertedindexes
title: 'Full-text Search using Text Indexes'
doc_type: 'reference'
import PrivatePreviewBadge from '@theme/badges/PrivatePreviewBadge';
Full-text search using text indexes
Text indexes in ClickHouse (also known as
"inverted indexes"
) provide fast full-text capabilities on string data.
The index maps each token in the column to the rows which contain the token.
The tokens are generated by a process called tokenization.
For example, ClickHouse tokenizes the English sentence "All cat like mice." by default as ["All", "cat", "like", "mice"] (note that the trailing dot is ignored).
More advanced tokenizers are available, for example for log data.
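The token-to-rows mapping can be pictured with a toy inverted index (illustrative Python with a simplified splitByNonAlpha-style tokenizer; not ClickHouse's on-disk structure):

```python
import re
from collections import defaultdict

def tokenize(text):
    # Split on non-alphanumeric ASCII characters, dropping empty tokens.
    return [t for t in re.split(r"[^0-9A-Za-z]+", text) if t]

index = defaultdict(set)  # token -> ids of rows containing it
rows = {0: "All cat like mice.", 1: "Mice like cheese"}
for row_id, text in rows.items():
    for token in tokenize(text):
        index[token].add(row_id)

print(sorted(index["like"]))  # [0, 1]
print(sorted(index["mice"]))  # [0] -- token matching here is case-sensitive
```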
Creating a Text Index {#creating-a-text-index}
To create a text index, first enable the corresponding experimental setting:
sql
SET allow_experimental_full_text_index = true;
A text index can be defined on a
String
,
FixedString
,
Array(String)
,
Array(FixedString)
, and
Map
(via
mapKeys
and
mapValues
map functions) column using the following syntax:
sql
CREATE TABLE tab
(
`key` UInt64,
`str` String,
INDEX text_idx(str) TYPE text(
-- Mandatory parameters:
tokenizer = splitByNonAlpha|splitByString(S)|ngrams(N)|array
-- Optional parameters:
[, preprocessor = expression(str)]
-- Optional advanced parameters:
[, dictionary_block_size = D]
[, dictionary_block_frontcoding_compression = B]
[, max_cardinality_for_embedded_postings = M]
[, bloom_filter_false_positive_rate = R]
) [GRANULARITY 64]
)
ENGINE = MergeTree
ORDER BY key
Tokenizer argument
. The
tokenizer
argument specifies the tokenizer:
splitByNonAlpha
splits strings along non-alphanumeric ASCII characters (also see function
splitByNonAlpha
).
splitByString(S)
splits strings along certain user-defined separator strings
S
(also see function
splitByString
).
The separators can be specified using an optional parameter, for example,
tokenizer = splitByString([', ', '; ', '\n', '\\'])
.
Note that each string can consist of multiple characters (
', '
in the example).
The default separator list, if not specified explicitly (for example,
tokenizer = splitByString
), is a single whitespace
[' ']
.
ngrams(N)
splits strings into equally large
N
-grams (also see function
ngrams
).
The ngram length can be specified using an optional integer parameter between 2 and 8, for example,
tokenizer = ngrams(3)
.
The default ngram size, if not specified explicitly (for example,
tokenizer = ngrams
), is 3.
array
performs no tokenization, i.e. every row value is a token (also see function
array
).
sparseGrams(min_length, max_length, min_cutoff_length)
β uses the algorithm as in the
sparseGrams
function to split a string into all ngrams of
min_length
and several ngrams of larger size up to
max_length
, inclusive. If
min_cutoff_length
is specified, only N-grams with length greater than or equal to
min_cutoff_length
are saved in the index. Unlike
ngrams(N)
, which generates only fixed-length N-grams,
sparseGrams
produces a set of variable-length N-grams within the specified range, allowing for a more flexible representation of text context. For example,
tokenizer = sparseGrams(3, 5, 4)
will generate 3-, 4-, 5-grams from the input string and save only the 4- and 5-grams in the index.
:::note
The
splitByString
tokenizer applies the split separators left-to-right.
This can create ambiguities.
For example, the separator strings
['%21', '%']
will cause
%21abc
to be tokenized as
['abc']
, whereas switching both separators strings
['%', '%21']
will output
['21abc']
.
In most cases, you want matching to prefer longer separators first.
This can generally be done by passing the separator strings in order of descending length.
If the separator strings happen to form a
prefix code
, they can be passed in arbitrary order.
:::
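To see why separator order matters, here is a toy Python model of a greedy left-to-right splitter (an illustrative assumption, not the actual splitByString implementation):

```python
def split_by_strings(text: str, separators: list[str]) -> list[str]:
    # Greedy left-to-right scan: at each position, the first separator in the
    # given list that matches is consumed. Illustrative only.
    tokens, current, i = [], "", 0
    while i < len(text):
        for sep in separators:
            if text.startswith(sep, i):
                if current:
                    tokens.append(current)
                    current = ""
                i += len(sep)
                break
        else:
            current += text[i]
            i += 1
    if current:
        tokens.append(current)
    return tokens

print(split_by_strings("%21abc", ["%21", "%"]))  # ['abc']
print(split_by_strings("%21abc", ["%", "%21"]))  # ['21abc']
```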
:::warning
It is currently not recommended to build text indexes on text in non-Western languages, e.g. Chinese.
The currently supported tokenizers may lead to huge index sizes and large query times.
We plan to add specialized language-specific tokenizers in future which will handle these cases better.
:::
To test how the tokenizers split the input string, you can use ClickHouse's
tokens
function:
As an example,
sql
SELECT tokens('abc def', 'ngrams', 3) AS tokens;
returns
result
+-tokens--------------------------+
| ['abc','bc ','c d',' de','def'] |
+---------------------------------+
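The same output can be reproduced with a few lines of Python, showing that character n-grams are simply all overlapping substrings of length N:

```python
def char_ngrams(text: str, n: int = 3) -> list[str]:
    # All overlapping character n-grams of the input string.
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(char_ngrams("abc def", 3))  # ['abc', 'bc ', 'c d', ' de', 'def']
```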
Preprocessor argument
. The optional argument
preprocessor
is an expression which transforms the input string before tokenization.
Typical use cases for the preprocessor argument include
1. Lower-casing (or upper-casing) the input strings to enable case-insensitive matching, e.g.,
lower
,
lowerUTF8
, see the first example below.
2. UTF-8 normalization, e.g.
normalizeUTF8NFC
,
normalizeUTF8NFD
,
normalizeUTF8NFKC
,
normalizeUTF8NFKD
,
toValidUTF8
.
3. Removing or transforming unwanted characters or substrings, e.g.
extractTextFromHTML
,
substring
,
idnaEncode
.
The preprocessor expression must transform an input value of type
String
or
FixedString
to a value of the same type.
Examples:
-
INDEX idx(col) TYPE text(tokenizer = 'splitByNonAlpha', preprocessor = lower(col))
-
INDEX idx(col) TYPE text(tokenizer = 'splitByNonAlpha', preprocessor = substringIndex(col, '\n', 1))
-
INDEX idx(col) TYPE text(tokenizer = 'splitByNonAlpha', preprocessor = lower(extractTextFromHTML(col)))
Also, the preprocessor expression must only reference the column on top of which the text index is defined.
Using non-deterministic functions is not allowed.
Functions
hasToken
,
hasAllTokens
and
hasAnyTokens
use the preprocessor to first transform the search term before tokenizing it.
For example:
```sql
CREATE TABLE tab
(
key UInt64,
str String,
INDEX idx(str) TYPE text(tokenizer = 'splitByNonAlpha', preprocessor = lower(str))
)
ENGINE = MergeTree
ORDER BY tuple();
SELECT count() FROM tab WHERE hasToken(str, 'Foo');
```
is equivalent to:
```sql
CREATE TABLE tab
(
key UInt64,
str String,
INDEX idx(lower(str)) TYPE text(tokenizer = 'splitByNonAlpha')
)
ENGINE = MergeTree
ORDER BY tuple();
SELECT count() FROM tab WHERE hasToken(str, lower('Foo'));
```
Other arguments
. Text indexes in ClickHouse are implemented as
secondary indexes
.
However, unlike other skipping indexes, text indexes have a default index GRANULARITY of 64.
This value has been chosen empirically and it provides a good trade-off between speed and index size for most use cases.
Advanced users can specify a different index granularity (we do not recommend this).
Optional advanced parameters
The default values of the following advanced parameters will work well in virtually all situations.
We do not recommend changing them.
Optional parameter `dictionary_block_size` (default: 128) specifies the size of dictionary blocks in rows.
Optional parameter `dictionary_block_frontcoding_compression` (default: 1) specifies if the dictionary blocks use front coding as compression.
Optional parameter `max_cardinality_for_embedded_postings` (default: 16) specifies the cardinality threshold below which posting lists should be embedded into dictionary blocks.
Optional parameter `bloom_filter_false_positive_rate` (default: 0.1) specifies the false-positive rate of the dictionary bloom filter.
Text indexes can be added to or removed from a column after the table has been created:
sql
ALTER TABLE tab DROP INDEX text_idx;
ALTER TABLE tab ADD INDEX text_idx(s) TYPE text(tokenizer = splitByNonAlpha);
Using a Text Index {#using-a-text-index}
Using a text index in SELECT queries is straightforward as common string search functions will leverage the index automatically.
If no index exists, below string search functions will fall back to slow brute-force scans.
Supported functions {#functions-support}
The text index can be used if text functions are used in the
WHERE
clause of a SELECT query:
sql
SELECT [...]
FROM [...]
WHERE string_search_function(column_with_text_index)
=
and
!=
{#functions-example-equals-notequals}
=
(
equals
) and
!=
(
notEquals
) match the entire given search term.
Example:
sql
SELECT * from tab WHERE str = 'Hello';
The text index supports
=
and
!=
, yet equality and inequality search only make sense with the
array
tokenizer (which causes the index to store entire row values).
IN
and
NOT IN
{#functions-example-in-notin}
IN
(
in
) and
NOT IN
(
notIn
) are similar to functions
equals
and
notEquals
but they match all (
IN
) or none (
NOT IN
) of the search terms.
Example:
sql
SELECT * from tab WHERE str IN ('Hello', 'World');
The same restrictions as for
=
and
!=
apply, i.e.
IN
and
NOT IN
only make sense in conjunction with the
array
tokenizer.
LIKE
,
NOT LIKE
and
match
{#functions-example-like-notlike-match}
:::note
These functions currently use the text index for filtering only if the index tokenizer is either
splitByNonAlpha
or
ngrams
.
:::
In order to use
LIKE
like
,
NOT LIKE
(
notLike
), and the
match
function with text indexes, ClickHouse must be able to extract complete tokens from the search term.
Example:
sql
SELECT count() FROM tab WHERE comment LIKE 'support%';
support
in the example could match
support
,
supports
,
supporting
etc.
This kind of query is a substring query and it cannot be sped up by a text index.
To leverage a text index for LIKE queries, the LIKE pattern must be rewritten in the following way:
sql
SELECT count() FROM tab WHERE comment LIKE ' support %'; -- or `% support %`
The spaces left and right of
support
make sure that the term can be extracted as a token.
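The distinction can be sketched in Python: treat a word in a LIKE pattern as a usable index token only when both of its neighbors are literal separators rather than wildcards or the pattern edge (a simplified model for illustration, not ClickHouse's actual analysis):

```python
import re

def extractable_tokens(like_pattern: str) -> list[str]:
    # A word is usable as an index token only if it cannot be a fragment of a
    # longer token: both neighbors must be literal separator characters, not
    # '%'/'_' wildcards and not the pattern edge.
    tokens = []
    for m in re.finditer(r"[0-9A-Za-z]+", like_pattern):
        left = like_pattern[m.start() - 1] if m.start() > 0 else None
        right = like_pattern[m.end()] if m.end() < len(like_pattern) else None
        if left not in (None, "%", "_") and right not in (None, "%", "_"):
            tokens.append(m.group())
    return tokens

print(extractable_tokens("support%"))     # [] -- substring query, index not usable
print(extractable_tokens("% support %"))  # ['support']
```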
startsWith
and
endsWith
{#functions-example-startswith-endswith}
Similar to
LIKE
, functions
startsWith
and
endsWith
can only use a text index, if complete tokens can be extracted from the search term.
Example:
sql
SELECT count() FROM tab WHERE startsWith(comment, 'clickhouse support');
In the example, only
clickhouse
is considered a token.
support
is no token because it can match
support
,
supports
,
supporting
etc.
To find all rows that start with
clickhouse supports
, please end the search pattern with a trailing space:
sql
startsWith(comment, 'clickhouse supports ')
Similarly,
endsWith
should be used with a leading space:
sql
SELECT count() FROM tab WHERE endsWith(comment, ' olap engine');
hasToken
and
hasTokenOrNull
{#functions-example-hastoken-hastokenornull}
Functions
hasToken
and
hasTokenOrNull
match against a single given token.
Unlike the previously mentioned functions, they do not tokenize the search term (they assume the input is a single token).
Example:
sql
SELECT count() FROM tab WHERE hasToken(comment, 'clickhouse');
Functions
hasToken
and
hasTokenOrNull
are the most performant functions to use with the
text
index.
hasAnyTokens
and
hasAllTokens
{#functions-example-hasanytokens-hasalltokens}
Functions
hasAnyTokens
and
hasAllTokens
match against one or all of the given tokens.
These two functions accept the search tokens as either a string which will be tokenized using the same tokenizer used for the index column, or as an array of already processed tokens to which no tokenization will be applied prior to searching.
See the function documentation for more info.
Example:
```sql
-- Search tokens passed as string argument
SELECT count() FROM tab WHERE hasAnyTokens(comment, 'clickhouse olap');
SELECT count() FROM tab WHERE hasAllTokens(comment, 'clickhouse olap');
-- Search tokens passed as Array(String)
SELECT count() FROM tab WHERE hasAnyTokens(comment, ['clickhouse', 'olap']);
SELECT count() FROM tab WHERE hasAllTokens(comment, ['clickhouse', 'olap']);
```
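The difference between the string and array argument forms can be modeled in Python, with a splitByNonAlpha-like function standing in for the index tokenizer (an illustrative sketch, not the actual implementation):

```python
import re

def tokenize(text: str) -> list[str]:
    # Stand-in for the index tokenizer (splitByNonAlpha-like).
    return [t for t in re.split(r"[^0-9A-Za-z]+", text) if t]

def has_any_tokens(row_text: str, search) -> bool:
    # A string argument is tokenized like the indexed column; a list is
    # treated as already-processed tokens (mirroring the behavior above).
    needles = tokenize(search) if isinstance(search, str) else list(search)
    return any(n in set(tokenize(row_text)) for n in needles)

def has_all_tokens(row_text: str, search) -> bool:
    needles = tokenize(search) if isinstance(search, str) else list(search)
    return all(n in set(tokenize(row_text)) for n in needles)

print(has_any_tokens("clickhouse is a fast olap engine", "clickhouse mysql"))       # True
print(has_all_tokens("clickhouse is a fast olap engine", ["clickhouse", "mysql"]))  # False
```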
has
{#functions-example-has}
Array function
has
matches against a single token in the array of strings.
Example:
sql
SELECT count() FROM tab WHERE has(array, 'clickhouse');
mapContains
{#functions-example-mapcontains}
Function
mapContains
(alias of:
mapContainsKey
) matches against a single token in the keys of a map.
Example:
sql
SELECT count() FROM tab WHERE mapContainsKey(map, 'clickhouse');
-- OR
SELECT count() FROM tab WHERE mapContains(map, 'clickhouse');
operator[]
{#functions-example-access-operator}
Access
operator[]
can be used with the text index to filter out keys and values.
Example:
sql
SELECT count() FROM tab WHERE map['engine'] = 'clickhouse'; -- will use the text index if defined
See the following examples for the usage of
Array(T)
and
Map(K, V)
with the text index.
Examples for the text index
Array
and
Map
support. {#text-index-array-and-map-examples}
Indexing Array(String) {#text-index-example-array}
In a simple blogging platform, authors assign keywords to their posts to categorize content.
A common feature allows users to discover related content by clicking on keywords or searching for topics.
Consider this table definition:
sql
CREATE TABLE posts (
post_id UInt64,
title String,
content String,
keywords Array(String) COMMENT 'Author-defined keywords'
)
ENGINE = MergeTree
ORDER BY (post_id);
Without a text index, finding posts with a specific keyword (e.g.
clickhouse
) requires scanning all entries:
sql
SELECT count() FROM posts WHERE has(keywords, 'clickhouse'); -- slow full-table scan - checks every keyword in every post
As the platform grows, this becomes increasingly slow because the query must examine every keywords array in every row.
To overcome this performance issue, we can define a text index for the
keywords
that creates a search-optimized structure that pre-processes all keywords, enabling instant lookups:
sql
ALTER TABLE posts ADD INDEX keywords_idx(keywords) TYPE text(tokenizer = splitByNonAlpha);
:::note
Important: After adding the text index, you must rebuild it for existing data:
sql
ALTER TABLE posts MATERIALIZE INDEX keywords_idx;
:::
Indexing Map {#text-index-example-map}
In a logging system, server requests often store metadata in key-value pairs. Operations teams need to efficiently search through logs for debugging, security incidents, and monitoring.
Consider this logs table:
sql
CREATE TABLE logs (
id UInt64,
timestamp DateTime,
message String,
attributes Map(String, String)
)
ENGINE = MergeTree
ORDER BY (timestamp);
Without a text index, searching through
Map
data requires full table scans:
Finds all logs with rate limiting:
sql
SELECT count() FROM logs WHERE has(mapKeys(attributes), 'rate_limit'); -- slow full-table scan
Finds all logs from a specific IP:
sql
SELECT count() FROM logs WHERE has(mapValues(attributes), '192.168.1.1'); -- slow full-table scan
As log volume grows, these queries become slow.
The solution is creating a text index for the
Map
keys and values.
Use
mapKeys
to create a text index when you need to find logs by field names or attribute types:
sql
ALTER TABLE logs ADD INDEX attributes_keys_idx mapKeys(attributes) TYPE text(tokenizer = array);
Use
mapValues
to create a text index when you need to search within the actual content of attributes:
sql
ALTER TABLE logs ADD INDEX attributes_vals_idx mapValues(attributes) TYPE text(tokenizer = array);
:::note
Important: After adding the text index, you must rebuild it for existing data:
sql
ALTER TABLE logs MATERIALIZE INDEX attributes_keys_idx;
ALTER TABLE logs MATERIALIZE INDEX attributes_vals_idx;
:::
Find all rate-limited requests:
sql
SELECT * FROM logs WHERE mapContainsKey(attributes, 'rate_limit'); -- fast
Finds all logs from a specific IP:
sql
SELECT * FROM logs WHERE has(mapValues(attributes), '192.168.1.1'); -- fast
Implementation {#implementation}
Index layout {#index-layout}
Each text index consists of two (abstract) data structures:
- a dictionary which maps each token to a postings list, and
- a set of postings lists, each representing a set of row numbers.
Since a text index is a skip index, these data structures exist logically per index granule.
During index creation, three files are created (per part):
Dictionary blocks file (.dct)
The tokens in an index granule are sorted and stored in dictionary blocks of 128 tokens each (the block size is configurable by parameter
dictionary_block_size
).
A dictionary blocks file (.dct) contains all the dictionary blocks of all index granules in a part.
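Front coding (controlled by `dictionary_block_frontcoding_compression`) exploits the sorted order within a dictionary block: each token is stored as the length of the prefix it shares with its predecessor plus the remaining suffix. A minimal Python sketch of the idea:

```python
def front_code(sorted_tokens: list[str]) -> list[tuple[int, str]]:
    # Store each token as (shared-prefix length with the previous token,
    # remaining suffix). Effective because dictionary tokens are sorted.
    out, prev = [], ""
    for tok in sorted_tokens:
        lcp = 0
        while lcp < min(len(prev), len(tok)) and prev[lcp] == tok[lcp]:
            lcp += 1
        out.append((lcp, tok[lcp:]))
        prev = tok
    return out

print(front_code(["support", "supporting", "supports"]))
# [(0, 'support'), (7, 'ing'), (7, 's')]
```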
Index granules file (.idx)
The index granules file contains for each dictionary block the block's first token, its relative offset in the dictionary blocks file, and a bloom filter for all tokens in the block.
This sparse index structure is similar to ClickHouse's
sparse primary key index
.
The bloom filter allows skipping dictionary blocks early if the searched token is not contained in a dictionary block.
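The skip decision can be illustrated with a toy bloom filter (sizes and hash scheme here are arbitrary; the real filter is sized from `bloom_filter_false_positive_rate`). A negative answer is definitive, so the dictionary block can be skipped; a positive answer may be a false positive:

```python
import hashlib

class TinyBloom:
    # Toy bloom filter: k hash positions per token, stored in an integer bitmask.
    def __init__(self, bits: int = 64, hashes: int = 3):
        self.bits, self.hashes, self.mask = bits, hashes, 0

    def _positions(self, token: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{token}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.bits

    def add(self, token: str) -> None:
        for pos in self._positions(token):
            self.mask |= 1 << pos

    def may_contain(self, token: str) -> bool:
        # False means "definitely absent" -> the dictionary block can be skipped.
        return all(self.mask >> pos & 1 for pos in self._positions(token))

bf = TinyBloom()
for token in ["clickhouse", "olap", "engine"]:
    bf.add(token)
print(bf.may_contain("clickhouse"))  # True
```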
Postings lists file (.pst)
The posting lists for all tokens are laid out sequentially in the postings list file.
To save space while still allowing fast intersection and union operations, the posting lists are stored as
roaring bitmaps
.
If the cardinality of a posting list is less than 16 (configurable by parameter
max_cardinality_for_embedded_postings
), it is embedded into the dictionary.
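Conceptually, the dictionary plus posting lists form an inverted map from token to row numbers, and token search reduces to set operations over posting lists (plain Python sets stand in for roaring bitmaps in this sketch):

```python
# Toy model of the index structures: a dictionary mapping each token to a
# posting list of row numbers.
rows = ["clickhouse is fast", "postgres is reliable", "clickhouse loves olap"]

dictionary: dict[str, set[int]] = {}
for row_no, text in enumerate(rows):
    for token in text.split():
        dictionary.setdefault(token, set()).add(row_no)

# hasAllTokens -> intersect posting lists; hasAnyTokens -> union them.
print(sorted(dictionary["clickhouse"] & dictionary["olap"]))      # [2]
print(sorted(dictionary["clickhouse"] | dictionary["postgres"]))  # [0, 1, 2]
```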
Direct read {#direct-read}
Certain types of text queries can be sped up significantly by an optimization called "direct read".
More specifically, the optimization can be applied if the SELECT query does
not
project from the text column.
Example:
sql
SELECT column_a, column_b, ... -- not: column_with_text_index
FROM [...]
WHERE string_search_function(column_with_text_index)
The direct read optimization in ClickHouse answers the query exclusively using the text index (i.e., text index lookups) without accessing the underlying text column.
Text index lookups read relatively little data and are therefore much faster than usual skip indexes in ClickHouse (which do a skip index lookup, followed by loading and filtering surviving granules).
Direct read is controlled by two settings:
- Setting
query_plan_direct_read_from_text_index
(default: 1) which specifies if direct read is generally enabled.
- Setting
use_skip_indexes_on_data_read
(default: 1) which is another prerequisite for direct read. Note that on ClickHouse databases with
compatibility
< 25.10,
use_skip_indexes_on_data_read
is disabled, so you either need to raise the compatibility setting value or
SET use_skip_indexes_on_data_read = 1
explicitly.
Also, the text index must be fully materialized to use direct reading (use
ALTER TABLE ... MATERIALIZE INDEX
for that).
Supported functions
The direct read optimization supports functions
hasToken
,
hasAllTokens
, and
hasAnyTokens
.
These functions can also be combined by AND, OR, and NOT operators.
The WHERE clause can also contain additional non-text-search-function filters (on text columns or other columns); in that case, the direct read optimization is still used but is less effective (it only applies to the supported text search functions).
To check whether a query utilizes direct read, run the query with
EXPLAIN PLAN actions = 1
.
As an example, a query with disabled direct read
sql
EXPLAIN PLAN actions = 1
SELECT count()
FROM tab
WHERE hasToken(col, 'some_token')
SETTINGS query_plan_direct_read_from_text_index = 0;
returns
text
[...]
Filter ((WHERE + Change column names to column identifiers))
Filter column: hasToken(__table1.col, 'some_token'_String) (removed)
Actions: INPUT : 0 -> col String : 0
COLUMN Const(String) -> 'some_token'_String String : 1
FUNCTION hasToken(col :: 0, 'some_token'_String :: 1) -> hasToken(__table1.col, 'some_token'_String) UInt8 : 2
[...]
whereas the same query run with
query_plan_direct_read_from_text_index = 1
sql
EXPLAIN PLAN actions = 1
SELECT count()
FROM tab
WHERE hasToken(col, 'some_token')
SETTINGS query_plan_direct_read_from_text_index = 1;
returns
text
[...]
Expression (Before GROUP BY)
Positions:
Filter
Filter column: __text_index_idx_hasToken_94cc2a813036b453d84b6fb344a63ad3 (removed)
Actions: INPUT :: 0 -> __text_index_idx_hasToken_94cc2a813036b453d84b6fb344a63ad3 UInt8 : 0
[...]
The second EXPLAIN PLAN output contains a virtual column
__text_index_<index_name>_<function_name>_<id>
.
If this column is present, then direct read is used.
Example: Hackernews dataset {#hacker-news-dataset}
Let's look at the performance improvements of text indexes on a large dataset with lots of text.
We will use 28.7M rows of comments on the popular Hacker News website.
Here is the table without text index:
sql
CREATE TABLE hackernews (
id UInt64,
deleted UInt8,
type String,
author String,
timestamp DateTime,
comment String,
dead UInt8,
parent UInt64,
poll UInt64,
children Array(UInt32),
url String,
score UInt32,
title String,
parts Array(UInt32),
descendants UInt32
)
ENGINE = MergeTree
ORDER BY (type, author);
The 28.7M rows are in a Parquet file in S3 - let's insert them into the
hackernews
table:
sql
INSERT INTO hackernews
SELECT * FROM s3Cluster(
'default',
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.parquet',
'Parquet',
'
id UInt64,
deleted UInt8,
type String,
by String,
time DateTime,
text String,
dead UInt8,
parent UInt64,
poll UInt64,
kids Array(UInt32),
url String,
score UInt32,
title String,
parts Array(UInt32),
descendants UInt32');
We will use
ALTER TABLE
and add a text index on comment column, then materialize it:
```sql
-- Add the index
ALTER TABLE hackernews ADD INDEX comment_idx(comment) TYPE text(tokenizer = splitByNonAlpha);
-- Materialize the index for existing data
ALTER TABLE hackernews MATERIALIZE INDEX comment_idx SETTINGS mutations_sync = 2;
```
Now, let's run queries using
hasToken
,
hasAnyTokens
, and
hasAllTokens
functions.
The following examples will show the dramatic performance difference between a standard index scan and the direct read optimization.
1. Using
hasToken
{#using-hasToken}
hasToken
checks if the text contains a specific single token.
We'll search for the case-sensitive token 'ClickHouse'.
Direct read disabled (Standard scan)
By default, ClickHouse uses the skip index to filter granules and then reads the column data for those granules.
We can simulate this behavior by disabling direct read.
```sql
SELECT count()
FROM hackernews
WHERE hasToken(comment, 'ClickHouse')
SETTINGS query_plan_direct_read_from_text_index = 0, use_skip_indexes_on_data_read = 0;
ββcount()ββ
β 516 β
βββββββββββ
1 row in set. Elapsed: 0.362 sec. Processed 24.90 million rows, 9.51 GB
```
Direct read enabled (Fast index read)
Now we run the same query with direct read enabled (the default).
```sql
SELECT count()
FROM hackernews
WHERE hasToken(comment, 'ClickHouse')
SETTINGS query_plan_direct_read_from_text_index = 1, use_skip_indexes_on_data_read = 1;
ββcount()ββ
β 516 β
βββββββββββ
1 row in set. Elapsed: 0.008 sec. Processed 3.15 million rows, 3.15 MB
```
The direct read query is over 45 times faster (0.362s vs 0.008s) and processes significantly less data (9.51 GB vs 3.15 MB) by reading from the index alone.
2. Using
hasAnyTokens
{#using-hasAnyTokens}
hasAnyTokens
checks if the text contains at least one of the given tokens.
We'll search for comments containing either 'love' or 'ClickHouse'.
Direct read disabled (Standard scan)
```sql
SELECT count()
FROM hackernews
WHERE hasAnyTokens(comment, 'love ClickHouse')
SETTINGS query_plan_direct_read_from_text_index = 0, use_skip_indexes_on_data_read = 0;
ββcount()ββ
β 408426 β
βββββββββββ
1 row in set. Elapsed: 1.329 sec. Processed 28.74 million rows, 9.72 GB
```
Direct read enabled (Fast index read)
```sql
SELECT count()
FROM hackernews
WHERE hasAnyTokens(comment, 'love ClickHouse')
SETTINGS query_plan_direct_read_from_text_index = 1, use_skip_indexes_on_data_read = 1;
ββcount()ββ
β 408426 β
βββββββββββ
1 row in set. Elapsed: 0.015 sec. Processed 27.99 million rows, 27.99 MB
```
The speedup is even more dramatic for this common "OR" search.
The query is nearly 89 times faster (1.329s vs 0.015s) by avoiding the full column scan.
3. Using
hasAllTokens
{#using-hasAllTokens}
hasAllTokens
checks if the text contains all of the given tokens.
We'll search for comments containing both 'love' and 'ClickHouse'.
Direct read disabled (Standard scan)
Even with direct read disabled, the standard skip index is still effective.
It filters down the 28.7M rows to just 147.46K rows, but it still must read 57.03 MB from the column.
```sql
SELECT count()
FROM hackernews
WHERE hasAllTokens(comment, 'love ClickHouse')
SETTINGS query_plan_direct_read_from_text_index = 0, use_skip_indexes_on_data_read = 0;
ββcount()ββ
β 11 β
βββββββββββ
1 row in set. Elapsed: 0.184 sec. Processed 147.46 thousand rows, 57.03 MB
```
Direct read enabled (Fast index read)
Direct read answers the query by operating on the index data, reading only 147.46 KB.
```sql
SELECT count()
FROM hackernews
WHERE hasAllTokens(comment, 'love ClickHouse')
SETTINGS query_plan_direct_read_from_text_index = 1, use_skip_indexes_on_data_read = 1;
ββcount()ββ
β 11 β
βββββββββββ
1 row in set. Elapsed: 0.007 sec. Processed 147.46 thousand rows, 147.46 KB
```
For this "AND" search, the direct read optimization is over 26 times faster (0.184s vs 0.007s) than the standard skip index scan.
4. Compound search: OR, AND, NOT, ... {#compound-search}
The direct read optimization also applies to compound boolean expressions.
Here, we'll perform a case-insensitive search for 'ClickHouse' OR 'clickhouse'.
Direct read disabled (Standard scan)
```sql
SELECT count()
FROM hackernews
WHERE hasToken(comment, 'ClickHouse') OR hasToken(comment, 'clickhouse')
SETTINGS query_plan_direct_read_from_text_index = 0, use_skip_indexes_on_data_read = 0;
ββcount()ββ
β 769 β
βββββββββββ
1 row in set. Elapsed: 0.450 sec. Processed 25.87 million rows, 9.58 GB
```
Direct read enabled (Fast index read)
```sql
SELECT count()
FROM hackernews
WHERE hasToken(comment, 'ClickHouse') OR hasToken(comment, 'clickhouse')
SETTINGS query_plan_direct_read_from_text_index = 1, use_skip_indexes_on_data_read = 1;
ββcount()ββ
β 769 β
βββββββββββ
1 row in set. Elapsed: 0.013 sec. Processed 25.87 million rows, 51.73 MB
```
By combining the results from the index, the direct read query is 34 times faster (0.450s vs 0.013s) and avoids reading the 9.58 GB of column data.
For this specific case,
hasAnyTokens(comment, ['ClickHouse', 'clickhouse'])
would be the preferred, more efficient syntax.
Tuning the text index {#tuning-the-text-index}
Currently, there are caches for the deserialized dictionary blocks, headers and posting lists of the text index to reduce I/O.
They can be enabled via settings
use_text_index_dictionary_cache
,
use_text_index_header_cache
and
use_text_index_postings_cache
respectively. By default, they are disabled.
Refer to the following server settings to configure the caches.
Server Settings {#text-index-tuning-server-settings}
Dictionary blocks cache settings {#text-index-tuning-dictionary-blocks-cache}
| Setting                                         | Description                                                                                                  | Default      |
|-------------------------------------------------|--------------------------------------------------------------------------------------------------------------|--------------|
| `text_index_dictionary_block_cache_policy`      | Text index dictionary block cache policy name.                                                               | `SLRU`       |
| `text_index_dictionary_block_cache_size`        | Maximum cache size in bytes.                                                                                 | `1073741824` |
| `text_index_dictionary_block_cache_max_entries` | Maximum number of deserialized dictionary blocks in cache.                                                   | `1'000'000`  |
| `text_index_dictionary_block_cache_size_ratio`  | The size of the protected queue in the text index dictionary block cache relative to the cache's total size. | `0.5`        |
Header cache settings {#text-index-tuning-header-cache}
| Setting                               | Description                                                                                        | Default      |
|---------------------------------------|----------------------------------------------------------------------------------------------------|--------------|
| `text_index_header_cache_policy`      | Text index header cache policy name.                                                               | `SLRU`       |
| `text_index_header_cache_size`        | Maximum cache size in bytes.                                                                       | `1073741824` |
| `text_index_header_cache_max_entries` | Maximum number of deserialized headers in cache.                                                   | `100'000`    |
| `text_index_header_cache_size_ratio`  | The size of the protected queue in the text index header cache relative to the cache's total size. | `0.5`        |
Posting lists cache settings {#text-index-tuning-posting-lists-cache}
| Setting                                 | Description                                                                                          | Default      |
|-----------------------------------------|------------------------------------------------------------------------------------------------------|--------------|
| `text_index_postings_cache_policy`      | Text index postings cache policy name.                                                               | `SLRU`       |
| `text_index_postings_cache_size`        | Maximum cache size in bytes.                                                                         | `2147483648` |
| `text_index_postings_cache_max_entries` | Maximum number of deserialized postings in cache.                                                    | `1'000'000`  |
| `text_index_postings_cache_size_ratio`  | The size of the protected queue in the text index postings cache relative to the cache's total size. | `0.5`        |
Related content {#related-content}
Blog:
Introducing Inverted Indices in ClickHouse
Blog:
Inside ClickHouse full-text search: fast, native, and columnar
Video:
Full-Text Indices: Design and Experiments
0.027817564085125923,
0.082256980240345,
-0.018692055717110634,
0.013562997803092003,
-0.04368772730231285,
-0.028633523732423782,
0.013866747729480267,
0.039691369980573654,
0.021744202822446823,
-0.047308288514614105,
-0.02034350112080574,
-0.02480853535234928,
-0.010387187823653221,
-0.... |
c2d41e9b-b1de-4cfb-8189-59d60485f37c | description: 'SummingMergeTree inherits from the MergeTree engine. Its key feature
is the ability to automatically sum numeric data during part merges.'
sidebar_label: 'SummingMergeTree'
sidebar_position: 50
slug: /engines/table-engines/mergetree-family/summingmergetree
title: 'SummingMergeTree table engine'
doc_type: 'reference'
SummingMergeTree table engine
The engine inherits from
MergeTree
. The difference is that when merging data parts for
SummingMergeTree
tables ClickHouse replaces all the rows with the same primary key (or more accurately, with the same
sorting key
) with one row that contains the summed values for columns with a numeric data type. If the sorting key is composed so that a single key value corresponds to a large number of rows, this significantly reduces storage volume and speeds up data selection.
We recommend using the engine together with
MergeTree
Store the complete data in a
MergeTree
table, and use
SummingMergeTree
for storing aggregated data, for example, when preparing reports. Such an approach prevents you from losing valuable data due to an incorrectly composed primary key.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE = SummingMergeTree([columns])
[PARTITION BY expr]
[ORDER BY expr]
[SAMPLE BY expr]
[SETTINGS name=value, ...]
For a description of request parameters, see
request description
.
Parameters of SummingMergeTree {#parameters-of-summingmergetree}
Columns {#columns}
columns
- a tuple with the names of columns whose values will be summed. Optional parameter.
The columns must be of a numeric type and must not be in the partition or sorting key.
If
columns
is not specified, ClickHouse summarizes the values in all columns with a numeric data type that are not in the sorting key.
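As an illustration, a sketch with a hypothetical table in which only `hits` is listed for summation:

```sql
-- Hypothetical table: only `hits` is named in the columns parameter,
-- so during merges `weight` is not summed and keeps an arbitrary value
-- from one of the collapsed rows.
CREATE TABLE daily_hits
(
    day    Date,
    key    UInt32,
    hits   UInt64,
    weight Float64
)
ENGINE = SummingMergeTree(hits)
ORDER BY (day, key);
```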
Query clauses {#query-clauses}
When creating a
SummingMergeTree
table the same
clauses
are required, as when creating a
MergeTree
table.
Deprecated Method for Creating a Table
:::note
Do not use this method in new projects and, if possible, switch the old projects to the method described above.
:::
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE [=] SummingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, [columns])
```
All of the parameters except `columns` have the same meaning as in `MergeTree`.
- `columns` - a tuple with the names of columns whose values will be summed. Optional parameter. For a description, see the text above.
Usage example {#usage-example}
Consider the following table:
sql
CREATE TABLE summtt
(
key UInt32,
value UInt32
)
ENGINE = SummingMergeTree()
ORDER BY key | {"source_file": "summingmergetree.md"} | [
-0.022084172815084457,
-0.013636809773743153,
0.02291264571249485,
0.04796033725142479,
-0.015686169266700745,
-0.029527027159929276,
-0.016479508951306343,
0.03185032308101654,
-0.006549653597176075,
0.023993991315364838,
0.000029474440452759154,
0.006313589867204428,
-0.0055396840907633305... |
dbb3a66e-bee5-4340-af4f-d41be58f819c | Usage example {#usage-example}
Consider the following table:
sql
CREATE TABLE summtt
(
key UInt32,
value UInt32
)
ENGINE = SummingMergeTree()
ORDER BY key
Insert data to it:
sql
INSERT INTO summtt VALUES(1,1),(1,2),(2,1)
ClickHouse may not sum all the rows completely (
see below
), so we use an aggregate function
sum
and
GROUP BY
clause in the query.
sql
SELECT key, sum(value) FROM summtt GROUP BY key
text
┌─key─┬─sum(value)─┐
│   2 │          1 │
│   1 │          3 │
└─────┴────────────┘
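Two alternative ways to observe the summed result, both used elsewhere in the ClickHouse documentation, are sketched below: the `FINAL` modifier applies merge-time logic at read time, and `OPTIMIZE ... FINAL` forces a merge.

```sql
-- Read with merge-time semantics (can be slow on large tables):
SELECT * FROM summtt FINAL;

-- Or force the parts to merge, after which a plain SELECT sees summed rows:
OPTIMIZE TABLE summtt FINAL;
SELECT * FROM summtt;
```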
Data processing {#data-processing}
When data are inserted into a table, they are saved as-is. ClickHouse merges the inserted parts of data periodically and this is when rows with the same primary key are summed and replaced with one for each resulting part of data.
ClickHouse can merge the data parts so that different resulting parts of data can consist of rows with the same primary key, i.e. the summation will be incomplete. Therefore, when selecting data (
SELECT
), an aggregate function
sum()
and a
GROUP BY
clause should be used in a query as described in the example above.
Common rules for summation {#common-rules-for-summation}
The values in the columns with the numeric data type are summed. The set of columns is defined by the parameter
columns
.
If the values were 0 in all of the columns for summation, the row is deleted.
If a column is not in the primary key and is not summed, an arbitrary value is selected from the existing ones.
The values are not summed for columns in the primary key.
The summation in the AggregateFunction columns {#the-summation-in-the-aggregatefunction-columns}
For columns of
AggregateFunction type
ClickHouse behaves as
AggregatingMergeTree
engine aggregating according to the function.
Nested structures {#nested-structures}
A table can have nested data structures that are processed in a special way.
If the name of a nested table ends with
Map
and it contains at least two columns that meet the following criteria:
the first column is numeric
(*Int*, Date, DateTime)
or a string
(String, FixedString)
, let's call it
key
,
the other columns are arithmetic
(*Int*, Float32/64)
, let's call it
(values...)
,
then this nested table is interpreted as a mapping of
key => (values...)
, and when merging its rows, the elements of two data sets are merged by
key
with a summation of the corresponding
(values...)
.
Examples:
```text
DROP TABLE IF EXISTS nested_sum;
CREATE TABLE nested_sum
(
date Date,
site UInt32,
hitsMap Nested(
browser String,
imps UInt32,
clicks UInt32
)
) ENGINE = SummingMergeTree
PRIMARY KEY (date, site);
INSERT INTO nested_sum VALUES ('2020-01-01', 12, ['Firefox', 'Opera'], [10, 5], [2, 1]);
INSERT INTO nested_sum VALUES ('2020-01-01', 12, ['Chrome', 'Firefox'], [20, 1], [1, 1]);
INSERT INTO nested_sum VALUES ('2020-01-01', 12, ['IE'], [22], [0]);
INSERT INTO nested_sum VALUES ('2020-01-01', 10, ['Chrome'], [4], [3]); | {"source_file": "summingmergetree.md"} | [
-0.05808980390429497,
-0.025981148704886436,
0.0404241718351841,
0.03130624070763588,
-0.06531905382871628,
-0.045909419655799866,
0.05484354496002197,
0.007656192407011986,
0.057784128934144974,
0.026439674198627472,
0.035914693027734756,
0.026129182428121567,
0.05268959701061249,
-0.0963... |
46eae063-b087-4cd7-b9ae-b2b7369dc493 | OPTIMIZE TABLE nested_sum FINAL; -- emulate merge
SELECT * FROM nested_sum;
┌───────date─┬─site─┬─hitsMap.browser───────────────────┬─hitsMap.imps─┬─hitsMap.clicks─┐
│ 2020-01-01 │   10 │ ['Chrome']                        │ [4]          │ [3]            │
│ 2020-01-01 │   12 │ ['Chrome','Firefox','IE','Opera'] │ [20,11,22,5] │ [1,3,0,1]      │
└────────────┴──────┴───────────────────────────────────┴──────────────┴────────────────┘
SELECT
site,
browser,
impressions,
clicks
FROM
(
SELECT
site,
sumMap(hitsMap.browser, hitsMap.imps, hitsMap.clicks) AS imps_map
FROM nested_sum
GROUP BY site
)
ARRAY JOIN
imps_map.1 AS browser,
imps_map.2 AS impressions,
imps_map.3 AS clicks;
┌─site─┬─browser─┬─impressions─┬─clicks─┐
│   12 │ Chrome  │          20 │      1 │
│   12 │ Firefox │          11 │      3 │
│   12 │ IE      │          22 │      0 │
│   12 │ Opera   │           5 │      1 │
│   10 │ Chrome  │           4 │      3 │
└──────┴─────────┴─────────────┴────────┘
```
When requesting data, use the
sumMap(key, value)
function for aggregation of
Map
.
For a nested data structure, you do not need to specify its columns in the tuple of columns for summation.
Related content {#related-content}
Blog:
Using Aggregate Combinators in ClickHouse | {"source_file": "summingmergetree.md"} | [
0.015867147594690323,
-0.01933637075126171,
0.07133551687002182,
0.043299756944179535,
-0.00955338403582573,
-0.052584994584321976,
0.06810065358877182,
-0.01088617742061615,
-0.060476601123809814,
-0.0051141283474862576,
0.011518360115587711,
-0.0768439844250679,
0.048931095749139786,
-0.... |
fdaf88c1-3630-473b-8864-ee50a185aec3 | description: 'Documentation for MergeTree Engine Family'
sidebar_label: 'MergeTree Family'
sidebar_position: 10
slug: /engines/table-engines/mergetree-family/
title: 'MergeTree Engine Family'
doc_type: 'reference'
MergeTree engine family
Table engines from the MergeTree family are the core of ClickHouse data storage capabilities. They provide most features for resilience and high-performance data retrieval: columnar storage, custom partitioning, sparse primary index, secondary data-skipping indexes, etc.
Base
MergeTree
table engine can be considered the default table engine for single-node ClickHouse instances because it is versatile and practical for a wide range of use cases.
For production usage
ReplicatedMergeTree
is the way to go, because it adds high availability to all the features of the regular MergeTree engine. A bonus is automatic data deduplication on ingestion, so the software can safely retry if there was a network issue during insert.
All other engines of the MergeTree family add extra functionality for specific use cases. Usually, it's implemented as additional data manipulation in the background.
The main downside of MergeTree engines is that they are rather heavyweight, so the typical pattern is not to have too many of them. If you need many small tables, for example for temporary data, consider the
Log engine family
. | {"source_file": "index.md"} | [
-0.07605906575918198,
-0.06906133145093918,
0.018338436260819435,
-0.009677139110863209,
-0.021274592727422714,
-0.06836183369159698,
-0.07267128676176071,
0.05756542831659317,
-0.053832441568374634,
0.004233669489622116,
0.010059935040771961,
0.016678432002663612,
-0.0010566061828285456,
... |
b97a7662-8e21-4842-98b6-1c66eb27af72 | description: 'Replaces all rows with the same primary key (or more accurately, with
the same
sorting key
)
with a single row (within a single data part) that stores a combination of states
of aggregate functions.'
sidebar_label: 'AggregatingMergeTree'
sidebar_position: 60
slug: /engines/table-engines/mergetree-family/aggregatingmergetree
title: 'AggregatingMergeTree table engine'
doc_type: 'reference'
AggregatingMergeTree table engine
The engine inherits from
MergeTree
, altering the logic for data parts merging. ClickHouse replaces all rows with the same primary key (or more accurately, with the same
sorting key
) with a single row (within a single data part) that stores a combination of states of aggregate functions.
You can use
AggregatingMergeTree
tables for incremental data aggregation, including for aggregated materialized views.
You can see an example of how to use the AggregatingMergeTree and aggregate functions in the video below:
The engine processes all columns with the following types:
AggregateFunction
SimpleAggregateFunction
It is appropriate to use
AggregatingMergeTree
if it reduces the number of rows by orders of magnitude.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE = AggregatingMergeTree()
[PARTITION BY expr]
[ORDER BY expr]
[SAMPLE BY expr]
[TTL expr]
[SETTINGS name=value, ...]
For a description of request parameters, see
request description
.
Query clauses
When creating an
AggregatingMergeTree
table, the same
clauses
are required as when creating a
MergeTree
table.
Deprecated Method for Creating a Table
:::note
Do not use this method in new projects and, if possible, switch the old projects to the method described above.
:::
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE [=] AggregatingMergeTree(date-column [, sampling_expression], (primary, key), index_granularity)
```
All of the parameters have the same meaning as in `MergeTree`.
SELECT and INSERT {#select-and-insert}
To insert data, use
INSERT SELECT
query with aggregate -State- functions.
When selecting data from
AggregatingMergeTree
table, use
GROUP BY
clause and the same aggregate functions as when inserting data, but using the
-Merge
suffix.
In the results of
SELECT
query, the values of
AggregateFunction
type have implementation-specific binary representation for all of the ClickHouse output formats. For example, if you dump data into
TabSeparated
format with a
SELECT
query, then this dump can be loaded back using an
INSERT
query.
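A minimal sketch of the -State / -Merge pairing, using hypothetical tables `events` and `agg`:

```sql
-- Write partial aggregation states with the -State suffix...
INSERT INTO agg
SELECT id, uniqState(user_id) AS users
FROM events
GROUP BY id;

-- ...then finalize them at read time with the matching -Merge suffix.
SELECT id, uniqMerge(users) AS users
FROM agg
GROUP BY id;
```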
Example of an aggregated materialized view {#example-of-an-aggregated-materialized-view} | {"source_file": "aggregatingmergetree.md"} | [
-0.041456155478954315,
-0.06234116479754448,
0.03036041185259819,
0.02129177376627922,
-0.04750135540962219,
0.002971136011183262,
-0.0029947347939014435,
-0.005320470780134201,
0.013063004240393639,
0.015893127769231796,
0.035660888999700546,
0.027233732864260674,
0.007364973891526461,
-0... |
bee2da02-a08a-4797-bd5b-73ea0f0b4a0e | Example of an aggregated materialized view {#example-of-an-aggregated-materialized-view}
The following example assumes that you have a database named
test
. Create it if it doesn't already exist using the command below:
sql
CREATE DATABASE test;
Now create the table
test.visits
that contains the raw data:
sql
CREATE TABLE test.visits
(
StartDate DateTime64 NOT NULL,
CounterID UInt64,
Sign Nullable(Int32),
UserID Nullable(Int32)
) ENGINE = MergeTree ORDER BY (StartDate, CounterID);
Next, you need an
AggregatingMergeTree
table that will store
AggregateFunction
s that keep track of the total number of visits and the number of unique users.
Create an
AggregatingMergeTree
materialized view that watches the
test.visits
table, and uses the
AggregateFunction
type:
sql
CREATE TABLE test.agg_visits (
StartDate DateTime64 NOT NULL,
CounterID UInt64,
Visits AggregateFunction(sum, Nullable(Int32)),
Users AggregateFunction(uniq, Nullable(Int32))
)
ENGINE = AggregatingMergeTree() ORDER BY (StartDate, CounterID);
Create a materialized view that populates
test.agg_visits
from
test.visits
:
sql
CREATE MATERIALIZED VIEW test.visits_mv TO test.agg_visits
AS SELECT
StartDate,
CounterID,
sumState(Sign) AS Visits,
uniqState(UserID) AS Users
FROM test.visits
GROUP BY StartDate, CounterID;
Insert data into the
test.visits
table:
sql
INSERT INTO test.visits (StartDate, CounterID, Sign, UserID)
VALUES (1667446031000, 1, 3, 4), (1667446031000, 1, 6, 3);
The data is inserted in both
test.visits
and
test.agg_visits
.
To get the aggregated data, execute a query such as
SELECT ... GROUP BY ...
from the materialized view
test.visits_mv
:
sql
SELECT
StartDate,
sumMerge(Visits) AS Visits,
uniqMerge(Users) AS Users
FROM test.visits_mv
GROUP BY StartDate
ORDER BY StartDate;
text
┌───────────────StartDate─┬─Visits─┬─Users─┐
│ 2022-11-03 03:27:11.000 │      9 │     2 │
└─────────────────────────┴────────┴───────┘
Add another couple of records to
test.visits
, but this time try using a different timestamp for one of the records:
sql
INSERT INTO test.visits (StartDate, CounterID, Sign, UserID)
VALUES (1669446031000, 2, 5, 10), (1667446031000, 3, 7, 5);
Run the
SELECT
query again, which will return the following output:
text
┌───────────────StartDate─┬─Visits─┬─Users─┐
│ 2022-11-03 03:27:11.000 │     16 │     3 │
│ 2022-11-26 07:00:31.000 │      5 │     1 │
└─────────────────────────┴────────┴───────┘
0.01774877868592739,
-0.07676951587200165,
-0.04128053039312363,
0.07242205739021301,
-0.1417081654071808,
0.018919438123703003,
0.01722981408238411,
0.05213707685470581,
-0.01255936548113823,
0.015394985675811768,
-0.0007092071464285254,
-0.04188996180891991,
0.05943165346980095,
-0.04062... |
69a19be7-0d4c-4818-970c-12d12827d253 | text
┌───────────────StartDate─┬─Visits─┬─Users─┐
│ 2022-11-03 03:27:11.000 │     16 │     3 │
│ 2022-11-26 07:00:31.000 │      5 │     1 │
└─────────────────────────┴────────┴───────┘
In some cases, you might want to avoid pre-aggregating rows at insert time to shift the cost of aggregation from insert time
to merge time. Ordinarily, it is necessary to include the columns which are not part of the aggregation in the
GROUP BY
clause of the materialized view definition to avoid an error. However, you can make use of the
initializeAggregation
function with the setting
optimize_on_insert = 0
(the setting is enabled by default) to achieve this. Use of
GROUP BY
is no longer required in this case:
sql
CREATE MATERIALIZED VIEW test.visits_mv TO test.agg_visits
AS SELECT
StartDate,
CounterID,
initializeAggregation('sumState', Sign) AS Visits,
initializeAggregation('uniqState', UserID) AS Users
FROM test.visits;
:::note
When using
initializeAggregation
, an aggregate state is created for each individual row without grouping.
Each source row produces one row in the materialized view, and the actual aggregation happens later when the
AggregatingMergeTree
merges parts. This is only true if
optimize_on_insert = 0
.
:::
Related content {#related-content}
Blog:
Using Aggregate Combinators in ClickHouse | {"source_file": "aggregatingmergetree.md"} | [
-0.019395150244235992,
-0.05253973230719566,
-0.0026402806397527456,
0.0886494517326355,
-0.06252578645944595,
0.0250970758497715,
0.04376290366053581,
-0.007036240305751562,
0.03504323586821556,
0.010432176291942596,
0.0837957039475441,
-0.043196432292461395,
0.04268736392259598,
-0.03043... |
8ae27bf0-a80e-4080-9038-16a4861d5969 | description: 'Designed for thinning and aggregating/averaging (rollup) Graphite data.'
sidebar_label: 'GraphiteMergeTree'
sidebar_position: 90
slug: /engines/table-engines/mergetree-family/graphitemergetree
title: 'GraphiteMergeTree table engine'
doc_type: 'guide'
GraphiteMergeTree table engine
This engine is designed for thinning and aggregating/averaging (rollup)
Graphite
data. It may be helpful to developers who want to use ClickHouse as a data store for Graphite.
You can use any ClickHouse table engine to store the Graphite data if you do not need rollup, but if you need a rollup use
GraphiteMergeTree
. The engine reduces the volume of storage and increases the efficiency of queries from Graphite.
The engine inherits properties from
MergeTree
.
Creating a table {#creating-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
Path String,
Time DateTime,
Value Float64,
Version <Numeric_type>
...
) ENGINE = GraphiteMergeTree(config_section)
[PARTITION BY expr]
[ORDER BY expr]
[SAMPLE BY expr]
[SETTINGS name=value, ...]
See a detailed description of the
CREATE TABLE
query.
A table for the Graphite data should have the following columns for the following data:
Metric name (Graphite sensor). Data type:
String
.
Time of measuring the metric. Data type:
DateTime
.
Value of the metric. Data type:
Float64
.
Version of the metric. Data type: any numeric (ClickHouse saves the rows with the highest version or the last written one if versions are the same; other rows are deleted during the merge of data parts).
The names of these columns should be set in the rollup configuration.
GraphiteMergeTree parameters
config_section
- Name of the section in the configuration file where the rollup rules are set.
Query clauses
When creating a
GraphiteMergeTree
table, the same
clauses
are required, as when creating a
MergeTree
table.
Deprecated Method for Creating a Table
:::note
Do not use this method in new projects and, if possible, switch old projects to the method described above.
:::
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
EventDate Date,
Path String,
Time DateTime,
Value Float64,
Version
...
) ENGINE [=] GraphiteMergeTree(date-column [, sampling_expression], (primary, key), index_granularity, config_section)
```
All of the parameters except `config_section` have the same meaning as in `MergeTree`.
- `config_section` - Name of the section in the configuration file where the rollup rules are set.
Rollup configuration {#rollup-configuration}
The settings for rollup are defined by the
graphite_rollup
parameter in the server configuration. The parameter can have any name. You can create several configurations and use them for different tables.
Rollup configuration structure:
required-columns
patterns
Required columns {#required-columns} | {"source_file": "graphitemergetree.md"} | [
-0.05578206852078438,
-0.025167714804410934,
-0.039310213178396225,
0.044069740921258926,
-0.04552794620394707,
-0.029605191200971603,
-0.012083016335964203,
0.07551740109920502,
-0.12295997887849808,
0.07596581429243088,
0.05568283423781395,
0.010682289488613605,
0.027843739837408066,
-0.... |
8a4aa27a-83a6-4184-98d9-2a97e4be5c79 | Rollup configuration structure:
required-columns
patterns
Required columns {#required-columns}
path_column_name
{#path_column_name}
path_column_name
β The name of the column storing the metric name (Graphite sensor). Default value:
Path
.
time_column_name
{#time_column_name}
time_column_name
β The name of the column storing the time of measuring the metric. Default value:
Time
.
value_column_name
{#value_column_name}
value_column_name
β The name of the column storing the value of the metric at the time set in
time_column_name
. Default value:
Value
.
version_column_name
{#version_column_name}
version_column_name
β The name of the column storing the version of the metric. Default value:
Timestamp
.
Patterns {#patterns}
Structure of the
patterns
section:
text
pattern
rule_type
regexp
function
pattern
rule_type
regexp
age + precision
...
pattern
rule_type
regexp
function
age + precision
...
pattern
...
default
function
age + precision
...
:::important
Patterns must be strictly ordered:
Patterns without
function
or
retention
.
Patterns with both
function
and
retention
.
Pattern
default
.
:::
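For illustration, a minimal `graphite_rollup` section might look like the sketch below; the regexp, functions, and retention values are hypothetical, and the column-name elements are omitted where the defaults apply:

```xml
<graphite_rollup>
    <!-- Hypothetical pattern: sum traffic metrics, keep 60s precision for
         the first day, then roll up to hourly precision. -->
    <pattern>
        <regexp>^traffic\.</regexp>
        <function>sum</function>
        <retention>
            <age>0</age>
            <precision>60</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>3600</precision>
        </retention>
    </pattern>
    <!-- Fallback for all other metrics. -->
    <default>
        <function>avg</function>
        <retention>
            <age>0</age>
            <precision>60</precision>
        </retention>
    </default>
</graphite_rollup>
```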
When processing a row, ClickHouse checks the rules in the
pattern
sections. Each of
pattern
(including
default
) sections can contain
function
parameter for aggregation,
retention
parameters or both. If the metric name matches the
regexp
, the rules from the
pattern
section (or sections) are applied; otherwise, the rules from the
default
section are used.
Fields for
pattern
and
default
sections:
rule_type
- a rule's type. It is applied only to particular metrics. The engine uses it to separate plain and tagged metrics. Optional parameter. Default value:
all
.
It is unnecessary when performance is not critical, or when only one metric type is used, e.g. only plain metrics. By default, only one set of rules is created. Otherwise, if any of the special types is defined, two different sets are created: one for plain metrics (root.branch.leaf) and one for tagged metrics (root.branch.leaf;tag1=value1).
The default rules end up in both sets.
Valid values:
all
(default) - a universal rule, used when
rule_type
is omitted.
plain
- a rule for plain metrics. The field
regexp
is processed as a regular expression.
tagged
- a rule for tagged metrics (metrics are stored in DB in the format of
someName?tag1=value1&tag2=value2&tag3=value3
). Regular expression must be sorted by tags' names, first tag must be
__name__
if exists. The field
regexp
is processed as a regular expression.
-0.07079971581697464,
0.044955626130104065,
-0.07439479231834412,
-0.03569736331701279,
-0.028083348646759987,
-0.06057902052998543,
-0.049406860023736954,
0.02665485069155693,
-0.07184233516454697,
0.048866283148527145,
0.035184428095817566,
-0.04848126694560051,
-0.057348281145095825,
-0... |