## Restore from the incremental backup {#restore-from-the-incremental-backup}
This command restores the incremental backup into a new table, `data3`. Note that when an incremental backup is restored, the base backup is also included. Specify only the incremental backup when restoring:

```sql
RESTORE TABLE data AS data3 FROM S3('https://mars-doc-test.s3.amazonaws.com/backup-S3/my_incremental', 'ABC123', 'Abc+123')
```
```response
┌─id───────────────────────────────────┬─status───┐
│ ff0c8c39-7dff-4324-a241-000796de11ca │ RESTORED │
└──────────────────────────────────────┴──────────┘
```
## Verify the count {#verify-the-count}
There were two inserts into the original table `data`, one with 1,000 rows and one with 100 rows, for a total of 1,100. Verify that the restored table has 1,100 rows:

```sql
SELECT count()
FROM data3
```
```response
┌─count()─┐
│    1100 │
└─────────┘
```
## Verify the content {#verify-the-content}

This compares the content of the original table, `data`, with the restored table `data3`:

```sql
SELECT throwIf((
    SELECT groupArray(tuple(*))
    FROM data
    ) != (
    SELECT groupArray(tuple(*))
    FROM data3
), 'Data does not match after BACKUP/RESTORE')
```
## BACKUP/RESTORE using an S3 disk {#backuprestore-using-an-s3-disk}

It is also possible to `BACKUP`/`RESTORE` to S3 by configuring an S3 disk in the ClickHouse storage configuration. Configure the disk by adding a file to `/etc/clickhouse-server/config.d`:

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3_plain>
                <type>s3_plain</type>
                <!-- endpoint and credentials were elided in the source excerpt -->
            </s3_plain>
        </disks>
    </storage_configuration>
    <backups>
        <allowed_disk>s3_plain</allowed_disk>
    </backups>
</clickhouse>
```
And then `BACKUP`/`RESTORE` as usual:

```sql
BACKUP TABLE data TO Disk('s3_plain', 'cloud_backup');
RESTORE TABLE data AS data_restored FROM Disk('s3_plain', 'cloud_backup');
```
:::note
But keep in mind that:
- This disk should not be used for `MergeTree` itself, only for `BACKUP`/`RESTORE`.
- If your tables are backed by S3 storage, it will try to use S3 server-side copy with `CopyObject` calls to copy parts to the destination bucket using its credentials. If an authentication error occurs, it will fall back to copying with a buffer (downloading parts and re-uploading them), which is very inefficient. In this case, you may want to ensure you have `read` permissions on the source bucket with the credentials of the destination bucket.
:::
## Using named collections {#using-named-collections}

Named collections can be used for `BACKUP/RESTORE` parameters. See here for an example.
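As a minimal sketch of what that looks like (the collection name `backup_s3` and its values are hypothetical; the exact keys accepted by a named collection may vary by version):

```sql
CREATE NAMED COLLECTION backup_s3 AS
    url = 'https://mars-doc-test.s3.amazonaws.com/backup-S3/',
    access_key_id = 'ABC123',
    secret_access_key = 'Abc+123';

-- The collection name then replaces the inline URL and credentials:
BACKUP TABLE data TO S3(backup_s3, 'my_backup');
```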
## Alternatives {#alternatives}

ClickHouse stores data on disk, and there are many ways to back up disks. These are some alternatives that have been used in the past, and that may fit in well in your environment.
## Duplicating source data somewhere else {#duplicating-source-data-somewhere-else}
Often data that is ingested into ClickHouse is delivered through some sort of persistent queue, such as Apache Kafka. In this case it is possible to configure an additional set of subscribers that will read the same data stream while it is being written to ClickHouse and store it in cold storage somewhere. Most companies already have some default recommended cold storage, which could be an object store or a distributed filesystem like HDFS.
## Filesystem snapshots {#filesystem-snapshots}

Some local filesystems provide snapshot functionality (for example, ZFS), but they might not be the best choice for serving live queries. A possible solution is to create additional replicas with this kind of filesystem and exclude them from the `Distributed` tables that are used for `SELECT` queries. Snapshots on such replicas will be out of reach of any queries that modify data. As a bonus, these replicas might have special hardware configurations with more disks attached per server, which would be cost-effective.
For smaller volumes of data, a simple `INSERT INTO ... SELECT ...` to remote tables might work as well.
## Manipulations with parts {#manipulations-with-parts}

ClickHouse allows using the `ALTER TABLE ... FREEZE PARTITION ...` query to create a local copy of table partitions. This is implemented using hardlinks to the `/var/lib/clickhouse/shadow/` folder, so it usually does not consume extra disk space for old data. The created copies of files are not handled by the ClickHouse server, so you can just leave them there: you will have a simple backup that does not require any additional external system, but it will still be prone to hardware issues. For this reason, it's better to remotely copy them to another location and then remove the local copies. Distributed filesystems and object stores are still good options for this, but normal attached file servers with a large enough capacity might work as well (in this case the transfer will occur via the network filesystem or maybe `rsync`).

Data can be restored from backup using the `ALTER TABLE ... ATTACH PARTITION ...` query. For more information about queries related to partition manipulations, see the ALTER documentation.

A third-party tool is available to automate this approach: clickhouse-backup.
## Settings to disallow concurrent backup/restore {#settings-to-disallow-concurrent-backuprestore}

To disallow concurrent backup/restore, you can use these settings respectively:

```xml
<clickhouse>
    <backups>
        <allow_concurrent_backups>false</allow_concurrent_backups>
        <allow_concurrent_restores>false</allow_concurrent_restores>
    </backups>
</clickhouse>
```
The default value for both is true, so by default concurrent backup/restores are allowed.
When these settings are false on a cluster, only 1 backup/restore is allowed to run on a cluster at a time.
## Configuring BACKUP/RESTORE to use an AzureBlobStorage Endpoint {#configuring-backuprestore-to-use-an-azureblobstorage-endpoint}

To write backups to an AzureBlobStorage container you need the following pieces of information:
- AzureBlobStorage endpoint connection string / URL,
- Container,
- Path,
- Account name (if URL is specified),
- Account key (if URL is specified).

The destination for a backup will be specified like this:

```sql
AzureBlobStorage('<connection string>/<url>', '<container>', '<path>', '<account name>', '<account key>')
```

```sql
BACKUP TABLE data TO AzureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;',
    'testcontainer', 'data_backup');
RESTORE TABLE data AS data_restored FROM AzureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;',
    'testcontainer', 'data_backup');
```
## Backing up system tables {#backup-up-system-tables}

System tables can also be included in your backup and restore workflows, but their inclusion depends on your specific use case.

### Backing up log tables {#backing-up-log-tables}

System tables that store historic data, such as those with a `_log` suffix (e.g., `query_log`, `part_log`), can be backed up and restored like any other table. If your use case relies on analyzing historic data—for example, using `query_log` to track query performance or debug issues—it's recommended to include these tables in your backup strategy. However, if historic data from these tables is not required, they can be excluded to save backup storage space.
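For example, a log table can be backed up on its own like any other table; the destination disk name `backups` below is hypothetical and must be configured as an allowed backup disk:

```sql
BACKUP TABLE system.query_log TO Disk('backups', 'query_log_backup');
```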
### Backing up access management tables {#backing-up-access-management-tables}

System tables related to access management, such as `users`, `roles`, `row_policies`, `settings_profiles`, and `quotas`, receive special treatment during backup and restore operations. When these tables are included in a backup, their content is exported to a special `accessXX.txt` file, which encapsulates the equivalent SQL statements for creating and configuring the access entities. Upon restoration, the restore process interprets these files and re-applies the SQL commands to recreate the users, roles, and other configurations.

This feature ensures that the access control configuration of a ClickHouse cluster can be backed up and restored as part of the cluster's overall setup.
Note: This functionality only works for configurations managed through SQL commands (referred to as "SQL-driven Access Control and Account Management"). Access configurations defined in ClickHouse server configuration files (e.g. `users.xml`) are not included in backups and cannot be restored through this method.
---
description: 'When performing queries, ClickHouse uses different caches.'
sidebar_label: 'Caches'
sidebar_position: 65
slug: /operations/caches
title: 'Cache types'
keywords: ['cache']
doc_type: 'reference'
---

# Cache types
When performing queries, ClickHouse uses different caches to speed up queries
and reduce the need to read from or write to disk.

The main cache types are:

- `mark_cache` — Cache of marks used by table engines of the `MergeTree` family.
- `uncompressed_cache` — Cache of uncompressed data used by table engines of the `MergeTree` family.
- Operating system page cache (used indirectly, for files with actual data).

There are also a host of additional cache types:

- DNS cache.
- Regexp cache.
- Compiled expressions cache.
- Vector similarity index cache.
- Text index cache.
- Avro format schemas cache.
- Dictionaries data cache.
- Schema inference cache.
- Filesystem cache over S3, Azure, Local and other disks.
- Userspace page cache.
- Query cache.
- Query condition cache.
- Format schema cache.

Should you wish to drop one of the caches, for performance tuning, troubleshooting, or data consistency reasons,
you can use the `SYSTEM DROP ... CACHE` statement.
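For example, individual caches can be cleared with statements such as:

```sql
SYSTEM DROP MARK CACHE;
SYSTEM DROP UNCOMPRESSED CACHE;
SYSTEM DROP DNS CACHE;
SYSTEM DROP QUERY CACHE;
```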
---
description: 'Documentation for highlight-next-line'
sidebar_label: 'External disks for storing data'
sidebar_position: 68
slug: /operations/storing-data
title: 'External disks for storing data'
doc_type: 'guide'
---
Data processed in ClickHouse is usually stored in the local file system of the
machine on which ClickHouse server is running. That requires large-capacity disks,
which can be expensive. To avoid storing data locally, various storage options are supported:

1. Amazon S3 object storage.
2. Azure Blob Storage.
3. Unsupported: The Hadoop Distributed File System (HDFS).

:::note
ClickHouse also has support for external table engines, which are different from
the external storage option described on this page, as they allow reading data
stored in some general file format (like Parquet). On this page we are describing
storage configuration for the ClickHouse `MergeTree` family or `Log` family tables.

- To work with data stored on Amazon S3 disks, use the `S3` table engine.
- To work with data stored in Azure Blob Storage, use the `AzureBlobStorage` table engine.
- To work with data in the Hadoop Distributed File System (unsupported), use the `HDFS` table engine.
:::
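To illustrate the difference, a table engine reads files in place rather than using a configured disk. A sketch of the `S3` table engine over a Parquet file (the bucket URL and column names are hypothetical):

```sql
CREATE TABLE s3_parquet_data (id UInt32, value String)
ENGINE = S3('https://my-bucket.s3.amazonaws.com/data/file.parquet', 'Parquet');
```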
## Configure external storage {#configuring-external-storage}

`MergeTree` and `Log` family table engines can store data to `S3`, `AzureBlobStorage`, `HDFS` (unsupported) using a disk with types `s3`, `azure_blob_storage`, `hdfs` (unsupported) respectively.

Disk configuration requires:

1. A `type` section, equal to one of `s3`, `azure_blob_storage`, `hdfs` (unsupported), `local_blob_storage`, `web`.
2. Configuration of a specific external storage type.

Starting from ClickHouse version 24.1, it is possible to use a new configuration option.
It requires specifying:

1. A `type` equal to `object_storage`.
2. `object_storage_type`, equal to one of `s3`, `azure_blob_storage` (or just `azure` from `24.3`), `hdfs` (unsupported), `local_blob_storage` (or just `local` from `24.3`), `web`.

Optionally, `metadata_type` can be specified (it is equal to `local` by default), but it can also be set to `plain`, `web` and, starting from `24.4`, `plain_rewritable`.
Usage of the `plain` metadata type is described in the plain storage section. The `web` metadata type can be used only with the `web` object storage type. The `local` metadata type stores metadata files locally (each metadata file contains a mapping to files in object storage and some additional meta information about them).

For example:

```xml
<s3>
    <type>s3</type>
    <endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
    <use_environment_credentials>1</use_environment_credentials>
</s3>
```
is equal to the following configuration (from version `24.1`):
```xml
<s3>
    <type>object_storage</type>
    <object_storage_type>s3</object_storage_type>
    <metadata_type>local</metadata_type>
    <endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
    <use_environment_credentials>1</use_environment_credentials>
</s3>
```
The following configuration:

```xml
<s3_plain>
    <type>s3_plain</type>
    <endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
    <use_environment_credentials>1</use_environment_credentials>
</s3_plain>
```

is equal to:

```xml
<s3_plain>
    <type>object_storage</type>
    <object_storage_type>s3</object_storage_type>
    <metadata_type>plain</metadata_type>
    <endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
    <use_environment_credentials>1</use_environment_credentials>
</s3_plain>
```
An example of full storage configuration will look like:

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3>
                <type>s3</type>
                <endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
                <use_environment_credentials>1</use_environment_credentials>
            </s3>
        </disks>
        <policies>
            <s3>
                <volumes>
                    <main>
                        <disk>s3</disk>
                    </main>
                </volumes>
            </s3>
        </policies>
    </storage_configuration>
</clickhouse>
```

Starting with version 24.1, it can also look like:

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3>
                <type>object_storage</type>
                <object_storage_type>s3</object_storage_type>
                <metadata_type>local</metadata_type>
                <endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
                <use_environment_credentials>1</use_environment_credentials>
            </s3>
        </disks>
        <policies>
            <s3>
                <volumes>
                    <main>
                        <disk>s3</disk>
                    </main>
                </volumes>
            </s3>
        </policies>
    </storage_configuration>
</clickhouse>
```

To make a specific kind of storage a default option for all `MergeTree` tables,
add the following section to the configuration file:

```xml
<clickhouse>
    <merge_tree>
        <storage_policy>s3</storage_policy>
    </merge_tree>
</clickhouse>
```

If you want to configure a specific storage policy for a specific table,
you can define it in settings while creating the table:

```sql
CREATE TABLE test (a Int32, b String)
ENGINE = MergeTree() ORDER BY a
SETTINGS storage_policy = 's3';
```
You can also use `disk` instead of `storage_policy`. In this case it is not necessary
to have the `storage_policy` section in the configuration file, and a `disk` section is enough.

```sql
CREATE TABLE test (a Int32, b String)
ENGINE = MergeTree() ORDER BY a
SETTINGS disk = 's3';
```
## Dynamic Configuration {#dynamic-configuration}

It is also possible to specify a storage configuration without a predefined disk
in the configuration file; instead, the disk can be configured in the
`CREATE`/`ATTACH` query settings.

The following example query builds on the above dynamic disk configuration and
shows how to use a local disk to cache data from a table stored at a URL.

```sql
ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
(
    price UInt32,
    date Date,
    postcode1 LowCardinality(String),
    postcode2 LowCardinality(String),
    type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
    is_new UInt8,
    duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
    addr1 String,
    addr2 String,
    street LowCardinality(String),
    locality LowCardinality(String),
    town LowCardinality(String),
    district LowCardinality(String),
    county LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY (postcode1, postcode2, addr1, addr2)
-- highlight-start
SETTINGS disk = disk(
    type=web,
    endpoint='https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/'
);
-- highlight-end
```
The example below adds a cache to external storage.

```sql
ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
(
    price UInt32,
    date Date,
    postcode1 LowCardinality(String),
    postcode2 LowCardinality(String),
    type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
    is_new UInt8,
    duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
    addr1 String,
    addr2 String,
    street LowCardinality(String),
    locality LowCardinality(String),
    town LowCardinality(String),
    district LowCardinality(String),
    county LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY (postcode1, postcode2, addr1, addr2)
-- highlight-start
SETTINGS disk = disk(
    type=cache,
    max_size='1Gi',
    path='/var/lib/clickhouse/custom_disk_cache/',
    disk=disk(
        type=web,
        endpoint='https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/'
    )
);
-- highlight-end
```
In the settings highlighted above, notice that the disk of `type=web` is nested within
the disk of `type=cache`.

:::note
The example uses `type=web`, but any disk type can be configured as dynamic,
including local disk. Local disks require a path argument to be inside the
server config parameter `custom_local_disks_base_directory`, which has no
default, so set that also when using local disk.
:::
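For instance, the server-level setting can be sketched as follows (the directory below is a hypothetical choice):

```xml
<clickhouse>
    <custom_local_disks_base_directory>/var/lib/clickhouse/custom_disks/</custom_local_disks_base_directory>
</clickhouse>
```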
1bbc51ef-02b7-49c3-bab6-faceaa9aac59 | A combination of config-based configuration and sql-defined configuration is
also possible:
sql
ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
(
price UInt32,
date Date,
postcode1 LowCardinality(String),
postcode2 LowCardinality(String),
type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
is_new UInt8,
duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
addr1 String,
addr2 String,
street LowCardinality(String),
locality LowCardinality(String),
town LowCardinality(String),
district LowCardinality(String),
county LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY (postcode1, postcode2, addr1, addr2)
-- highlight-start
SETTINGS disk = disk(
type=cache,
max_size='1Gi',
path='/var/lib/clickhouse/custom_disk_cache/',
disk=disk(
type=web,
endpoint='https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/'
)
);
-- highlight-end
where
web
is from the server configuration file:
xml
<storage_configuration>
<disks>
<web>
<type>web</type>
<endpoint>'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/'</endpoint>
</web>
</disks>
</storage_configuration>
## Using S3 Storage {#s3-storage}

### Required parameters {#required-parameters-s3}

| Parameter           | Description                                                                                                     |
|---------------------|-----------------------------------------------------------------------------------------------------------------|
| `endpoint`          | S3 endpoint URL in `path` or `virtual hosted` styles. Should include the bucket and root path for data storage. |
| `access_key_id`     | S3 access key ID used for authentication.                                                                       |
| `secret_access_key` | S3 secret access key used for authentication.                                                                   |

### Optional parameters {#optional-parameters-s3}
bbfdf9e7-567d-4d20-b59c-85365ab7806b | | Parameter | Description | Default Value |
|-------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------|
|
region
| S3 region name. | - |
|
support_batch_delete
| Controls whether to check for batch delete support. Set to
false
when using Google Cloud Storage (GCS) as GCS doesn't support batch deletes. |
true
|
|
use_environment_credentials
| Reads AWS credentials from environment variables:
AWS_ACCESS_KEY_ID
,
AWS_SECRET_ACCESS_KEY
, and
AWS_SESSION_TOKEN
if they exist. |
false
|
|
use_insecure_imds_request
| If
true
, uses insecure IMDS request when obtaining credentials from Amazon EC2 metadata. |
false
|
|
expiration_window_seconds
| Grace period (in seconds) for checking if expiration-based credentials have expired. |
120
|
|
proxy
| Proxy configuration for S3 endpoint. Each
uri
element inside
proxy
block should contain a proxy URL. | - |
|
connect_timeout_ms
| Socket connect timeout in milliseconds. |
10000 | {"source_file": "storing-data.md"} | [
| `request_timeout_ms` | Request timeout in milliseconds. | `5000` (5 seconds) |
| `retry_attempts` | Number of retry attempts for failed requests. | `10` |
| `single_read_retries` | Number of retry attempts for connection drops during read. | `4` |
| `min_bytes_for_seek` | Minimum number of bytes to use seek operation instead of sequential read. | `1 MB` |
| `metadata_path` | Local filesystem path to store S3 metadata files. | `/var/lib/clickhouse/disks/<disk_name>/` |
| `skip_access_check` | If `true`, skips disk access checks during startup. | `false` |
| `header` | Adds a specified HTTP header to requests. Can be specified multiple times. | - |
| `server_side_encryption_customer_key_base64` | Required headers for accessing S3 objects with SSE-C encryption. | - |
| `server_side_encryption_kms_key_id` | Required headers for accessing S3 objects with SSE-KMS encryption. An empty string uses an AWS managed S3 key. | - |
| `server_side_encryption_kms_encryption_context` | Encryption context header for SSE-KMS (used with `server_side_encryption_kms_key_id`). | - |
| `server_side_encryption_kms_bucket_key_enabled` | Enables S3 bucket keys for SSE-KMS (used with `server_side_encryption_kms_key_id`). | Matches bucket-level setting |
| `s3_max_put_rps` | Maximum PUT requests per second before throttling. | `0` (unlimited) |
| `s3_max_put_burst` | Maximum concurrent PUT requests before hitting the RPS limit. | Same as `s3_max_put_rps` |
| `s3_max_get_rps` | Maximum GET requests per second before throttling. | `0` (unlimited) |
| `s3_max_get_burst` | Maximum concurrent GET requests before hitting the RPS limit. | Same as `s3_max_get_rps` |
| `read_resource` | Resource name for scheduling read requests. | Empty string (disabled) |
| `write_resource` | Resource name for scheduling write requests. | Empty string (disabled) |
| `key_template` | Defines the object key generation format using `re2` syntax. Requires the `storage_metadata_write_full_object_key` flag. Incompatible with `root path` in `endpoint`. Requires `key_compatibility_prefix`. | - |
| `key_compatibility_prefix` | Required with `key_template`. Specifies the previous `root path` from `endpoint` for reading older metadata versions. | - |
| `read_only` | Only allows reading from the disk. | - |

:::note
Google Cloud Storage (GCS) is also supported using the type `s3`. See GCS backed MergeTree.
:::
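The `proxy` parameter described above takes one or more `uri` elements. A sketch of an `s3` disk with proxies (the endpoint and proxy hosts are hypothetical):

```xml
<s3>
    <type>s3</type>
    <endpoint>https://my-bucket.s3.amazonaws.com/data/</endpoint>
    <proxy>
        <uri>http://proxy1:8080</uri>
        <uri>http://proxy2:8080</uri>
    </proxy>
</s3>
```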
46af007a-c253-4221-b973-06b80833e3cd | Using Plain Storage {#plain-storage}
In
22.10
a new disk type
s3_plain
was introduced, which provides a write-once storage.
Configuration parameters for it are the same as for the
s3
disk type.
Unlike the
s3
disk type, it stores data as is. In other words,
instead of having randomly generated blob names, it uses normal file names
(the same way as ClickHouse stores files on local disk) and does not store any
metadata locally. For example, it is derived from data on
s3
.
This disk type allows keeping a static version of the table, as it does not
allow executing merges on the existing data and does not allow inserting of new
data. A use case for this disk type is to create backups on it, which can be done
via
BACKUP TABLE data TO Disk('plain_disk_name', 'backup_name')
. Afterward,
you can do
RESTORE TABLE data AS data_restored FROM Disk('plain_disk_name', 'backup_name')
or use
ATTACH TABLE data (...) ENGINE = MergeTree() SETTINGS disk = 'plain_disk_name'
.
Configuration:
xml
<s3_plain>
<type>s3_plain</type>
<endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
<use_environment_credentials>1</use_environment_credentials>
</s3_plain>
Starting from
24.1
it is possible configure any object storage disk (
s3
,
azure
,
hdfs
(unsupported),
local
) using
the
plain
metadata type.
Configuration:
xml
<s3_plain>
<type>object_storage</type>
<object_storage_type>azure</object_storage_type>
<metadata_type>plain</metadata_type>
<endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
<use_environment_credentials>1</use_environment_credentials>
</s3_plain>
## Using S3 Plain Rewritable Storage {#s3-plain-rewritable-storage}

A new disk type `s3_plain_rewritable` was introduced in `24.4`.
Similar to the `s3_plain` disk type, it does not require additional storage for
metadata files; instead, metadata is stored in S3.
Unlike the `s3_plain` disk type, `s3_plain_rewritable` allows executing merges
and supports `INSERT` operations.
Mutations and replication of tables are not supported.

A use case for this disk type is for non-replicated `MergeTree` tables. Although
the `s3` disk type is suitable for non-replicated `MergeTree` tables, you may opt
for the `s3_plain_rewritable` disk type if you do not require local metadata
for the table and are willing to accept a limited set of operations. This could
be useful, for example, for system tables.
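Assuming a disk named `s3_plain_rewritable` is configured in the storage configuration, a non-replicated table can be pointed at it directly (the table and column names below are hypothetical):

```sql
CREATE TABLE events (ts DateTime, id UInt64)
ENGINE = MergeTree
ORDER BY (ts, id)
SETTINGS disk = 's3_plain_rewritable';
```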
Configuration:

```xml
<s3_plain_rewritable>
    <type>s3_plain_rewritable</type>
    <endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
    <use_environment_credentials>1</use_environment_credentials>
</s3_plain_rewritable>
```

is equal to
```xml
<s3_plain_rewritable>
    <type>object_storage</type>
    <object_storage_type>s3</object_storage_type>
    <metadata_type>plain_rewritable</metadata_type>
    <endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
    <use_environment_credentials>1</use_environment_credentials>
</s3_plain_rewritable>
```

Starting from `24.5` it is possible to configure any object storage disk (`s3`, `azure`, `local`) using the `plain_rewritable` metadata type.
## Using Azure Blob Storage {#azure-blob-storage}

`MergeTree` family table engines can store data to Azure Blob Storage using a disk with type `azure_blob_storage`.

Configuration markup:

```xml
<storage_configuration>
    ...
    <disks>
        <blob_storage_disk>
            <type>azure_blob_storage</type>
            <storage_account_url>http://account.blob.core.windows.net</storage_account_url>
            <container_name>container</container_name>
            <account_name>account</account_name>
            <account_key>pass123</account_key>
            <metadata_path>/var/lib/clickhouse/disks/blob_storage_disk/</metadata_path>
            <cache_path>/var/lib/clickhouse/disks/blob_storage_disk/cache/</cache_path>
            <skip_access_check>false</skip_access_check>
        </blob_storage_disk>
    </disks>
    ...
</storage_configuration>
```
Connection parameters {#azure-blob-storage-connection-parameters}

| Parameter | Description | Default Value |
|-----------|-------------|---------------|
| `storage_account_url` (Required) | Azure Blob Storage account URL. Examples: `http://account.blob.core.windows.net` or `http://azurite1:10000/devstoreaccount1`. | - |
| `container_name` | Target container name. | `default-container` |
| `container_already_exists` | Controls container creation behavior: `false` creates a new container, `true` connects directly to an existing container, and when unset the disk checks whether the container exists and creates it if needed. | - |
Authentication parameters (the disk will try all available methods and Managed Identity Credential):
| Parameter | Description |
|-----------|-------------|
| `connection_string` | For authentication using a connection string. |
| `account_name` | For authentication using Shared Key (used with `account_key`). |
| `account_key` | For authentication using Shared Key (used with `account_name`). |
Limit parameters {#azure-blob-storage-limit-parameters}

| Parameter | Description |
|-----------|-------------|
| `s3_max_single_part_upload_size` | Maximum size of a single block upload to Blob Storage. |
| `min_bytes_for_seek` | Minimum size of a seekable region. |
| `max_single_read_retries` | Maximum number of attempts to read a chunk of data from Blob Storage. |
| `max_single_download_retries` | Maximum number of attempts to download a readable buffer from Blob Storage. |
| `thread_pool_size` | Maximum number of threads for `IDiskRemote` instantiation. |
| `s3_max_inflight_parts_for_one_file` | Maximum number of concurrent put requests for a single object. |
Other parameters {#azure-blob-storage-other-parameters}

| Parameter | Description | Default Value |
|-----------|-------------|---------------|
| `metadata_path` | Local filesystem path to store metadata files for Blob Storage. | `/var/lib/clickhouse/disks/<disk_name>/` |
| `skip_access_check` | If `true`, skips disk access checks during startup. | `false` |
| `read_resource` | Resource name for scheduling read requests. | Empty string (disabled) |
| `write_resource` | Resource name for scheduling write requests. | Empty string (disabled) |
| `metadata_keep_free_space_bytes` | Amount of free metadata disk space to reserve. | - |
Examples of working configurations can be found in the integration tests directory (see e.g. `test_merge_tree_azure_blob_storage` or `test_azure_blob_storage_zero_copy_replication`).
:::note Zero-copy replication is not ready for production
Zero-copy replication is disabled by default in ClickHouse version 22.8 and higher. This feature is not recommended for production use.
:::
Using HDFS storage (Unsupported) {#using-hdfs-storage-unsupported}

In this sample configuration:

- the disk is of type `hdfs` (unsupported)
- the data is hosted at `hdfs://hdfs1:9000/clickhouse/`

Note that HDFS is unsupported, so there may be issues when using it. Feel free to make a pull request with a fix if any issue arises.
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <hdfs>
                <type>hdfs</type>
                <endpoint>hdfs://hdfs1:9000/clickhouse/</endpoint>
                <skip_access_check>true</skip_access_check>
            </hdfs>
            <hdd>
                <type>local</type>
                <path>/</path>
            </hdd>
        </disks>
        <policies>
            <hdfs>
                <volumes>
                    <main>
                        <disk>hdfs</disk>
                    </main>
                    <external>
                        <disk>hdd</disk>
                    </external>
                </volumes>
            </hdfs>
        </policies>
    </storage_configuration>
</clickhouse>
```
Keep in mind that HDFS may not work in corner cases.

Using Data Encryption {#encrypted-virtual-file-system}

You can encrypt the data stored on S3 or HDFS (unsupported) external disks, or on a local disk. To turn on the encryption mode, in the configuration file you must define a disk with the type `encrypted` and choose a disk on which the data will be saved. An `encrypted` disk ciphers all written files on the fly, and when you read files from an `encrypted` disk it deciphers them automatically. So you can work with an `encrypted` disk like with a normal one.
Example of disk configuration:

```xml
<disks>
    <disk1>
        <type>local</type>
        <path>/path1/</path>
    </disk1>
    <disk2>
        <type>encrypted</type>
        <disk>disk1</disk>
        <path>path2/</path>
        <key>_16_ascii_chars_</key>
    </disk2>
</disks>
```
For example, when ClickHouse writes data from some table to a file `store/all_1_1_0/data.bin` on `disk1`, then in fact this file will be written to the physical disk along the path `/path1/store/all_1_1_0/data.bin`.

When writing the same file to `disk2`, it will actually be written to the physical disk at the path `/path1/path2/store/all_1_1_0/data.bin` in encrypted mode.
Required Parameters {#required-parameters-encrypted-disk}

| Parameter | Type | Description |
|-----------|------|-------------|
| `type` | String | Must be set to `encrypted` to create an encrypted disk. |
| `disk` | String | Type of disk to use for underlying storage. |
| `key` | Uint64 | Key for encryption and decryption. Can be specified in hexadecimal using `key_hex`. Multiple keys can be specified using the `id` attribute. |
Optional Parameters {#optional-parameters-encrypted-disk}

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `path` | String | Root directory | Location on the disk where data will be saved. |
| `current_key_id` | String | - | The key ID used for encryption. All specified keys can be used for decryption. |
| `algorithm` | Enum | `AES_128_CTR` | Encryption algorithm. Options: `AES_128_CTR` (16-byte key), `AES_192_CTR` (24-byte key), `AES_256_CTR` (32-byte key). |
Example of disk configuration:

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <disk_s3>
                <type>s3</type>
                <endpoint>...</endpoint>
            </disk_s3>
            <disk_s3_encrypted>
                <type>encrypted</type>
                <disk>disk_s3</disk>
                <algorithm>AES_128_CTR</algorithm>
                <key_hex id="0">00112233445566778899aabbccddeeff</key_hex>
                <key_hex id="1">ffeeddccbbaa99887766554433221100</key_hex>
                <current_key_id>1</current_key_id>
            </disk_s3_encrypted>
        </disks>
    </storage_configuration>
</clickhouse>
```
Using local cache {#using-local-cache}

It is possible to configure a local cache over disks in the storage configuration starting from version 22.3.
For versions 22.3 - 22.7 the cache is supported only for the `s3` disk type. For versions >= 22.8 the cache is supported for any disk type: S3, Azure, Local, Encrypted, etc.
For versions >= 23.5 the cache is supported only for remote disk types: S3, Azure, HDFS (unsupported).
The cache uses the `LRU` cache policy.
Example of configuration for versions later or equal to 22.8:

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3>
                <type>s3</type>
                <endpoint>...</endpoint>
                ... s3 configuration ...
            </s3>
            <cache>
                <type>cache</type>
                <disk>s3</disk>
                <path>/s3_cache/</path>
                <max_size>10Gi</max_size>
            </cache>
        </disks>
        <policies>
            <s3_cache>
                <volumes>
                    <main>
                        <disk>cache</disk>
                    </main>
                </volumes>
            </s3_cache>
        </policies>
    </storage_configuration>
</clickhouse>
```
Example of configuration for versions earlier than 22.8:

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3>
                <type>s3</type>
                <endpoint>...</endpoint>
                ... s3 configuration ...
                <data_cache_enabled>1</data_cache_enabled>
                <data_cache_max_size>10737418240</data_cache_max_size>
            </s3>
        </disks>
        <policies>
            <s3_cache>
                <volumes>
                    <main>
                        <disk>s3</disk>
                    </main>
                </volumes>
            </s3_cache>
        </policies>
    </storage_configuration>
</clickhouse>
```
File Cache **disk configuration settings**:

These settings should be defined in the disk configuration section.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `path` | String | - | **Required**. Path to the directory where cache will be stored. |
| `max_size` | Size | - | **Required**. Maximum cache size in bytes or readable format (e.g., `10Gi`). Files are evicted using LRU policy when the limit is reached. Supports `ki`, `Mi`, `Gi` formats (since v22.10). |
| `cache_on_write_operations` | Boolean | `false` | Enables write-through cache for `INSERT` queries and background merges. Can be overridden per query with `enable_filesystem_cache_on_write_operations`. |
| `enable_filesystem_query_cache_limit` | Boolean | `false` | Enables per-query cache size limits based on `max_query_cache_size`. |
| `enable_cache_hits_threshold` | Boolean | `false` | When enabled, data is cached only after being read multiple times. |
| `cache_hits_threshold` | Integer | `0` | Number of reads required before data is cached (requires `enable_cache_hits_threshold`). |
| `enable_bypass_cache_with_threshold` | Boolean | `false` | Skips cache for large read ranges. |
| `bypass_cache_threshold` | Size | `256Mi` | Read range size that triggers cache bypass (requires `enable_bypass_cache_with_threshold`). |
| `max_file_segment_size` | Size | `8Mi` | Maximum size of a single cache file in bytes or readable format. |
| `max_elements` | Integer | `10000000` | Maximum number of cache files. |
| `load_metadata_threads` | Integer | `16` | Number of threads for loading cache metadata at startup. |
Note: Size values support units like `ki`, `Mi`, `Gi`, etc. (e.g., `10Gi`).
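These size suffixes are binary (powers of 1024). A small Python sketch of the convention (not ClickHouse's actual parser) makes the arithmetic explicit:

```python
# Parse human-readable size strings such as "10Gi" or "256Mi" into bytes.
import re

UNITS = {"": 1, "ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}


def parse_size(text):
    match = re.fullmatch(r"(\d+)\s*([kMGT]i)?", text)
    if not match:
        raise ValueError(f"unrecognized size: {text!r}")
    number, unit = match.groups()
    return int(number) * UNITS[unit or ""]


assert parse_size("10Gi") == 10 * 1024**3      # the max_size example above
assert parse_size("256Mi") == 256 * 1024**2    # the bypass_cache_threshold default
```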
File Cache Query/Profile Settings {#file-cache-query-profile-settings}

| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `enable_filesystem_cache` | Boolean | `true` | Enables/disables cache usage per query, even when using a `cache` disk type. |
| `read_from_filesystem_cache_if_exists_otherwise_bypass_cache` | Boolean | `false` | When enabled, uses cache only if data exists; new data won't be cached. |
| `enable_filesystem_cache_on_write_operations` | Boolean | `false` (Cloud: `true`) | Enables write-through cache. Requires `cache_on_write_operations` in cache config. |
| `enable_filesystem_cache_log` | Boolean | `false` | Enables detailed cache usage logging to `system.filesystem_cache_log`. |
| `max_query_cache_size` | Size | `false` | Maximum cache size per query. Requires `enable_filesystem_query_cache_limit` in cache config. |
| `skip_download_if_exceeds_query_cache` | Boolean | `true` | Controls behavior when `max_query_cache_size` is reached: `true` stops downloading new data, `false` evicts old data to make space for new data. |
:::warning
Cache configuration settings and cache query settings correspond to the latest ClickHouse version; for earlier versions something might not be supported.
:::
Cache system tables {#cache-system-tables-file-cache}
| Table Name | Description | Requirements |
|------------|-------------|--------------|
| `system.filesystem_cache` | Displays the current state of the filesystem cache. | None |
| `system.filesystem_cache_log` | Provides detailed cache usage statistics per query. | Requires `enable_filesystem_cache_log = true` |
Cache commands {#cache-commands-file-cache}

`SYSTEM DROP FILESYSTEM CACHE (<cache_name>) (ON CLUSTER)` -- `ON CLUSTER` {#system-drop-filesystem-cache-on-cluster}

This command is only supported when no `<cache_name>` is provided.

`SHOW FILESYSTEM CACHES` {#show-filesystem-caches}

Show a list of filesystem caches which were configured on the server. (For versions less than or equal to `22.8` the command is named `SHOW CACHES`.)

```sql title="Query"
SHOW FILESYSTEM CACHES
```

```text title="Response"
┌─Caches────┐
│ s3_cache  │
└───────────┘
```
`DESCRIBE FILESYSTEM CACHE '<cache_name>'` {#describe-filesystem-cache}

Show cache configuration and some general statistics for a specific cache. The cache name can be taken from the `SHOW FILESYSTEM CACHES` command. (For versions less than or equal to `22.8` the command is named `DESCRIBE CACHE`.)

```sql title="Query"
DESCRIBE FILESYSTEM CACHE 's3_cache'
```

```text title="Response"
┌────max_size─┬─max_elements─┬─max_file_segment_size─┬─boundary_alignment─┬─cache_on_write_operations─┬─cache_hits_threshold─┬─current_size─┬─current_elements─┬─path───────┬─background_download_threads─┬─enable_bypass_cache_with_threshold─┐
│ 10000000000 │ 1048576 │ 104857600 │ 4194304 │ 1 │ 0 │ 3276 │ 54 │ /s3_cache/ │ 2 │ 0 │
└─────────────┴──────────────┴───────────────────────┴────────────────────┴───────────────────────────┴──────────────────────┴──────────────┴──────────────────┴────────────┴─────────────────────────────┴────────────────────────────────────┘
```
| Cache current metrics | Cache asynchronous metrics | Cache profile events |
|-----------------------|----------------------------|----------------------|
| `FilesystemCacheSize` | `FilesystemCacheBytes` | `CachedReadBufferReadFromSourceBytes`, `CachedReadBufferReadFromCacheBytes` |
| `FilesystemCacheElements` | `FilesystemCacheFiles` | `CachedReadBufferReadFromSourceMicroseconds`, `CachedReadBufferReadFromCacheMicroseconds` |
| | | `CachedReadBufferCacheWriteBytes`, `CachedReadBufferCacheWriteMicroseconds` |
| | | `CachedWriteBufferCacheWriteBytes`, `CachedWriteBufferCacheWriteMicroseconds` |
Using static Web storage (read-only) {#web-storage}

This is a read-only disk. Its data is only read and never modified. A new table is loaded to this disk via an `ATTACH TABLE` query (see the example below). The local disk is not actually used; each `SELECT` query results in an HTTP request to fetch the required data. Any modification of the table data will result in an exception, i.e. the following types of queries are not allowed: `CREATE TABLE`, `ALTER TABLE`, `RENAME TABLE`, `DETACH TABLE` and `TRUNCATE TABLE`.

Web storage can be used for read-only purposes. An example use is for hosting sample data, or for migrating data. There is a tool `clickhouse-static-files-uploader`, which prepares a data directory for a given table (`SELECT data_paths FROM system.tables WHERE name = 'table_name'`). For each table you need, you get a directory of files. These files can be uploaded to, for example, a web server with static files. After this preparation, you can load this table into any ClickHouse server via `DiskWeb`.
In this sample configuration:

- the disk is of type `web`
- the data is hosted at `http://nginx:80/test1/`
- a cache on local storage is used
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <web>
                <type>web</type>
                <endpoint>http://nginx:80/test1/</endpoint>
            </web>
            <cached_web>
                <type>cache</type>
                <disk>web</disk>
                <path>cached_web_cache/</path>
                <max_size>100000000</max_size>
            </cached_web>
        </disks>
        <policies>
            <web>
                <volumes>
                    <main>
                        <disk>web</disk>
                    </main>
                </volumes>
            </web>
            <cached_web>
                <volumes>
                    <main>
                        <disk>cached_web</disk>
                    </main>
                </volumes>
            </cached_web>
        </policies>
    </storage_configuration>
</clickhouse>
```
:::tip
Storage can also be configured temporarily within a query, if a web dataset is not expected to be used routinely; see dynamic configuration and skip editing the configuration file.

A demo dataset is hosted in GitHub. To prepare your own tables for web storage see the tool `clickhouse-static-files-uploader`.
:::
In this `ATTACH TABLE` query the `UUID` provided matches the directory name of the data, and the endpoint is the URL for the raw GitHub content.

```sql
-- highlight-next-line
ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
(
    price UInt32,
    date Date,
    postcode1 LowCardinality(String),
    postcode2 LowCardinality(String),
    type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
    is_new UInt8,
    duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
    addr1 String,
    addr2 String,
    street LowCardinality(String),
    locality LowCardinality(String),
    town LowCardinality(String),
    district LowCardinality(String),
    county LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY (postcode1, postcode2, addr1, addr2)
-- highlight-start
SETTINGS disk = disk(
    type=web,
    endpoint='https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/'
);
-- highlight-end
```
A ready test case follows. You need to add this configuration to the server config:

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <web>
                <type>web</type>
                <endpoint>https://clickhouse-datasets.s3.yandex.net/disk-with-static-files-tests/test-hits/</endpoint>
            </web>
        </disks>
        <policies>
            <web>
                <volumes>
                    <main>
                        <disk>web</disk>
                    </main>
                </volumes>
            </web>
        </policies>
    </storage_configuration>
</clickhouse>
```

And then execute this query:
```sql
ATTACH TABLE test_hits UUID '1ae36516-d62d-4218-9ae3-6516d62da218'
(
WatchID UInt64,
JavaEnable UInt8,
Title String,
GoodEvent Int16,
EventTime DateTime,
EventDate Date,
CounterID UInt32,
ClientIP UInt32,
ClientIP6 FixedString(16),
RegionID UInt32,
UserID UInt64,
CounterClass Int8,
OS UInt8,
UserAgent UInt8,
URL String,
Referer String,
URLDomain String,
RefererDomain String,
Refresh UInt8,
IsRobot UInt8,
RefererCategories Array(UInt16),
URLCategories Array(UInt16),
URLRegions Array(UInt32),
RefererRegions Array(UInt32),
ResolutionWidth UInt16,
ResolutionHeight UInt16,
ResolutionDepth UInt8,
FlashMajor UInt8,
FlashMinor UInt8,
FlashMinor2 String,
NetMajor UInt8,
NetMinor UInt8,
UserAgentMajor UInt16,
UserAgentMinor FixedString(2),
CookieEnable UInt8,
JavascriptEnable UInt8,
IsMobile UInt8,
MobilePhone UInt8,
MobilePhoneModel String,
Params String,
IPNetworkID UInt32,
TraficSourceID Int8,
SearchEngineID UInt16,
SearchPhrase String,
AdvEngineID UInt8,
IsArtifical UInt8,
WindowClientWidth UInt16,
WindowClientHeight UInt16,
ClientTimeZone Int16,
ClientEventTime DateTime,
SilverlightVersion1 UInt8,
SilverlightVersion2 UInt8,
SilverlightVersion3 UInt32,
SilverlightVersion4 UInt16,
PageCharset String,
CodeVersion UInt32,
IsLink UInt8,
IsDownload UInt8,
IsNotBounce UInt8,
FUniqID UInt64,
HID UInt32,
IsOldCounter UInt8,
IsEvent UInt8,
IsParameter UInt8,
DontCountHits UInt8,
WithHash UInt8,
HitColor FixedString(1),
UTCEventTime DateTime,
Age UInt8,
Sex UInt8,
Income UInt8,
Interests UInt16,
Robotness UInt8,
GeneralInterests Array(UInt16),
RemoteIP UInt32,
RemoteIP6 FixedString(16),
WindowName Int32,
OpenerName Int32,
HistoryLength Int16,
BrowserLanguage FixedString(2),
BrowserCountry FixedString(2),
SocialNetwork String,
SocialAction String,
HTTPError UInt16,
SendTiming Int32,
DNSTiming Int32,
ConnectTiming Int32,
ResponseStartTiming Int32,
ResponseEndTiming Int32,
FetchTiming Int32,
RedirectTiming Int32,
DOMInteractiveTiming Int32,
DOMContentLoadedTiming Int32,
DOMCompleteTiming Int32,
LoadEventStartTiming Int32,
LoadEventEndTiming Int32,
NSToDOMContentLoadedTiming Int32,
FirstPaintTiming Int32,
RedirectCount Int8,
SocialSourceNetworkID UInt8,
SocialSourcePage String,
ParamPrice Int64,
ParamOrderID String,
ParamCurrency FixedString(3),
ParamCurrencyID UInt16,
GoalsReached Array(UInt32),
OpenstatServiceName String,
OpenstatCampaignID String,
OpenstatAdID String,
OpenstatSourceID String,
UTMSource String,
UTMMedium String,
UTMCampaign String,
UTMContent String,
UTMTerm String,
FromTag String,
HasGCLID UInt8,
RefererHash UInt64,
URLHash UInt64,
CLID UInt32,
YCLID UInt64,
ShareService String,
ShareURL String,
ShareTitle String,
ParsedParams Nested(
Key1 String,
Key2 String,
Key3 String,
Key4 String,
Key5 String,
ValueDouble Float64),
IslandID FixedString(16),
RequestNum UInt32,
RequestTry UInt8
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate, intHash32(UserID))
SAMPLE BY intHash32(UserID)
SETTINGS storage_policy='web';
```
Required parameters {#static-web-storage-required-parameters}
| Parameter | Description |
|-----------|-------------|
| `type` | `web`. Otherwise the disk is not created. |
| `endpoint` | The endpoint URL in `path` format. The endpoint URL must contain a root path to store data, where they were uploaded. |
Optional parameters {#optional-parameters-web}

| Parameter | Description | Default Value |
|-----------|-------------|---------------|
| `min_bytes_for_seek` | The minimal number of bytes to use a seek operation instead of a sequential read. | `1` MB |
| `remote_fs_read_backoff_threashold` | The maximum wait time when trying to read data from a remote disk. | `10000` seconds |
| `remote_fs_read_backoff_max_tries` | The maximum number of attempts to read with backoff. | `5` |
If a query fails with the exception `DB:Exception Unreachable URL`, then you can try to adjust the settings: `http_connection_timeout`, `http_receive_timeout`, `keep_alive_timeout`.

To get files for upload run:
`clickhouse static-files-disk-uploader --metadata-path <path> --output-dir <dir>`
(`--metadata-path` can be found in the query `SELECT data_paths FROM system.tables WHERE name = 'table_name'`).

When loading files by `endpoint`, they must be loaded into the `<endpoint>/store/` path, but the config must contain only `endpoint`.

If the URL is not reachable on disk load when the server is starting up tables, then all errors are caught. If in this case there were errors, tables can be reloaded (become visible) via `DETACH TABLE table_name` -> `ATTACH TABLE table_name`. If metadata was successfully loaded at server startup, then tables are available straight away.

Use the `http_max_single_read_retries` setting to limit the maximum number of retries during a single HTTP read.
Zero-copy Replication (not ready for production) {#zero-copy}

Zero-copy replication is possible, but not recommended, with `S3` and `HDFS` (unsupported) disks. Zero-copy replication means that if the data is stored remotely on several machines and needs to be synchronized, then only the metadata is replicated (paths to the data parts), but not the data itself.

:::note Zero-copy replication is not ready for production
Zero-copy replication is disabled by default in ClickHouse version 22.8 and higher. This feature is not recommended for production use.
:::
---
description: 'Page detailing allocation profiling in ClickHouse'
sidebar_label: 'Allocation profiling for versions before 25.9'
slug: /operations/allocation-profiling-old
title: 'Allocation profiling for versions before 25.9'
doc_type: 'reference'
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Allocation profiling for versions before 25.9

ClickHouse uses jemalloc as its global allocator. Jemalloc comes with some tools for allocation sampling and profiling.
To make allocation profiling more convenient, `SYSTEM` commands are provided, along with four letter word (4LW) commands in Keeper.

Sampling allocations and flushing heap profiles {#sampling-allocations-and-flushing-heap-profiles}

If you want to sample and profile allocations in jemalloc, you need to start ClickHouse/Keeper with profiling enabled using the environment variable `MALLOC_CONF`:

```sh
MALLOC_CONF=background_thread:true,prof:true
```

jemalloc will sample allocations and store the information internally.

You can tell jemalloc to flush the current profile by running:

```sql
SYSTEM JEMALLOC FLUSH PROFILE
```

or, for Keeper, with the 4LW command:

```sh
echo jmfp | nc localhost 9181
```
By default, the heap profile file will be generated in `/tmp/jemalloc_clickhouse._pid_._seqnum_.heap` where `_pid_` is the PID of ClickHouse and `_seqnum_` is the global sequence number for the current heap profile.
For Keeper, the default file is `/tmp/jemalloc_keeper._pid_._seqnum_.heap`, and follows the same rules.

A different location can be defined by appending the `MALLOC_CONF` environment variable with the `prof_prefix` option.
For example, if you want to generate profiles in the `/data` folder where the filename prefix will be `my_current_profile`, you can run ClickHouse/Keeper with the following environment variable:

```sh
MALLOC_CONF=background_thread:true,prof:true,prof_prefix:/data/my_current_profile
```

The generated file will append the PID and sequence number to the prefix.
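Putting the naming rule together, the profile path is just prefix, PID, and sequence number joined with the `.heap` extension. A tiny Python sketch of that composition:

```python
# Compose a jemalloc heap profile path from its three parts, following the
# naming rule described above.
def heap_profile_path(prefix, pid, seqnum):
    return f"{prefix}.{pid}.{seqnum}.heap"


# Default ClickHouse location for PID 1234, first profile:
assert heap_profile_path("/tmp/jemalloc_clickhouse", 1234, 0) \
    == "/tmp/jemalloc_clickhouse.1234.0.heap"
# With prof_prefix:/data/my_current_profile, second profile:
assert heap_profile_path("/data/my_current_profile", 1234, 1) \
    == "/data/my_current_profile.1234.1.heap"
```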
Analyzing heap profiles {#analyzing-heap-profiles}

After heap profiles have been generated, they need to be analyzed. For that, jemalloc's tool called `jeprof` can be used. It can be installed in multiple ways:

- Using the system's package manager
- Cloning the jemalloc repo and running `autogen.sh` from the root folder. This will provide you with the `jeprof` script inside the `bin` folder

:::note
`jeprof` uses `addr2line` to generate stacktraces, which can be really slow. If that's the case, it is recommended to install an alternative implementation of the tool.

```bash
git clone https://github.com/gimli-rs/addr2line.git --depth=1 --branch=0.23.0
cd addr2line
cargo build --features bin --release
cp ./target/release/addr2line path/to/current/addr2line
```
:::
There are many different formats to generate from the heap profile using `jeprof`. It is recommended to run `jeprof --help` for information on the usage and the various options the tool provides.

In general, the `jeprof` command is used as:
```sh
jeprof path/to/binary path/to/heap/profile --output_format [ > output_file]
```

If you want to compare which allocations happened between two profiles, you can set the `base` argument:

```sh
jeprof path/to/binary --base path/to/first/heap/profile path/to/second/heap/profile --output_format [ > output_file]
```
Examples {#examples}
If you want to generate a text file with each procedure written per line:

```sh
jeprof path/to/binary path/to/heap/profile --text > result.txt
```

If you want to generate a PDF file with a call-graph:

```sh
jeprof path/to/binary path/to/heap/profile --pdf > result.pdf
```
Generating a flame graph {#generating-flame-graph}
`jeprof` allows you to generate collapsed stacks for building flame graphs.
You need to use the `--collapsed` argument:

```sh
jeprof path/to/binary path/to/heap/profile --collapsed > result.collapsed
```

After that, you can use many different tools to visualize collapsed stacks.
The most popular is FlameGraph, which contains a script called `flamegraph.pl`:

```sh
cat result.collapsed | /path/to/FlameGraph/flamegraph.pl --color=mem --title="Allocation Flame Graph" --width 2400 > result.svg
```

Another interesting tool is speedscope, which allows you to analyze collected stacks in a more interactive way.
Controlling allocation profiler during runtime {#controlling-allocation-profiler-during-runtime}
If ClickHouse/Keeper is started with the profiler enabled, additional commands for disabling/enabling allocation profiling during runtime are supported.
Using those commands, it's easier to profile only specific intervals.

To disable the profiler:

```sql
SYSTEM JEMALLOC DISABLE PROFILE
```

```sh
echo jmdp | nc localhost 9181
```

To enable the profiler:

```sql
SYSTEM JEMALLOC ENABLE PROFILE
```

```sh
echo jmep | nc localhost 9181
```

It's also possible to control the initial state of the profiler by setting the `prof_active` option, which is enabled by default.
For example, if you don't want to sample allocations during startup but only after the profiler is explicitly enabled, you can start ClickHouse/Keeper with the following environment variable:

```sh
MALLOC_CONF=background_thread:true,prof:true,prof_active:false
```

The profiler can then be enabled at a later point.
Additional options for the profiler {#additional-options-for-profiler}
`jemalloc` has many different options available related to the profiler. They can be controlled by modifying the `MALLOC_CONF` environment variable.
For example, the interval between allocation samples can be controlled with `lg_prof_sample`.
If you want to dump the heap profile every N bytes, you can enable it using `lg_prof_interval`.
It is recommended to check `jemalloc`'s reference page for a complete list of options.
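The `lg_`-prefixed options above are base-2 logarithms: `lg_prof_sample:N` means an average of 2^N allocated bytes between samples, and `lg_prof_interval:N` dumps a profile roughly every 2^N bytes of allocation activity. A small sketch of the arithmetic (the specific values are illustrative):

```python
# lg_prof_sample:N samples on average every 2**N allocated bytes;
# lg_prof_interval:N dumps a profile roughly every 2**N bytes of
# allocation activity. The values below are illustrative only.
for lg in (19, 20, 30):
    print(f"lg={lg} -> {2 ** lg} bytes")
```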
Other resources {#other-resources}
ClickHouse/Keeper expose `jemalloc`-related metrics in many different ways.
:::warning Warning
It's important to be aware that none of these metrics are synchronized with each other and values may drift.
:::

System table `asynchronous_metrics` {#system-table-asynchronous_metrics}

```sql
SELECT *
FROM system.asynchronous_metrics
WHERE metric LIKE '%jemalloc%'
FORMAT Vertical
```

Reference

System table `jemalloc_bins` {#system-table-jemalloc_bins}
Contains information about memory allocations done via the jemalloc allocator in different size classes (bins) aggregated from all arenas.
Reference

Prometheus {#prometheus}
All `jemalloc`-related metrics from `asynchronous_metrics` are also exposed using the Prometheus endpoint in both ClickHouse and Keeper.
Reference

`jmst` 4LW command in Keeper {#jmst-4lw-command-in-keeper}
Keeper supports the `jmst` 4LW command, which returns basic allocator statistics:

```sh
echo jmst | nc localhost 9181
```
---
description: 'Guide to using and configuring the query condition cache feature in ClickHouse'
sidebar_label: 'Query condition cache'
sidebar_position: 64
slug: /operations/query-condition-cache
title: 'Query condition cache'
doc_type: 'guide'
---
Query condition cache
:::note
The query condition cache only works when `enable_analyzer` is set to true, which is the default value.
:::
Many real-world workloads involve repeated queries against the same or almost the same data (for instance, previously existing data plus new data).
ClickHouse provides various optimization techniques for such query patterns.
One possibility is to tune the physical data layout using index structures (e.g., primary key indexes, skipping indexes, projections) or pre-calculation (materialized views).
Another possibility is to use ClickHouse's query cache to avoid repeated query evaluation.
The downside of the first approach is that it requires manual intervention and monitoring by a database administrator.
The second approach may return stale results (as the query cache is not transactionally consistent), which may or may not be acceptable, depending on the use case.
The query condition cache provides an elegant solution for both problems.
It is based on the idea that evaluating a filter condition (e.g., `WHERE col = 'xyz'`) on the same data will always return the same result.
More specifically, the query condition cache remembers, for each evaluated filter and each granule (= a block of 8192 rows by default), whether any row in the granule satisfies the filter condition.
The information is recorded as a single bit: a 0 bit represents that no row matches the filter, whereas a 1 bit means that at least one matching row exists.
In the former case, ClickHouse may skip the corresponding granule during filter evaluation; in the latter case, the granule must be loaded and evaluated.
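The bit-per-granule idea can be illustrated with a toy model (a conceptual sketch in Python, not ClickHouse internals; the granule contents and predicate are made up):

```python
# Toy model of the query condition cache: one bit per granule that says
# whether any row in the granule may match the filter.
granules = [[1, 2, 3], [10, 11, 12], [4, 5, 6]]  # pretend 8192-row blocks
matches = lambda row: row > 9                    # e.g. WHERE col > 9

# First evaluation records one bit per granule.
cache = [any(matches(r) for r in g) for g in granules]

# A repeated evaluation of the same filter only loads 1-bit granules.
scanned = [g for g, bit in zip(granules, cache) if bit]
print(cache)    # [False, True, False]
print(scanned)  # [[10, 11, 12]]
```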
The query condition cache is effective if three prerequisites are fulfilled:
- First, the workload must evaluate the same filter conditions repeatedly. This happens naturally if a query is repeated multiple times, but it can also happen if two queries share the same filters, e.g. `SELECT product FROM products WHERE quality > 3` and `SELECT vendor, count() FROM products WHERE quality > 3`.
- Second, the majority of the data is immutable, i.e., does not change between queries. This is generally the case in ClickHouse as parts are immutable and created only by INSERTs.
- Third, filters are selective, i.e. only relatively few rows satisfy the filter condition. The fewer rows match the filter condition, the more granules will be recorded with bit 0 (no matching rows), and the more data can be "pruned" from subsequent filter evaluations.
Memory consumption {#memory-consumption}
Since the query condition cache stores only a single bit per filter condition and granule, it consumes only little memory.
The maximum size of the query condition cache can be configured using the server setting `query_condition_cache_size` (default: 100 MB).
A cache size of 100 MB corresponds to 100 * 1024 * 1024 * 8 = 838,860,800 entries.
Since each entry represents a mark (8192 rows by default), the cache can cover up to 6,871,947,673,600 (6.8 trillion) rows of a single column.
In practice, filters are evaluated on more than one column, so that number needs to be divided by the number of filtered columns.
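The sizing arithmetic above can be reproduced directly (one bit per entry, one entry per mark of 8192 rows):

```python
# Reproduce the documented sizing arithmetic for a 100 MB cache.
cache_bytes = 100 * 1024 * 1024
entries = cache_bytes * 8          # one bit per entry
rows_covered = entries * 8192      # one entry per mark of 8192 rows
print(entries)       # 838860800
print(rows_covered)  # 6871947673600
```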
Configuration settings and usage {#configuration-settings-and-usage}
The setting `use_query_condition_cache` controls whether a specific query or all queries of the current session should utilize the query condition cache.
For example, the first execution of the query

```sql
SELECT col1, col2
FROM table
WHERE col1 = 'x'
SETTINGS use_query_condition_cache = true;
```

will store ranges of the table which do not satisfy the predicate.
Subsequent executions of the same query, also with parameter `use_query_condition_cache = true`, will utilize the query condition cache to scan less data.
Administration {#administration}
The query condition cache is not retained between restarts of ClickHouse.
To clear the query condition cache, run `SYSTEM DROP QUERY CONDITION CACHE`.
The content of the cache is displayed in the system table `system.query_condition_cache`.
To calculate the current size of the query condition cache in MB, run `SELECT formatReadableSize(sum(entry_size)) FROM system.query_condition_cache`.
If you would like to investigate individual filter conditions, you can check the field `condition` in `system.query_condition_cache`.
Note that the field is only populated if the query runs with the setting `query_condition_cache_store_conditions_as_plaintext` enabled.
The number of query condition cache hits and misses since database start are shown as events "QueryConditionCacheHits" and "QueryConditionCacheMisses" in the system table `system.events`.
Both counters are only updated for `SELECT` queries which run with setting `use_query_condition_cache = true`; other queries do not affect "QueryConditionCacheMisses".
Related content {#related-content}
- Blog: Introducing the Query Condition Cache
- Predicate Caching: Query-Driven Secondary Indexing for Cloud Data Warehouses (Schmidt et al., 2024)
---
description: 'Guide to using OpenTelemetry for distributed tracing and metrics collection in ClickHouse'
sidebar_label: 'Tracing ClickHouse with OpenTelemetry'
sidebar_position: 62
slug: /operations/opentelemetry
title: 'Tracing ClickHouse with OpenTelemetry'
doc_type: 'guide'
---

OpenTelemetry is an open standard for collecting traces and metrics from distributed applications. ClickHouse has some support for OpenTelemetry.
Supplying trace context to ClickHouse {#supplying-trace-context-to-clickhouse}
ClickHouse accepts trace context HTTP headers, as described by the W3C recommendation.
It also accepts trace context over the native protocol that is used for communication between ClickHouse servers or between the client and server.
For manual testing, trace context headers conforming to the Trace Context recommendation can be supplied to `clickhouse-client` using the `--opentelemetry-traceparent` and `--opentelemetry-tracestate` flags.
If no parent trace context is supplied, or the provided trace context does not comply with the W3C standard above, ClickHouse can start a new trace, with probability controlled by the `opentelemetry_start_trace_probability` setting.
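For manual testing, a well-formed `traceparent` value can be generated with a short script. This is a hypothetical helper based on the W3C Trace Context format (`00-<32-hex trace-id>-<16-hex parent-id>-<2-hex flags>`), not part of ClickHouse:

```python
import os
import uuid

# Hypothetical helper: build a W3C Trace Context 'traceparent' value
# suitable for clickhouse-client --opentelemetry-traceparent.
def make_traceparent(sampled: bool = True) -> str:
    trace_id = uuid.uuid4().hex       # 16 random bytes -> 32 hex chars
    parent_id = os.urandom(8).hex()   # 8 random bytes  -> 16 hex chars
    return f"00-{trace_id}-{parent_id}-{'01' if sampled else '00'}"

hdr = make_traceparent()
print(hdr)
```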
Propagating the trace context {#propagating-the-trace-context}
The trace context is propagated to downstream services in the following cases:
- Queries to remote ClickHouse servers, such as when using the Distributed table engine.
- The `url` table function. Trace context information is sent in HTTP headers.
Tracing the ClickHouse Itself {#tracing-the-clickhouse-itself}
ClickHouse creates trace spans for each query and some of the query execution stages, such as query planning or distributed queries.
To be useful, the tracing information has to be exported to a monitoring system that supports OpenTelemetry, such as Jaeger or Prometheus. ClickHouse avoids a dependency on a particular monitoring system, instead only providing the tracing data through a system table. OpenTelemetry trace span information required by the standard is stored in the `system.opentelemetry_span_log` table.
The table must be enabled in the server configuration; see the `opentelemetry_span_log` element in the default config file `config.xml`. It is enabled by default.
The tags or attributes are saved as two parallel arrays containing the keys and values. Use `ARRAY JOIN` to work with them.
Log query settings {#log-query-settings}
The setting `log_query_settings` allows logging changes to query settings during query execution. When enabled, any modifications made to query settings will be recorded in the OpenTelemetry span log. This feature is particularly useful in production environments for tracking configuration changes that may affect query performance.
Integration with monitoring systems {#integration-with-monitoring-systems}
At the moment, there is no ready tool that can export the tracing data from ClickHouse to a monitoring system.
For testing, it is possible to set up the export using a materialized view with the URL engine over the `system.opentelemetry_span_log` table, which would push the arriving log data to an HTTP endpoint of a trace collector. For example, to push the minimal span data to a Zipkin instance running at `http://localhost:9411`, in Zipkin v2 JSON format:
```sql
CREATE MATERIALIZED VIEW default.zipkin_spans
ENGINE = URL('http://127.0.0.1:9411/api/v2/spans', 'JSONEachRow')
SETTINGS output_format_json_named_tuples_as_objects = 1,
    output_format_json_array_of_rows = 1 AS
SELECT
    lower(hex(trace_id)) AS traceId,
    CASE WHEN parent_span_id = 0 THEN '' ELSE lower(hex(parent_span_id)) END AS parentId,
    lower(hex(span_id)) AS id,
    operation_name AS name,
    start_time_us AS timestamp,
    finish_time_us - start_time_us AS duration,
    cast(tuple('clickhouse'), 'Tuple(serviceName text)') AS localEndpoint,
    cast(tuple(
        attribute.values[indexOf(attribute.names, 'db.statement')]),
        'Tuple("db.statement" text)') AS tags
FROM system.opentelemetry_span_log
```

In case of any errors, the part of the log data for which the error has occurred will be silently lost. Check the server log for error messages if the data does not arrive.
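For reference, a single row produced by such a view is shaped like the following minimal Zipkin v2 JSON span (the field values here are made up for illustration):

```python
import json

# A minimal Zipkin v2 JSON span mirroring the fields the materialized
# view emits; all values below are invented for illustration.
span = {
    "traceId": "0af7651916cd43dd8448eb211c80319c",
    "parentId": "",
    "id": "b7ad6b7169203331",
    "name": "query",
    "timestamp": 1700000000000000,   # start time, microseconds
    "duration": 1500,                # finish - start, microseconds
    "localEndpoint": {"serviceName": "clickhouse"},
    "tags": {"db.statement": "SELECT 1"},
}
payload = json.dumps([span])  # Zipkin's /api/v2/spans expects a JSON array
print(payload[:40])
```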
Related content {#related-content}
Blog: Building an Observability Solution with ClickHouse - Part 2 - Traces
---
description: 'You can monitor the utilization of hardware resources and also ClickHouse server metrics.'
keywords: ['monitoring', 'observability', 'advanced dashboard', 'dashboard', 'observability dashboard']
sidebar_label: 'Monitoring'
sidebar_position: 45
slug: /operations/monitoring
title: 'Monitoring'
doc_type: 'reference'
---

import Image from '@theme/IdealImage';
Monitoring
:::note
The monitoring data outlined in this guide is accessible in ClickHouse Cloud. In addition to being displayed through the built-in dashboard described below, both basic and advanced performance metrics can also be viewed directly in the main service console.
:::
You can monitor:
- Utilization of hardware resources.
- ClickHouse server metrics.

Built-in advanced observability dashboard {#built-in-advanced-observability-dashboard}
ClickHouse comes with a built-in advanced observability dashboard, accessible at `$HOST:$PORT/dashboard` (requires user and password), that shows the following metrics:
- Queries/second
- CPU usage (cores)
- Queries running
- Merges running
- Selected bytes/second
- IO wait
- CPU wait
- OS CPU Usage (userspace)
- OS CPU Usage (kernel)
- Read from disk
- Read from filesystem
- Memory (tracked)
- Inserted rows/second
- Total MergeTree parts
- Max parts for partition
Resource utilization {#resource-utilization}
ClickHouse also monitors the state of hardware resources by itself, such as:
- Load and temperature on processors.
- Utilization of the storage system, RAM and network.
This data is collected in the `system.asynchronous_metric_log` table.
ClickHouse server metrics {#clickhouse-server-metrics}
The ClickHouse server has embedded instruments for self-state monitoring.
To track server events, use server logs. See the logger section of the configuration file.
ClickHouse collects:
- Different metrics of how the server uses computational resources.
- Common statistics on query processing.
You can find metrics in the `system.metrics`, `system.events`, and `system.asynchronous_metrics` tables.
You can configure ClickHouse to export metrics to Graphite. See the Graphite section in the ClickHouse server configuration file. Before configuring export of metrics, you should set up Graphite by following their official guide.
You can configure ClickHouse to export metrics to Prometheus. See the Prometheus section in the ClickHouse server configuration file. Before configuring export of metrics, you should set up Prometheus by following their official guide.
Additionally, you can monitor server availability through the HTTP API. Send an `HTTP GET` request to `/ping`. If the server is available, it responds with `200 OK`.
To monitor servers in a cluster configuration, you should set the `max_replica_delay_for_distributed_queries` parameter and use the HTTP resource `/replicas_status`. A request to `/replicas_status` returns `200 OK` if the replica is available and is not delayed behind the other replicas. If a replica is delayed, it returns `503 HTTP_SERVICE_UNAVAILABLE` with information about the gap.
---
description: 'Page detailing the ClickHouse query analyzer'
keywords: ['analyzer']
sidebar_label: 'Analyzer'
slug: /operations/analyzer
title: 'Analyzer'
doc_type: 'reference'
---
Analyzer
In ClickHouse version 24.3, the new query analyzer was enabled by default.
You can read more details about how it works here.
Known incompatibilities {#known-incompatibilities}
Despite fixing a large number of bugs and introducing new optimizations, the new analyzer also introduces some breaking changes in ClickHouse behaviour. Please read the following changes to determine how to rewrite your queries for it.
Invalid queries are no longer optimized {#invalid-queries-are-no-longer-optimized}
The previous query planning infrastructure applied AST-level optimizations before the query validation step.
Optimizations could rewrite the initial query to be valid and executable.
In the new analyzer, query validation takes place before the optimization step.
This means that invalid queries which were previously possible to execute are now unsupported.
In such cases, the query must be fixed manually.
Example 1 {#example-1}
The following query uses column `number` in the projection list when only `toString(number)` is available after the aggregation.
In the old analyzer, `GROUP BY toString(number)` was optimized into `GROUP BY number`, making the query valid.

```sql
SELECT number
FROM numbers(1)
GROUP BY toString(number)
```
Example 2 {#example-2}
The same problem occurs in this query. Column `number` is used after aggregation with another key.
The previous query analyzer fixed this query by moving the `number > 5` filter from the `HAVING` clause to the `WHERE` clause.

```sql
SELECT
    number % 2 AS n,
    sum(number)
FROM numbers(10)
GROUP BY n
HAVING number > 5
```

To fix the query, you should move all conditions that apply to non-aggregated columns to the `WHERE` section to conform to standard SQL syntax:

```sql
SELECT
    number % 2 AS n,
    sum(number)
FROM numbers(10)
WHERE number > 5
GROUP BY n
```
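The rewrite can be illustrated with a toy computation outside of SQL (a conceptual sketch only): rows are filtered first, then grouped and summed.

```python
# Toy equivalent of the corrected query: WHERE number > 5,
# GROUP BY number % 2, sum(number) per group.
sums = {}
for number in range(10):
    if number > 5:                         # WHERE number > 5
        n = number % 2                     # GROUP BY n
        sums[n] = sums.get(n, 0) + number  # sum(number)
print(sorted(sums.items()))  # [(0, 14), (1, 16)]
```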
CREATE VIEW with an invalid query {#create-view-with-invalid-query}
The new analyzer always performs type-checking.
Previously, it was possible to create a `VIEW` with an invalid `SELECT` query.
It would then fail during the first `SELECT` or `INSERT` (in the case of `MATERIALIZED VIEW`).
It is no longer possible to create a `VIEW` in this way.
Example {#example-view}

```sql
CREATE TABLE source (data String)
ENGINE=MergeTree
ORDER BY tuple();

CREATE VIEW some_view
AS SELECT JSONExtract(data, 'test', 'DateTime64(3)')
FROM source;
```
Known incompatibilities of the JOIN clause {#known-incompatibilities-of-the-join-clause}
JOIN using a column from a projection {#join-using-column-from-projection}
An alias from the `SELECT` list can not be used as a `JOIN USING` key by default.
A new setting, `analyzer_compatibility_join_using_top_level_identifier`, when enabled, alters the behavior of `JOIN USING` to prefer resolving identifiers based on expressions from the projection list of the `SELECT` query, rather than using the columns from the left table directly.
For example:

```sql
SELECT a + 1 AS b, t2.s
FROM VALUES('a UInt64, b UInt64', (1, 1)) AS t1
JOIN VALUES('b UInt64, s String', (1, 'one'), (2, 'two')) t2
USING (b);
```

With `analyzer_compatibility_join_using_top_level_identifier` set to `true`, the join condition is interpreted as `t1.a + 1 = t2.b`, matching the behavior of the earlier versions. The result will be `2, 'two'`.
When the setting is `false`, the join condition defaults to `t1.b = t2.b`, and the query will return `2, 'one'`.
If `b` is not present in `t1`, the query will fail with an error.
Changes in behavior with JOIN USING and ALIAS/MATERIALIZED columns {#changes-in-behavior-with-join-using-and-aliasmaterialized-columns}
In the new analyzer, using `*` in a `JOIN USING` query that involves `ALIAS` or `MATERIALIZED` columns will include those columns in the result set by default.
For example:

```sql
CREATE TABLE t1 (id UInt64, payload ALIAS sipHash64(id)) ENGINE = MergeTree ORDER BY id;
INSERT INTO t1 VALUES (1), (2);

CREATE TABLE t2 (id UInt64, payload ALIAS sipHash64(id)) ENGINE = MergeTree ORDER BY id;
INSERT INTO t2 VALUES (2), (3);

SELECT * FROM t1
FULL JOIN t2 USING (payload);
```

In the new analyzer, the result of this query will include the `payload` column along with `id` from both tables.
In contrast, the previous analyzer would only include these `ALIAS` columns if specific settings (`asterisk_include_alias_columns` or `asterisk_include_materialized_columns`) were enabled, and the columns might appear in a different order.
To ensure consistent and expected results, especially when migrating old queries to the new analyzer, it is advisable to specify columns explicitly in the `SELECT` clause rather than using `*`.
Handling of type modifiers for columns in the USING clause {#handling-of-type-modifiers-for-columns-in-using-clause}
In the new version of the analyzer, the rules for determining the common supertype for columns specified in the `USING` clause have been standardized to produce more predictable outcomes, especially when dealing with type modifiers like `LowCardinality` and `Nullable`.
- `LowCardinality(T)` and `T`: When a column of type `LowCardinality(T)` is joined with a column of type `T`, the resulting common supertype will be `T`, effectively discarding the `LowCardinality` modifier.
- `Nullable(T)` and `T`: When a column of type `Nullable(T)` is joined with a column of type `T`, the resulting common supertype will be `Nullable(T)`, ensuring that the nullable property is preserved.
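These two rules can be sketched as a small helper (string manipulation only, for illustration; this is not ClickHouse's actual type system):

```python
# Sketch of the documented supertype rules for JOIN USING columns.
def join_using_supertype(left: str, right: str) -> str:
    def strip(t: str, mod: str) -> str:
        return t[len(mod) + 1:-1] if t.startswith(mod + "(") else t
    # LowCardinality(T) joined with T yields T (modifier discarded).
    l, r = strip(left, "LowCardinality"), strip(right, "LowCardinality")
    # Nullable(T) joined with T yields Nullable(T) (nullability kept).
    nullable = l.startswith("Nullable(") or r.startswith("Nullable(")
    base = strip(l, "Nullable")
    return f"Nullable({base})" if nullable else base

print(join_using_supertype("LowCardinality(String)", "String"))  # String
print(join_using_supertype("Nullable(UInt64)", "UInt64"))  # Nullable(UInt64)
```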
For example:

```sql
SELECT id, toTypeName(id)
FROM VALUES('id LowCardinality(String)', ('a')) AS t1
FULL OUTER JOIN VALUES('id String', ('b')) AS t2
USING (id);
```

In this query, the common supertype for `id` is determined as `String`, discarding the `LowCardinality` modifier from `t1`.
Projection column names changes {#projection-column-names-changes}
During projection names computation, aliases are not substituted.
```sql
SELECT
1 + 1 AS x,
x + 1
SETTINGS enable_analyzer = 0
FORMAT PrettyCompact
┌─x─┬─plus(plus(1, 1), 1)─┐
1. │ 2 │ 3 │
└───┴─────────────────────┘
SELECT
1 + 1 AS x,
x + 1
SETTINGS enable_analyzer = 1
FORMAT PrettyCompact
┌─x─┬─plus(x, 1)─┐
1. │ 2 │ 3 │
└───┴────────────┘
```
Incompatible function arguments types {#incompatible-function-arguments-types}
In the new analyzer, type inference happens during initial query analysis.
This change means that type checks are done before short-circuit evaluation; thus, the `if` function arguments must always have a common supertype.
For example, the following query fails with `There is no supertype for types Array(UInt8), String because some of them are Array and some of them are not`:

```sql
SELECT toTypeName(if(0, [2, 3, 4], 'String'))
```
Heterogeneous clusters {#heterogeneous-clusters}
The new analyzer significantly changes the communication protocol between servers in the cluster. Thus, it's impossible to run distributed queries on servers with different `enable_analyzer` setting values.
Mutations are interpreted by previous analyzer {#mutations-are-interpreted-by-previous-analyzer}
Mutations still use the old analyzer.
This means some new ClickHouse SQL features can't be used in mutations, for example, the `QUALIFY` clause.
The status can be checked here.
Unsupported features {#unsupported-features}
The list of features that the new analyzer currently doesn't support is given below:
- Annoy index.
- Hypothesis index. Work in progress here.
- Window view is not supported. There are no plans to support it in the future.
---
description: 'Documentation for Syntax'
displayed_sidebar: 'sqlreference'
sidebar_label: 'Syntax'
sidebar_position: 2
slug: /sql-reference/syntax
title: 'Syntax'
doc_type: 'reference'
---
In this section, we will take a look at ClickHouse's SQL syntax.
ClickHouse uses a syntax based on SQL but offers a number of extensions and optimizations.
Query Parsing {#query-parsing}
There are two types of parsers in ClickHouse:
- A full SQL parser (a recursive descent parser).
- A data format parser (a fast stream parser).
The full SQL parser is used in all cases except for the `INSERT` query, which uses both parsers.
Let's examine the query below:

```sql
INSERT INTO t VALUES (1, 'Hello, world'), (2, 'abc'), (3, 'def')
```

As mentioned already, the `INSERT` query makes use of both parsers.
The `INSERT INTO t VALUES` fragment is parsed by the full parser, and the data `(1, 'Hello, world'), (2, 'abc'), (3, 'def')` is parsed by the data format parser, or fast stream parser.
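The split can be sketched as follows (a toy illustration of where the boundary between the two parsers lies, not the actual parser):

```python
# Toy sketch of the two-parser split: the statement head goes to the
# full SQL parser, the tuples to the fast stream parser.
query = "INSERT INTO t VALUES (1, 'Hello, world'), (2, 'abc'), (3, 'def')"
cut = query.index("VALUES") + len("VALUES")
head, data = query[:cut], query[cut:].lstrip()
print(head)  # INSERT INTO t VALUES
print(data)  # (1, 'Hello, world'), (2, 'abc'), (3, 'def')
```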
Turning on the full parser
You can also turn on the full parser for the data by using the [`input_format_values_interpret_expressions`](../operations/settings/settings-formats.md#input_format_values_interpret_expressions) setting.
When the aforementioned setting is set to `1`, ClickHouse first tries to parse values with the fast stream parser.
If it fails, ClickHouse tries to use the full parser for the data, treating it like an SQL [expression](#expressions).
The data can have any format.
When a query is received, the server calculates no more than `max_query_size` bytes of the request in RAM (by default, 1 MB), and the rest is stream parsed.
This makes it possible to avoid issues with large `INSERT` queries, which are the recommended way to insert your data in ClickHouse.
When using the `Values` format in an `INSERT` query, it may appear that data is parsed the same as for expressions in a `SELECT` query; however, this is not the case.
The `Values` format is much more limited.
The rest of this section covers the full parser.
:::note
For more information about format parsers, see the Formats section.
:::
Spaces {#spaces}
There may be any number of space symbols between syntactical constructions (including the beginning and end of a query).
Space symbols include the space, tab, line feed, CR, and form feed.
Comments {#comments}
ClickHouse supports both SQL-style and C-style comments:
- SQL-style comments begin with `--`, `#!` or `#` and continue to the end of the line. A space after `--` and `#!` can be omitted.
- C-style comments span from `/*` to `*/` and can be multiline. Spaces are not required either.
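A toy recognizer for the line-comment styles shows the idea (it ignores quoting and C-style comments, so it is not a real lexer):

```python
# Toy recognizer for the documented line-comment markers.
def strip_line_comment(line: str) -> str:
    cuts = [i for m in ("--", "#!", "#") if (i := line.find(m)) != -1]
    return line[:min(cuts)].rstrip() if cuts else line

print(strip_line_comment("SELECT 1 -- trailing note"))   # SELECT 1
print(strip_line_comment("SELECT 2 #! also a comment"))  # SELECT 2
```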
Keywords {#keywords}
Keywords in ClickHouse can be either case-sensitive or case-insensitive depending on the context.
Keywords are case-insensitive when they correspond to:
- The SQL standard. For example, `SELECT`, `select` and `SeLeCt` are all valid.
- The implementation in some popular DBMS (MySQL or Postgres). For example, `DateTime` is the same as `datetime`.
:::note
You can check whether a data type name is case-sensitive in the `system.data_type_families` table.
:::
In contrast to standard SQL, all other keywords (including function names) are case-sensitive.
Furthermore, keywords are not reserved.
They are treated as such only in the corresponding context.
If you use identifiers with the same name as the keywords, enclose them in double quotes or backticks.
For example, the following query is valid if the table `table_name` has a column with the name `"FROM"`:

```sql
SELECT "FROM" FROM table_name
```
Identifiers {#identifiers}
Identifiers are:
- Cluster, database, table, partition, and column names.
- Functions.
- Data types.
- Expression aliases.
Identifiers can be quoted or non-quoted, although the latter is preferred. Non-quoted identifiers must match the regex `^[a-zA-Z_][0-9a-zA-Z_]*$` and cannot be equal to keywords.
See the table below for examples of valid and invalid identifiers:
| Valid identifiers | Invalid identifiers |
|------------------------------------------------|----------------------------------------|
| `xyz`, `_internal`, `Id_with_underscores_123_` | `1x`, `tom@gmail.com`, `äußerst_schön` |
If you want to use identifiers the same as keywords, or you want to use other symbols in identifiers, quote them using double quotes or backticks, for example, `"id"`, `` `id` ``.
:::note
The same rules that apply for escaping in quoted identifiers also apply for string literals. See String for more details.
:::
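As a sketch of the quoting rule above, a hypothetical table (the name `example_table` is illustrative) with column names that are not valid non-quoted identifiers can still be created and queried by quoting the names:

```sql
CREATE TABLE example_table (`1x` Int32, "tom@gmail.com" String) ENGINE = Memory;
SELECT `1x`, "tom@gmail.com" FROM example_table;
```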
Literals {#literals}
In ClickHouse, a literal is a value which is directly represented in a query. In other words, it is a fixed value which does not change during query execution.
Literals can be:
- String
- Numeric
- Compound
- NULL
- Heredocs (custom string literals)
We take a look at each of these in more detail in the sections below.
String {#string}
String literals must be enclosed in single quotes. Double quotes are not supported.
Escaping works by either:
- using a preceding single quote, where the single-quote character `'` (and only this character) can be escaped as `''`, or
- using a preceding backslash with the supported escape sequences listed in the table below.
:::note
The backslash loses its special meaning, i.e. it is interpreted literally, should it precede characters other than the ones listed below.
:::
| Supported Escape | Description |
|-------------------------------------|-------------------------------------------------------------------------|
| `\xHH` | 8-bit character specification followed by any number of hex digits (H). |
| `\N` | reserved, does nothing (e.g. `SELECT 'a\Nb'` returns `ab`) |
| `\a` | alert |
| `\b` | backspace |
| `\e` | escape character |
| `\f` | form feed |
| `\n` | line feed |
| `\r` | carriage return |
| `\t` | horizontal tab |
| `\v` | vertical tab |
| `\0` | null character |
| `\\` | backslash |
| `\'` (or `''`) | single quote |
| `\"` | double quote |
| `` \` `` | backtick |
| `\/` | forward slash |
| `\=` | equal sign |
| ASCII control characters (c <= 31) | |
:::note
In string literals, you need to escape at least `'` and `\` using the escape codes `\'` (or `''`) and `\\`.
:::
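For example, the following queries illustrate the two escaping styles described above; both select the same five-character string:

```sql
SELECT 'It''s a test'; -- single quote escaped by doubling
SELECT 'It\'s a test'; -- single quote escaped with a backslash
```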
Numeric {#numeric}
Numeric literals are parsed as follows:
- If the literal is prefixed with a minus sign `-`, the token is skipped and the result is negated after parsing.
- The numeric literal is first parsed as a 64-bit unsigned integer, using the `strtoull` function.
- If the value is prefixed with `0b` or `0x`/`0X`, the number is parsed as binary or hexadecimal, respectively.
- If the value is negative and the absolute magnitude is greater than 2^63, an error is returned.
- If unsuccessful, the value is next parsed as a floating-point number using the `strtod` function.
- Otherwise, an error is returned.
Literal values are cast to the smallest type that the value fits in. For example:
- `1` is parsed as `UInt8`
- `256` is parsed as `UInt16`
:::note Important
Integer values wider than 64-bit (`UInt128`, `Int128`, `UInt256`, `Int256`) must be cast to a larger type to parse properly:
```sql
-170141183460469231731687303715884105728::Int128
340282366920938463463374607431768211455::UInt128
-57896044618658097711785492504343953926634992332820282019728792003956564819968::Int256
115792089237316195423570985008687907853269984665640564039457584007913129639935::UInt256
```
This bypasses the above algorithm and parses the integer with a routine that supports arbitrary precision. Otherwise, the literal will be parsed as a floating-point number and thus subject to loss of precision due to truncation.
:::
For more information, see Data types.
Underscores `_` inside numeric literals are ignored and can be used for better readability.
The following numeric literals are supported:
| Numeric Literal | Examples |
|-------------------------------------------|-------------------------------------------------|
| Integers | `1`, `10_000_000`, `18446744073709551615`, `01` |
| Decimals | `0.1` |
| Exponential notation | `1e100`, `-1e-100` |
| Floating point numbers | `123.456`, `inf`, `nan` |
| Hex | `0xc0fe` |
| SQL standard compatible hex string | `x'c0fe'` |
| Binary | `0b1101` |
| SQL standard compatible binary string | `b'1101'` |
:::note
Octal literals are not supported to avoid accidental errors in interpretation.
:::
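The literal forms in the table above can be inspected with `toTypeName`; an illustrative sketch (the resulting types follow the smallest-fitting-type rule described earlier):

```sql
SELECT
    toTypeName(1),          -- smallest fitting integer type
    toTypeName(256),
    toTypeName(0xc0fe),     -- hex literal
    toTypeName(0b1101),     -- binary literal
    toTypeName(10_000_000), -- underscores are ignored
    toTypeName(1e100);      -- exponential notation parses as a float
```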
Compound {#compound}
Arrays are constructed with square brackets `[1, 2, 3]`. Tuples are constructed with round brackets `(1, 'Hello, world!', 2)`.
Technically these are not literals, but expressions with the array creation operator and the tuple creation operator, respectively. An array must consist of at least one item, and a tuple must have at least two items.
:::note
There is a separate case when tuples appear in the `IN` clause of a `SELECT` query. Query results can include tuples, but tuples cannot be saved to a database (except for tables using the Memory engine).
:::
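For example, the array and tuple expressions described above can be combined and nested:

```sql
SELECT
    [1, 2, 3] AS arr,
    (1, 'Hello, world!', 2) AS tpl,
    [(1, 'a'), (2, 'b')] AS arr_of_tuples;
```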
NULL {#null}
`NULL` is used to indicate that a value is missing. To store `NULL` in a table field, it must be of the Nullable type.
:::note
The following should be noted for `NULL`:
- Depending on the data format (input or output), `NULL` may have a different representation. For more information, see data formats.
- `NULL` processing is nuanced. For example, if at least one of the arguments of a comparison operation is `NULL`, the result of this operation is also `NULL`. The same is true for multiplication, addition, and other operations. We recommend reading the documentation for each operation.
- In queries, you can check for `NULL` using the `IS NULL` and `IS NOT NULL` operators and the related functions `isNull` and `isNotNull`.
:::
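A brief sketch of the behavior described above: the comparison and the addition both yield `NULL`, while the operators and functions return ordinary `UInt8` values.

```sql
SELECT
    NULL = 1,        -- NULL: comparison with NULL is NULL
    NULL + 1,        -- NULL: arithmetic with NULL is NULL
    1 IS NULL,       -- 0
    isNotNull(NULL); -- 0
```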
Heredoc {#heredoc}
A heredoc is a way to define a string (often multiline), while maintaining the original formatting. A heredoc is defined as a custom string literal placed between two `$` symbols. For example:
```sql
SELECT $heredoc$SHOW CREATE VIEW my_view$heredoc$;
┌─'SHOW CREATE VIEW my_view'─┐
│ SHOW CREATE VIEW my_view │
└────────────────────────────┘
```
:::note
A value between two heredocs is processed "as-is".
:::
:::tip
You can use a heredoc to embed snippets of SQL, HTML, or XML code, etc.
:::
Defining and Using Query Parameters {#defining-and-using-query-parameters}
Query parameters allow you to write generic queries that contain abstract placeholders instead of concrete identifiers. When a query with query parameters is executed, all placeholders are resolved and replaced by the actual query parameter values.
There are two ways to define a query parameter:
- `SET param_<name>=<value>`
- `--param_<name>='<value>'`
When using the second variant, it is passed as an argument to `clickhouse-client` on the command line, where:
- `<name>` is the name of the query parameter.
- `<value>` is its value.
A query parameter can be referenced in a query using `{<name>: <datatype>}`, where `<name>` is the query parameter name and `<datatype>` is the datatype it is converted to.
Example with SET command
For example, the following SQL defines parameters named `a`, `b`, `c` and `d` - each with a different data type:
```sql
SET param_a = 13;
SET param_b = 'str';
SET param_c = '2022-08-04 18:30:53';
SET param_d = {'10': [11, 12], '13': [14, 15]};
SELECT
{a: UInt32},
{b: String},
{c: DateTime},
{d: Map(String, Array(UInt8))};
13 str 2022-08-04 18:30:53 {'10':[11,12],'13':[14,15]}
```
Example with clickhouse-client
If you are using `clickhouse-client`, the parameters are specified as `--param_name=value`. For example, the following parameter has the name `message` and it is retrieved as a `String`:
```bash
clickhouse-client --param_message='hello' --query="SELECT {message: String}"
hello
```
If the query parameter represents the name of a database, table, function or other identifier, use `Identifier` for its type. For example, the following query returns rows from a table named `uk_price_paid`:
```sql
SET param_mytablename = "uk_price_paid";
SELECT * FROM {mytablename:Identifier};
```
:::note
Query parameters are not general text substitutions which can be used in arbitrary places in arbitrary SQL queries. They are primarily designed to work in `SELECT` statements in place of identifiers or literals.
:::
Functions {#functions}
Function calls are written like an identifier with a list of arguments (possibly empty) in round brackets. In contrast to standard SQL, the brackets are required, even for an empty argument list. For example:
```sql
now()
```
There are also:
- Regular functions.
- Aggregate functions.
Some aggregate functions can contain two lists of arguments in brackets. For example:
```sql
quantile(0.9)(x)
```
These aggregate functions are called "parametric" functions, and the arguments in the first list are called "parameters".
:::note
The syntax of aggregate functions without parameters is the same as for regular functions.
:::
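For example, a parametric aggregate function such as `quantile` takes the parameter (the quantile level) in the first list and the aggregated expression in the second:

```sql
SELECT quantile(0.9)(number) FROM numbers(1000);
```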
Operators {#operators}
Operators are converted to their corresponding functions during query parsing, taking their priority and associativity into account.
For example, the expression
```text
1 + 2 * 3 + 4
```
is transformed to
```text
plus(plus(1, multiply(2, 3)), 4)
```
Data Types and Database Table Engines {#data-types-and-database-table-engines}
Data types and table engines in the `CREATE` query are written the same way as identifiers or functions. In other words, they may or may not contain an argument list in brackets.
For more information, see the sections:
- Data types
- Table engines
- CREATE
Expressions {#expressions}
An expression can be any of the following:
- a function
- an identifier
- a literal
- the application of an operator
- an expression in brackets
- a subquery
- an asterisk
It can also contain an alias.
A list of expressions is one or more expressions separated by commas. Functions and operators, in turn, can have expressions as arguments.
A constant expression is an expression whose result is known during query analysis, i.e. before execution. For example, expressions over literals are constant expressions.
Expression Aliases {#expression-aliases}
An alias is a user-defined name for an expression in a query.
```sql
expr AS alias
```
The parts of the syntax above are explained below.
| Part of syntax | Description | Example | Notes |
|----------------|-------------|---------|-------|
| `AS` | The keyword for defining aliases. You can define the alias for a table name or a column name in a `SELECT` clause without using the `AS` keyword. | `SELECT table_name_alias.column_name FROM table_name table_name_alias` | In the `CAST` function, the `AS` keyword has another meaning. See the description of the function. |
| `expr` | Any expression supported by ClickHouse. | `SELECT column_name * 2 AS double FROM some_table` | |
| `alias` | Name for `expr`. Aliases should comply with the identifiers syntax. | `SELECT "table t".column_name FROM table_name AS "table t"` | |
Notes on Usage {#notes-on-usage}
Aliases are global for a query or subquery, and you can define an alias in any part of a query for any expression. For example:
```sql
SELECT (1 AS n) + 2, n
```
Aliases are not visible in subqueries and between subqueries. For example, while executing the following query ClickHouse generates the exception `Unknown identifier: num`:
```sql
SELECT (SELECT sum(b.a) + num FROM b) - a.a AS num FROM a
```
If an alias is defined for the result columns in the `SELECT` clause of a subquery, these columns are visible in the outer query. For example:
```sql
SELECT n + m FROM (SELECT 1 AS n, 2 AS m)
```
Be careful with aliases that are the same as column or table names. Let's consider the following example:
```sql
CREATE TABLE t
(
a Int,
b Int
)
ENGINE = TinyLog();
SELECT
argMax(a, b),
sum(b) AS b
FROM t;
Received exception from server (version 18.14.17):
Code: 184. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: Aggregate function sum(b) is found inside another aggregate function in query.
```
In the preceding example, we declared table `t` with column `b`. Then, when selecting data, we defined the `sum(b) AS b` alias. As aliases are global, ClickHouse substituted the literal `b` in the expression `argMax(a, b)` with the expression `sum(b)`. This substitution caused the exception.
:::note
You can change this default behavior by setting `prefer_column_name_to_alias` to `1`.
:::
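A sketch of that workaround for the failing query above: with the setting enabled, `b` inside `argMax` resolves to the column rather than the alias.

```sql
SET prefer_column_name_to_alias = 1;
SELECT
    argMax(a, b), -- `b` now refers to the column of table t
    sum(b) AS b
FROM t;
```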
Asterisk {#asterisk}
In a `SELECT` query, an asterisk can replace the expression. For more information, see the section SELECT.
description: 'Documentation for Distributed Ddl'
sidebar_label: 'Distributed DDL'
sidebar_position: 3
slug: /sql-reference/distributed-ddl
title: 'Distributed DDL Queries (ON CLUSTER Clause)'
doc_type: 'reference'
By default, the `CREATE`, `DROP`, `ALTER`, and `RENAME` queries affect only the current server where they are executed. In a cluster setup, it is possible to run such queries in a distributed manner with the `ON CLUSTER` clause.
For example, the following query creates the `all_hits` Distributed table on each host in `cluster`:
```sql
CREATE TABLE IF NOT EXISTS all_hits ON CLUSTER cluster (p Date, i Int32) ENGINE = Distributed(cluster, default, hits)
```
In order to run these queries correctly, each host must have the same cluster definition (to simplify syncing configs, you can use substitutions from ZooKeeper). They must also connect to the ZooKeeper servers. The local version of the query will eventually be executed on each host in the cluster, even if some hosts are currently not available.
:::important
The order for executing queries within a single host is guaranteed.
:::
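The `ON CLUSTER` clause applies to the other DDL statements mentioned above in the same way; an illustrative sketch (assuming the same `cluster` definition and the `all_hits` table from the example):

```sql
ALTER TABLE all_hits ON CLUSTER cluster ADD COLUMN c UInt8;
DROP TABLE IF EXISTS all_hits ON CLUSTER cluster;
```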
description: 'Documentation for ClickHouse SQL Reference'
keywords: ['clickhouse', 'docs', 'sql reference', 'sql statements', 'sql', 'syntax']
slug: /sql-reference
title: 'SQL Reference'
doc_type: 'reference'
import { TwoColumnList } from '/src/components/two_column_list'
import { ClickableSquare } from '/src/components/clickable_square'
import { HorizontalDivide } from '/src/components/horizontal_divide'
import { ViewAllLink } from '/src/components/view_all_link'
import { VideoContainer } from '/src/components/video_container'
import LinksDeployment from './sql-reference-links.json'
ClickHouse SQL Reference
ClickHouse supports a declarative query language based on SQL that is identical to the ANSI SQL standard in many cases.
Supported queries include GROUP BY, ORDER BY, subqueries in FROM, JOIN clause, IN operator, window functions and scalar subqueries.
description: 'Page describing transactional (ACID) support in ClickHouse'
slug: /guides/developer/transactional
title: 'Transactional (ACID) support'
doc_type: 'guide'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
Transactional (ACID) support
Case 1: INSERT into one partition, of one table, of the MergeTree* family {#case-1-insert-into-one-partition-of-one-table-of-the-mergetree-family}
This is transactional (ACID) if the inserted rows are packed and inserted as a single block (see Notes):
- Atomic: an INSERT succeeds or is rejected as a whole: if a confirmation is sent to the client, then all rows were inserted; if an error is sent to the client, then no rows were inserted.
- Consistent: if there are no table constraints violated, then all rows in an INSERT are inserted and the INSERT succeeds; if constraints are violated, then no rows are inserted.
- Isolated: concurrent clients observe a consistent snapshot of the table: the state of the table either as it was before the INSERT attempt, or after the successful INSERT; no partial state is seen. Clients inside another transaction have snapshot isolation, while clients outside of a transaction have the read uncommitted isolation level.
- Durable: a successful INSERT is written to the filesystem before answering to the client, on a single replica or multiple replicas (controlled by the `insert_quorum` setting), and ClickHouse can ask the OS to sync the filesystem data on the storage media (controlled by the `fsync_after_insert` setting).
- INSERT into multiple tables with one statement is possible if materialized views are involved (the INSERT from the client is to a table which has associated materialized views).
Case 2: INSERT into multiple partitions, of one table, of the MergeTree* family {#case-2-insert-into-multiple-partitions-of-one-table-of-the-mergetree-family}
Same as Case 1 above, with this detail:
- If the table has many partitions and the INSERT covers many partitions, then insertion into every partition is transactional on its own.
Case 3: INSERT into one distributed table of the MergeTree* family {#case-3-insert-into-one-distributed-table-of-the-mergetree-family}
Same as Case 1 above, with this detail:
- INSERT into a Distributed table is not transactional as a whole, while insertion into every shard is transactional.
Case 4: Using a Buffer table {#case-4-using-a-buffer-table}
- INSERT into a Buffer table is neither atomic nor isolated nor consistent nor durable.
Case 5: Using async_insert {#case-5-using-async_insert}
Same as Case 1 above, with this detail:
- Atomicity is ensured even if `async_insert` is enabled and `wait_for_async_insert` is set to 1 (the default), but if `wait_for_async_insert` is set to 0, then atomicity is not ensured.
Notes {#notes}
Rows inserted from the client in some data format are packed into a single block when:
- the insert format is row-based (like CSV, TSV, Values, JSONEachRow, etc.) and the data contains fewer than `max_insert_block_size` rows (~1,000,000 by default), or fewer than `min_chunk_bytes_for_parallel_parsing` bytes (10 MB by default) in case parallel parsing is used (enabled by default)
- the insert format is column-based (like Native, Parquet, ORC, etc.) and the data contains only one block of data
The size of the inserted block in general may depend on many settings (for example: `max_block_size`, `max_insert_block_size`, `min_insert_block_size_rows`, `min_insert_block_size_bytes`, `preferred_block_size_bytes`, etc.)
- If the client did not receive an answer from the server, the client does not know if the transaction succeeded, and it can repeat the transaction, using exactly-once insertion properties.
- ClickHouse uses MVCC with snapshot isolation internally for concurrent transactions.
- All ACID properties are valid even in the case of server kill/crash.
- Either insert_quorum into different AZs or fsync should be enabled to ensure durable inserts in the typical setup.
- "Consistency" in ACID terms does not cover the semantics of distributed systems (see https://jepsen.io/consistency), which is controlled by different settings (select_sequential_consistency).
- This explanation does not cover a new transactions feature that allows full-featured transactions over multiple tables, materialized views, multiple SELECTs, etc. (see the next section on Transactions, Commit, and Rollback).
Transactions, Commit, and Rollback {#transactions-commit-and-rollback}
In addition to the functionality described at the top of this document, ClickHouse has experimental support for transactions, commits, and rollback functionality.
Requirements {#requirements}
- Deploy ClickHouse Keeper or ZooKeeper to track transactions
- Atomic DB only (the default)
- Non-Replicated MergeTree table engine only
- Enable experimental transaction support by adding this setting in `config.d/transactions.xml`:
```xml
<clickhouse>
    <allow_experimental_transactions>1</allow_experimental_transactions>
</clickhouse>
```
Notes {#notes-1}
- This is an experimental feature, and changes should be expected.
- If an exception occurs during a transaction, you cannot commit the transaction. This includes all exceptions, including `UNKNOWN_FUNCTION` exceptions caused by typos.
- Nested transactions are not supported; finish the current transaction and start a new one instead.
Configuration {#configuration}
These examples are with a single-node ClickHouse server with ClickHouse Keeper enabled.
Enable experimental transaction support {#enable-experimental-transaction-support}
```xml title=/etc/clickhouse-server/config.d/transactions.xml
<clickhouse>
    <allow_experimental_transactions>1</allow_experimental_transactions>
</clickhouse>
```
Basic configuration for a single ClickHouse server node with ClickHouse Keeper enabled {#basic-configuration-for-a-single-clickhouse-server-node-with-clickhouse-keeper-enabled}
:::note
See the deployment documentation for details on deploying ClickHouse server and a proper quorum of ClickHouse Keeper nodes. The configuration shown here is for experimental purposes.
:::
```xml title=/etc/clickhouse-server/config.d/config.xml
<clickhouse replace="true">
    <logger>
        <level>debug</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <display_name>node 1</display_name>
    <listen_host>0.0.0.0</listen_host>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <zookeeper>
        <node>
            <host>clickhouse-01</host>
            <port>9181</port>
        </node>
    </zookeeper>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>information</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-01</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>
```
Example {#example}
Verify that experimental transactions are enabled {#verify-that-experimental-transactions-are-enabled}
Issue a `BEGIN TRANSACTION` or `START TRANSACTION` followed by a `ROLLBACK` to verify that experimental transactions are enabled, and that ClickHouse Keeper is enabled, as it is used to track transactions.
```sql
BEGIN TRANSACTION
```
```response
Ok.
```
:::tip
If you see the following error, then check your configuration file to make sure that `allow_experimental_transactions` is set to `1` (or any value other than `0` or `false`).
```response
Code: 48. DB::Exception: Received from localhost:9000.
DB::Exception: Transactions are not supported.
(NOT_IMPLEMENTED)
```
You can also check ClickHouse Keeper by issuing:
```bash
echo ruok | nc localhost 9181
```
ClickHouse Keeper should respond with `imok`.
:::
```sql
ROLLBACK
```
```response
Ok.
```
Create a table for testing {#create-a-table-for-testing}
:::tip
Creation of tables is not transactional. Run this DDL query outside of a transaction.
:::
```sql
CREATE TABLE mergetree_table
(
    `n` Int64
)
ENGINE = MergeTree
ORDER BY n
```
```response
Ok.
```
Begin a transaction and insert a row {#begin-a-transaction-and-insert-a-row}
```sql
BEGIN TRANSACTION
```
```response
Ok.
```
```sql
INSERT INTO mergetree_table FORMAT Values (10)
```
```response
Ok.
```
```sql
SELECT *
FROM mergetree_table
```
```response
┌──n─┐
│ 10 │
└────┘
```
:::note
You can query the table from within a transaction and see that the row was inserted even though it has not yet been committed.
:::
Rollback the transaction, and query the table again {#rollback-the-transaction-and-query-the-table-again}
Verify that the transaction is rolled back:
```sql
ROLLBACK
```
```response
Ok.
```
```sql
SELECT *
FROM mergetree_table
```
```response
Ok.
0 rows in set. Elapsed: 0.002 sec.
```
Complete a transaction and query the table again {#complete-a-transaction-and-query-the-table-again}
```sql
BEGIN TRANSACTION
```
```response
Ok.
```
```sql
INSERT INTO mergetree_table FORMAT Values (42)
```
```response
Ok.
```
```sql
COMMIT
```
```response
Ok. Elapsed: 0.002 sec.
```
```sql
SELECT *
FROM mergetree_table
```
```response
┌──n─┐
│ 42 │
└────┘
```
Transactions introspection {#transactions-introspection}
You can inspect transactions by querying the `system.transactions` table, but note that you cannot query that table from a session that is in a transaction. Open a second `clickhouse client` session to query that table.
```sql
SELECT *
FROM system.transactions
FORMAT Vertical
```
```response
Row 1:
──────
tid:         (33,61,'51e60bce-6b82-4732-9e1d-b40705ae9ab8')
tid_hash:    11240433987908122467
elapsed:     210.017820947
is_readonly: 1
state:       RUNNING
```
More Details {#more-details}
See this meta issue to find much more extensive tests and to keep up to date with the progress.
slug: /use-cases
title: 'Use Case Guides'
pagination_prev: null
pagination_next: null
description: 'Landing page for use case guides'
doc_type: 'landing-page'
keywords: ['use cases', 'observability', 'time-series', 'data lake', 'machine learning', 'AI']
In this section of the docs you can find our use case guides.
| Page | Description |
|--------------------------------|-------------------------------------------------------------------------------|
| Observability | Use case guide on how to set up and use ClickHouse for observability |
| Time-Series | Use case guide on how to set up and use ClickHouse for time-series |
| Data Lake | Use case guide on data lakes in ClickHouse |
| Machine Learning and GenAI | Use case guides for machine learning and GenAI applications with ClickHouse |
description: 'The ClickHouse Playground allows people to experiment with ClickHouse
by running queries instantly, without setting up their server or cluster.'
keywords: ['clickhouse', 'playground', 'getting', 'started', 'docs']
sidebar_label: 'ClickHouse playground'
slug: /getting-started/playground
title: 'ClickHouse Playground'
doc_type: 'guide'
ClickHouse playground
ClickHouse Playground
allows people to experiment with ClickHouse by running queries instantly, without setting up their server or cluster.
Several example datasets are available in Playground.
You can make queries to Playground using any HTTP client, for example
curl
or
wget
, or set up a connection using
JDBC
or
ODBC
drivers. More information about software products that support ClickHouse is available
here
.
Credentials {#credentials}
| Parameter | Value |
|:--------------------|:-----------------------------------|
| HTTPS endpoint |
https://play.clickhouse.com:443/
|
| Native TCP endpoint |
play.clickhouse.com:9440
|
| User |
explorer
or
play
|
| Password | (empty) |
Limitations {#limitations}
The queries are executed as a read-only user. It implies some limitations:
DDL queries are not allowed
INSERT queries are not allowed
The service also has quotas on its usage.
Examples {#examples}
HTTPS endpoint example with
curl
:
bash
curl "https://play.clickhouse.com/?user=explorer" --data-binary "SELECT 'Play ClickHouse'"
TCP endpoint example with
CLI
:
bash
clickhouse client --secure --host play.clickhouse.com --user explorer
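Since the Playground accepts any HTTP client, the same query can also be issued from Python's standard library alone. This is an illustrative sketch: the endpoint and `explorer` user come from the credentials table above, and the request is only constructed here (actually sending it requires network access).

```python
from urllib import parse, request

def build_playground_request(query: str, user: str = "explorer") -> request.Request:
    """Build a POST request for the ClickHouse Playground HTTPS endpoint.

    The query text travels in the request body, mirroring curl's --data-binary.
    """
    url = "https://play.clickhouse.com/?" + parse.urlencode({"user": user})
    return request.Request(url, data=query.encode("utf-8"), method="POST")

req = build_playground_request("SELECT 'Play ClickHouse'")
# Sending it needs network access:
# print(request.urlopen(req).read().decode())
```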
Playground specifications {#specifications}
Our ClickHouse Playground runs with the following specifications:
Hosted on Google Cloud (GCE) in the US Central region (US-Central-1)
3-replica setup
256 GiB of storage and 59 virtual CPUs each. | {"source_file": "playground.md"} | [
-0.0199356060475111,
-0.05376294255256653,
-0.08141379803419113,
0.050389084964990616,
-0.07932606339454651,
-0.07338348031044006,
0.007930414751172066,
0.0009674171451479197,
-0.03523794934153557,
-0.03675887733697891,
-0.012689253315329552,
-0.0199442058801651,
0.11879909038543701,
-0.08... |
1f275efb-bb77-4f96-a2c3-bb0ddd00d00d | description: 'Get started with ClickHouse using our tutorials and example datasets'
keywords: ['clickhouse', 'install', 'tutorial', 'sample', 'datasets']
pagination_next: tutorial
sidebar_label: 'Overview'
sidebar_position: 0
slug: /getting-started/example-datasets/
title: 'Tutorials and Example Datasets'
doc_type: 'landing-page'
Tutorials and example datasets
We have a lot of resources for helping you get started and learn how ClickHouse works:
If you need to get ClickHouse up and running, check out our
Quick Start
The
ClickHouse Tutorial
analyzes a dataset of New York City taxi rides
In addition, the sample datasets provide a great experience on working with ClickHouse,
learning important techniques and tricks, and seeing how to take advantage of the many powerful
functions in ClickHouse. The sample datasets include: | {"source_file": "index.md"} | [
0.008466828614473343,
-0.08182590454816818,
0.018771713599562645,
0.007989412173628807,
-0.040749892592430115,
-0.015725575387477875,
0.02235727570950985,
-0.02460014633834362,
-0.13023681938648224,
-0.0248526930809021,
0.04938402771949768,
0.013767191208899021,
0.018145490437746048,
-0.01... |
d56953ac-05bb-4f17-842b-bebb126a9983 | | Page | Description |
|-----|-----|
|
Amazon Customer Review
| Over 150M customer reviews of Amazon products |
|
AMPLab Big Data Benchmark
| A benchmark dataset used for comparing the performance of data warehousing solutions. |
|
Analyzing Stack Overflow data with ClickHouse
| Analyzing Stack Overflow data with ClickHouse |
|
Anonymized web analytics
| Dataset consisting of two tables containing anonymized web analytics data with hits and visits |
|
Brown University Benchmark
| A new analytical benchmark for machine-generated log data |
|
COVID-19 Open-Data
| COVID-19 Open-Data is a large, open-source database of COVID-19 epidemiological data and related factors like demographics, economics, and government responses |
|
dbpedia dataset
| Dataset containing 1 million articles from Wikipedia and their vector embeddings |
|
Environmental Sensors Data
| Over 20 billion records of data from Sensor.Community, a contributors-driven global sensor network that creates Open Environmental Data. |
|
Foursquare places
| Dataset with over 100 million records containing information about places on a map, such as shops, restaurants, parks, playgrounds, and monuments. |
|
Geo Data using the Cell Tower Dataset
| Learn how to load OpenCelliD data into ClickHouse, connect Apache Superset to ClickHouse and build a dashboard based on data |
|
GitHub Events Dataset
| Dataset containing all events on GitHub from 2011 to Dec 6 2020, with a size of 3.1 billion records. |
|
Hacker News dataset
| Dataset containing 28 million rows of hacker news data. |
|
Hacker News vector search dataset
| Dataset containing 28+ million Hacker News postings & their vector embeddings |
|
LAION 5B dataset
| Dataset containing 100 million vectors from the LAION 5B dataset |
|
Laion-400M dataset
| Dataset containing 400 million images with English image captions |
|
New York Public Library "What's on the Menu?" Dataset
| Dataset containing 1.3 million records of historical data on the menus of hotels, restaurants and cafes with the dishes along with their prices. |
|
New York Taxi Data
| Data for billions of taxi and for-hire vehicle (Uber, Lyft, etc.) trips originating in New York City since 2009 |
|
NOAA Global Historical Climatology Network
| 2.5 billion rows of climate data for the last 120 yrs |
|
NYPD Complaint Data
| Ingest and query Tab Separated Value data in 5 steps |
|
OnTime
| Dataset containing the on-time performance of airline flights |
|
Star Schema Benchmark (SSB, 2009)
| The Star Schema Benchmark (SSB) data set and queries |
|
Taiwan historical weather datasets
| 131 million rows of weather observation data for the last 128 yrs |
|
Terabyte click logs from Criteo
| A terabyte of click logs from Criteo |
|
The UK property prices dataset | {"source_file": "index.md"} | [
0.034394290298223495,
-0.06096586212515831,
-0.02535969950258732,
0.06134810298681259,
0.08179327845573425,
-0.03221427649259567,
-0.012410826981067657,
-0.002484651282429695,
-0.06961147487163544,
0.03511226549744606,
0.053561724722385406,
-0.035836633294820786,
0.03872572258114815,
-0.05... |
9ad93c11-4012-4bd7-919e-c60782948873 | | 131 million rows of weather observation data for the last 128 yrs |
|
Terabyte click logs from Criteo
| A terabyte of click logs from Criteo |
|
The UK property prices dataset
| Learn how to use projections to improve the performance of queries that you run frequently using the UK property dataset, which contains data about prices paid for real-estate property in England and Wales |
|
TPC-DS (2012)
| The TPC-DS benchmark data set and queries. |
|
TPC-H (1999)
| The TPC-H benchmark data set and queries. |
|
WikiStat
| Explore the WikiStat dataset containing 0.5 trillion records. |
|
Writing Queries in ClickHouse using GitHub Data
| Dataset containing all of the commits and changes for the ClickHouse repository |
|
YouTube dataset of dislikes
| A collection of dislikes of YouTube videos. | | {"source_file": "index.md"} | [
0.008093458600342274,
-0.052427612245082855,
0.008911303244531155,
0.0715753510594368,
0.0029073923360556364,
-0.07722875475883484,
0.0010309285717085004,
-0.01821533963084221,
-0.06597612053155899,
0.07475719600915909,
0.007721473462879658,
-0.05578972026705742,
0.010748247615993023,
-0.0... |
2cbebbeb-f88d-4253-a455-baa137f6acb7 | title: 'Inserting ClickHouse data'
description: 'How to insert data into ClickHouse'
keywords: ['INSERT', 'Batch Insert']
sidebar_label: 'Inserting ClickHouse data'
slug: /guides/inserting-data
show_related_blogs: true
doc_type: 'guide'
import postgres_inserts from '@site/static/images/guides/postgres-inserts.png';
import Image from '@theme/IdealImage';
Inserting into ClickHouse vs. OLTP databases {#inserting-into-clickhouse-vs-oltp-databases}
As an OLAP (Online Analytical Processing) database, ClickHouse is optimized for high performance and scalability, allowing potentially millions of rows to be inserted per second.
This is achieved through a combination of a highly parallelized architecture and efficient column-oriented compression, but with compromises on immediate consistency.
More specifically, ClickHouse is optimized for append-only operations and offers only eventual consistency guarantees.
In contrast, OLTP databases such as Postgres are specifically optimized for transactional inserts with full ACID compliance, ensuring strong consistency and reliability guarantees.
PostgreSQL uses MVCC (Multi-Version Concurrency Control) to handle concurrent transactions, which involves maintaining multiple versions of the data.
These transactions can potentially involve a small number of rows at a time, with considerable overhead incurred due to the reliability guarantees limiting insert performance.
To achieve high insert performance while maintaining strong consistency guarantees, users should adhere to the simple rules described below when inserting data into ClickHouse.
Following these rules will help to avoid issues users commonly encounter the first time they use ClickHouse, when they try to replicate an insert strategy that works for OLTP databases.
Best practices for Inserts {#best-practices-for-inserts}
Insert in large batch sizes {#insert-in-large-batch-sizes}
By default, each insert sent to ClickHouse causes ClickHouse to immediately create a part of storage containing the data from the insert together with other metadata that needs to be stored.
Therefore, sending a smaller amount of inserts that each contain more data, compared to sending a larger amount of inserts that each contain less data, will reduce the number of writes required.
Generally, we recommend inserting data in fairly large batches of at least 1,000 rows at a time, and ideally between 10,000 to 100,000 rows.
(Further details
here
).
If large batches are not possible, use asynchronous inserts described below.
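A minimal client-side batching sketch (illustrative, not a real client API): rows accumulate in memory and are flushed as one large insert once the batch size is reached. `flush_fn` is a stand-in for whatever call actually performs the INSERT in your client library.

```python
class BatchBuffer:
    """Accumulate rows client-side and flush them in large batches.

    batch_size follows the guideline above of sending roughly 10,000
    rows (or more) per insert rather than many tiny inserts.
    """
    def __init__(self, flush_fn, batch_size: int = 10_000):
        self.flush_fn = flush_fn      # stand-in for the real insert call
        self.batch_size = batch_size
        self.rows = []

    def add(self, row):
        self.rows.append(row)
        if len(self.rows) >= self.batch_size:
            self.flush()

    def flush(self):
        # Send whatever has accumulated, then start a fresh batch.
        if self.rows:
            self.flush_fn(self.rows)
            self.rows = []

batches = []
buf = BatchBuffer(batches.append, batch_size=3)
for i in range(7):
    buf.add((i, f"row-{i}"))
buf.flush()  # flush the remaining partial batch
# batches now holds three inserts of 3, 3, and 1 rows
```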
Ensure consistent batches for idempotent retries {#ensure-consistent-batches-for-idempotent-retries}
By default, inserts into ClickHouse are synchronous and idempotent (i.e. performing the same insert operation multiple times has the same effect as performing it once).
For tables of the MergeTree engine family, ClickHouse will, by default, automatically
deduplicate inserts
.
This means inserts remain resilient in the following cases: | {"source_file": "inserting-data.md"} | [
-0.038100820034742355,
-0.0302429236471653,
-0.06433366984128952,
0.07148069888353348,
-0.022944511845707893,
-0.08997925370931625,
-0.03640452027320862,
0.004906552378088236,
0.012953286059200764,
-0.011474791914224625,
0.05086924508213997,
0.0892898216843605,
0.04873773455619812,
-0.0724... |
4f0d67aa-c1f9-4267-ac3d-60e08b8cc6a2 | This means inserts remain resilient in the following cases:
(i) If the node receiving the data has issues, the insert query will time out (or give a more specific error) and not get an acknowledgment.
(ii) If the data got written by the node but the acknowledgement can't be returned to the sender of the query because of network interruptions, the sender will either get a time-out or a network error.
From the client's perspective, (i) and (ii) can be hard to distinguish. However, in both cases, the unacknowledged insert can just be immediately retried.
As long as the retried insert query contains the same data in the same order, ClickHouse will automatically ignore the retried insert if the (unacknowledged) original insert succeeded.
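A toy simulation of this retry semantics — not ClickHouse's actual implementation, but it captures why a retried batch must contain the same rows in the same order: deduplication can be modeled as a hash over the ordered block contents.

```python
import hashlib

class DedupingServer:
    """Toy model of insert deduplication: a block is identified by a
    hash over its rows *in order*, so an identical retry is a no-op,
    while a reordered or altered batch is treated as new data."""
    def __init__(self):
        self.seen_hashes = set()
        self.stored_rows = []

    def insert(self, rows) -> bool:
        block_id = hashlib.sha256(repr(rows).encode()).hexdigest()
        if block_id in self.seen_hashes:
            return False  # duplicate block: silently ignored
        self.seen_hashes.add(block_id)
        self.stored_rows.extend(rows)
        return True

server = DedupingServer()
batch = [(1, "a"), (2, "b")]
server.insert(batch)        # original (possibly unacknowledged) insert
server.insert(batch)        # identical retry: ignored, no duplicate rows
server.insert(batch[::-1])  # same rows, different order: NOT deduplicated
```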
Insert to a MergeTree table or a distributed table {#insert-to-a-mergetree-table-or-a-distributed-table}
We recommend inserting directly into a MergeTree (or Replicated table), balancing the requests across a set of nodes if the data is sharded, and setting
internal_replication=true
.
This will leave ClickHouse to replicate the data to any available replica shards and ensure the data is eventually consistent.
If this client side load balancing is inconvenient then users can insert via a
distributed table
which will then distribute writes across the nodes. Again, it is advised to set
internal_replication=true
.
It should be noted however that this approach is a little less performant as writes have to be made locally on the node with the distributed table and then sent to the shards.
Use asynchronous inserts for small batches {#use-asynchronous-inserts-for-small-batches}
There are scenarios where client-side batching is not feasible e.g. an observability use case with 100s or 1000s of single-purpose agents sending logs, metrics, traces, etc.
In this scenario real-time transport of that data is key to detect issues and anomalies as quickly as possible.
Furthermore, there is a risk of event spikes in the observed systems, which could potentially cause large memory spikes and related issues when trying to buffer observability data client-side.
If large batches cannot be inserted, users can delegate batching to ClickHouse using
asynchronous inserts
.
With asynchronous inserts, data is inserted into a buffer first and then written to the database storage later in 3 steps, as illustrated by the diagram below:
With asynchronous inserts enabled, ClickHouse:
(1) receives an insert query asynchronously.
(2) writes the query's data into an in-memory buffer first.
(3) sorts and writes the data as a part to the database storage, only when the next buffer flush takes place. | {"source_file": "inserting-data.md"} | [
-0.05028032138943672,
-0.07102103531360626,
0.01507834903895855,
0.1124124601483345,
-0.017719173803925514,
-0.03931275010108948,
-0.00810344610363245,
-0.040206801146268845,
0.05217776820063591,
0.026217438280582428,
0.08571961522102356,
0.04169885814189911,
0.11864804476499557,
-0.102227... |
093a8ff4-112b-4c67-a957-80775a1a8901 | Before the buffer gets flushed, the data of other asynchronous insert queries from the same or other clients can be collected in the buffer.
The part created from the buffer flush will potentially contain the data from several asynchronous insert queries.
Generally, these mechanics shift the batching of data from the client side to the server side (ClickHouse instance).
:::note
Note that the data is not searchable by queries before being flushed to the database storage and that the buffer flush is configurable.
Full details on configuring asynchronous inserts can be found
here
, with a deep dive
here
.
:::
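As one sketch of enabling this per query: `async_insert` and `wait_for_async_insert` are real ClickHouse settings, and over the HTTP interface settings can be passed as URL parameters. The host and port below are illustrative defaults; only the URL is built here.

```python
from urllib.parse import urlencode

def async_insert_url(host: str, query: str, wait: bool = True) -> str:
    """Build an HTTP-interface URL that enables asynchronous inserts.

    async_insert=1 turns on server-side buffering; wait_for_async_insert
    controls whether the client blocks until the buffer is flushed.
    """
    params = {
        "query": query,
        "async_insert": 1,
        "wait_for_async_insert": int(wait),
    }
    return f"http://{host}:8123/?" + urlencode(params)

url = async_insert_url("localhost", "INSERT INTO t FORMAT JSONEachRow")
```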
Use official ClickHouse clients {#use-official-clickhouse-clients}
ClickHouse has clients in the most popular programming languages.
These are optimized to ensure that inserts are performed correctly and natively support asynchronous inserts either directly as in e.g. the
Go client
, or indirectly when enabled in the query, user or connection level settings.
See
Clients and Drivers
for a full list of available ClickHouse clients and drivers.
Prefer the native format {#prefer-the-native-format}
ClickHouse supports many
input formats
at insert (and query) time.
This is a significant difference with OLTP databases and makes loading data from external sources much easier - especially when coupled with
table functions
and the ability to load data from files on disk.
These formats are ideal for ad hoc data loading and data engineering tasks.
For applications looking to achieve optimal insert performance, users should insert using the
Native
format.
This is supported by most clients (such as Go and Python) and ensures the server has to do a minimal amount of work since this format is already column-oriented.
By doing so the responsibility for converting data into a column-oriented format is placed on the client side. This is important for scaling inserts efficiently.
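The client-side transposition this implies can be sketched in a few lines — illustrative only, since real clients also handle types and wire encoding:

```python
def rows_to_columns(rows, column_names):
    """Transpose row-oriented data into a column-oriented layout,
    the shape that column-oriented formats transmit on the wire."""
    columns = {name: [] for name in column_names}
    for row in rows:
        for name, value in zip(column_names, row):
            columns[name].append(value)
    return columns

rows = [(101, "Hello"), (102, "World")]
cols = rows_to_columns(rows, ["user_id", "message"])
# cols == {"user_id": [101, 102], "message": ["Hello", "World"]}
```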
Alternatively, users can use
RowBinary format
(as used by the Java client) if a row format is preferred - this is typically easier to write than the Native format.
This is more efficient, in terms of compression, network overhead, and processing on the server, than alternative row formats such as
JSON
.
The
JSONEachRow
format can be considered for users with lower write throughput looking to integrate quickly. Users should be aware this format will incur a CPU overhead in ClickHouse for parsing.
Use the HTTP interface {#use-the-http-interface} | {"source_file": "inserting-data.md"} | [
0.02189556509256363,
-0.05514637753367424,
-0.030026767402887344,
0.0774138942360878,
-0.117730513215065,
-0.03955148160457611,
0.04781998693943024,
-0.08485709130764008,
0.042619574815034866,
-0.0199191402643919,
0.032360613346099854,
0.05245910584926605,
0.021106457337737083,
-0.11963519... |
fabbbafa-14d5-494b-a472-9df6c415bb7d | Use the HTTP interface {#use-the-http-interface}
Unlike many traditional databases, ClickHouse supports an HTTP interface.
Users can use this for both inserting and querying data, using any of the above formats.
This is often preferable to ClickHouse's native protocol as it allows traffic to be easily switched with load balancers.
We expect small differences in insert performance with the native protocol, which incurs a little less overhead.
Existing clients use either of these protocols (in some cases both e.g. the Go client).
The native protocol does allow query progress to be easily tracked.
See
HTTP Interface
for further details.
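As a sketch of the idea, the standard library alone is enough to build such an insert request over the HTTP interface — the host, table name, and rows below are placeholders, and the request is only constructed, not sent.

```python
from urllib import parse, request

def http_insert_request(host: str, table: str, csv_rows: str) -> request.Request:
    """Build an HTTP-interface request that inserts CSV data.

    The INSERT statement goes in the `query` URL parameter and the raw
    rows travel in the request body, so any HTTP client can stream data in.
    """
    query = f"INSERT INTO {table} FORMAT CSV"
    url = f"http://{host}:8123/?" + parse.urlencode({"query": query})
    return request.Request(url, data=csv_rows.encode("utf-8"), method="POST")

req = http_insert_request("localhost", "helloworld.my_first_table",
                          "101,Hello,2024-11-13 20:01:22,-1.0\n")
# request.urlopen(req) would run it against a live server
```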
Basic example {#basic-example}
You can use the familiar
INSERT INTO TABLE
command with ClickHouse. Let's insert some data into the table that we created in the start guide
"Creating Tables in ClickHouse"
.
sql
INSERT INTO helloworld.my_first_table (user_id, message, timestamp, metric) VALUES
(101, 'Hello, ClickHouse!', now(), -1.0 ),
(102, 'Insert a lot of rows per batch', yesterday(), 1.41421 ),
(102, 'Sort your data based on your commonly-used queries', today(), 2.718 ),
(101, 'Granules are the smallest chunks of data read', now() + 5, 3.14159 )
To verify that worked, we'll run the following
SELECT
query:
sql
SELECT * FROM helloworld.my_first_table
Which returns:
response
user_id message timestamp metric
101 Hello, ClickHouse! 2024-11-13 20:01:22 -1
101 Granules are the smallest chunks of data read 2024-11-13 20:01:27 3.14159
102 Insert a lot of rows per batch 2024-11-12 00:00:00 1.41421
102 Sort your data based on your commonly-used queries 2024-11-13 00:00:00 2.718
Loading data from Postgres {#loading-data-from-postgres}
For loading data from Postgres, users can use:
ClickPipes
, an ETL tool specifically designed for PostgreSQL database replication. This is available in both:
ClickHouse Cloud - available through our
managed ingestion service
in ClickPipes.
Self-managed - via the
PeerDB open-source project
.
The
PostgreSQL table engine
to read data directly as shown in previous examples. Typically appropriate if batch replication based on a known watermark, e.g., timestamp, is sufficient or if it's a one-off migration. This approach can scale to 10's of millions of rows. Users looking to migrate larger datasets should consider multiple requests, each dealing with a chunk of the data. Staging tables can be used for each chunk prior to its partitions being moved to a final table. This allows failed requests to be retried. For further details on this bulk-loading strategy, see here. | {"source_file": "inserting-data.md"} | [
0.0020493795163929462,
-0.03399118408560753,
-0.05944714695215225,
0.05958399549126625,
-0.13518576323986053,
-0.06507924199104309,
0.009319039061665535,
0.009873844683170319,
-0.06299099326133728,
0.0006911059026606381,
0.04201661795377731,
-0.04188770055770874,
0.07287292927503586,
-0.02... |
ff51f520-cfcb-4698-913f-836152cda0e2 | Data can be exported from PostgreSQL in CSV format. This can then be inserted into ClickHouse from either local files or via object storage using table functions.
:::note Need help inserting large datasets?
If you need help inserting large datasets or encounter any errors when importing data into ClickHouse Cloud, please contact us at support@clickhouse.com and we can assist.
:::
Inserting data from the command line {#inserting-data-from-command-line}
Prerequisites
- You have
installed
ClickHouse
-
clickhouse-server
is running
- You have access to a terminal with
wget
,
zcat
and
curl
In this example you'll see how to insert a CSV file into ClickHouse from the command line using clickhouse-client in batch mode. For more information and examples of inserting data via command line using clickhouse-client in batch mode, see
"Batch mode"
.
We'll be using the
Hacker News dataset
for this example, which contains 28 million rows of Hacker News data.
Download the CSV {#download-csv}
Run the following command to download a CSV version of the dataset from our public S3 bucket:
bash
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.csv.gz
At 4.6 GB and 28 million rows, this compressed file should take 5-10 minutes to download.
Create the table {#create-table}
With
clickhouse-server
running, you can create an empty table with the following schema directly from the command line using
clickhouse-client
in batch mode:
bash
clickhouse-client <<'_EOF'
CREATE TABLE hackernews(
`id` UInt32,
`deleted` UInt8,
`type` Enum('story' = 1, 'comment' = 2, 'poll' = 3, 'pollopt' = 4, 'job' = 5),
`by` LowCardinality(String),
`time` DateTime,
`text` String,
`dead` UInt8,
`parent` UInt32,
`poll` UInt32,
`kids` Array(UInt32),
`url` String,
`score` Int32,
`title` String,
`parts` Array(UInt32),
`descendants` Int32
)
ENGINE = MergeTree
ORDER BY id
_EOF
If there are no errors, then the table has been successfully created. In the command above single quotes are used around the heredoc delimiter (
_EOF
) to prevent any interpolation. Without single quotes it would be necessary to escape the backticks around the column names.
Insert the data from the command line {#insert-data-via-cmd}
Next run the command below to insert the data from the file you downloaded earlier into your table:
bash
zcat < hacknernews.csv.gz | ./clickhouse client --query "INSERT INTO hackernews FORMAT CSV"
As our data is compressed, we need to first decompress the file using a tool like
gzip
,
zcat
, or similar, and then pipe the decompressed data into
clickhouse-client
with the appropriate
INSERT
statement and
FORMAT
. | {"source_file": "inserting-data.md"} | [
0.049483854323625565,
-0.04189093038439751,
-0.12154702842235565,
0.04855967313051224,
-0.028316698968410492,
-0.007883410900831223,
-0.02089238353073597,
-0.009686303324997425,
-0.04600697383284569,
0.07658398896455765,
0.029158281162381172,
-0.008484046906232834,
0.07610980421304703,
-0.... |
cfb6f6cd-362b-48ab-be2c-3f59faf61b51 | :::note
When inserting data with clickhouse-client in interactive mode, it is possible to let ClickHouse handle the decompression for you on insert using the
COMPRESSION
clause. ClickHouse can automatically detect the compression type from the file extension, but you can also specify it explicitly.
The query to insert would then look like this:
bash
clickhouse-client --query "INSERT INTO hackernews FROM INFILE 'hacknernews.csv.gz' COMPRESSION 'gzip' FORMAT CSV;"
:::
When the data has finished inserting you can run the following command to see the number of rows in the
hackernews
table:
bash
clickhouse-client --query "SELECT formatReadableQuantity(count(*)) FROM hackernews"
28.74 million
Inserting data via command line with curl {#insert-using-curl}
In the previous steps you first downloaded the csv file to your local machine using
wget
. It is also possible to directly insert the data from the remote URL using a single command.
Run the following command to truncate the data from the
hackernews
table so that you can insert it again without the intermediate step of downloading to your local machine:
bash
clickhouse-client --query "TRUNCATE hackernews"
Now run:
bash
curl https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.csv.gz | zcat | clickhouse-client --query "INSERT INTO hackernews FORMAT CSV"
You can now run the same command as previously to verify that the data was inserted again:
bash
clickhouse-client --query "SELECT formatReadableQuantity(count(*)) FROM hackernews"
28.74 million | {"source_file": "inserting-data.md"} | [
0.0048535289242863655,
0.039197880774736404,
-0.12046777456998825,
0.07953275740146637,
-0.005345960147678852,
-0.06439689546823502,
-0.010138735175132751,
-0.012192402966320515,
0.037680357694625854,
0.07060099393129349,
-0.0011776707833632827,
-0.008957595564424992,
0.13555942475795746,
... |
7bd6ed0b-24ae-4d0a-8761-b9881dbcabff | title: 'Troubleshooting'
description: 'Installation troubleshooting guide'
slug: /guides/troubleshooting
doc_type: 'guide'
keywords: ['troubleshooting', 'debugging', 'problem solving', 'errors', 'diagnostics']
Installation {#installation}
Cannot import GPG keys from keyserver.ubuntu.com with apt-key {#cannot-import-gpg-keys-from-keyserverubuntucom-with-apt-key}
The
apt-key
feature with the
Advanced package tool (APT) has been deprecated
. Users should use the
gpg
command instead. Please refer the
install guide
article.
Cannot import GPG keys from keyserver.ubuntu.com with gpg {#cannot-import-gpg-keys-from-keyserverubuntucom-with-gpg}
See if your
gpg
is installed:
shell
sudo apt-get install gnupg
Cannot get deb packages from ClickHouse repository with apt-get {#cannot-get-deb-packages-from-clickhouse-repository-with-apt-get}
Check firewall settings.
If you cannot access the repository for any reason, download packages as described in the
install guide
article and install them manually using the
sudo dpkg -i <packages>
command. You will also need the
tzdata
package.
Cannot update deb packages from ClickHouse repository with apt-get {#cannot-update-deb-packages-from-clickhouse-repository-with-apt-get}
This issue may happen when the GPG key has changed.
Please use the manual from the
setup
page to update the repository configuration.
You get different warnings with
apt-get update
{#you-get-different-warnings-with-apt-get-update}
The complete warning messages look like one of the following:
shell
N: Skipping acquire of configured file 'main/binary-i386/Packages' as repository 'https://packages.clickhouse.com/deb stable InRelease' doesn't support architecture 'i386'
shell
E: Failed to fetch https://packages.clickhouse.com/deb/dists/stable/main/binary-amd64/Packages.gz File has unexpected size (30451 != 28154). Mirror sync in progress?
shell
E: Repository 'https://packages.clickhouse.com/deb stable InRelease' changed its 'Origin' value from 'Artifactory' to 'ClickHouse'
E: Repository 'https://packages.clickhouse.com/deb stable InRelease' changed its 'Label' value from 'Artifactory' to 'ClickHouse'
N: Repository 'https://packages.clickhouse.com/deb stable InRelease' changed its 'Suite' value from 'stable' to ''
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.
shell
Err:11 https://packages.clickhouse.com/deb stable InRelease
400 Bad Request [IP: 172.66.40.249 443]
To resolve the above issue, please use the following script:
shell
sudo rm /var/lib/apt/lists/packages.clickhouse.com_* /var/lib/dpkg/arch /var/lib/apt/lists/partial/packages.clickhouse.com_*
sudo apt-get clean
sudo apt-get autoclean
Can't get packages with Yum because of wrong signature {#cant-get-packages-with-yum-because-of-wrong-signature}
Possible issue: the cache is corrupted; it may have broken after the GPG key was updated in 2022-09. | {"source_file": "troubleshooting.md"} | [
-0.04578205198049545,
-0.1123371422290802,
0.017182309180498123,
-0.029506482183933258,
-0.05119550973176956,
-0.05754481256008148,
-0.014297252520918846,
0.0051140920259058475,
-0.0714116245508194,
0.05493796244263649,
0.05992742255330086,
0.00010764376929728314,
-0.02107856795191765,
-0.... |
38cc6580-8f53-493d-a83a-f78a0bb8923d | Possible issue: the cache is corrupted; it may have broken after the GPG key was updated in 2022-09.
The solution is to clean out the cache and lib directory for Yum:
shell
sudo find /var/lib/yum/repos/ /var/cache/yum/ -name 'clickhouse-*' -type d -exec rm -rf {} +
sudo rm -f /etc/yum.repos.d/clickhouse.repo
After that follow the
install guide
Connecting to the server {#connecting-to-the-server}
Possible issues:
The server is not running.
Unexpected or wrong configuration parameters.
Server is not running {#server-is-not-running}
Check if server is running {#check-if-server-is-running}
shell
sudo service clickhouse-server status
If the server is not running, start it with the command:
shell
sudo service clickhouse-server start
Check the logs {#check-the-logs}
The main log of
clickhouse-server
is in
/var/log/clickhouse-server/clickhouse-server.log
by default.
If the server started successfully, you should see the strings:
<Information> Application: starting up.
— Server started.
<Information> Application: Ready for connections.
— Server is running and ready for connections.
If
clickhouse-server
start failed with a configuration error, you should see the
<Error>
string with an error description. For example:
plaintext
2019.01.11 15:23:25.549505 [ 45 ] {} <Error> ExternalDictionaries: Failed reloading 'event2id' external dictionary: Poco::Exception. Code: 1000, e.code() = 111, e.displayText() = Connection refused, e.what() = Connection refused
If you do not see an error at the end of the file, look through the entire file starting from the string:
plaintext
<Information> Application: starting up.
If you try to start a second instance of
clickhouse-server
on the server, you see the following log:
```plaintext
2019.01.11 15:25:11.151730 [ 1 ] {}
: Starting ClickHouse 19.1.0 with revision 54413
2019.01.11 15:25:11.154578 [ 1 ] {}
Application: starting up
2019.01.11 15:25:11.156361 [ 1 ] {}
StatusFile: Status file ./status already exists - unclean restart. Contents:
PID: 8510
Started at: 2019-01-11 15:24:23
Revision: 54413
2019.01.11 15:25:11.156673 [ 1 ] {}
Application: DB::Exception: Cannot lock file ./status. Another server instance in same directory is already running.
2019.01.11 15:25:11.156682 [ 1 ] {}
Application: shutting down
2019.01.11 15:25:11.156686 [ 1 ] {}
Application: Uninitializing subsystem: Logging Subsystem
2019.01.11 15:25:11.156716 [ 2 ] {}
BaseDaemon: Stop SignalListener thread
```
See systemd logs {#see-systemd-logs}
If you do not find any useful information in
clickhouse-server
logs or there aren't any logs, you can view
systemd
logs using the command:
shell
sudo journalctl -u clickhouse-server
Start clickhouse-server in interactive mode {#start-clickhouse-server-in-interactive-mode}
shell
sudo -u clickhouse /usr/bin/clickhouse-server --config-file /etc/clickhouse-server/config.xml | {"source_file": "troubleshooting.md"} | [
0.020102787762880325,
-0.12693659961223602,
0.02464870549738407,
-0.054466329514980316,
0.026076195761561394,
-0.07797371596097946,
-0.0303866658359766,
-0.04941001534461975,
-0.015480847097933292,
-0.0009785112924873829,
0.05549090355634689,
0.02743775211274624,
-0.016236677765846252,
0.0... |
d061c0dd-c515-4161-938b-813bfb2580d1 | Start clickhouse-server in interactive mode {#start-clickhouse-server-in-interactive-mode}
shell
sudo -u clickhouse /usr/bin/clickhouse-server --config-file /etc/clickhouse-server/config.xml
This command starts the server as an interactive app with standard parameters of the autostart script. In this mode
clickhouse-server
prints all the event messages in the console.
Configuration parameters {#configuration-parameters}
Check:
Docker settings:
If you run ClickHouse in Docker in an IPv6 network, make sure that
network=host
is set.
Endpoint settings.
Check
listen_host
and
tcp_port
settings.
ClickHouse server accepts localhost connections only by default.
HTTP protocol settings:
Check protocol settings for the HTTP API.
Secure connection settings.
Check:
The
tcp_port_secure
setting.
Settings for
SSL certificates
.
Use proper parameters while connecting. For example, use the
port_secure
parameter with
clickhouse_client
.
User settings:
You might be using the wrong user name or password.
Query processing {#query-processing}
If ClickHouse is not able to process the query, it sends an error description to the client. In the
clickhouse-client
you get a description of the error in the console. If you are using the HTTP interface, ClickHouse sends the error description in the response body. For example:
shell
$ curl 'http://localhost:8123/' --data-binary "SELECT a"
Code: 47, e.displayText() = DB::Exception: Unknown identifier: a. Note that there are no tables (FROM clause) in your query, context: required_names: 'a' source_tables: table_aliases: private_aliases: column_aliases: public_columns: 'a' masked_columns: array_join_columns: source_columns: , e.what() = DB::Exception
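Because the error body always begins with the numeric code, it is easy to handle programmatically. The following is an illustrative Python sketch (the function name is made up for this example, not part of any official client) that extracts the code from such a response body:

```python
import re

def parse_clickhouse_error(body: str):
    """Extract the numeric error code from a ClickHouse HTTP error body.

    Returns the code as an int, or None if the body does not look like
    an error message.
    """
    match = re.match(r"Code:\s*(\d+)", body)
    return int(match.group(1)) if match else None

body = ("Code: 47, e.displayText() = DB::Exception: Unknown identifier: a. "
        "..., e.what() = DB::Exception")
print(parse_clickhouse_error(body))  # → 47
```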
If you start
clickhouse-client
with the
stack-trace
parameter, ClickHouse returns the server stack trace with the description of an error.
You might see a message about a broken connection. In this case, you can repeat the query. If the connection breaks every time you perform the query, check the server logs for errors.
Efficiency of query processing {#efficiency-of-query-processing}
If you see that ClickHouse is working too slowly, you need to profile the load on the server resources and network for your queries.
You can use the clickhouse-benchmark utility to profile queries. It shows the number of queries processed per second, the number of rows processed per second, and percentiles of query processing times. | {"source_file": "troubleshooting.md"} | [
0.07283979654312134,
0.008842122741043568,
-0.031340714544057846,
-0.03794168680906296,
-0.05742799490690231,
-0.06951447576284409,
-0.03849184140563011,
-0.042289625853300095,
-0.017534760758280754,
-0.003120464039966464,
-0.020624810829758644,
-0.06011374294757843,
-0.007074594032019377,
... |
47e3737e-4da0-4bf2-8991-83fdd9f8f5c2 | title: 'Manage and Deploy Overview'
description: 'Overview page for Manage and Deploy'
slug: /guides/manage-and-deploy-index
doc_type: 'landing-page'
keywords: ['deployment', 'management', 'administration', 'operations', 'guides']
Manage and deploy
This section contains the following topics: | {"source_file": "manage-and-deploy-index.md"} | [
0.04736379534006119,
-0.051575422286987305,
-0.009690607897937298,
-0.002721769269555807,
0.024810289964079857,
-0.04105166345834732,
0.009968504309654236,
-0.022843142971396446,
-0.09927590936422348,
0.06089617311954498,
0.02837439998984337,
0.03506328538060188,
0.03707056865096092,
0.027... |
d3b4e2d8-a9b7-4b74-a7fe-0ead63e0049c | | Topic | Description |
|-------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------|
|
Deployment and Scaling
| Working deployment examples based on the advice provided to ClickHouse users by the ClickHouse Support and Services organization. |
|
Separation of Storage and Compute
| Guide exploring how you can use ClickHouse and S3 to implement an architecture with separated storage and compute. |
|
Sizing and hardware recommendations
| Guide discussing general recommendations regarding hardware, compute, memory, and disk configurations for open-source users. |
|
Configuring ClickHouse Keeper
| Information and examples on how to configure ClickHouse Keeper. |
|
Network ports
| List of network ports used by ClickHouse. |
|
Re-balancing Shards
| Recommendations on re-balancing shards. |
|
Does ClickHouse support multi-region replication?
| FAQ on multi-region replication. |
|
Which ClickHouse version to use in production?
| FAQ on ClickHouse versions for production use. |
|
Cluster Discovery
| Information and examples of ClickHouse's cluster discovery feature. |
|
Monitoring
| Information on how you can monitor hardware resource utilization and server metrics of ClickHouse. |
|
Tracing ClickHouse with OpenTelemetry
| Information on using OpenTelemetry with ClickHouse. |
|
Quotas
| Information and examples on quotas in ClickHouse. |
|
Secured Communication with Zookeeper | {"source_file": "manage-and-deploy-index.md"} | [
0.03133268281817436,
0.02908882312476635,
-0.005776834674179554,
-0.03413020074367523,
0.012673391960561275,
-0.02501087822020054,
0.005750063806772232,
0.03251589462161064,
-0.03307458013296127,
-0.018630672246217728,
0.04859962686896324,
-0.018744628876447678,
0.059898268431425095,
-0.01... |
9422433e-c191-44cc-9495-5e30d075c4ec | |
Secured Communication with Zookeeper
| Guide to setting up secured communication between ClickHouse and Zookeeper. |
|
Startup Scripts
| Example of how to run startup scripts during startup, useful for migrations or automatic schema creation. |
|
External Disks for Storing Data
| Information and examples on configuring external storage with ClickHouse. |
|
Allocation profiling
| Information and examples on allocation sampling and profiling with jemalloc. |
|
Backup and Restore
| Guide to backing up to a local disk or external storage. |
|
Caches
| Explainer on the various cache types in ClickHouse. |
|
Workload scheduling
| Explainer on workload scheduling in ClickHouse. |
|
Self-managed Upgrade
| Guidelines on carrying out a self-managed upgrade. |
|
Troubleshooting
| Assorted troubleshooting tips. |
|
Usage Recommendations
| Assorted ClickHouse hardware and software usage recommendations. |
|
Distributed DDL
| Explainer of the
ON CLUSTER
clause. | | {"source_file": "manage-and-deploy-index.md"} | [
-0.02014504000544548,
-0.045586276799440384,
-0.026580529287457466,
-0.023722756654024124,
0.017346879467368126,
-0.06939741224050522,
0.013746681623160839,
0.04305173456668854,
-0.10784661769866943,
0.041785482317209244,
-0.004478951450437307,
0.005755463615059853,
-0.01667214371263981,
-... |
a5258569-bbc4-4d0f-88df-5f02c6e76972 | sidebar_position: 1
sidebar_label: 'Creating tables'
title: 'Creating tables in ClickHouse'
slug: /guides/creating-tables
description: 'Learn about Creating Tables in ClickHouse'
keywords: ['creating tables', 'CREATE TABLE', 'table creation', 'database guide', 'MergeTree engine']
doc_type: 'guide'
Creating tables in ClickHouse
Like most databases, ClickHouse logically groups tables into
databases
. Use the
CREATE DATABASE
command to create a new database in ClickHouse:
sql
CREATE DATABASE IF NOT EXISTS helloworld
Similarly, use
CREATE TABLE
to define a new table. If you do not specify the database name, the table will be in the
default
database.
The following table named
my_first_table
is created in the
helloworld
database:
sql
CREATE TABLE helloworld.my_first_table
(
user_id UInt32,
message String,
timestamp DateTime,
metric Float32
)
ENGINE = MergeTree()
PRIMARY KEY (user_id, timestamp)
In the example above,
my_first_table
is a
MergeTree
table with four columns:
user_id
: a 32-bit unsigned integer
message
: a
String
data type, which replaces types like
VARCHAR
,
BLOB
,
CLOB
and others from other database systems
timestamp
: a
DateTime
value, which represents an instant in time
metric
: a 32-bit floating point number
:::note
The table engine determines:
- How and where the data is stored
- Which queries are supported
- Whether or not the data is replicated
There are many engines to choose from, but for a simple table on a single-node ClickHouse server,
MergeTree
is your likely choice.
:::
A brief intro to primary keys {#a-brief-intro-to-primary-keys}
Before you go any further, it is important to understand how primary keys work in ClickHouse (the implementation
of primary keys might seem unexpected!):
primary keys in ClickHouse are
not unique
for each row in a table
The primary key of a ClickHouse table determines how the data is sorted when written to disk. Every 8,192 rows or 10MB of
data (referred to as the
index granularity
) creates an entry in the primary key index file. This granularity concept
creates a
sparse index
that can easily fit in memory, and the granules represent a stripe of the smallest amount of
column data that gets processed during
SELECT
queries.
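The sparse index can be sketched in a few lines of Python. This is an illustrative model, not ClickHouse's implementation: one mark is kept per granule of 8,192 rows, and a binary search over the marks locates the granule that may contain a key:

```python
from bisect import bisect_right

GRANULARITY = 8192  # rows per granule (the default index_granularity)

# Simulated sorted primary-key column: one value per row, as stored on disk.
rows = list(range(100_000))

# Sparse index: record only the first key of every granule.
marks = [rows[i] for i in range(0, len(rows), GRANULARITY)]

def granule_for_key(key):
    """Return the index of the granule that may contain `key`."""
    return bisect_right(marks, key) - 1

print(granule_for_key(20_000))  # → 2, since 20_000 // 8192 == 2
print(len(marks))               # → 13 marks cover 100_000 rows
```

The point of the sketch: 13 index entries are enough to navigate 100,000 rows, which is why the sparse index fits comfortably in memory.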
The primary key can be defined using the
PRIMARY KEY
parameter. If you define a table without a
PRIMARY KEY
specified,
then the key becomes the tuple specified in the
ORDER BY
clause. If you specify both a
PRIMARY KEY
and an
ORDER BY
, the primary key must be a prefix of the sort order.
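The prefix rule can be illustrated with a tiny hypothetical checker (the function name is made up for this sketch):

```python
def is_valid_primary_key(primary_key, order_by):
    """ClickHouse requires PRIMARY KEY to be a prefix of ORDER BY."""
    return order_by[:len(primary_key)] == list(primary_key)

print(is_valid_primary_key(["user_id"], ["user_id", "timestamp"]))    # → True
print(is_valid_primary_key(["timestamp"], ["user_id", "timestamp"]))  # → False
```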
The primary key is also the sorting key, which is a tuple of
(user_id, timestamp)
. Therefore, the data stored in each
column file will be sorted by
user_id
, then
timestamp
.
:::tip
For more details, check out the
Modeling Data training module
in ClickHouse Academy.
::: | {"source_file": "creating-tables.md"} | [
0.0750913992524147,
-0.09663616865873337,
-0.05254790931940079,
0.03588397055864334,
-0.09249358624219894,
-0.08777209371328354,
0.04940657317638397,
0.059691838920116425,
-0.09224808216094971,
0.022994481027126312,
0.09213200211524963,
-0.06890828907489777,
0.09920024871826172,
-0.0527879... |
4c16b9b6-4e4a-44a6-a1df-208ee4a72dcb | title: 'Using JOINs in ClickHouse'
description: 'How to join tables in ClickHouse'
keywords: ['joins', 'join tables']
slug: /guides/joining-tables
doc_type: 'guide'
import Image from '@theme/IdealImage';
import joins_1 from '@site/static/images/guides/joins-1.png';
import joins_2 from '@site/static/images/guides/joins-2.png';
import joins_3 from '@site/static/images/guides/joins-3.png';
import joins_4 from '@site/static/images/guides/joins-4.png';
import joins_5 from '@site/static/images/guides/joins-5.png';
ClickHouse has
full
JOIN
support
, with a wide selection of join algorithms. To maximize performance, we recommend following the join optimization suggestions listed in this guide.
For optimal performance, users should aim to reduce the number of
JOIN
s in queries, especially for real-time analytical workloads where millisecond performance is required. Aim for a maximum of 3 to 4 joins in a query. We detail a number of changes to minimize joins in the
data modeling section
, including denormalization, dictionaries, and materialized views.
Currently, ClickHouse does not reorder joins. Always ensure the smallest table is on the right-hand side of the Join. This will be held in memory for most join algorithms and will ensure the lowest memory overhead for the query.
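A toy Python sketch shows why the right-hand table is the memory-sensitive one (illustrative only — ClickHouse's hash join is far more sophisticated): the build side is materialized into a hash table, while the probe side is streamed:

```python
# Minimal hash-join sketch: the RIGHT table is built into an in-memory
# hash table, so keeping the smaller table on the right minimizes memory.
small_right = [(1, "alice"), (2, "bob")]                          # build side
large_left  = [(1, "click"), (2, "view"), (1, "buy"), (3, "scroll")]

build = {}
for key, val in small_right:
    build.setdefault(key, []).append(val)   # memory grows with right table only

joined = [(lv, rv)
          for key, lv in large_left        # probe side is streamed row by row
          for rv in build.get(key, [])]
print(joined)  # → [('click', 'alice'), ('view', 'bob'), ('buy', 'alice')]
```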
If your query requires a direct join, i.e. a
LEFT ANY JOIN
, we recommend using
Dictionaries
where possible.
If performing inner joins, it is often more optimal to write these as sub-queries using the
IN
clause. Consider the following queries, which are functionally equivalent. Both find the number of
posts
that don't mention ClickHouse in the question but do in the
comments
.
```sql
SELECT count()
FROM stackoverflow.posts AS p
ANY INNER JOIN stackoverflow.comments AS c ON p.Id = c.PostId
WHERE (p.Title != '') AND (p.Title NOT ILIKE '%clickhouse%') AND (p.Body NOT ILIKE '%clickhouse%') AND (c.Text ILIKE '%clickhouse%')
┌─count()─┐
│ 86 │
└─────────┘
1 row in set. Elapsed: 8.209 sec. Processed 150.20 million rows, 56.05 GB (18.30 million rows/s., 6.83 GB/s.)
Peak memory usage: 1.23 GiB.
```
Note we use an
ANY INNER JOIN
vs. just an
INNER
join as we don't want the cartesian product i.e. we want only one match for each post.
This join can be rewritten using a subquery, improving performance significantly:
```sql
SELECT count()
FROM stackoverflow.posts
WHERE (Title != '') AND (Title NOT ILIKE '%clickhouse%') AND (Body NOT ILIKE '%clickhouse%') AND (Id IN (
SELECT PostId
FROM stackoverflow.comments
WHERE Text ILIKE '%clickhouse%'
))
┌─count()─┐
│ 86 │
└─────────┘
1 row in set. Elapsed: 2.284 sec. Processed 150.20 million rows, 16.61 GB (65.76 million rows/s., 7.27 GB/s.)
Peak memory usage: 323.52 MiB.
``` | {"source_file": "joining-tables.md"} | [
-0.00616102758795023,
-0.004553537350147963,
-0.02422116883099079,
0.052163586020469666,
0.01628209836781025,
-0.08968430757522583,
0.037009771913290024,
0.04172651469707489,
-0.06793263554573059,
-0.027325913310050964,
-0.01506066508591175,
0.0382586345076561,
0.14301925897598267,
0.02022... |
371ee028-921f-4eba-90c7-92b7ec9cdb67 | 1 row in set. Elapsed: 2.284 sec. Processed 150.20 million rows, 16.61 GB (65.76 million rows/s., 7.27 GB/s.)
Peak memory usage: 323.52 MiB.
```
Although ClickHouse makes attempts to push down conditions to all join clauses and subqueries, we recommend users always manually apply conditions to all sub-clauses where possible - thus minimizing the size of the data to
JOIN
. Consider the following example below, where we want to compute the number of up-votes for Java-related posts since 2020.
A naive query, with the larger table on the left side, completes in 56s:
```sql
SELECT countIf(VoteTypeId = 2) AS upvotes
FROM stackoverflow.posts AS p
INNER JOIN stackoverflow.votes AS v ON p.Id = v.PostId
WHERE has(arrayFilter(t -> (t != ''), splitByChar('|', p.Tags)), 'java') AND (p.CreationDate >= '2020-01-01')
┌─upvotes─┐
│ 261915 │
└─────────┘
1 row in set. Elapsed: 56.642 sec. Processed 252.30 million rows, 1.62 GB (4.45 million rows/s., 28.60 MB/s.)
```
Re-ordering this join improves performance dramatically to 1.5s:
```sql
SELECT countIf(VoteTypeId = 2) AS upvotes
FROM stackoverflow.votes AS v
INNER JOIN stackoverflow.posts AS p ON v.PostId = p.Id
WHERE has(arrayFilter(t -> (t != ''), splitByChar('|', p.Tags)), 'java') AND (p.CreationDate >= '2020-01-01')
┌─upvotes─┐
│ 261915 │
└─────────┘
1 row in set. Elapsed: 1.519 sec. Processed 252.30 million rows, 1.62 GB (166.06 million rows/s., 1.07 GB/s.)
```
Adding a filter to the left side table improves performance even further to 0.5s.
```sql
SELECT countIf(VoteTypeId = 2) AS upvotes
FROM stackoverflow.votes AS v
INNER JOIN stackoverflow.posts AS p ON v.PostId = p.Id
WHERE has(arrayFilter(t -> (t != ''), splitByChar('|', p.Tags)), 'java') AND (p.CreationDate >= '2020-01-01') AND (v.CreationDate >= '2020-01-01')
┌─upvotes─┐
│ 261915 │
└─────────┘
1 row in set. Elapsed: 0.597 sec. Processed 81.14 million rows, 1.31 GB (135.82 million rows/s., 2.19 GB/s.)
Peak memory usage: 249.42 MiB.
```
This query can be improved even more by moving the
INNER JOIN
to a subquery, as noted earlier, maintaining the filter on both the outer and inner queries.
```sql
SELECT count() AS upvotes
FROM stackoverflow.votes
WHERE (VoteTypeId = 2) AND (PostId IN (
SELECT Id
FROM stackoverflow.posts
WHERE (CreationDate >= '2020-01-01') AND has(arrayFilter(t -> (t != ''), splitByChar('|', Tags)), 'java')
))
┌─upvotes─┐
│ 261915 │
└─────────┘
1 row in set. Elapsed: 0.383 sec. Processed 99.64 million rows, 804.55 MB (259.85 million rows/s., 2.10 GB/s.)
Peak memory usage: 250.66 MiB.
```
Choosing a JOIN algorithm {#choosing-a-join-algorithm}
ClickHouse supports a number of
join algorithms
. These algorithms typically trade memory usage for performance. The following provides an overview of the ClickHouse join algorithms based on their relative memory consumption and execution time: | {"source_file": "joining-tables.md"} | [
-0.003033627988770604,
0.061743129044771194,
-0.022231800481677055,
0.029678575694561005,
-0.01190185546875,
-0.053587257862091064,
0.005764393601566553,
0.029537415131926537,
-0.06852765381336212,
0.030113598331809044,
-0.045779094099998474,
-0.011620940640568733,
0.0721140205860138,
-0.0... |
d2ca54c0-311d-4eaf-a1fe-e5a2be2ffcab | These algorithms dictate the manner in which a join query is planned and executed. By default, ClickHouse uses the direct or the hash join algorithm based on the used join type and strictness and engine of the joined tables. Alternatively, ClickHouse can be configured to adaptively choose and dynamically change the join algorithm to use at runtime, depending on resource availability and usage: When
join_algorithm=auto
, ClickHouse tries the hash join algorithm first, and if that algorithm's memory limit is violated, the algorithm is switched on the fly to partial merge join. You can observe which algorithm was chosen via trace logging. ClickHouse also allows users to specify the desired join algorithm themselves via the
join_algorithm
setting.
The supported
JOIN
types for each join algorithm are shown below and should be considered prior to optimization:
A full detailed description of each
JOIN
algorithm can be found
here
, including their pros, cons, and scaling properties.
Selecting the appropriate join algorithms depends on whether you are looking to optimize for memory or performance.
Optimizing JOIN performance {#optimizing-join-performance}
If your key optimization metric is performance and you are looking to execute the join as fast as possible, you can use the following decision tree for choosing the right join algorithm:
(1)
If the data from the right-hand side table can be pre-loaded into an in-memory low-latency key-value data structure, e.g. a dictionary, and if the join key matches the key attribute of the underlying key-value storage, and if
LEFT ANY JOIN
semantics are adequate - then the
direct join
is applicable and offers the fastest approach.
(2)
If your table's
physical row order
matches the join key sort order, then it depends. In this case, the
full sorting merge join
skips
the sorting phase resulting in significantly reduced memory usage plus, depending on data size and join key value distribution, faster execution times than some of the hash join algorithms.
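The idea can be sketched in Python (an illustrative simplification that assumes one-to-one join keys): with both inputs already sorted, two cursors suffice, and no hash table or sort phase is needed:

```python
def merge_join(left, right):
    """Join two (key, value) lists already sorted by key (one-to-one keys).

    Because both sides arrive sorted, only two advancing cursors are
    needed -- no hash table and no sort phase.
    """
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk == rk:
            out.append((lk, left[i][1], right[j][1]))
            i += 1
            j += 1
        elif lk < rk:
            i += 1   # left key too small: advance left cursor
        else:
            j += 1   # right key too small: advance right cursor
    return out

left  = [(1, "a"), (3, "b"), (5, "c")]
right = [(3, "x"), (4, "y"), (5, "z")]
print(merge_join(left, right))  # → [(3, 'b', 'x'), (5, 'c', 'z')]
```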
(3)
If the right table fits into memory, even with the
additional memory usage overhead
of the
parallel hash join
, then this algorithm or the hash join can be faster. This depends on data size, data types, and value distribution of the join key columns.
(4)
If the right table doesn't fit into memory, then it depends again. ClickHouse offers three non-memory bound join algorithms. All three temporarily spill data to disk.
Full sorting merge join
and
partial merge join
require prior sorting of the data.
Grace hash join
is building hash tables from the data instead. Based on the volume of data, the data types and the value distribution of the join key columns, there can be scenarios where building hash tables from the data is faster than sorting the data. And vice versa. | {"source_file": "joining-tables.md"} | [
-0.016733719035983086,
-0.023105934262275696,
-0.055674925446510315,
0.009888676926493645,
-0.020077858120203018,
-0.08394128084182739,
0.024226676672697067,
-0.010234220884740353,
-0.040266722440719604,
-0.055349163711071014,
-0.034452710300683975,
0.05346086993813515,
0.027078742161393166,... |
ea011467-2375-430c-8022-0eb26d1d50c3 | Partial merge join is optimized for minimizing memory usage when large tables are joined, at the expense of join speed which is quite slow. This is especially the case when the physical row order of the left table doesn't match the join key sorting order.
Grace hash join is the most flexible of the three non-memory-bound join algorithms and offers good control of memory usage vs. join speed with its
grace_hash_join_initial_buckets
setting. Depending on the data volume the grace hash can be faster or slower than the partial merge algorithm, when the amount of
buckets
is chosen such that the memory usage of both algorithms is approximately aligned. When the memory usage of grace hash join is configured to be approximately aligned with the memory usage of full sorting merge, then full sorting merge was always faster in our test runs.
Which one of the three non-memory-bound algorithms is the fastest depends on the volume of data, the data types, and the value distribution of the join key columns. It is always best to run some benchmarks with realistic data volumes of realistic data in order to determine which algorithm is the fastest.
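The bucketing idea behind grace hash join can be sketched as follows (illustrative Python, not the actual implementation): rows are hash-partitioned into buckets, and each pair of matching buckets is then small enough to be joined in memory on its own. More buckets mean a smaller largest bucket, and therefore less memory per pass:

```python
def partition(rows, n_buckets):
    """Split (key, value) rows into hash buckets, as grace hash join does
    before joining each bucket pair independently."""
    buckets = [[] for _ in range(n_buckets)]
    for key, val in rows:
        buckets[hash(key) % n_buckets].append((key, val))
    return buckets

rows = [(k, f"v{k}") for k in range(1000)]
for n in (4, 16, 64):
    largest = max(len(b) for b in partition(rows, n))
    # More buckets -> smaller largest bucket -> lower peak memory per pass.
    print(n, largest)
```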
Optimizing for memory {#optimizing-for-memory}
If you want to optimize a join for the lowest memory usage instead of the fastest execution time, then you can use this decision tree instead:
(1)
If your table's physical row order matches the join key sort order, then the memory usage of the
full sorting merge join
is as low as it gets, with the additional benefit of good join speed because the sorting phase is
skipped
.
(2)
The
grace hash join
can be tuned for very low memory usage by
configuring
a high number of
buckets
at the expense of join speed. The
partial merge join
intentionally uses a low amount of main memory. The
full sorting merge join
with external sorting enabled generally uses more memory than the partial merge join (assuming the row order does not match the key sort order), with the benefit of significantly better join execution time.
For users needing more details on the above, we recommend the following
blog series
. | {"source_file": "joining-tables.md"} | [
0.033611081540584564,
0.025620538741350174,
-0.011903990060091019,
-0.030968165025115013,
-0.0005400742520578206,
-0.052961088716983795,
-0.026008471846580505,
0.03463447839021683,
0.09020805358886719,
-0.010479813441634178,
-0.0032233339734375477,
0.10485399514436722,
0.003963089548051357,
... |
a5d0d239-d076-46a3-a076-baaeb3e65e42 | sidebar_position: 3
sidebar_label: 'Selecting data'
title: 'Selecting ClickHouse Data'
slug: /guides/writing-queries
description: 'Learn about Selecting ClickHouse Data'
keywords: ['SELECT', 'data formats']
show_related_blogs: true
doc_type: 'guide'
ClickHouse is a SQL database, and you query your data by writing the same type of
SELECT
queries you are already familiar with. For example:
sql
SELECT *
FROM helloworld.my_first_table
ORDER BY timestamp
:::note
View the
SQL Reference
for more details on the syntax and available clauses and options.
:::
Notice the response comes back in a nice table format:
```response
┌─user_id─┬─message────────────────────────────────────────────┬───────────timestamp─┬──metric─┐
│ 102 │ Insert a lot of rows per batch │ 2022-03-21 00:00:00 │ 1.41421 │
│ 102 │ Sort your data based on your commonly-used queries │ 2022-03-22 00:00:00 │ 2.718 │
│ 101 │ Hello, ClickHouse! │ 2022-03-22 14:04:09 │ -1 │
│ 101 │ Granules are the smallest chunks of data read │ 2022-03-22 14:04:14 │ 3.14159 │
└─────────┴────────────────────────────────────────────────────┴─────────────────────┴─────────┘
4 rows in set. Elapsed: 0.008 sec.
```
Add a
FORMAT
clause to specify one of the
many supported output formats of ClickHouse
:
sql
SELECT *
FROM helloworld.my_first_table
ORDER BY timestamp
FORMAT TabSeparated
In the above query, the output is returned as tab-separated:
```response
Query id: 3604df1c-acfd-4117-9c56-f86c69721121
102 Insert a lot of rows per batch 2022-03-21 00:00:00 1.41421
102 Sort your data based on your commonly-used queries 2022-03-22 00:00:00 2.718
101 Hello, ClickHouse! 2022-03-22 14:04:09 -1
101 Granules are the smallest chunks of data read 2022-03-22 14:04:14 3.14159
4 rows in set. Elapsed: 0.005 sec.
```
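Tab-separated output is convenient for downstream scripts. Below is an illustrative Python sketch that parses rows like the ones above (the sample string is hard-coded here rather than fetched from a running server):

```python
# Parse rows as returned by `FORMAT TabSeparated`: one row per line,
# columns separated by tab characters.
raw = (
    "102\tInsert a lot of rows per batch\t2022-03-21 00:00:00\t1.41421\n"
    "101\tHello, ClickHouse!\t2022-03-22 14:04:09\t-1\n"
)

rows = [line.split("\t") for line in raw.splitlines()]
user_ids = [int(r[0]) for r in rows]
print(user_ids)  # → [102, 101]
```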
:::note
ClickHouse supports over 70 input and output formats, so between the thousands of functions and all the data formats, you can use ClickHouse to perform some impressive and fast ETL-like data transformations. In fact, you don't even
need a ClickHouse server up and running to transform data - you can use the
clickhouse-local
tool. View the
docs page of
clickhouse-local
for details.
::: | {"source_file": "writing-queries.md"} | [
0.010136422701179981,
-0.020391855388879776,
0.029711667448282242,
0.09546201676130295,
-0.041914600878953934,
-0.042722783982753754,
0.03769690915942192,
0.008671245537698269,
-0.045614130795001984,
0.05741841718554497,
0.0627584233880043,
0.030238451436161995,
0.05354367569088936,
-0.096... |
889cedb6-56bb-4e72-a7e3-9550ed5d35b5 | sidebar_position: 1
sidebar_label: 'Separation of storage and compute'
slug: /guides/separation-storage-compute
title: 'Separation of Storage and Compute'
description: 'This guide explores how you can use ClickHouse and S3 to implement an architecture with separated storage and compute.'
doc_type: 'guide'
keywords: ['storage', 'compute', 'architecture', 'scalability', 'cloud']
import Image from '@theme/IdealImage';
import BucketDetails from '@site/docs/_snippets/_S3_authentication_and_bucket.md';
import s3_bucket_example from '@site/static/images/guides/s3_bucket_example.png';
Separation of storage and compute
Overview {#overview}
This guide explores how you can use ClickHouse and S3 to implement an architecture with separated storage and compute.
Separation of storage and compute means that computing resources and storage resources are managed independently. In ClickHouse, this allows for better scalability, cost-efficiency, and flexibility. You can scale storage and compute resources separately as needed, optimizing performance and costs.
Using ClickHouse backed by S3 is especially useful for use cases where query performance on "cold" data is less critical. ClickHouse provides support for using S3 as the storage for the
MergeTree
engine using
S3BackedMergeTree
. This table engine enables users to exploit the scalability and cost benefits of S3 while maintaining the insert and query performance of the
MergeTree
engine.
Please note that implementing and managing a separation of storage and compute architecture is more complicated compared to standard ClickHouse deployments. While self-managed ClickHouse allows for separation of storage and compute as discussed in this guide, we recommend using
ClickHouse Cloud
, which allows you to use ClickHouse in this architecture without configuration using the
SharedMergeTree
table engine
.
This guide assumes you are using ClickHouse version 22.8 or higher.
:::warning
Do not configure any AWS/GCS life cycle policy. This is not supported and could lead to broken tables.
:::
1. Use S3 as a ClickHouse disk {#1-use-s3-as-a-clickhouse-disk}
Creating a disk {#creating-a-disk}
Create a new file in the ClickHouse
config.d
directory to store the storage configuration:
bash
vim /etc/clickhouse-server/config.d/storage_config.xml
Copy the following XML into the newly created file, replacing
BUCKET
,
ACCESS_KEY_ID
,
SECRET_ACCESS_KEY
with the AWS bucket details where you'd like to store your data: | {"source_file": "separation-storage-compute.md"} | [
-0.049249134957790375,
0.03110264241695404,
-0.08654963225126266,
-0.017386574298143387,
0.03307994455099106,
-0.07299275696277618,
0.006058524828404188,
0.0010375645942986012,
0.014599205926060677,
0.03097880631685257,
0.004813365638256073,
0.01541789062321186,
0.11352429538965225,
-0.043... |
04716d60-d86f-4b98-bd23-47e05af538f2 | Copy the following XML in to the newly created file, replacing
BUCKET
,
ACCESS_KEY_ID
,
SECRET_ACCESS_KEY
with the AWS bucket details where you'd like to store your data:
xml
<clickhouse>
<storage_configuration>
<disks>
<s3_disk>
<type>s3</type>
<endpoint>$BUCKET</endpoint>
<access_key_id>$ACCESS_KEY_ID</access_key_id>
<secret_access_key>$SECRET_ACCESS_KEY</secret_access_key>
<metadata_path>/var/lib/clickhouse/disks/s3_disk/</metadata_path>
</s3_disk>
<s3_cache>
<type>cache</type>
<disk>s3_disk</disk>
<path>/var/lib/clickhouse/disks/s3_cache/</path>
<max_size>10Gi</max_size>
</s3_cache>
</disks>
<policies>
<s3_main>
<volumes>
<main>
<disk>s3_disk</disk>
</main>
</volumes>
</s3_main>
</policies>
</storage_configuration>
</clickhouse>
If you need to further specify settings for the S3 disk, for example to specify a
region
or send a custom HTTP
header
, you can find the list of relevant settings
here
.
You can also replace
access_key_id
and
secret_access_key
with the following, which will attempt to obtain credentials from environment variables and Amazon EC2 metadata:
xml
<use_environment_credentials>true</use_environment_credentials>
After you've created your configuration file, you need to update the owner of the file to the clickhouse user and group:
bash
chown clickhouse:clickhouse /etc/clickhouse-server/config.d/storage_config.xml
You can now restart the ClickHouse server to have the changes take effect:
bash
service clickhouse-server restart
2. Create a table backed by S3 {#2-create-a-table-backed-by-s3}
To test that we've configured the S3 disk properly, we can attempt to create and query a table.
Create a table specifying the new S3 storage policy:
sql
CREATE TABLE my_s3_table
(
`id` UInt64,
`column1` String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 's3_main';
Note that we did not have to specify the engine as
S3BackedMergeTree
. ClickHouse automatically converts the engine type internally if it detects the table is using S3 for storage.
Show that the table was created with the correct policy:
sql
SHOW CREATE TABLE my_s3_table;
You should see the following result:
response
┌─statement────────────────────────────────────────────────────
│ CREATE TABLE default.my_s3_table
(
`id` UInt64,
`column1` String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 's3_main', index_granularity = 8192
└──────────────────────────────────────────────────────────────
Let's now insert some rows into our new table:
sql
INSERT INTO my_s3_table (id, column1)
VALUES (1, 'abc'), (2, 'xyz');
Let's verify that our rows were inserted:
sql
SELECT * FROM my_s3_table;
```response
┌─id─┬─column1─┐
│ 1 │ abc │
│ 2 │ xyz │
└────┴─────────┘
2 rows in set. Elapsed: 0.284 sec.
``` | {"source_file": "separation-storage-compute.md"} | [
-0.0062012136913836,
-0.02205152064561844,
-0.09190143644809723,
-0.05688656121492386,
0.06239868700504303,
0.006862107198685408,
-0.031046070158481598,
-0.019481154158711433,
0.050813257694244385,
0.08251632750034332,
0.03866183012723923,
-0.010893176309764385,
0.06733398884534836,
-0.105... |
ac5f21e3-ec33-4702-84b1-1289976908ac | Let's verify that our rows were inserted:
sql
SELECT * FROM my_s3_table;
```response
┌─id─┬─column1─┐
│ 1 │ abc │
│ 2 │ xyz │
└────┴─────────┘
2 rows in set. Elapsed: 0.284 sec.
```
In the AWS console, if your data was successfully inserted to S3, you should see that ClickHouse has created new files in your specified bucket.
If everything worked successfully, you are now using ClickHouse with separated storage and compute!
3. Implementing replication for fault tolerance (optional) {#3-implementing-replication-for-fault-tolerance-optional}
:::warning
Do not configure any AWS/GCS life cycle policy. This is not supported and could lead to broken tables.
:::
For fault tolerance, you can use multiple ClickHouse server nodes distributed across multiple AWS regions, with an S3 bucket for each node.
Replication with S3 disks can be accomplished by using the
ReplicatedMergeTree
table engine. See the following guide for details:
-
Replicating a single shard across two AWS regions using S3 Object Storage
.
Further reading {#further-reading}
SharedMergeTree table engine
SharedMergeTree announcement blog | {"source_file": "separation-storage-compute.md"} | [
-0.03320954367518425,
-0.11576094478368759,
-0.02558564767241478,
-0.015351717360317707,
0.02292422391474247,
-0.07136354595422745,
-0.03653464838862419,
-0.08365098387002945,
0.019651008769869804,
0.08177295327186584,
0.032819714397192,
-0.026145940646529198,
0.1413029283285141,
-0.090482... |
f59f217b-930f-4a1a-bc5e-4af733210233 | sidebar_label: 'Generating random test data'
title: 'Generating random test data in ClickHouse'
slug: /guides/generating-test-data
description: 'Learn about Generating Random Test Data in ClickHouse'
show_related_blogs: true
doc_type: 'guide'
keywords: ['random data', 'test data']
Generating random test data in ClickHouse
Generating random data is useful when testing new use cases or benchmarking your implementation.
ClickHouse has a
wide range of functions for generating random data
that, in many cases, avoid the need for an external data generator.
This guide provides several examples of how to generate random datasets in ClickHouse with different randomness requirements.
Simple uniform dataset {#simple-uniform-dataset}
Use-case
: Generate a quick dataset of user events with random timestamps and event types.
```sql
CREATE TABLE user_events (
event_id UUID,
user_id UInt32,
event_type LowCardinality(String),
event_time DateTime
) ENGINE = MergeTree
ORDER BY event_time;
INSERT INTO user_events
SELECT
generateUUIDv4() AS event_id,
rand() % 10000 AS user_id,
['click', 'view', 'purchase'][(rand() % 3) + 1] AS event_type,
now() - INTERVAL rand() % (3600 * 24) SECOND AS event_time
FROM numbers(1000000);
```
rand() % 10000
: uniform distribution of 10k users
['click', 'view', 'purchase'][(rand() % 3) + 1]
: randomly selects one of three event types
Timestamps spread over the previous 24 hours
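As a quick sanity check (a sketch, assuming the user_events table above), verify the event-type split and user spread:

```sql
-- Each event type should hold roughly a third of the rows,
-- and user_id values should span the 0-9999 range.
SELECT
    event_type,
    count() AS events,
    uniqExact(user_id) AS distinct_users
FROM user_events
GROUP BY event_type
ORDER BY event_type;
```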
Exponential distribution {#exponential-distribution}
Use-case
: Simulate purchase amounts where most values are low, but a few are high.
```sql
CREATE TABLE purchases (
dt DateTime,
customer_id UInt32,
total_spent Float32
) ENGINE = MergeTree
ORDER BY dt;
INSERT INTO purchases
SELECT
now() - INTERVAL randUniform(1,1_000_000) SECOND AS dt,
number AS customer_id,
15 + round(randExponential(1/10), 2) AS total_spent
FROM numbers(500000);
```
Uniform timestamps over recent period
randExponential(1/10)
— most totals near 0, offset by 15 as a minimum
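To confirm the shape of the distribution (a sketch against the purchases table above; randExponential(1/10) has a mean of about 10, so the average should land near 25):

```sql
-- The minimum should sit at the 15 floor; the long tail pushes
-- the p99 well above the average.
SELECT
    min(total_spent) AS min_spent,
    round(avg(total_spent), 2) AS avg_spent,
    round(quantile(0.99)(total_spent), 2) AS p99_spent
FROM purchases;
```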
Time-distributed events (Poisson) {#poisson-distribution}
Use-case
: Simulate event arrivals that cluster around a specific period (e.g., peak hour).
```sql
CREATE TABLE events (
dt DateTime,
event_type String
) ENGINE = MergeTree
ORDER BY dt;
INSERT INTO events
SELECT
toDateTime('2022-12-12 12:00:00')
- ((12 + randPoisson(12)) * 3600) AS dt,
'click' AS event_type
FROM numbers(200000);
```
Events peak around noon, with Poisson-distributed deviation
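To see the clustering, bucket the generated events by hour (a sketch against the events table above; the bar scale is an assumption):

```sql
-- Counts should peak near the centre of the Poisson distribution
-- and thin out towards the tails.
SELECT
    toStartOfHour(dt) AS hour,
    count() AS events,
    bar(count(), 0, 30000) AS chart
FROM events
GROUP BY hour
ORDER BY hour;
```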
Time-varying normal distribution {#time-varying-normal-distribution}
Use-case
: Emulate system metrics (e.g., CPU usage) that vary over time.
```sql
CREATE TABLE cpu_metrics (
host String,
ts DateTime,
usage Float32
) ENGINE = MergeTree
ORDER BY (host, ts); | {"source_file": "generating-test-data.md"} | [
-0.0092766797170043,
-0.0172528475522995,
-0.04725692421197891,
0.026180880144238472,
-0.047306906431913376,
-0.08899155259132385,
0.07409311830997467,
-0.0546446368098259,
-0.05535545200109482,
-0.011428949423134327,
0.07333380728960037,
-0.08171132951974869,
0.08344852179288864,
-0.09512... |
195a36bb-1b84-4314-a5d8-043e1740e3c2 | Use-case
: Emulate system metrics (e.g., CPU usage) that vary over time.
```sql
CREATE TABLE cpu_metrics (
host String,
ts DateTime,
usage Float32
) ENGINE = MergeTree
ORDER BY (host, ts);
INSERT INTO cpu_metrics
SELECT
arrayJoin(['host1','host2','host3']) AS host,
now() - INTERVAL number SECOND AS ts,
greatest(0.0, least(100.0,
    randNormal(50 + 30 * sin(toUInt32(ts) % 86400 / 86400 * 2 * pi()), 10)
)) AS usage
FROM numbers(10000);
```
usage
follows a diurnal sine wave + randomness
Values bounded to [0,100]
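A quick check that the clamping worked (a sketch against the cpu_metrics table above):

```sql
-- Every reading should stay inside [0, 100], with a per-host mean near 50.
SELECT
    host,
    min(usage) AS min_usage,
    max(usage) AS max_usage,
    round(avg(usage), 1) AS avg_usage
FROM cpu_metrics
GROUP BY host
ORDER BY host;
```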
Categorical and nested data {#categorical-and-nested-data}
Use-case
: Create user profiles with multi-valued interests.
```sql
CREATE TABLE user_profiles (
user_id UInt32,
interests Array(String),
scores Array(UInt8)
) ENGINE = MergeTree
ORDER BY user_id;
INSERT INTO user_profiles
SELECT
number AS user_id,
arraySlice(arrayShuffle(['sports','music','tech']), 1, 1 + rand() % 3) AS interests,
[rand() % 100, rand() % 100, rand() % 100] AS scores
FROM numbers(20000);
```
Random array length between 1–3
Three per-user scores for each interest
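To verify the array lengths (a sketch against the user_profiles table above):

```sql
-- interests should contain between one and three entries per user.
SELECT
    length(interests) AS num_interests,
    count() AS users
FROM user_profiles
GROUP BY num_interests
ORDER BY num_interests;
```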
:::tip
Read the
Generating Random Data in ClickHouse
blog for even more examples.
:::
Generating random tables {#generating-random-tables}
The
generateRandomStructure
function is particularly useful when combined with the
generateRandom
table engine for testing, benchmarking, or creating mock data with arbitrary schemas.
Let's start by just seeing what a random structure looks like using the
generateRandomStructure
function:
sql
SELECT generateRandomStructure(5);
You might see something like:
response
c1 UInt32, c2 Array(String), c3 DateTime, c4 Nullable(Float64), c5 Map(String, Int16)
You can also use a seed to get the same structure every time:
sql
SELECT generateRandomStructure(3, 42);
response
c1 String, c2 Array(Nullable(Int32)), c3 Tuple(UInt8, Date)
Now let's create an actual table and fill it with random data:
```sql
CREATE TABLE my_test_table
ENGINE = MergeTree
ORDER BY tuple()
AS SELECT *
FROM generateRandom(
'col1 UInt32, col2 String, col3 Float64, col4 DateTime',
1, -- seed for data generation
10 -- number of different random values
)
LIMIT 100; -- 100 rows
-- Step 2: Query your new table
SELECT * FROM my_test_table LIMIT 5;
```
response
┌───────col1─┬─col2──────┬─────────────────────col3─┬────────────────col4─┐
│ 4107652264 │ &b!M-e;7 │ 1.0013455832230728e-158 │ 2059-08-14 19:03:26 │
│ 652895061 │ Dj7peUH{T │ -1.032074207667996e112 │ 2079-10-06 04:18:16 │
│ 2319105779 │ =D[ │ -2.066555415720528e88 │ 2015-04-26 11:44:13 │
│ 1835960063 │ _@}a │ -1.4998020545039013e110 │ 2063-03-03 20:36:55 │
│ 730412674 │ _}! │ -1.3578492992094465e-275 │ 2098-08-23 18:23:37 │
└────────────┴───────────┴──────────────────────────┴─────────────────────┘
Let's combine both functions for a completely random table.
First, see what structure we'll get: | {"source_file": "generating-test-data.md"} | [
0.07949326187372208,
0.002123456448316574,
-0.004557368811219931,
0.04323028773069382,
-0.09613548964262009,
-0.09800052642822266,
0.055944111198186874,
-0.006303601432591677,
-0.03592412918806076,
-0.0009464125032536685,
-0.035672158002853394,
-0.09124340862035751,
0.0026936642825603485,
... |
88194aa6-c756-4cdc-bf50-e6ed0ad18821 | Let's combine both functions for a completely random table.
First, see what structure we'll get:
sql
SELECT generateRandomStructure(7, 123) AS structure FORMAT Vertical;
response
┌─structure──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ c1 Decimal64(7), c2 Enum16('c2V0' = -21744, 'c2V1' = 5380), c3 Int8, c4 UUID, c5 UUID, c6 FixedString(190), c7 Map(Enum16('c7V0' = -19581, 'c7V1' = -10024, 'c7V2' = 27615, 'c7V3' = -10177, 'c7V4' = -19644, 'c7V5' = 3554, 'c7V6' = 29073, 'c7V7' = 28800, 'c7V8' = -11512), Float64) │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
Now create the table with that structure and use the
DESCRIBE
statement to see what we created:
```sql
CREATE TABLE fully_random_table
ENGINE = MergeTree
ORDER BY tuple()
AS SELECT *
FROM generateRandom(generateRandomStructure(7, 123), 1, 10)
LIMIT 1000;
DESCRIBE TABLE fully_random_table;
``` | {"source_file": "generating-test-data.md"} | [
0.006521495524793863,
0.013527177274227142,
0.0009719113004393876,
-0.026363078504800797,
-0.05687960237264633,
-0.0633561834692955,
0.008452965877950191,
0.001491329981945455,
-0.032947346568107605,
0.040472451597452164,
0.024999607354402542,
-0.1236330047249794,
0.019735153764486313,
-0.... |
c4c816ba-8221-4c96-954a-8cb35e28bed9 | DESCRIBE TABLE fully_random_table;
```
response
┌─name─┬─type─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
1. │ c1 │ Decimal(18, 7) │ │ │ │ │ │
2. │ c2 │ Enum16('c2V0' = -21744, 'c2V1' = 5380) │ │ │ │ │ │
3. │ c3 │ Int8 │ │ │ │ │ │
4. │ c4 │ UUID │ │ │ │ │ │
5. │ c5 │ UUID │ │ │ │ │ │
6. │ c6 │ FixedString(190) │ │ │ │ │ │
7. │ c7 │ Map(Enum16('c7V4' = -19644, 'c7V0' = -19581, 'c7V8' = -11512, 'c7V3' = -10177, 'c7V1' = -10024, 'c7V5' = 3554, 'c7V2' = 27615, 'c7V7' = 28800, 'c7V6' = 29073), Float64) │ │ │ │ │ │
└──────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
Inspect the first row for a sample of the generated data:
sql
SELECT * FROM fully_random_table LIMIT 1 FORMAT Vertical;
response
Row 1:
──────
c1: 80416293882.257732 -- 80.42 billion
c2: c2V1
c3: -84
c4: 1a9429b3-fd8b-1d72-502f-c051aeb7018e
c5: 7407421a-031f-eb3b-8571-44ff279ddd36
c6: g̅b�&��rҵ���5C�\�|��H�>���l'V3��R�[��=3�G�LwVMR*s緾/2�J.���6#��(�h>�lە��L^�M�:�R�9%d�ž�zv��W����Y�S��_no��BP+��u��.0��UZ!x�@7:�nj%3�Λd�S�k>���w��|�&��~
c7: {'c7V8':-1.160941256852442} | {"source_file": "generating-test-data.md"} | [
0.046428874135017395,
0.03333672881126404,
-0.012402013875544071,
-0.04728759452700615,
-0.0006659587379544973,
-0.08421442657709122,
-0.026149192824959755,
-0.006444691680371761,
-0.07199317216873169,
0.08977236598730087,
0.0464160181581974,
-0.09973708540201187,
0.040487468242645264,
-0.... |
20726c12-3305-4b65-9b25-71d57e4103bb | description: 'Table of Contents page for Engines'
slug: /engines
title: 'Engines'
doc_type: 'landing-page'
| Page | Description |
|----------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Database Engines | Database engines in ClickHouse allow you to work with tables and determine how data is stored and managed. Learn more about the various database engines available in ClickHouse. |
| Table Engines | Table engines in ClickHouse are a fundamental concept that determines how data is stored, written, and read. Learn more about the various table engines available in ClickHouse. |
-0.011799899861216545,
0.0005139027489349246,
0.021425973623991013,
-0.0218034740537405,
0.01924905553460121,
0.029117170721292496,
-0.03770185261964798,
0.00280819833278656,
0.02108149230480194,
-0.05198357254266739,
-0.06811019778251648,
-0.02548992820084095,
0.011031039990484715,
-0.041... |
67f23c52-a1b9-4e03-a2b5-5216541911f2 | slug: /dictionary
title: 'Dictionary'
keywords: ['dictionary', 'dictionaries']
description: 'A dictionary provides a key-value representation of data for fast lookups.'
doc_type: 'reference'
import dictionaryUseCases from '@site/static/images/dictionary/dictionary-use-cases.png';
import dictionaryLeftAnyJoin from '@site/static/images/dictionary/dictionary-left-any-join.png';
import Image from '@theme/IdealImage';
Dictionary
A dictionary in ClickHouse provides an in-memory
key-value
representation of data from various
internal and external sources
, optimizing for super-low latency lookup queries.
Dictionaries are useful for:
- Improving the performance of queries, especially when used with
JOIN
s
- Enriching ingested data on the fly without slowing down the ingestion process
Speeding up joins using a Dictionary {#speeding-up-joins-using-a-dictionary}
Dictionaries can be used to speed up a specific type of
JOIN
: the
LEFT ANY
type
where the join key needs to match the key attribute of the underlying key-value storage.
If this is the case, ClickHouse can exploit the dictionary to perform a
Direct Join
. This is ClickHouse's fastest join algorithm and is applicable when the underlying
table engine
for the right-hand side table supports low-latency key-value requests. ClickHouse has three table engines providing this:
Join
(that is basically a pre-calculated hash table),
EmbeddedRocksDB
and
Dictionary
. We will describe the dictionary-based approach, but the mechanics are the same for all three engines.
The direct join algorithm requires that the right table is backed by a dictionary, such that the to-be-joined data from that table is already present in memory in the form of a low-latency key-value data structure.
Example {#example}
Using the Stack Overflow dataset, let's answer the question:
What is the most controversial post concerning SQL on Stack Overflow?
We will define controversial as when posts have a similar number of up and down votes. We compute this absolute difference, where a value closer to 0 means more controversy. We'll assume the post must have at least 10 up and down votes - posts which people don't vote on aren't very controversial.
With our data normalized, this query currently requires a
JOIN
using the
posts
and
votes
table:
```sql
WITH PostIds AS
(
SELECT Id
FROM posts
WHERE Title ILIKE '%SQL%'
)
SELECT
Id,
Title,
UpVotes,
DownVotes,
abs(UpVotes - DownVotes) AS Controversial_ratio
FROM posts
INNER JOIN
(
SELECT
PostId,
countIf(VoteTypeId = 2) AS UpVotes,
countIf(VoteTypeId = 3) AS DownVotes
FROM votes
WHERE PostId IN (PostIds)
GROUP BY PostId
HAVING (UpVotes > 10) AND (DownVotes > 10)
) AS votes ON posts.Id = votes.PostId
WHERE Id IN (PostIds)
ORDER BY Controversial_ratio ASC
LIMIT 1 | {"source_file": "index.md"} | [
-0.030996039509773254,
0.02219429425895214,
-0.0646720603108406,
0.017163876444101334,
0.009068802930414677,
-0.09487740695476532,
0.03118884190917015,
0.005184069741517305,
-0.01472439430654049,
-0.014474038034677505,
0.05664849653840065,
0.08624549210071564,
0.10742662847042084,
0.015225... |
1462c973-6f21-4cad-a4cf-0e5e2c10f0fc | Row 1:
──────
Id: 25372161
Title: How to add exception handling to SqlDataSource.UpdateCommand
UpVotes: 13
DownVotes: 13
Controversial_ratio: 0
1 rows in set. Elapsed: 1.283 sec. Processed 418.44 million rows, 7.23 GB (326.07 million rows/s., 5.63 GB/s.)
Peak memory usage: 3.18 GiB.
```
Use smaller datasets on the right side of
JOIN
: This query may seem more verbose than is required, with the filtering on
PostId
s occurring in both the outer and sub queries. This is a performance optimization which ensures the query response time is fast. For optimal performance, always ensure the right side of the
JOIN
is the smaller set and as small as possible. For tips on optimizing JOIN performance and understanding the algorithms available, we recommend
this series of blog articles
.
While this query is fast, it relies on us to write the
JOIN
carefully to achieve good performance. Ideally, we would simply filter the posts to those containing "SQL", before looking at the
UpVote
and
DownVote
counts for the subset of posts to compute our metric.
Applying a dictionary {#applying-a-dictionary}
To demonstrate these concepts, we use a dictionary for our vote data. Since dictionaries are usually held in memory (
ssd_cache
is the exception), users should be cognizant of the size of the data. Confirming our
votes
table size:
```sql
SELECT table,
formatReadableSize(sum(data_compressed_bytes)) AS compressed_size,
formatReadableSize(sum(data_uncompressed_bytes)) AS uncompressed_size,
round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS ratio
FROM system.columns
WHERE table IN ('votes')
GROUP BY table
┌─table───────────┬─compressed_size─┬─uncompressed_size─┬─ratio─┐
│ votes │ 1.25 GiB │ 3.79 GiB │ 3.04 │
└─────────────────┴─────────────────┴───────────────────┴───────┘
```
Data will be stored uncompressed in our dictionary, so we need at least 4GB of memory if we were to store all columns (we won't) in a dictionary. The dictionary will be replicated across our cluster, so this amount of memory needs to be reserved
per node
.
In the example below the data for our dictionary originates from a ClickHouse table. While this represents the most common source of dictionaries,
a number of sources
are supported including files, http and databases including
Postgres
. As we'll show, dictionaries can be automatically refreshed providing an ideal way to ensure small datasets subject to frequent changes are available for direct joins. | {"source_file": "index.md"} | [
-0.0722697451710701,
0.01931944116950035,
-0.007680466864258051,
0.08123141527175903,
-0.022365842014551163,
0.014262859709560871,
0.051006827503442764,
0.031269609928131104,
-0.05378381907939911,
-0.033884406089782715,
-0.08276760578155518,
0.0020202205050736666,
0.1032288521528244,
-0.07... |
b33126df-59cd-4dd7-b6ea-826776e66575 | Our dictionary requires a primary key on which lookups will be performed. This is conceptually identical to a transactional database primary key and should be unique. Our above query requires a lookup on the join key -
PostId
. The dictionary should in turn be populated with the total of the up and down votes per
PostId
from our
votes
table. Here's the query to obtain this dictionary data:
sql
SELECT PostId,
countIf(VoteTypeId = 2) AS UpVotes,
countIf(VoteTypeId = 3) AS DownVotes
FROM votes
GROUP BY PostId
To create our dictionary requires the following DDL - note the use of our above query:
```sql
CREATE DICTIONARY votes_dict
(
    `PostId` UInt64,
    `UpVotes` UInt32,
    `DownVotes` UInt32
)
PRIMARY KEY PostId
SOURCE(CLICKHOUSE(QUERY 'SELECT PostId, countIf(VoteTypeId = 2) AS UpVotes, countIf(VoteTypeId = 3) AS DownVotes FROM votes GROUP BY PostId'))
LIFETIME(MIN 600 MAX 900)
LAYOUT(HASHED())

0 rows in set. Elapsed: 36.063 sec.
```
In self-managed OSS, the above command needs to be executed on all nodes. In ClickHouse Cloud, the dictionary will automatically be replicated to all nodes. The above was executed on a ClickHouse Cloud node with 64GB of RAM, taking 36s to load.
To confirm the memory consumed by our dictionary:
```sql
SELECT formatReadableSize(bytes_allocated) AS size
FROM system.dictionaries
WHERE name = 'votes_dict'
┌─size─────┐
│ 4.00 GiB │
└──────────┘
```
Retrieving the up and down votes for a specific
PostId
can be now achieved with a simple
dictGet
function. Below we retrieve the values for the post
11227902
:
```sql
SELECT dictGet('votes_dict', ('UpVotes', 'DownVotes'), 11227902) AS votes
┌─votes──────┐
│ (34999,32) │
└────────────┘
```

Exploiting this in our earlier query, we can remove the JOIN:

```sql
WITH PostIds AS
(
SELECT Id
FROM posts
WHERE Title ILIKE '%SQL%'
)
SELECT Id, Title,
dictGet('votes_dict', 'UpVotes', Id) AS UpVotes,
dictGet('votes_dict', 'DownVotes', Id) AS DownVotes,
abs(UpVotes - DownVotes) AS Controversial_ratio
FROM posts
WHERE (Id IN (PostIds)) AND (UpVotes > 10) AND (DownVotes > 10)
ORDER BY Controversial_ratio ASC
LIMIT 3
3 rows in set. Elapsed: 0.551 sec. Processed 119.64 million rows, 3.29 GB (216.96 million rows/s., 5.97 GB/s.)
Peak memory usage: 552.26 MiB.
```
Not only is this query much simpler, it's also over twice as fast! This could be optimized further by only loading posts with more than 10 up and down votes into the dictionary and only storing a pre-computed controversial value.
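That optimization could look like the following sketch (the dictionary name and the pre-computed column are hypothetical, but the pattern matches the votes_dict definition above):

```sql
CREATE DICTIONARY controversial_votes_dict
(
    `PostId` UInt64,
    `Controversial_ratio` UInt32
)
PRIMARY KEY PostId
SOURCE(CLICKHOUSE(QUERY '
    SELECT PostId,
           abs(countIf(VoteTypeId = 2) - countIf(VoteTypeId = 3)) AS Controversial_ratio
    FROM votes
    GROUP BY PostId
    HAVING countIf(VoteTypeId = 2) > 10 AND countIf(VoteTypeId = 3) > 10'))
LIFETIME(MIN 600 MAX 900)
LAYOUT(HASHED())
```

This shrinks the dictionary to only the rows and columns the query actually needs.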
Query time enrichment {#query-time-enrichment}
Dictionaries can be used to look up values at query time. These values can be returned in results or used in aggregations. Suppose we create a dictionary to map user IDs to their location: | {"source_file": "index.md"} | [
-0.051438651978969574,
-0.014497390948235989,
-0.06939470022916794,
-0.009571284055709839,
-0.10108611732721329,
-0.052940309047698975,
0.03333333134651184,
0.014683290384709835,
0.044274259358644485,
-0.011636348441243172,
0.04349930211901665,
0.000056193250202341005,
0.09924473613500595,
... |
247acbf4-1e6e-4c1e-bdb8-acf46a184de3 | Dictionaries can be used to look up values at query time. These values can be returned in results or used in aggregations. Suppose we create a dictionary to map user IDs to their location:
sql
CREATE DICTIONARY users_dict
(
`Id` Int32,
`Location` String
)
PRIMARY KEY Id
SOURCE(CLICKHOUSE(QUERY 'SELECT Id, Location FROM stackoverflow.users'))
LIFETIME(MIN 600 MAX 900)
LAYOUT(HASHED())
We can use this dictionary to enrich post results:
```sql
SELECT
Id,
Title,
dictGet('users_dict', 'Location', CAST(OwnerUserId, 'UInt64')) AS location
FROM posts
WHERE Title ILIKE '%clickhouse%'
LIMIT 5
FORMAT PrettyCompactMonoBlock
┌───────Id─┬─Title─────────────────────────────────────────────────────────┬─Location──────────────┐
│ 52296928 │ Comparison between two Strings in ClickHouse │ Spain │
│ 52345137 │ How to use a file to migrate data from mysql to a clickhouse? │ 中国江苏省Nanjing Shi │
│ 61452077 │ How to change PARTITION in clickhouse │ Guangzhou, 广东省中国 │
│ 55608325 │ Clickhouse select last record without max() on all table │ Moscow, Russia │
│ 55758594 │ ClickHouse create temporary table │ Perm', Russia │
└──────────┴───────────────────────────────────────────────────────────────┴───────────────────────┘
5 rows in set. Elapsed: 0.033 sec. Processed 4.25 million rows, 82.84 MB (130.62 million rows/s., 2.55 GB/s.)
Peak memory usage: 249.32 MiB.
```
Similar to our above join example, we can use the same dictionary to efficiently determine where most posts originate from:
```sql
SELECT
dictGet('users_dict', 'Location', CAST(OwnerUserId, 'UInt64')) AS location,
count() AS c
FROM posts
WHERE location != ''
GROUP BY location
ORDER BY c DESC
LIMIT 5
┌─location───────────────┬──────c─┐
│ India │ 787814 │
│ Germany │ 685347 │
│ United States │ 595818 │
│ London, United Kingdom │ 538738 │
│ United Kingdom │ 537699 │
└────────────────────────┴────────┘
5 rows in set. Elapsed: 0.763 sec. Processed 59.82 million rows, 239.28 MB (78.40 million rows/s., 313.60 MB/s.)
Peak memory usage: 248.84 MiB.
```
Index time enrichment {#index-time-enrichment}
In the above example, we used a dictionary at query time to remove a join. Dictionaries can also be used to enrich rows at insert time. This is typically appropriate if the enrichment value does not change and exists in an external source which can be used to populate the dictionary. In this case, enriching the row at insert time avoids the query time lookup to the dictionary.
Let's suppose that the
Location
of a user in Stack Overflow never changes (in reality they do) - specifically the
Location
column of the
users
table. Suppose we want to do an analytics query on the posts table by location. This contains a
UserId
. | {"source_file": "index.md"} | [
-0.0011665672063827515,
0.0023383772931993008,
-0.03253154084086418,
0.013942074961960316,
-0.0987764373421669,
-0.06399226933717728,
0.05664009600877762,
0.017309557646512985,
-0.06046764925122261,
0.011313136667013168,
0.024513961747288704,
-0.006956818047910929,
0.06125273555517197,
-0.... |
8f1c86de-b6c0-46bd-abff-6be303728baa | A dictionary provides a mapping from user id to location, backed by the
users
table:
sql
CREATE DICTIONARY users_dict
(
`Id` UInt64,
`Location` String
)
PRIMARY KEY Id
SOURCE(CLICKHOUSE(QUERY 'SELECT Id, Location FROM users WHERE Id >= 0'))
LIFETIME(MIN 600 MAX 900)
LAYOUT(HASHED())
We omit users with an
Id < 0
, allowing us to use the
Hashed
dictionary type. Users with
Id < 0
are system users.
To exploit this dictionary at insert time for the posts table, we need to modify the schema:
sql
CREATE TABLE posts_with_location
(
`Id` UInt32,
`PostTypeId` Enum8('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
...
`Location` MATERIALIZED dictGet(users_dict, 'Location', OwnerUserId::UInt64)
)
ENGINE = MergeTree
ORDER BY (PostTypeId, toDate(CreationDate), CommentCount)
In the above example the
Location
is declared as a
MATERIALIZED
column. This means the value cannot be provided as part of an
INSERT
query and will always be calculated.
ClickHouse also supports
DEFAULT
columns
(where the value can be inserted or calculated if not provided).
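For comparison, a hypothetical DEFAULT variant of the same column (the table name is illustrative), where an explicitly inserted value takes precedence over the dictionary lookup:

```sql
CREATE TABLE posts_with_location_default
(
    `Id` UInt32,
    `OwnerUserId` Int32,
    -- DEFAULT: an inserted Location is kept as-is; the dictGet lookup
    -- only runs when the column is omitted from the INSERT.
    `Location` String DEFAULT dictGet(users_dict, 'Location', OwnerUserId::UInt64)
)
ENGINE = MergeTree
ORDER BY Id
```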
To populate the table we can use the usual
INSERT INTO SELECT
from S3:
```sql
INSERT INTO posts_with_location SELECT Id, PostTypeId::UInt8, AcceptedAnswerId, CreationDate, Score, ViewCount, Body, OwnerUserId, OwnerDisplayName, LastEditorUserId, LastEditorDisplayName, LastEditDate, LastActivityDate, Title, Tags, AnswerCount, CommentCount, FavoriteCount, ContentLicense, ParentId, CommunityOwnedDate, ClosedDate FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/*.parquet')
0 rows in set. Elapsed: 36.830 sec. Processed 238.98 million rows, 2.64 GB (6.49 million rows/s., 71.79 MB/s.)
```
We can now obtain the name of the location from which most posts originate:
```sql
SELECT Location, count() AS c
FROM posts_with_location
WHERE Location != ''
GROUP BY Location
ORDER BY c DESC
LIMIT 4
┌─Location───────────────┬──────c─┐
│ India │ 787814 │
│ Germany │ 685347 │
│ United States │ 595818 │
│ London, United Kingdom │ 538738 │
└────────────────────────┴────────┘
4 rows in set. Elapsed: 0.142 sec. Processed 59.82 million rows, 1.08 GB (420.73 million rows/s., 7.60 GB/s.)
Peak memory usage: 666.82 MiB.
```
Advanced dictionary topics {#advanced-dictionary-topics}
Choosing the Dictionary
LAYOUT
{#choosing-the-dictionary-layout}
The
LAYOUT
clause controls the internal data structure for the dictionary. A number of options exist and are documented
here
. Some tips on choosing the correct layout can be found
here
.
Refreshing dictionaries {#refreshing-dictionaries} | {"source_file": "index.md"} | [
0.04435765743255615,
0.023031629621982574,
-0.015956901013851166,
-0.03583487495779991,
-0.08039236068725586,
-0.07031667232513428,
0.02576267160475254,
-0.024837328121066093,
-0.039102185517549515,
0.06268982589244843,
0.08127795904874802,
-0.01257439237087965,
0.08133840560913086,
-0.027... |
637e36c3-75b4-45cb-9c25-cd735c0b6d48 | Refreshing dictionaries {#refreshing-dictionaries}
We have specified a
LIFETIME
for the dictionary of
MIN 600 MAX 900
. LIFETIME is the update interval for the dictionary, with the values here causing a periodic reload at a random interval between 600 and 900s. This random interval is necessary in order to distribute the load on the dictionary source when updating on a large number of servers. During updates, the old version of a dictionary can still be queried, with only the initial load blocking queries. Note that setting
LIFETIME(0)
prevents dictionaries from updating.
Dictionaries can be forcibly reloaded using the
SYSTEM RELOAD DICTIONARY
command.
For database sources such as ClickHouse and Postgres, you can set up a query that will update the dictionaries only if they really changed (the response of the query determines this), rather than at a periodic interval. Further details can be found
here
.
Other dictionary types {#other-dictionary-types}
ClickHouse also supports
Hierarchical
,
Polygon
and
Regular Expression
dictionaries.
More reading {#more-reading}
Using Dictionaries to Accelerate Queries
Advanced Configuration for Dictionaries | {"source_file": "index.md"} | [
-0.03001543879508972,
-0.055490635335445404,
-0.019237643107771873,
0.06333668529987335,
-0.05675739422440529,
-0.12322478741407394,
0.0005901921540498734,
-0.09445550292730331,
0.027793634682893753,
0.004687437321990728,
0.0034019513987004757,
0.09221795946359634,
-0.0050069550052285194,
... |
731f2bc6-67ca-4bc5-9faa-90c9e1eeb86c | sidebar_position: 2
sidebar_label: 'What is OLAP?'
description: 'OLAP stands for Online Analytical Processing. It is a broad term that can be looked at from two perspectives: technical and business.'
title: 'What is OLAP?'
slug: /concepts/olap
keywords: ['OLAP']
doc_type: 'reference'
What is OLAP?
OLAP
stands for Online Analytical Processing. It is a broad term that can be looked at from two perspectives: technical and business. At the highest level, you can just read these words backward:
Processing
— Some source data is processed…
Analytical
— …to produce some analytical reports and insights…
Online
— …in real-time.
OLAP from the business perspective {#olap-from-the-business-perspective}
In recent years business people have started to realize the value of data. Companies who make their decisions blindly more often than not fail to keep up with the competition. The data-driven approach of successful companies forces them to collect all data that might be even remotely useful for making business decisions, and imposes on them a need for mechanisms which allow them to analyze this data in a timely manner. Here's where OLAP database management systems (DBMS) come in.
In a business sense, OLAP allows companies to continuously plan, analyze, and report operational activities, thus maximizing efficiency, reducing expenses, and ultimately gaining market share. This can be done either in an in-house system or outsourced to SaaS providers like web/mobile analytics services, CRM services, etc. OLAP is the technology behind many BI (business intelligence) applications.
ClickHouse is an OLAP database management system that is pretty often used as a backend for those SaaS solutions for analyzing domain-specific data. However, some businesses are still reluctant to share their data with third-party providers and so an in-house data warehouse scenario is also viable.
OLAP from the technical perspective {#olap-from-the-technical-perspective}
All database management systems could be classified into two groups: OLAP (Online
Analytical
Processing) and OLTP (Online
Transactional
Processing). The former focuses on building reports, each based on large volumes of historical data, but by doing it less frequently. The latter usually handles a continuous stream of transactions, constantly modifying the current state of data.
In practice OLAP and OLTP are not viewed as binary categories, but more like a spectrum. Most real systems usually focus on one of them but provide some solutions or workarounds if the opposite kind of workload is also desired. This situation often forces businesses to operate multiple storage systems that are integrated. This might not be such a big deal, but having more systems increases maintenance costs, and as such the trend in recent years is towards HTAP (
Hybrid Transactional/Analytical Processing
) when both kinds of workload are handled equally well by a single database management system. | {"source_file": "olap.md"} | [
-0.06621648371219635,
0.018360944464802742,
-0.048850663006305695,
0.053910452872514725,
0.0162640530616045,
-0.09458157420158386,
-0.0008447885629720986,
0.04595636576414108,
-0.000040844621253199875,
-0.028323329985141754,
-0.013625089079141617,
0.033464159816503525,
0.05137196555733681,
... |
b823e534-c0be-4ff6-bc7a-c6f67ef80933 | Even if a DBMS started out as a pure OLAP or pure OLTP, it is forced to move in the HTAP direction to keep up with the competition. ClickHouse is no exception. Initially, it was designed as a
fast-as-possible OLAP system
and it still does not have full-fledged transaction support, but some features like consistent read/writes and mutations for updating/deleting data have been added.
The fundamental trade-off between OLAP and OLTP systems remains:
To build analytical reports efficiently it's crucial to be able to read columns separately, thus most OLAP databases are
columnar
;
While storing columns separately increases costs of operations on rows, like append or in-place modification, proportionally to the number of columns (which can be huge if the systems try to collect all details of an event just in case). Thus, most OLTP systems store data arranged by rows. | {"source_file": "olap.md"} | [
-0.0022780296858400106,
-0.041138503700494766,
-0.0747518241405487,
0.059175193309783936,
0.0008271135156974196,
-0.06738302856683731,
-0.02313827909529209,
0.026468731462955475,
0.07418303191661835,
0.04952794313430786,
0.0207919143140316,
0.11356294900178909,
0.031181376427412033,
-0.050... |
de9be494-0b48-4f70-9c24-c142c50cfe43 | sidebar_label: 'Glossary'
description: 'This page contains a list of commonly used words and phrases regarding ClickHouse, as well as their definitions.'
title: 'Glossary'
slug: /concepts/glossary
keywords: ['glossary', 'definitions', 'terminology']
doc_type: 'reference'
Glossary
Atomicity {#atomicity}
Atomicity ensures that a transaction (a series of database operations) is treated as a single, indivisible unit. This means that either all operations within the transaction occur, or none do. An example of an atomic transaction is transferring money from one bank account to another. If either step of the transfer fails, the transaction fails, and the money stays in the first account. Atomicity ensures no money is lost or created.
Block {#block}
A block is a logical unit for organizing data processing and storage. Each block contains columnar data which is processed together to enhance performance during query execution. By processing data in blocks, ClickHouse utilizes CPU cores efficiently by minimizing cache misses and facilitating vectorized execution. ClickHouse uses various compression algorithms, such as LZ4, ZSTD, and Delta, to compress data in blocks.
Cluster {#cluster}
A collection of nodes (servers) that work together to store and process data.
CMEK {#cmek}
Customer-managed encryption keys (CMEK) allow customers to use their key-management service (KMS) key to encrypt the ClickHouse disk data key and protect their data at rest.
Dictionary {#dictionary}
A dictionary is a mapping of key-value pairs that is useful for various types of reference lists. It is a powerful feature that allows for efficient lookups in queries, which is often faster than using a `JOIN` with reference tables.
Distributed table {#distributed-table}
A distributed table in ClickHouse is a special type of table that does not store data itself but provides a unified view for distributed query processing across multiple servers in a cluster.
Granule {#granule}
A granule is a batch of rows in an uncompressed block. When reading data, ClickHouse accesses granules, but not individual rows, which enables faster data processing in analytical workloads. A granule contains 8192 rows by default. The primary index contains one entry per granule.
Incremental materialized view {#incremental-materialized-view}
An incremental materialized view in ClickHouse is a type of materialized view that processes and aggregates data at insert time. When new data is inserted into the source table, the materialized view executes a predefined SQL aggregation query only on the newly inserted blocks and writes the aggregated results to a target table.
Lightweight update {#lightweight-update} | {"source_file": "glossary.md"} | [
-0.09719756990671158,
-0.024984540417790413,
-0.10212749987840652,
0.0071130492724478245,
-0.05727999284863472,
-0.09441574662923813,
0.06985831260681152,
-0.01387658342719078,
0.037592507898807526,
0.014516348019242287,
0.05872657522559166,
0.010131250135600567,
0.0022883673664182425,
-0.... |
15702cb4-ab82-458d-8b82-11ae6b4e42fd | Lightweight update {#lightweight-update}
A lightweight update in ClickHouse is an experimental feature that allows you to update rows in a table using standard SQL UPDATE syntax, but instead of rewriting entire columns or data parts (as with traditional mutations), it creates "patch parts" containing only the updated columns and rows. These updates are immediately visible in SELECT queries through patch application, but the physical data is only updated during subsequent merges.
Materialized view {#materialized-view}
A materialized view in ClickHouse is a mechanism that automatically runs a query on data as it is inserted into a source table, storing the transformed or aggregated results in a separate target table for faster querying.
MergeTree {#mergetree}
A MergeTree in ClickHouse is a table engine designed for high data ingest rates and large data volumes. It is the core storage engine in ClickHouse, providing features such as columnar storage, custom partitioning, sparse primary indexes, and support for background data merges.
Mutation {#mutation}
A mutation in ClickHouse refers to an operation that modifies or deletes existing data in a table, typically using commands like ALTER TABLE ... UPDATE or ALTER TABLE ... DELETE. Mutations are implemented as asynchronous background processes that rewrite entire data parts affected by the change, rather than modifying rows in place.
On-the-fly mutation {#on-the-fly-mutation}
On-the-fly mutations in ClickHouse are a mechanism that allows updates or deletes to be visible in subsequent SELECT queries immediately after the mutation is submitted, without waiting for the background mutation process to finish.
Parts {#parts}
A physical file on a disk that stores a portion of the table's data. This is different from a partition, which is a logical division of a table's data that is created using a partition key.
Partitioning key {#partitioning-key}
A partitioning key in ClickHouse is a SQL expression defined in the PARTITION BY clause when creating a table. It determines how data is logically grouped into partitions on disk. Each unique value of the partitioning key forms its own physical partition, allowing for efficient data management operations such as dropping, moving, or archiving entire partitions.
Primary key {#primary-key}
In ClickHouse, a primary key determines the order in which data is stored on disk and is used to build a sparse index that speeds up query filtering. Unlike traditional databases, the primary key in ClickHouse does not enforce uniqueness—multiple rows can have the same primary key value.
Projection {#projection}
A projection in ClickHouse is a hidden, automatically maintained table that stores data in a different order or with precomputed aggregations to speed up queries, especially those filtering on columns not in the main primary key.
Refreshable materialized view {#refreshable-materialized-view} | {"source_file": "glossary.md"} | [
-0.052608855068683624,
-0.03443961590528488,
-0.027777697890996933,
0.03937765955924988,
-0.0108528733253479,
-0.13418419659137726,
-0.0028320588171482086,
-0.014345947653055191,
-0.02092221938073635,
0.049218252301216125,
0.03471788391470909,
0.035748496651649475,
0.016199998557567596,
-0... |
4c4a9c82-a9a4-401a-97b4-d41abbe1c0fa | Refreshable materialized view {#refreshable-materialized-view}
Refreshable materialized view is a type of materialized view that periodically re-executes its query over the full dataset and stores the result in a target table. Unlike incremental materialized views, refreshable materialized views are updated on a schedule and can support complex queries, including JOINs and UNIONs, without restrictions.
Replica {#replica}
A copy of the data stored in a ClickHouse database. You can have any number of replicas of the same data for redundancy and reliability. Replicas are used in conjunction with the ReplicatedMergeTree table engine, which enables ClickHouse to keep multiple copies of data in sync across different servers.
Shard {#shard}
A subset of data. ClickHouse always has at least one shard for your data. If you do not split the data across multiple servers, your data will be stored in one shard. Sharding data across multiple servers can be used to divide the load if you exceed the capacity of a single server.
Skipping index {#skipping-index}
Skipping indices are used to store small amounts of metadata at the level of multiple consecutive granules which allows ClickHouse to avoid scanning irrelevant rows. Skipping indices provide a lightweight alternative to projections.
Sorting key {#sorting-key}
In ClickHouse, a sorting key defines the physical order of rows on disk. If you do not specify a primary key, ClickHouse uses the sorting key as the primary key. If you specify both, the primary key must be a prefix of the sorting key.
Sparse index {#sparse-index}
A type of indexing in which the primary index contains one entry for a group of rows, rather than a single row. The entry that corresponds to a group of rows is referred to as a mark. With sparse indexes, ClickHouse first identifies groups of rows that potentially match the query and then processes them separately to find a match. Because of this, the primary index is small enough to be loaded into memory.
Table engine {#table-engine}
Table engines in ClickHouse determine how data is written, stored and accessed. MergeTree is the most common table engine, and allows quick insertion of large amounts of data which get processed in the background.
TTL {#ttl}
Time To Live (TTL) is a ClickHouse feature that automatically moves, deletes, or rolls up columns or rows after a certain time period. This allows you to manage storage more efficiently because you can delete, move, or archive the data that you no longer need to access frequently. | {"source_file": "glossary.md"} | [
-0.057678524404764175,
-0.07590124011039734,
-0.05128728598356247,
0.034219421446323395,
-0.041594427078962326,
-0.02930961176753044,
-0.030890440568327904,
-0.07935784012079239,
0.0409080907702446,
0.019623911008238792,
-0.01866087317466736,
0.07197399437427521,
0.0682925432920456,
-0.065... |
0baf916d-4ec5-4474-8d61-4bb5dcc3dfc3 | title: 'Concepts'
slug: /concepts
description: 'Landing page for concepts'
pagination_next: null
pagination_prev: null
keywords: ['concepts', 'OLAP', 'fast']
doc_type: 'landing-page'
In this section of the docs we'll dive into the concepts around what makes ClickHouse so fast and efficient.
| Page | Description |
|------------------------------------------------------------------|---------------------------------------------------------------------------------------|
| Why is ClickHouse so Fast? | Learn what makes ClickHouse so fast. |
| What is OLAP? | Learn what Online Analytical Processing is. |
| Why is ClickHouse unique? | Learn what makes ClickHouse unique. |
| Glossary | This page contains a glossary of terms you'll commonly encounter throughout the docs. |
| FAQ | A compilation of the most frequently asked questions we get about ClickHouse. | {"source_file": "index.md"} | [
0.022206006571650505,
-0.010808460414409637,
-0.009820455685257912,
0.01570669561624527,
-0.034652989357709885,
-0.017592422664165497,
0.00883170124143362,
-0.08422770351171494,
0.017411867156624794,
0.0146362679079175,
0.026890814304351807,
0.04141509160399437,
0.0032311961986124516,
-0.0... |
d6b8e608-6350-45f3-a4f1-1aaf92b48ba0 | slug: /operations/utilities/static-files-disk-uploader
title: 'clickhouse-static-files-disk-uploader'
keywords: ['clickhouse-static-files-disk-uploader', 'utility', 'disk', 'uploader']
description: 'Provides a description of the clickhouse-static-files-disk-uploader utility'
doc_type: 'guide'
clickhouse-static-files-disk-uploader
Outputs a data directory containing metadata for a specified ClickHouse table. This metadata can be used to create a ClickHouse table on a different server containing a read-only dataset backed by a `web` disk.
Do not use this tool to migrate data. Instead, use the `BACKUP` and `RESTORE` commands.
Usage {#usage}
bash
$ clickhouse static-files-disk-uploader [args]
Commands {#commands}
|Command|Description|
|---|---|
|`-h`, `--help`|Prints help information|
|`--metadata-path [path]`|The path containing metadata for the specified table|
|`--test-mode`|Enables `test` mode, which submits a PUT request to the given URL with the table metadata|
|`--link`|Creates symlinks instead of copying files to the output directory|
|`--url [url]`|Web server URL for `test` mode|
|`--output-dir [dir]`|Directory to output files in non-`test` mode|
Retrieve metadata path for the specified table {#retrieve-metadata-path-for-the-specified-table}
When using `clickhouse-static-files-disk-uploader`, you must obtain the metadata path for your desired table. Run the following query, specifying your target table and database:
sql
SELECT data_paths
FROM system.tables
WHERE name = 'mytable' AND database = 'default';
This should return the path to the data directory for the specified table:
response
┌─data_paths────────────────────────────────────────────┐
│ ['./store/bcc/bccc1cfd-d43d-43cf-a5b6-1cda8178f1ee/'] │
└───────────────────────────────────────────────────────┘
Output table metadata directory to the local filesystem {#output-table-metadata-directory-to-the-local-filesystem}
Using the target output directory `output` and a given metadata path, execute the following command:
bash
$ clickhouse static-files-disk-uploader --output-dir output --metadata-path ./store/bcc/bccc1cfd-d43d-43cf-a5b6-1cda8178f1ee/
If successful, you should see the following message, and the `output` directory should contain the metadata for the specified table:
response
Data path: "/Users/john/store/bcc/bccc1cfd-d43d-43cf-a5b6-1cda8178f1ee", destination path: "output"
Output table metadata directory to an external URL {#output-table-metadata-directory-to-an-external-url}
This step is similar to outputting the data directory to the local filesystem, but with the addition of the `--test-mode` flag. Instead of specifying an output directory, you must specify a target URL via the `--url` flag.
With `test` mode enabled, the table metadata directory is uploaded to the specified URL via a PUT request. | {"source_file": "static-files-disk-uploader.md"} | [
-0.009283964522182941,
-0.0698176771402359,
-0.05015011876821518,
0.008272492326796055,
0.026356739923357964,
-0.10020799189805984,
0.0050306725315749645,
0.029926659539341927,
-0.0909554585814476,
0.053991563618183136,
0.0952925831079483,
-0.003916451707482338,
0.07762172073125839,
-0.006... |
9100ef91-1971-43eb-ac4e-2115bd3f809a | With `test` mode enabled, the table metadata directory is uploaded to the specified URL via a PUT request.
bash
$ clickhouse static-files-disk-uploader --test-mode --url http://nginx:80/test1 --metadata-path ./store/bcc/bccc1cfd-d43d-43cf-a5b6-1cda8178f1ee/
Using the table metadata directory to create a ClickHouse table {#using-the-table-metadata-directory-to-create-a-clickhouse-table}
Once you have the table metadata directory, you can use it to create a ClickHouse table on a different server.
Please see this GitHub repo for a demo. In the example, we create a table using a `web` disk, which allows us to attach the table to a dataset on a different server. | {"source_file": "static-files-disk-uploader.md"} | [
-0.012832854874432087,
-0.07345543801784515,
-0.1332423985004425,
0.013596386648714542,
-0.024227555841207504,
-0.08445338159799576,
-0.004928476642817259,
0.02513733319938183,
-0.027516141533851624,
0.0955267921090126,
0.05181200057268143,
-0.03965621441602707,
0.09484069794416428,
0.0463... |
8104a247-7ba1-44df-a76a-882a12304dd2 | slug: /native-protocol/basics
sidebar_position: 1
title: 'Basics'
description: 'Native protocol basics'
keywords: ['native protocol', 'TCP protocol', 'protocol basics', 'binary protocol', 'client-server communication']
doc_type: 'guide'
Basics
:::note
Client protocol reference is in progress.
Most examples are only in Go.
:::
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
This document describes the binary protocol for ClickHouse TCP clients.
Varint {#varint}
For lengths, packet codes, and other cases the unsigned varint encoding is used. Use `binary.PutUvarint` and `binary.ReadUvarint`.
:::note
Signed varint is not used.
:::
String {#string}
Variable length strings are encoded as `(length, value)`, where `length` is a varint and `value` is a UTF-8 string.
:::important
Validate length to prevent OOM: `0 ≤ len < MAX`
:::
```go
s := "Hello, world!"
// Writing string length as uvarint.
buf := make([]byte, binary.MaxVarintLen64)
n := binary.PutUvarint(buf, uint64(len(s)))
buf = buf[:n]
// Writing string value.
buf = append(buf, s...)
```
```go
r := bytes.NewReader([]byte{
0xd, 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x2c,
0x20, 0x77, 0x6f, 0x72, 0x6c, 0x64, 0x21,
})
// Read length.
n, err := binary.ReadUvarint(r)
if err != nil {
panic(err)
}
// Check n to prevent OOM or runtime exception in make().
const maxSize = 1024 * 1024 * 10 // 10 MB
if n > maxSize { // n is unsigned, so only the upper bound needs checking
panic("invalid n")
}
buf := make([]byte, n)
if _, err := io.ReadFull(r, buf); err != nil {
panic(err)
}
fmt.Println(string(buf))
// Hello, world!
```
hexdump
00000000 0d 48 65 6c 6c 6f 2c 20 77 6f 72 6c 64 21 |.Hello, world!|
text
DUhlbGxvLCB3b3JsZCE
go
data := []byte{
0xd, 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x2c,
0x20, 0x77, 0x6f, 0x72, 0x6c, 0x64, 0x21,
}
Integers {#integers}
:::tip
ClickHouse uses Little Endian for fixed-size integers.
:::
Int32 {#int32}
```go
v := int32(1000)
// Encode.
buf := make([]byte, 8)
binary.LittleEndian.PutUint32(buf, uint32(v))
// Decode.
d := int32(binary.LittleEndian.Uint32(buf))
fmt.Println(d) // 1000
```
hexdump
00000000 e8 03 00 00 00 00 00 00 |........|
text
6AMAAAAAAAA
Boolean {#boolean}
Booleans are represented by single byte,
1
is
true
and
0
is
false
. | {"source_file": "basics.md"} | [
0.015302582643926144,
0.05210546404123306,
-0.05168762430548668,
-0.09254486113786697,
-0.09135065227746964,
-0.02720891684293747,
0.07740350067615509,
0.029730452224612236,
-0.05714276060461998,
-0.02890700474381447,
0.02149573341012001,
0.007830190472304821,
0.04120270162820816,
0.037230... |