id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
d90e447a-fab7-4099-8c13-f360644a82e9 | If your production uses complex features like replication, distributed tables and cascading materialized views, make sure they are configured similarly in pre-production.
There's a trade-off between using roughly the same number of servers or VMs in pre-production as in production but of a smaller size, or far fewer of them but of the same size. The first option might catch extra network-related issues, while the latter is easier to manage.
The second area to invest in is
automated testing infrastructure
. Don't assume that if some kind of query has executed successfully once, it'll continue to do so forever. It's OK to have some unit tests where ClickHouse is mocked, but make sure your product has a reasonable set of automated tests that are run against real ClickHouse and check that all important use cases are still working as expected.
An extra step forward could be contributing those automated tests to
ClickHouse's open-source test infrastructure
that are continuously used in its day-to-day development. It definitely will take some additional time and effort to learn
how to run it
and then how to adapt your tests to this framework, but it'll pay off by ensuring that ClickHouse releases are already tested against them by the time they are announced stable, instead of repeatedly losing time reporting an issue after the fact and then waiting for a bugfix to be implemented, backported, and released. Some companies even have an internal policy of contributing such tests to the infrastructure they use (called
Beyonce's Rule
at Google).
When you have your pre-production environment and testing infrastructure in place, choosing the best version is straightforward:
Routinely run your automated tests against new ClickHouse releases. You can do it even for ClickHouse releases that are marked as
testing
, but going forward to the next steps with them is not recommended.
Deploy the ClickHouse release that passed the tests to pre-production and check that all processes are running as expected.
Report any issues you discovered to
ClickHouse GitHub Issues
.
If there were no major issues, it should be safe to start deploying the ClickHouse release to your production environment. Investing in gradual release automation that implements an approach similar to
canary releases
or
green-blue deployments
might further reduce the risk of issues in production.
As you might have noticed, there's nothing specific to ClickHouse in the approach described above - people do that for any piece of infrastructure they rely on if they take their production environment seriously.
How to choose between ClickHouse releases? {#how-to-choose-between-clickhouse-releases}
If you look into the contents of the ClickHouse package repository, you'll see two kinds of packages:
stable
lts
(long-term support)
Here is some guidance on how to choose between them: | {"source_file": "production.md"} | [
-0.0038496865890920162,
-0.08793605118989944,
-0.01836487278342247,
0.017732901498675346,
-0.03161980211734772,
-0.07004977017641068,
-0.0948123037815094,
-0.06963223963975906,
-0.048321302980184555,
0.011349011212587357,
0.004661354701966047,
-0.02566586434841156,
0.008904688991606236,
-0... |
c509bae1-9c37-44d9-a2b0-2c17a48a0546 | stable
lts
(long-term support)
Here is some guidance on how to choose between them:
stable
is the kind of package we recommend by default. They are released roughly monthly (and thus provide new features with reasonable delay), and the three latest stable releases are supported in terms of diagnostics and backporting of bug fixes.
lts
are released twice a year and are supported for a year after their initial release. You might prefer them over
stable
in the following cases:
Your company has some internal policies that do not allow for frequent upgrades or using non-LTS software.
You are using ClickHouse in some secondary products that either do not require any complex ClickHouse features or do not have enough resources to keep it updated.
Many teams who initially think that
lts
is the way to go often switch to
stable
anyway because of some recent feature that's important for their product.
:::tip
One more thing to keep in mind when upgrading ClickHouse: we always keep an eye on compatibility across releases, but sometimes it's not reasonable to preserve it and some minor details might change. So make sure you check the
changelog
before upgrading to see if there are any notes about backward-incompatible changes.
::: | {"source_file": "production.md"} | [
-0.025394637137651443,
-0.1342206746339798,
0.014857128262519836,
-0.06949703395366669,
0.03979523479938507,
0.009246349334716797,
-0.1113106906414032,
0.028732961043715477,
-0.06935102492570877,
-0.04140494391322136,
0.028202347457408905,
0.0707201361656189,
-0.061753157526254654,
0.01824... |
69d27097-ced9-4c3c-b537-4a41c72dcba9 | slug: /faq/operations/
sidebar_position: 3
sidebar_label: 'Question about operating ClickHouse servers and clusters'
title: 'Question about operating ClickHouse servers and clusters'
description: 'Landing page for questions about operating ClickHouse servers and clusters'
doc_type: 'landing-page'
keywords: ['operations', 'administration', 'deployment', 'cluster management', 'faq']
Question about operating ClickHouse servers and clusters
Which ClickHouse version should I use in production?
Is it possible to deploy ClickHouse with separate storage and compute?
Is it possible to delete old records from a ClickHouse table?
How do I configure ClickHouse Keeper?
Can ClickHouse integrate with LDAP?
How do I configure users, roles and permissions in ClickHouse?
Can you update or delete rows in ClickHouse?
Does ClickHouse support multi-region replication?
:::info Don't see what you're looking for?
Check out our
Knowledge Base
and also browse the many helpful articles found here in the documentation.
::: | {"source_file": "index.md"} | [
0.027572091668844223,
-0.1074100062251091,
-0.08854737132787704,
-0.003466560272499919,
-0.04843446612358093,
-0.05036477744579315,
-0.00839967094361782,
-0.09754950553178787,
-0.10799553990364075,
0.0325273796916008,
0.04875319451093674,
0.021770931780338287,
0.07993379980325699,
-0.02991... |
c5a44915-b627-41e5-9821-8ab1d6f8ca1f | slug: /faq/operations/multi-region-replication
title: 'Does ClickHouse support multi-region replication?'
toc_hidden: true
toc_priority: 30
description: 'This page answers whether ClickHouse supports multi-region replication'
doc_type: 'reference'
keywords: ['multi-region', 'replication', 'geo-distributed', 'distributed systems', 'data synchronization']
Does ClickHouse support multi-region replication? {#does-clickhouse-support-multi-region-replication}
The short answer is "yes". However, we recommend keeping latency between all regions/datacenters in the two-digit millisecond range; otherwise, write performance will suffer as writes go through a distributed consensus protocol. For example, replication between US coasts will likely work fine, but replication between the US and Europe won't.
Configuration-wise, there's no difference compared to single-region replication; simply use hosts located in different regions for the replicas.
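As an illustrative sketch, the configuration is the same ReplicatedMergeTree setup you would use within a single region; only the replica hosts differ. The cluster name, macros, and Keeper path below are assumptions:

```sql
-- Replicas of this table may live in different regions;
-- the '{shard}' and '{replica}' macros are defined per host
CREATE TABLE events ON CLUSTER my_cluster
(
    ts DateTime,
    payload String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events', '{replica}')
ORDER BY ts;
```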
For more information, see
full article on data replication
. | {"source_file": "multi-region-replication.md"} | [
0.0007590966997668147,
-0.1200670450925827,
-0.050672497600317,
-0.04478009417653084,
-0.05322900041937828,
-0.06056274473667145,
-0.11605897545814514,
-0.09286961704492569,
-0.05025578290224075,
0.027522316202521324,
-0.02670995146036148,
0.07146432250738144,
0.04453811049461365,
0.004880... |
5918519f-942f-448e-84d9-594da8600f97 | slug: /faq/operations/delete-old-data
title: 'Is it possible to delete old records from a ClickHouse table?'
toc_hidden: true
toc_priority: 20
description: 'This page answers the question of whether it is possible to delete old records from a ClickHouse table'
doc_type: 'reference'
keywords: ['delete data', 'TTL', 'data retention', 'cleanup', 'data lifecycle']
Is it possible to delete old records from a ClickHouse table? {#is-it-possible-to-delete-old-records-from-a-clickhouse-table}
The short answer is "yes". ClickHouse has multiple mechanisms that allow freeing up disk space by removing old data. Each mechanism is aimed at a different scenario.
TTL {#ttl}
ClickHouse allows you to drop values automatically when some condition is met. This condition is configured as an expression based on any columns, usually just a static offset for a timestamp column.
The key advantage of this approach is that it does not need any external system to trigger it: once TTL is configured, data removal happens automatically in the background.
:::note
TTL can also be used to move data not only to
/dev/null
, but also between different storage systems, like from SSD to HDD.
:::
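As a minimal sketch (table and column names are illustrative), a TTL clause that drops rows 30 days after their timestamp could look like this:

```sql
-- Hypothetical events table: rows are removed automatically
-- once they are older than 30 days
CREATE TABLE events
(
    ts DateTime,
    payload String
)
ENGINE = MergeTree
ORDER BY ts
TTL ts + INTERVAL 30 DAY;
```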
More details on
configuring TTL
.
DELETE FROM {#delete-from}
DELETE FROM
allows standard DELETE queries to be run in ClickHouse. The rows targeted in the filter clause are marked as deleted, and removed from future result sets. Cleanup of the rows happens asynchronously.
:::note
DELETE FROM is generally available from version 23.3 and newer. On older versions, it is experimental and must be enabled with:
sql
SET allow_experimental_lightweight_delete = true;
:::
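Assuming a hypothetical `events` table with a `ts` timestamp column, a lightweight delete could look like:

```sql
-- Rows are marked as deleted immediately; physical cleanup happens asynchronously
DELETE FROM events WHERE ts < now() - INTERVAL 90 DAY;
```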
ALTER DELETE {#alter-delete}
ALTER DELETE removes rows using asynchronous batch operations. Unlike DELETE FROM, queries run after the ALTER DELETE and before the batch operations complete will include the rows targeted for deletion. For more details see the
ALTER DELETE
docs.
ALTER DELETE
can be issued to flexibly remove old data. If you need to do it regularly, the main downside is the need for an external system to submit the query. There are also performance considerations, since mutations rewrite complete parts even if there is only a single row to be deleted.
This is the most common approach for making your ClickHouse-based system
GDPR
-compliant.
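Assuming the same hypothetical `events` table with a `ts` column, a mutation-based delete could look like:

```sql
-- Asynchronous mutation: affected parts are rewritten in the background
ALTER TABLE events DELETE WHERE ts < now() - INTERVAL 90 DAY;
```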
More details on
mutations
.
DROP PARTITION {#drop-partition}
ALTER TABLE ... DROP PARTITION
provides a cost-efficient way to drop a whole partition. It's not as flexible and needs a proper partitioning scheme configured at table creation, but it still covers most common cases. Like mutations, it needs to be executed from an external system for regular use.
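As a sketch, assuming a hypothetical table partitioned by month:

```sql
-- Table name and partition key are illustrative
CREATE TABLE events
(
    ts DateTime,
    payload String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(ts)
ORDER BY ts;

-- Dropping a whole month is cheap compared to row-level deletes
ALTER TABLE events DROP PARTITION '202401';
```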
More details on
manipulating partitions
.
TRUNCATE {#truncate}
It's rather radical to drop all data from a table, but in some cases it might be exactly what you need.
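For completeness, a sketch with an illustrative table name:

```sql
-- Removes all data from the table but keeps its definition
TRUNCATE TABLE events;
```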
More details on
table truncation
. | {"source_file": "delete-old-data.md"} | [
-0.10349497944116592,
-0.048416245728731155,
-0.0534345842897892,
0.04563174769282341,
-0.011861860752105713,
-0.04643704369664192,
-0.002972251968458295,
-0.09688815474510193,
0.002776575740426779,
0.041656870394945145,
0.05877591669559479,
0.10505414009094238,
0.07858925312757492,
-0.042... |
464530a6-589f-48c8-aeb5-0bfd167e7710 | slug: /faq/operations/deploy-separate-storage-and-compute
title: 'Is it possible to deploy ClickHouse with separate storage and compute?'
sidebar_label: 'Is it possible to deploy ClickHouse with separate storage and compute?'
toc_hidden: true
toc_priority: 20
description: 'This page provides an answer as to whether it is possible to deploy ClickHouse with separate storage and compute'
doc_type: 'guide'
keywords: ['storage', 'disk configuration', 'data organization', 'volume management', 'storage tiers']
The short answer is "yes".
Object storage (S3, GCS) can be used as the elastic primary storage backend for data in ClickHouse tables.
S3-backed MergeTree
and
GCS-backed MergeTree
guides are published. Only metadata is stored locally on compute nodes in this configuration. You can easily upscale and downscale compute resources in this setup as additional nodes only need to replicate metadata. | {"source_file": "separate_storage.md"} | [
-0.04277594015002251,
-0.08727910369634628,
-0.09811294078826904,
-0.004699851851910353,
-0.030049409717321396,
-0.05997142940759659,
0.0063097476959228516,
-0.05226990580558777,
-0.03851497918367386,
0.10595360398292542,
-0.011080946773290634,
-0.0035834196023643017,
0.08249504119157791,
... |
dad0e808-64a4-47e3-990e-3965d1d2fac1 | slug: /faq/use-cases/key-value
title: 'Can I use ClickHouse as a key-value storage?'
toc_hidden: true
toc_priority: 101
description: 'Answers the frequently asked question of whether or not ClickHouse can be used as a key-value storage?'
doc_type: 'reference'
keywords: ['key-value', 'data model', 'use case', 'schema design', 'storage pattern']
Can I use ClickHouse as a key-value storage? {#can-i-use-clickhouse-as-a-key-value-storage}
The short answer is
"no"
. The key-value workload is among the top positions in the list of cases when
NOT
to use ClickHouse. It's an
OLAP
system after all, while there are many excellent key-value storage systems out there.
However, there might be situations where it still makes sense to use ClickHouse for key-value-like queries. Usually, these are low-budget products where the main workload is analytical in nature and fits ClickHouse well, but there's also some secondary process that needs a key-value pattern without very high request throughput or strict latency requirements. If you had an unlimited budget, you would have installed a secondary key-value database for this secondary workload, but in reality, there's the additional cost of maintaining one more storage system (monitoring, backups, etc.), which might be desirable to avoid.
If you decide to go against recommendations and run some key-value-like queries against ClickHouse, here are some tips:
The key reason why point queries are expensive in ClickHouse is its sparse primary index of main
MergeTree table engine family
. This index can't point to each specific row of data; instead, it points to every N-th row, and the system has to scan from the nearest indexed row to the desired one, reading excessive data along the way. In a key-value scenario, it might be useful to reduce the value of N with the
index_granularity
setting.
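As a hedged sketch (the table name and granularity value are illustrative; the default granularity is 8192 rows), a table tuned for point lookups might be declared as:

```sql
CREATE TABLE kv_store
(
    key   String,
    value String
)
ENGINE = MergeTree
ORDER BY key
SETTINGS index_granularity = 256;  -- smaller N means fewer rows scanned per point query
```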
ClickHouse keeps each column in a separate set of files, so to assemble one complete row it needs to go through each of those files. Their count increases linearly with the number of columns, so in the key-value scenario it might be worth avoiding many columns and putting all your payload in a single
String
column encoded in some serialization format like JSON, Protobuf, or whatever makes sense.
There's an alternative approach that uses
Join
table engine instead of normal
MergeTree
tables and
joinGet
function to retrieve the data. It can provide better query performance but might have some usability and reliability issues. Here's a
usage example
. | {"source_file": "key-value.md"} | [
-0.06558132916688919,
-0.04664397984743118,
-0.10892031341791153,
-0.0034125871025025845,
-0.03759395703673363,
-0.029848096892237663,
0.02914932183921337,
0.009165417402982712,
-0.005762385204434395,
-0.009889552369713783,
0.0036438745446503162,
0.07719828188419342,
0.07148881256580353,
0... |
c75a6d94-c579-47ba-895c-f4db3604ac12 | slug: /faq/use-cases/
sidebar_position: 2
sidebar_label: 'Questions about ClickHouse use cases'
title: 'Questions About ClickHouse Use Cases'
description: 'Landing page listing common questions about ClickHouse use cases'
keywords: ['ClickHouse use cases', 'time-series database', 'key-value storage', 'database applications', 'OLAP use cases']
doc_type: 'landing-page'
Questions about ClickHouse use cases
Can I use ClickHouse as a time-series database?
Can I use ClickHouse as a key-value storage?
:::info Don't see what you're looking for?
Check out our
Knowledge Base
and also browse the many helpful articles found here in the documentation.
::: | {"source_file": "index.md"} | [
0.011929871514439583,
0.0019665351137518883,
-0.0884813517332077,
0.0015223551308736205,
-0.026855990290641785,
0.002294069156050682,
0.015291956253349781,
0.015166038647294044,
-0.047418415546417236,
-0.017120007425546646,
0.03303280100226402,
0.05395922064781189,
0.065609872341156,
-0.02... |
d4e706c5-5fd8-4b13-830d-8d7e05127a8f | slug: /faq/use-cases/time-series
title: 'Can I use ClickHouse as a time-series database?'
toc_hidden: true
toc_priority: 101
description: 'Page describing how to use ClickHouse as a time-series database'
doc_type: 'guide'
keywords: ['time series', 'temporal data', 'use case', 'time-based analytics', 'timeseries']
Can I use ClickHouse as a time-series database? {#can-i-use-clickhouse-as-a-time-series-database}
Note: Please see the blog
Working with Time series data in ClickHouse
for additional examples of using ClickHouse for time series analysis.
ClickHouse is a generic data storage solution for
OLAP
workloads, while there are many specialized
time-series database management systems
. Nevertheless, ClickHouse's
focus on query execution speed
allows it to outperform specialized systems in many cases. There are many independent benchmarks on this topic out there, so we're not going to conduct one here. Instead, let's focus on ClickHouse features that are important to use if that's your use case.
First of all, there are
specialized codecs
that make typical time-series data more compact: either common algorithms like
DoubleDelta
and
Gorilla
, or codecs specific to ClickHouse like
T64
.
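A minimal sketch of applying such codecs (table and column names are illustrative):

```sql
CREATE TABLE metrics
(
    ts    DateTime CODEC(DoubleDelta, LZ4),  -- suits monotonically increasing timestamps
    value Float64  CODEC(Gorilla, LZ4)       -- suits slowly changing gauge values
)
ENGINE = MergeTree
ORDER BY ts;
```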
Second, time-series queries often hit only recent data, like one day or one week old. It makes sense to use servers that have both fast NVMe/SSD drives and high-capacity HDD drives. The ClickHouse
TTL
feature lets you keep fresh, hot data on the fast drives and gradually move it to slower drives as it ages. Rollup or removal of even older data is also possible if your requirements demand it.
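A hedged sketch of such tiering (the storage policy name and volume name are assumptions that must exist in your server configuration):

```sql
CREATE TABLE metrics
(
    ts    DateTime,
    value Float64
)
ENGINE = MergeTree
ORDER BY ts
TTL ts + INTERVAL 7 DAY TO VOLUME 'cold',  -- age out to the slower volume
    ts + INTERVAL 90 DAY DELETE            -- remove entirely after 90 days
SETTINGS storage_policy = 'tiered';
```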
Even though it goes against the ClickHouse philosophy of storing and processing raw data, you can use
materialized views
to fit even tighter latency or cost requirements. | {"source_file": "time-series.md"} | [
-0.06917021423578262,
-0.06989463418722153,
-0.06704836338758469,
0.025676483288407326,
-0.04068109020590782,
-0.03978152200579643,
-0.0032466521952301264,
-0.01915881410241127,
-0.05463143438100815,
-0.03918304294347763,
-0.018941478803753853,
0.027181275188922882,
-0.010686430148780346,
... |
03eeab77-a202-490c-9b5d-404fbfa53a78 | description: 'Details alternative backup or restore methods'
sidebar_label: 'Alternative methods'
slug: /operations/backup/alternative_methods
title: 'Alternative backup or restore methods'
doc_type: 'reference'
Alternative backup methods
ClickHouse stores data on disk, and there are many ways to back up disks.
These are some alternatives that have been used in the past, and that may fit
your use case.
Duplicating source data somewhere else {#duplicating-source-data-somewhere-else}
Often data ingested into ClickHouse is delivered through some sort of persistent
queue, such as
Apache Kafka
. In this case, it is possible to configure an
additional set of subscribers that will read the same data stream while it is
being written to ClickHouse and store it in cold storage somewhere. Most companies
already have some default recommended cold storage, which could be an object store
or a distributed filesystem like
HDFS
.
Filesystem Snapshots {#filesystem-snapshots}
Some local filesystems provide snapshot functionality (for example,
ZFS
),
but they might not be the best choice for serving live queries. A possible solution
is to create additional replicas with this kind of filesystem and exclude them
from the
Distributed
tables that are used for
SELECT
queries.
Snapshots on such replicas will be out of reach of any queries that modify data.
As a bonus, these replicas might have special hardware configurations with more
disks attached per server, which would be cost-effective.
For smaller volumes of data, a simple
INSERT INTO ... SELECT ...
to remote tables
might work as well.
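A sketch using the `remote` table function (the host and table names are illustrative):

```sql
-- Copies all rows to a table on another server over the native protocol
INSERT INTO FUNCTION remote('backup-host:9000', 'db', 'events_copy')
SELECT * FROM db.events;
```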
Manipulations with Parts {#manipulations-with-parts}
ClickHouse allows using the
ALTER TABLE ... FREEZE PARTITION ...
query to create
a local copy of table partitions. This is implemented using hardlinks to the
/var/lib/clickhouse/shadow/
folder, so it usually does not consume extra disk space for old data. The created
copies of files are not handled by the ClickHouse server, so you can just leave them there:
you will have a simple backup that does not require any additional external system,
but it will still be prone to hardware issues. For this reason, it's better to
remotely copy them to another location and then remove the local copies.
Distributed filesystems and object stores are still good options for this,
but normal attached file servers with a large enough capacity might work as well
(in this case the transfer will occur via the network filesystem or maybe
rsync
).
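The flow can be sketched as follows (the table name and partition are illustrative):

```sql
-- Creates hardlinked copies under /var/lib/clickhouse/shadow/
ALTER TABLE events FREEZE PARTITION '202401';
-- ...copy the frozen files off-host; on restore, place them
-- into the table's detached/ directory, then:
ALTER TABLE events ATTACH PARTITION '202401';
```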
Data can be restored from a backup using the
ALTER TABLE ... ATTACH PARTITION ...
query. For more information about queries related to partition manipulations, see the
ALTER
documentation
.
A third-party tool is available to automate this approach:
clickhouse-backup
. | {"source_file": "04_alternative_methods.md"} | [
-0.10453934967517853,
-0.07341035455465317,
-0.05212490260601044,
0.0368490032851696,
-0.03350095823407173,
-0.03050871007144451,
-0.017597414553165436,
-0.00011459564120741561,
-0.030359463766217232,
0.028417982161045074,
0.009294621646404266,
0.09328474849462509,
0.07326850295066833,
-0.... |
c998f136-38cf-4d80-8267-10e0bbcf00cf | description: 'Details backup/restore to or from a local disk'
sidebar_label: 'Local disk / S3 disk'
slug: /operations/backup/disk
title: 'Backup and Restore in ClickHouse'
doc_type: 'guide'
import GenericSettings from '@site/docs/operations_/backup_restore/_snippets/_generic_settings.md';
import S3Settings from '@site/docs/operations_/backup_restore/_snippets/_s3_settings.md';
import ExampleSetup from '@site/docs/operations_/backup_restore/_snippets/_example_setup.md';
import Syntax from '@site/docs/operations_/backup_restore/_snippets/_syntax.md';
BACKUP / RESTORE to disk {#backup-to-a-local-disk}
Syntax {#syntax}
Configure backup destinations for disk {#configure-backup-destinations-for-disk}
Configure a backup destination for local disk {#configure-a-backup-destination}
In the examples below you will see the backup destination specified as
Disk('backups', '1.zip')
.
To use the
Disk
backup engine it is necessary to first add a file specifying
the backup destination at the path below:
text
/etc/clickhouse-server/config.d/backup_disk.xml
For example, the configuration below defines a disk named
backups
and then adds that disk to
the
allowed_disk
list of
backups
:
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <backups>
                <type>local</type>
                <path>/backups/</path>
            </backups>
        </disks>
    </storage_configuration>
    <backups>
        <allowed_disk>backups</allowed_disk>
        <allowed_path>/backups/</allowed_path>
    </backups>
</clickhouse>
```
Configure a backup destination for S3 disk {#backuprestore-using-an-s3-disk}
It is also possible to
BACKUP
/
RESTORE
to S3 by configuring an S3 disk in the
ClickHouse storage configuration. Configure the disk like this by adding a file to
/etc/clickhouse-server/config.d
as was done above for the local disk.
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3_plain>
                <type>s3_plain</type>
                <!-- the disk's endpoint and credentials go here -->
            </s3_plain>
        </disks>
    </storage_configuration>
    <backups>
        <allowed_disk>s3_plain</allowed_disk>
    </backups>
</clickhouse>
```
BACKUP
/
RESTORE
for S3 disk is done in the same way as for local disk:
sql
BACKUP TABLE data TO Disk('s3_plain', 'cloud_backup');
RESTORE TABLE data AS data_restored FROM Disk('s3_plain', 'cloud_backup');
:::note
- This disk should not be used for
MergeTree
itself, only for
BACKUP
/
RESTORE
- If your tables are backed by S3 storage and the types of the disks are different,
it doesn't use
CopyObject
calls to copy parts to the destination bucket, instead,
it downloads and uploads them, which is very inefficient. In this case prefer using
the
BACKUP ... TO S3(<endpoint>)
syntax for this use-case.
:::
Usage examples of backup/restore to local disk {#usage-examples}
Backup and restore a table {#backup-and-restore-a-table}
To back up the table, run:
sql title="Query"
BACKUP TABLE test_db.test_table TO Disk('backups', '1.zip') | {"source_file": "01_local_disk.md"} | [
-0.022314339876174927,
-0.05704967677593231,
-0.035684652626514435,
-0.021368514746427536,
0.02676774561405182,
-0.006185681093484163,
-0.019932659342885017,
-0.005725284572690725,
-0.07368262857198715,
0.006736268289387226,
0.017778154462575912,
0.00805577915161848,
0.09654077887535095,
-... |
58961084-5c0a-4b31-b4d3-804fdc6b2adb | Backup and restore a table {#backup-and-restore-a-table}
To back up the table, run:
sql title="Query"
BACKUP TABLE test_db.test_table TO Disk('backups', '1.zip')
response title="Response"
   ┌─id───────────────────────────────────┬─status─────────┐
1. │ 065a8baf-9db7-4393-9c3f-ba04d1e76bcd │ BACKUP_CREATED │
   └──────────────────────────────────────┴────────────────┘
The table can be restored from the backup using the following command if the table is empty:
sql title="Query"
RESTORE TABLE test_db.test_table FROM Disk('backups', '1.zip')
response title="Response"
   ┌─id───────────────────────────────────┬─status───┐
1. │ f29c753f-a7f2-4118-898e-0e4600cd2797 │ RESTORED │
   └──────────────────────────────────────┴──────────┘
:::note
The above
RESTORE
would fail if the table
test_db.test_table
contains data.
The setting
allow_non_empty_tables=true
allows
RESTORE TABLE
to insert data
into non-empty tables. This will mix earlier data in the table with the data extracted from the backup.
This setting can therefore cause data duplication in the table, and should be used with caution.
:::
To restore the table with data already in it, run:
sql
RESTORE TABLE test_db.test_table FROM Disk('backups', '1.zip')
SETTINGS allow_non_empty_tables=true
Tables can be restored, or backed up, with new names:
sql
RESTORE TABLE test_db.test_table AS test_db.test_table_renamed FROM Disk('backups', '1.zip')
The backup archive for this backup has the following structure:
text
├── .backup
└── metadata
    └── test_db
        └── test_table.sql
Formats other than zip can be used. See
"Backups as tar archives"
below for further details.
Incremental backups to disk {#incremental-backups}
A base backup in ClickHouse is the initial, full backup from which the following
incremental backups are created. Incremental backups only store the changes
made since the base backup, so the base backup must be kept available to
restore from any incremental backup. The base backup destination can be set with setting
base_backup
.
:::note
Incremental backups depend on the base backup. The base backup must be kept available
to be able to restore from an incremental backup.
:::
To make an incremental backup of a table, first make a base backup:
sql
BACKUP TABLE test_db.test_table TO Disk('backups', 'd.zip')
Then create an incremental backup on top of it:
sql
BACKUP TABLE test_db.test_table TO Disk('backups', 'incremental-a.zip')
SETTINGS base_backup = Disk('backups', 'd.zip')
All data from the incremental backup and the base backup can be restored into a
new table
test_db.test_table2
with the command:
sql
RESTORE TABLE test_db.test_table AS test_db.test_table2
FROM Disk('backups', 'incremental-a.zip');
Securing a backup {#assign-a-password-to-the-backup}
Backups written to disk can have a password applied to the file.
The password can be specified using the
password
setting: | {"source_file": "01_local_disk.md"} | [
-0.06428342312574387,
-0.05568857863545418,
0.002687867498025298,
0.08703050762414932,
-0.032998863607645035,
-0.02825545333325863,
0.0071031078696250916,
-0.022799478843808174,
-0.07410188019275665,
0.041739027947187424,
0.0477827712893486,
0.0176297128200531,
0.11063931882381439,
-0.0887... |
7f1aa33a-04bf-4d46-9e5f-e289532e2819 | Securing a backup {#assign-a-password-to-the-backup}
Backups written to disk can have a password applied to the file.
The password can be specified using the
password
setting:
sql
BACKUP TABLE test_db.test_table
TO Disk('backups', 'password-protected.zip')
SETTINGS password='qwerty'
To restore a password-protected backup, the password must again
be specified using the
password
setting:
sql
RESTORE TABLE test_db.test_table
FROM Disk('backups', 'password-protected.zip')
SETTINGS password='qwerty'
Backups as tar archives {#backups-as-tar-archives}
Backups can be stored not only as zip archives, but also as tar archives.
The functionality is the same as for zip, except that password protection is not
supported for tar archives. Additionally, tar archives support a variety of
compression methods.
To make a backup of a table as a tar:
sql
BACKUP TABLE test_db.test_table TO Disk('backups', '1.tar')
To restore from a tar archive:
sql
RESTORE TABLE test_db.test_table FROM Disk('backups', '1.tar')
To change the compression method, the correct file suffix should be appended to
the backup name. For example, to compress the tar archive using gzip run:
sql
BACKUP TABLE test_db.test_table TO Disk('backups', '1.tar.gz')
The supported compression file suffixes are:
-
tar.gz
-
.tgz
-
tar.bz2
-
tar.lzma
-
.tar.zst
-
.tzst
-
.tar.xz
Compression settings {#compression-settings}
The compression method and level of compression can be specified using
the settings
compression_method
and
compression_level
respectively.
sql
BACKUP TABLE test_db.test_table
TO Disk('backups', 'filename.zip')
SETTINGS compression_method='lzma', compression_level=3
Restore specific partitions {#restore-specific-partitions}
If specific partitions associated with a table need to be restored, these can be specified.
Let's create a simple table partitioned into four parts, insert some data into it, and then
take a backup of only the first and fourth partitions:
Setup
```sql
CREATE DATABASE IF NOT EXISTS test_db;
-- Create a partitioned table
CREATE TABLE test_db.partitioned (
id UInt32,
data String,
partition_key UInt8
) ENGINE = MergeTree()
PARTITION BY partition_key
ORDER BY id;
INSERT INTO test_db.partitioned VALUES
(1, 'data1', 1),
(2, 'data2', 2),
(3, 'data3', 3),
(4, 'data4', 4);
SELECT count() FROM test_db.partitioned;
SELECT partition_key, count()
FROM test_db.partitioned
GROUP BY partition_key
ORDER BY partition_key;
```
```response
   ┌─count()─┐
1. │       4 │
   └─────────┘
   ┌─partition_key─┬─count()─┐
1. │             1 │       1 │
2. │             2 │       1 │
3. │             3 │       1 │
4. │             4 │       1 │
   └───────────────┴─────────┘
```
Run the following command to back up partitions 1 and 4:
sql
BACKUP TABLE test_db.partitioned PARTITIONS '1', '4'
TO Disk('backups', 'partitioned.zip')
Run the following command to restore partitions 1 and 4: | {"source_file": "01_local_disk.md"} | [
-0.06236037239432335,
0.034954171627759933,
-0.11279437690973282,
0.0392647422850132,
-0.03960666432976723,
0.008463388308882713,
0.028076395392417908,
0.03439037501811981,
-0.06743065267801285,
0.049599457532167435,
-0.03219017758965492,
0.05264688655734062,
0.12580180168151855,
-0.019362... |
f88bf84a-8dc4-456c-a5be-6704951c506e | sql
BACKUP TABLE test_db.partitioned PARTITIONS '1', '4'
TO Disk('backups', 'partitioned.zip')
Run the following command to restore partitions 1 and 4:
sql
RESTORE TABLE test_db.partitioned PARTITIONS '1', '4'
FROM Disk('backups', 'partitioned.zip')
SETTINGS allow_non_empty_tables=true | {"source_file": "01_local_disk.md"} | [
0.01718912459909916,
-0.06351099163293839,
0.002053176751360297,
0.06763879209756851,
-0.00873838271945715,
-0.0504218190908432,
0.0491073802113533,
-0.004642317071557045,
-0.08055104315280914,
0.006004752591252327,
-0.006742812693119049,
0.00790109671652317,
0.053959041833877563,
0.007035... |
e72e63d9-cb14-4cf2-a9a8-2103b9fc9db0 | description: 'Overview of ClickHouse backup and restore'
sidebar_label: 'S3 endpoint'
slug: /operations/backup/s3_endpoint
title: 'Backup and restore to/from an S3 endpoint'
doc_type: 'guide'
import Syntax from '@site/docs/operations_/backup_restore/_snippets/_syntax.md';
BACKUP / RESTORE to or from an S3 endpoint {#backup-to-a-local-disk}
This article covers backing up or restoring backups to/from an S3 bucket
via an S3 endpoint.
Syntax {#syntax}
Usage example {#usage-examples}
Incremental backup to an S3 endpoint {#incremental-backup-to-an-s3-endpoint}
In this example, we will create a backup to an S3 endpoint and then restore from it
again.
:::note
For an explanation of the differences between a full backup and an incremental
backup, see
"Backup types"
:::
You will need the following information to use this method:
| Parameter         | Example                                                      |
|-------------------|--------------------------------------------------------------|
| An S3 endpoint    | `https://backup-ch-docs.s3.us-east-1.amazonaws.com/backups/` |
| Access key ID     | `BKIOZLE2VYN3VXXTP9RC`                                       |
| Secret access key | `40bwYnbqN7xU8bVePaUCh3+YEyGXu8UOMV9ANpwL`                   |
:::tip
Creating an S3 bucket is covered in section
"use S3 Object Storage as a ClickHouse disk"
:::
The destination for a backup is specified as:
```sql
S3('<s3 endpoint>/<directory>', '<access key id>', '<secret access key>', '<extra_credentials>')
```
### Setup {#create-a-table}

Create the following database and table and insert some random data into it:

```sql
CREATE DATABASE IF NOT EXISTS test_db;

CREATE TABLE test_db.test_table
(
    `key` Int,
    `value` String,
    `array` Array(String)
)
ENGINE = MergeTree
ORDER BY tuple()
```

```sql
INSERT INTO test_db.test_table SELECT *
FROM generateRandom('key Int, value String, array Array(String)')
LIMIT 1000
```
### Create a base backup {#create-a-base-initial-backup}

Incremental backups require a *base* backup to start from. The first parameter of
the S3 destination is the S3 endpoint, followed by the directory within the bucket
to use for this backup. In this example the directory is named `base_backup`.

Run the following command to create the base backup:

```sql
BACKUP TABLE test_db.test_table TO S3(
    'https://backup-ch-docs.s3.us-east-1.amazonaws.com/backups/base_backup',
    '<access key id>',
    '<secret access key>'
)
```
```response
┌─id───────────────────────────────────┬─status─────────┐
│ de442b75-a66c-4a3c-a193-f76f278c70f3 │ BACKUP_CREATED │
└──────────────────────────────────────┴────────────────┘
```
### Add more data {#add-more-data}

Incremental backups are populated with the difference between the base backup and
the current content of the table being backed up. Add more data before taking the
incremental backup:

```sql
INSERT INTO test_db.test_table SELECT *
FROM generateRandom('key Int, value String, array Array(String)')
LIMIT 100
```
### Take an incremental backup {#take-an-incremental-backup}

This backup command is similar to the base backup, but adds `SETTINGS base_backup`
and the location of the base backup. Note that the destination for the incremental
backup is not the same directory as the base; it is the same endpoint with a
different target directory within the bucket. The base backup is in `base_backup`,
and the incremental will be written to `incremental_backup`:

```sql
BACKUP TABLE test_db.test_table TO S3(
    'https://backup-ch-docs.s3.us-east-1.amazonaws.com/backups/incremental_backup',
    '<access key id>',
    '<secret access key>'
)
SETTINGS base_backup = S3(
    'https://backup-ch-docs.s3.us-east-1.amazonaws.com/backups/base_backup',
    '<access key id>',
    '<secret access key>'
)
```
```response
┌─id───────────────────────────────────┬─status─────────┐
│ f6cd3900-850f-41c9-94f1-0c4df33ea528 │ BACKUP_CREATED │
└──────────────────────────────────────┴────────────────┘
```
### Restore from the incremental backup {#restore-from-the-incremental-backup}

This command restores the incremental backup into a new table,
`test_table_restored`.
Note that when an incremental backup is restored, the base backup is also included.
Specify only the **incremental backup** when restoring:

```sql
RESTORE TABLE test_db.test_table AS test_db.test_table_restored FROM S3(
    'https://backup-ch-docs.s3.us-east-1.amazonaws.com/backups/incremental_backup',
    '<access key id>',
    '<secret access key>'
)
```
```response
┌─id───────────────────────────────────┬─status───┐
│ ff0c8c39-7dff-4324-a241-000796de11ca │ RESTORED │
└──────────────────────────────────────┴──────────┘
```
### Verify the count {#verify-the-count}

There were two inserts into the original table `test_table`, one with 1,000 rows
and one with 100 rows, for a total of 1,100.
Verify that the restored table has 1,100 rows:

```sql
SELECT count()
FROM test_db.test_table_restored
```
```response
┌─count()─┐
│    1100 │
└─────────┘
```
### Verify the content {#verify-the-content}

This compares the content of the original table, `test_table`, with the restored
table, `test_table_restored`:

```sql
SELECT throwIf((
        SELECT groupArray(tuple(*))
        FROM test_db.test_table
    ) != (
        SELECT groupArray(tuple(*))
        FROM test_db.test_table_restored
    ), 'Data does not match after BACKUP/RESTORE')
```
---
description: 'Details backup/restore to or from an Azure Blob Storage endpoint'
sidebar_label: 'AzureBlobStorage'
slug: /operations/backup/azure
title: 'Backup and restore to/from Azure Blob Storage'
doc_type: 'guide'
---

import Syntax from '@site/docs/operations_/backup_restore/_snippets/_syntax.md';

## BACKUP/RESTORE to or from Azure Blob Storage {#backup-to-azure-blob-storage}
## Syntax {#syntax}

## Configuring BACKUP / RESTORE to use an AzureBlobStorage endpoint {#configuring-backuprestore-to-use-an-azureblobstorage-endpoint}
To write backups to an AzureBlobStorage container you need the following pieces of information:

- AzureBlobStorage endpoint connection string / URL
- Container
- Path
- Account name (if URL is specified)
- Account key (if URL is specified)
The destination for a backup will be specified as:
```sql
AzureBlobStorage('<connection string>/<url>', '<container>', '<path>', '<account name>', '<account key>')
```
sql
BACKUP TABLE data TO AzureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;',
'testcontainer', 'data_backup');
RESTORE TABLE data AS data_restored FROM AzureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;',
'testcontainer', 'data_backup'); | {"source_file": "03_azure_blob_storage.md"} | [
---
description: 'Overview of ClickHouse backup and restore'
sidebar_label: 'Overview'
slug: /operations/backup/overview
title: 'Backup and Restore in ClickHouse'
doc_type: 'reference'
---

import GenericSettings from '@site/docs/operations_/backup_restore/_snippets/_generic_settings.md';
import Syntax from '@site/docs/operations_/backup_restore/_snippets/_syntax.md';
import AzureSettings from '@site/docs/operations_/backup_restore/_snippets/_azure_settings.md';
import S3Settings from '@site/docs/operations_/backup_restore/_snippets/_s3_settings.md';
This section broadly covers backups and restores in ClickHouse. For a more
detailed description of each backup method, see the pages for specific methods
in the sidebar.
## Introduction {#introduction}

While replication provides protection from hardware failures, it does not
protect against human errors: accidental deletion of data, deletion of the wrong
table or a table on the wrong cluster, and software bugs that result in incorrect
data processing or data corruption.

In many cases mistakes like these will affect all replicas. ClickHouse has built-in
safeguards to prevent some types of mistakes. For example, by default
you can't just drop tables with a MergeTree family engine containing more than
50 GB of data. However, these safeguards do not cover all possible cases and
problems can still occur.

To effectively mitigate possible human errors, you should carefully prepare a
strategy for backing up and restoring your data **in advance**.
Each company has different resources available and business requirements, so
there's no universal solution for ClickHouse backups and restores that will fit
every situation. What works for one gigabyte of data likely won't work for tens
of petabytes of data. There are a variety of possible approaches with their own
pros and cons, which are presented in this section of the docs. It is a good idea
to use several approaches instead of just one, so as to compensate for their
various shortcomings.
:::note
Keep in mind that if you backed something up and never tried to restore it,
chances are that the restore will not work properly when you actually need it (or at
least it will take longer than the business can tolerate). So whatever backup
approach you choose, make sure to automate the restore process as well, and practice
it on a spare ClickHouse cluster regularly.
:::
The following pages detail the various backup and
restore methods available in ClickHouse: | {"source_file": "00_overview.md"} | [
| Page                                       | Description                                               |
|--------------------------------------------|-----------------------------------------------------------|
| Backup/restore using local disk or S3 disk | Details backup/restore to or from a local disk or S3 disk |
| Backup/restore using S3 endpoint           | Details backup/restore to or from an S3 endpoint          |
| Backup/restore using AzureBlobStorage      | Details backup/restore to or from Azure blob storage      |
| Alternative methods                        | Discusses alternative backup methods                      |
Backups can:

- be full or incremental
- be synchronous or asynchronous
- be concurrent or non-concurrent
- be compressed or uncompressed
- use named collections
- be password protected
- be taken of system tables, log tables, or access management tables
## Backup types {#backup-types}

Backups can be either full or incremental. Full backups are a complete copy of the
data, while incremental backups are a delta of the data from the last full backup.

Full backups have the advantage of being a simple, independent (of other backups)
and reliable recovery method. However, they can take a long time to complete and
can consume a lot of space. Incremental backups, on the other hand, are more
efficient in terms of both time and space, but restoring the data requires all
the backups to be available.

Depending on your needs, you may want to use:

- **Full backups** for smaller databases or critical data.
- **Incremental backups** for larger databases or when backups need to be done frequently and cost effectively.
- **Both**, for instance, weekly full backups and daily incremental backups.
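As a sketch of the combined approach, a weekly full backup can serve as the `base_backup` for daily incremental backups. The disk name and file names here are illustrative, assuming a backup disk named `backups` is configured:

```sql
-- Weekly full backup (the base).
BACKUP TABLE test_db.test_table TO Disk('backups', 'full_week_34.zip');

-- Daily incremental backup, storing only the delta since the base.
BACKUP TABLE test_db.test_table TO Disk('backups', 'incremental_week_34_day_1.zip')
SETTINGS base_backup = Disk('backups', 'full_week_34.zip');
```

Restoring from the incremental backup requires the base backup file to still be present on the same disk.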
## Synchronous vs asynchronous backups {#synchronous-vs-asynchronous}

`BACKUP` and `RESTORE` commands can also be marked `ASYNC`. In this case, the
backup command returns immediately, and the backup process runs in the background.
If the commands are not marked `ASYNC`, the backup process is synchronous and
the command blocks until the backup completes.
## Concurrent vs non-concurrent backups {#concurrent-vs-non-concurrent}
By default, ClickHouse allows concurrent backups and restores. This means you
can initiate multiple backup or restore operations simultaneously. However,
there are server-level settings that let you disallow this behavior. If you set
these settings to false, only one backup or restore operation is allowed to run
on a cluster at a time. This can help avoid resource contention or potential
conflicts between operations.
To disallow concurrent backup/restore, you can use these settings respectively: | {"source_file": "00_overview.md"} | [
```xml
<clickhouse>
    <backups>
        <allow_concurrent_backups>false</allow_concurrent_backups>
        <allow_concurrent_restores>false</allow_concurrent_restores>
    </backups>
</clickhouse>
```
The default value for both is true, so by default concurrent backup/restores are
allowed. When these settings are false on a cluster, only a single backup/restore
is allowed to run on a cluster at a time.
## Compressed vs uncompressed backups {#compressed-vs-uncompressed}

ClickHouse backups support compression through the `compression_method` and
`compression_level` settings.

When creating a backup, you can specify:

```sql
BACKUP TABLE test.table
TO Disk('backups', 'filename.zip')
SETTINGS compression_method='lzma', compression_level=3
```
## Using named collections {#using-named-collections}

Named collections allow you to store key-value pairs (like S3 credentials, endpoints, and settings) that can be reused across backup/restore operations.

They help to:

- Hide credentials from users without admin access
- Simplify commands by storing complex configuration centrally
- Maintain consistency across operations
- Avoid credential exposure in query logs

See "named collections" for further details.
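As a hedged sketch of what this can look like, a collection can be created once and then referenced in place of inline credentials. The collection name and endpoint here are illustrative, and the exact set of supported keys depends on the destination:

```sql
-- Store the S3 endpoint and credentials once, visible only to privileged users.
CREATE NAMED COLLECTION s3_backups AS
    url = 'https://backup-ch-docs.s3.us-east-1.amazonaws.com/backups/',
    access_key_id = '<access key id>',
    secret_access_key = '<secret access key>';

-- Reference the collection instead of spelling out credentials in each command.
BACKUP TABLE test_db.test_table TO S3(s3_backups, 'my_backup');
```

Query logs then record the collection name rather than the secret values.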
## Backing up system, log or access management tables {#system-backups}

System tables can also be included in your backup and restore workflows, but their
inclusion depends on your specific use case.

System tables that store historic data, such as those with a `_log` suffix (e.g.,
`query_log`, `part_log`), can be backed up and restored like any other table.
If your use case relies on analyzing historic data, for example, using `query_log`
to track query performance or debug issues, it's recommended to include these
tables in your backup strategy. However, if historic data from these tables is
not required, they can be excluded to save backup storage space.
System tables related to access management, such as `users`, `roles`, `row_policies`,
`settings_profiles`, and `quotas`, receive special treatment during backup and restore operations.
When these tables are included in a backup, their content is exported to a special
`accessXX.txt` file, which encapsulates the equivalent SQL statements for creating
and configuring the access entities. Upon restoration, the restore process
interprets these files and re-applies the SQL commands to recreate the users,
roles, and other configurations. This feature ensures that the access control
configuration of a ClickHouse cluster can be backed up and restored as part of
the cluster's overall setup.

This functionality only works for configurations managed through SQL commands
(referred to as "SQL-driven Access Control and Account Management").
Access configurations defined in ClickHouse server configuration files (e.g.
`users.xml`) are not included in backups and cannot be restored through this method.
## General syntax {#syntax}

## Command summary {#command-summary}

Each of the commands above is detailed below:
| Command                                                    | Description                                                                             |
|------------------------------------------------------------|-----------------------------------------------------------------------------------------|
| `BACKUP`                                                   | Creates a backup of specified objects                                                   |
| `RESTORE`                                                  | Restores objects from a backup                                                          |
| `[ASYNC]`                                                  | Makes the operation run asynchronously (returns immediately with an ID you can monitor) |
| `TABLE [db.]table_name [AS [db.]table_name_in_backup]`     | Backs up/restores a specific table (can be renamed)                                     |
| `[PARTITION[S] partition_expr [,...]]`                     | Only backup/restore specific partitions of the table                                    |
| `DICTIONARY [db.]dictionary_name [AS [db.]name_in_backup]` | Backs up/restores a dictionary object                                                   |
| `DATABASE database_name [AS database_name_in_backup]`      | Backs up/restores an entire database (can be renamed)                                   |
| `TEMPORARY TABLE table_name [AS table_name_in_backup]`     | Backs up/restores a temporary table (can be renamed)                                    |
| `VIEW view_name [AS view_name_in_backup]`                  | Backs up/restores a view (can be renamed)                                               |
| `[EXCEPT TABLES ...]`                                      | Exclude specific tables when backing up a database                                      |
| `ALL`                                                      | Backs up/restores everything (all databases, tables, etc.). Prior to version 23.4 of ClickHouse, `ALL` was only applicable to the `RESTORE` command. |
| `[EXCEPT {TABLES\|DATABASES}...]`                                      | Exclude specific tables or databases when using `ALL`             |
| `[ON CLUSTER 'cluster_name']`                                          | Execute the backup/restore across a ClickHouse cluster            |
| `TO\|FROM`                                                             | Direction: `TO` for backup destination, `FROM` for restore source |
| `File('<path>/<filename>')`                                            | Store to/restore from local file system                           |
| `Disk('<disk_name>', '<path>/')`                                       | Store to/restore from a configured disk                           |
| `S3('<S3 endpoint>/<path>', '<Access key ID>', '<Secret access key>')` | Store to/restore from Amazon S3 or S3-compatible storage          |
| `[SETTINGS ...]`                                                       | See below for complete list of settings                           |
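The clauses above compose. For example, a sketch of an asynchronous database backup that excludes one table, followed by a restore under a new name (the database and table names here are illustrative):

```sql
BACKUP DATABASE test_db EXCEPT TABLES test_db.scratch
TO Disk('backups', 'test_db.zip') ASYNC;

RESTORE DATABASE test_db AS test_db_restored
FROM Disk('backups', 'test_db.zip');
```

Because of `ASYNC`, the `BACKUP` statement returns an `id` immediately, which can be looked up in `system.backups` to track progress.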
## Settings {#settings}

### Generic backup/restore settings

### S3 specific settings

### Azure specific settings

## Administration and troubleshooting {#check-the-status-of-backups}
The backup command returns an `id` and `status`, and that `id` can be used to
get the status of the backup. This is very useful to check the progress of long
`ASYNC` backups. The example below shows a failure that happened when trying to
overwrite an existing backup file:

```sql
BACKUP TABLE helloworld.my_first_table TO Disk('backups', '1.zip') ASYNC
```
```response
┌─id───────────────────────────────────┬─status──────────┐
│ 7678b0b3-f519-4e6e-811f-5a0781a4eb52 │ CREATING_BACKUP │
└──────────────────────────────────────┴─────────────────┘
1 row in set. Elapsed: 0.001 sec.
```
```sql
SELECT
    *
FROM system.backups
WHERE id='7678b0b3-f519-4e6e-811f-5a0781a4eb52'
FORMAT Vertical
```
```response
Row 1:
──────
id:                7678b0b3-f519-4e6e-811f-5a0781a4eb52
name:              Disk('backups', '1.zip')
#highlight-next-line
status:            BACKUP_FAILED
num_files:         0
uncompressed_size: 0
compressed_size:   0
#highlight-next-line
error:             Code: 598. DB::Exception: Backup Disk('backups', '1.zip') already exists. (BACKUP_ALREADY_EXISTS) (version 22.8.2.11 (official build))
start_time:        2022-08-30 09:21:46
end_time:          2022-08-30 09:21:46

1 row in set. Elapsed: 0.002 sec.
```
Along with the `system.backups` table, all backup and restore operations are also
tracked in the system log table `system.backup_log`:

```sql
SELECT *
FROM system.backup_log
WHERE id = '7678b0b3-f519-4e6e-811f-5a0781a4eb52'
ORDER BY event_time_microseconds ASC
FORMAT Vertical
```
```response
Row 1:
──────
event_date:              2023-08-18
event_time_microseconds: 2023-08-18 11:13:43.097414
id:                      7678b0b3-f519-4e6e-811f-5a0781a4eb52
name:                    Disk('backups', '1.zip')
status:                  CREATING_BACKUP
error:
start_time:              2023-08-18 11:13:43
end_time:                1970-01-01 03:00:00
num_files:               0
total_size:              0
num_entries:             0
uncompressed_size:       0
compressed_size:         0
files_read:              0
bytes_read:              0

Row 2:
──────
event_date:              2023-08-18
event_time_microseconds: 2023-08-18 11:13:43.174782
id:                      7678b0b3-f519-4e6e-811f-5a0781a4eb52
name:                    Disk('backups', '1.zip')
status:                  BACKUP_FAILED
#highlight-next-line
error:                   Code: 598. DB::Exception: Backup Disk('backups', '1.zip') already exists. (BACKUP_ALREADY_EXISTS) (version 23.8.1.1)
start_time:              2023-08-18 11:13:43
end_time:                2023-08-18 11:13:43
num_files:               0
total_size:              0
num_entries:             0
uncompressed_size:       0
compressed_size:         0
files_read:              0
bytes_read:              0

2 rows in set. Elapsed: 0.075 sec.
``` | {"source_file": "00_overview.md"} | [
---
description: 'Composable protocols allow more flexible configuration of TCP access
  to the ClickHouse server.'
sidebar_label: 'Composable protocols'
sidebar_position: 64
slug: /operations/settings/composable-protocols
title: 'Composable protocols'
doc_type: 'reference'
---

# Composable protocols
## Overview {#overview}

Composable protocols allow more flexible configuration of TCP access to the
ClickHouse server. This configuration can co-exist alongside, or replace,
conventional configuration.

## Configuring composable protocols {#composable-protocols-section-is-denoted-as-protocols-in-configuration-xml}

Composable protocols can be configured in an XML configuration file. The protocols
section is denoted with `protocols` tags in the XML config file:

```xml
<protocols>

</protocols>
```
## Configuring protocol layers {#basic-modules-define-protocol-layers}

You can define protocol layers using basic modules. For example, to define an
HTTP layer, you can add a new basic module to the `protocols` section:

```xml
<protocols>
    <plain_http>
        <type>http</type>
    </plain_http>
</protocols>
```
Modules can be configured according to:

- `plain_http` - name which can be referred to by another layer
- `type` - denotes the protocol handler which will be instantiated to process data.
  It has the following set of predefined protocol handlers:
  * `tcp` - native clickhouse protocol handler
  * `http` - HTTP clickhouse protocol handler
  * `tls` - TLS encryption layer
  * `proxy1` - PROXYv1 layer
  * `mysql` - MySQL compatibility protocol handler
  * `postgres` - PostgreSQL compatibility protocol handler
  * `prometheus` - Prometheus protocol handler
  * `interserver` - clickhouse interserver handler

:::note
The `gRPC` protocol handler is not implemented for composable protocols
:::
## Configuring endpoints {#endpoint-ie-listening-port-is-denoted-by-port-and-optional-host-tags}

Endpoints (listening ports) are denoted by `<port>` and optional `<host>` tags.
For example, to configure an endpoint on the previously added HTTP layer, we
could modify our configuration as follows:

```xml
<protocols>
    <plain_http>
        <type>http</type>
        <!-- endpoint -->
        <host>127.0.0.1</host>
        <port>8123</port>
    </plain_http>
</protocols>
```
If the `<host>` tag is omitted, then the `<listen_host>` from the root config is used.
## Configuring layer sequences {#layers-sequence-is-defined-by-impl-tag-referencing-another-module}

Layer sequences are defined using the `<impl>` tag, referencing another
module. For example, to configure a TLS layer on top of our `plain_http` module
we could further modify our configuration as follows:

```xml
<protocols>
    <plain_http>
        <type>http</type>
    </plain_http>
    <https>
        <type>tls</type>
        <impl>plain_http</impl>
        <host>127.0.0.1</host>
        <port>8443</port>
    </https>
</protocols>
```
## Attaching endpoints to layers {#endpoint-can-be-attached-to-any-layer}

Endpoints can be attached to any layer. For example, we can define endpoints for
HTTP (port 8123) and HTTPS (port 8443):

```xml
<protocols>
    <plain_http>
        <type>http</type>
        <host>127.0.0.1</host>
        <port>8123</port>
    </plain_http>
    <https>
        <type>tls</type>
        <impl>plain_http</impl>
        <host>127.0.0.1</host>
        <port>8443</port>
    </https>
</protocols>
```
## Defining additional endpoints {#additional-endpoints-can-be-defined-by-referencing-any-module-and-omitting-type-tag}

Additional endpoints can be defined by referencing any module and omitting the
`<type>` tag. For example, we can define an `another_http` endpoint for the
`plain_http` module as follows:

```xml
<protocols>
    <plain_http>
        <type>http</type>
        <host>127.0.0.1</host>
        <port>8123</port>
    </plain_http>
    <https>
        <type>tls</type>
        <impl>plain_http</impl>
        <host>127.0.0.1</host>
        <port>8443</port>
    </https>
    <another_http>
        <impl>plain_http</impl>
        <host>127.0.0.1</host>
        <port>8223</port>
    </another_http>
</protocols>
```
## Specifying additional layer parameters {#some-modules-can-contain-specific-for-its-layer-parameters}

Some modules can contain additional layer parameters. For example, the TLS layer
allows a private key (`privateKeyFile`) and certificate files (`certificateFile`)
to be specified as follows:

```xml
<protocols>
    <plain_http>
        <type>http</type>
        <host>127.0.0.1</host>
        <port>8123</port>
    </plain_http>
    <https>
        <type>tls</type>
        <impl>plain_http</impl>
        <host>127.0.0.1</host>
        <port>8443</port>
        <privateKeyFile>another_server.key</privateKeyFile>
        <certificateFile>another_server.crt</certificateFile>
    </https>
</protocols>
```
---
description: 'A collection of settings grouped under the same name.'
sidebar_label: 'Settings profiles'
sidebar_position: 61
slug: /operations/settings/settings-profiles
title: 'Settings profiles'
doc_type: 'reference'
---

# Settings profiles
A settings profile is a collection of settings grouped under the same name.

:::note
ClickHouse also supports an SQL-driven workflow for managing settings profiles. We recommend using it.
:::

The profile can have any name. You can specify the same profile for different users. The most important thing you can write in the settings profile is `readonly=1`, which ensures read-only access.

Settings profiles can inherit from each other. To use inheritance, indicate one or multiple `profile` settings before the other settings that are listed in the profile. In case one setting is defined in different profiles, the latest defined is used.

To apply all the settings in a profile, set the `profile` setting.

Example:

Install the `web` profile.

```sql
SET profile = 'web'
```
Settings profiles are declared in the user config file. This is usually `users.xml`.

Example:

```xml
<!-- Settings profiles -->
<profiles>
    <!-- Default settings -->
    <default>
        <!-- The maximum number of threads when running a single query. -->
        <max_threads>8</max_threads>
    </default>

    <!-- Settings for queries from the user interface -->
    <web>
        <max_rows_to_read>1000000000</max_rows_to_read>
        <max_bytes_to_read>100000000000</max_bytes_to_read>
        <max_rows_to_group_by>1000000</max_rows_to_group_by>
        <group_by_overflow_mode>any</group_by_overflow_mode>
        <max_rows_to_sort>1000000</max_rows_to_sort>
        <max_bytes_to_sort>1000000000</max_bytes_to_sort>
        <max_result_rows>100000</max_result_rows>
        <max_result_bytes>100000000</max_result_bytes>
        <result_overflow_mode>break</result_overflow_mode>
        <max_execution_time>600</max_execution_time>
        <min_execution_speed>1000000</min_execution_speed>
        <timeout_before_checking_execution_speed>15</timeout_before_checking_execution_speed>
        <max_columns_to_read>25</max_columns_to_read>
        <max_temporary_columns>100</max_temporary_columns>
        <max_temporary_non_const_columns>50</max_temporary_non_const_columns>
        <max_subquery_depth>2</max_subquery_depth>
        <max_pipeline_depth>25</max_pipeline_depth>
        <max_ast_depth>50</max_ast_depth>
        <max_ast_elements>100</max_ast_elements>
        <max_sessions_for_user>4</max_sessions_for_user>
        <readonly>1</readonly>
    </web>
</profiles>
```
The example specifies two profiles: `default` and `web`.

The `default` profile has a special purpose: it must always be present and is applied when starting the server. In other words, the `default` profile contains default settings.

The `web` profile is a regular profile that can be set using the `SET` query or using a URL parameter in an HTTP query.
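To confirm which profiles are active for the current session, the `currentProfiles` and `enabledProfiles` functions can be queried (assuming a ClickHouse version recent enough to provide them):

```sql
-- Apply the profile, then inspect the active and inherited profiles.
SET profile = 'web';
SELECT currentProfiles(), enabledProfiles();
```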
---
title: 'Session Settings'
sidebar_label: 'Session Settings'
slug: /operations/settings/settings
toc_max_heading_level: 2
description: 'Settings which are found in the system.settings table.'
doc_type: 'reference'
---

import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import BetaBadge from '@theme/badges/BetaBadge';
import CloudOnlyBadge from '@theme/badges/CloudOnlyBadge';
import SettingsInfoBlock from '@theme/SettingsInfoBlock/SettingsInfoBlock';
import VersionHistory from '@theme/VersionHistory/VersionHistory';

All settings below are also available in the table `system.settings`. These settings are autogenerated from source.
## add_http_cors_header {#add_http_cors_header}

Write add http CORS header.

## additional_result_filter {#additional_result_filter}

An additional filter expression to apply to the result of a `SELECT` query.
This setting is not applied to any subquery.

**Example**

```sql
INSERT INTO table_1 VALUES (1, 'a'), (2, 'bb'), (3, 'ccc'), (4, 'dddd');
SELECT * FROM table_1;
```

```response
┌─x─┬─y────┐
│ 1 │ a    │
│ 2 │ bb   │
│ 3 │ ccc  │
│ 4 │ dddd │
└───┴──────┘
```

```sql
SELECT *
FROM table_1
SETTINGS additional_result_filter = 'x != 2'
```

```response
┌─x─┬─y────┐
│ 1 │ a    │
│ 3 │ ccc  │
│ 4 │ dddd │
└───┴──────┘
```
## additional_table_filters {#additional_table_filters}

An additional filter expression that is applied after reading from the specified table.

**Example**

```sql
INSERT INTO table_1 VALUES (1, 'a'), (2, 'bb'), (3, 'ccc'), (4, 'dddd');
SELECT * FROM table_1;
```

```response
┌─x─┬─y────┐
│ 1 │ a    │
│ 2 │ bb   │
│ 3 │ ccc  │
│ 4 │ dddd │
└───┴──────┘
```

```sql
SELECT *
FROM table_1
SETTINGS additional_table_filters = {'table_1': 'x != 2'}
```

```response
┌─x─┬─y────┐
│ 1 │ a    │
│ 3 │ ccc  │
│ 4 │ dddd │
└───┴──────┘
```
## aggregate_functions_null_for_empty {#aggregate_functions_null_for_empty}

Enables or disables rewriting all aggregate functions in a query, adding the
`-OrNull` suffix to them. Enable it for SQL standard compatibility.
It is implemented via query rewrite (similar to the `count_distinct_implementation`
setting) to get consistent results for distributed queries.

Possible values:

- 0 — Disabled.
- 1 — Enabled.

**Example**

Consider the following query with aggregate functions:

```sql
SELECT SUM(-1), MAX(0) FROM system.one WHERE 0;
```

With `aggregate_functions_null_for_empty = 0` it would produce:

```text
┌─SUM(-1)─┬─MAX(0)─┐
│       0 │      0 │
└─────────┴────────┘
```

With `aggregate_functions_null_for_empty = 1` the result would be:

```text
┌─SUMOrNull(-1)─┬─MAXOrNull(0)─┐
│          NULL │         NULL │
└───────────────┴──────────────┘
```
aggregation_in_order_max_block_bytes {#aggregation_in_order_max_block_bytes}
Maximal size of block in bytes accumulated during aggregation in order of the primary key. A lower block size allows parallelizing the final merge stage of aggregation further.
aggregation_memory_efficient_merge_threads {#aggregation_memory_efficient_merge_threads}
Number of threads to use for merging intermediate aggregation results in memory-efficient mode. The bigger the value, the more memory is consumed. 0 means the same as `max_threads`.
allow_aggregate_partitions_independently {#allow_aggregate_partitions_independently}
Enable independent aggregation of partitions on separate threads when the partition key suits the group by key. Beneficial when the number of partitions is close to the number of cores and the partitions have roughly the same size.
allow_archive_path_syntax {#allow_archive_path_syntax}
File/S3 engines/table function will parse paths with '::' as `<archive> :: <file>` if the archive has a correct extension.
allow_asynchronous_read_from_io_pool_for_merge_tree {#allow_asynchronous_read_from_io_pool_for_merge_tree}
Use background I/O pool to read from MergeTree tables. This setting may increase performance for I/O bound queries
allow_changing_replica_until_first_data_packet {#allow_changing_replica_until_first_data_packet}
If it's enabled, in hedged requests we can start new connection until receiving first data packet even if we have already made some progress
(but progress haven't updated for
receive_data_timeout
timeout), otherwise we disable changing replica after the first time we made progress.
allow_create_index_without_type {#allow_create_index_without_type}
Allow CREATE INDEX query without TYPE. Query will be ignored. Made for SQL compatibility tests.
allow_custom_error_code_in_throwif {#allow_custom_error_code_in_throwif}
Enable custom error code in function throwIf(). If true, thrown exceptions may have unexpected error codes.
allow_ddl {#allow_ddl}
If it is set to true, then a user is allowed to execute DDL queries.
allow_deprecated_database_ordinary {#allow_deprecated_database_ordinary}
Allow to create databases with deprecated Ordinary engine
allow_deprecated_error_prone_window_functions {#allow_deprecated_error_prone_window_functions}
Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference)
allow_deprecated_snowflake_conversion_functions {#allow_deprecated_snowflake_conversion_functions}
Functions
snowflakeToDateTime
,
snowflakeToDateTime64
,
dateTimeToSnowflake
, and
dateTime64ToSnowflake
are deprecated and disabled by default.
Please use functions
snowflakeIDToDateTime
,
snowflakeIDToDateTime64
,
dateTimeToSnowflakeID
, and
dateTime64ToSnowflakeID
instead.
To re-enable the deprecated functions (e.g., during a transition period), please set this setting to
true
.
allow_deprecated_syntax_for_merge_tree {#allow_deprecated_syntax_for_merge_tree}
Allow to create *MergeTree tables with deprecated engine definition syntax
allow_distributed_ddl {#allow_distributed_ddl}
If it is set to true, then a user is allowed to execute distributed DDL queries.
allow_drop_detached {#allow_drop_detached}
Allow ALTER TABLE ... DROP DETACHED PART[ITION] ... queries
allow_dynamic_type_in_join_keys {#allow_dynamic_type_in_join_keys}
Allows using Dynamic type in JOIN keys. Added for compatibility. It's not recommended to use Dynamic type in JOIN keys because comparison with other types may lead to unexpected results.
allow_execute_multiif_columnar {#allow_execute_multiif_columnar}
Allow execute multiIf function columnar
allow_experimental_alias_table_engine {#allow_experimental_alias_table_engine}
Allow to create table with the Alias engine.
allow_experimental_analyzer {#allow_experimental_analyzer}
Allow new query analyzer.
allow_experimental_codecs {#allow_experimental_codecs}
If it is set to true, allow to specify experimental compression codecs (but we don't have those yet and this option does nothing).
allow_experimental_correlated_subqueries {#allow_experimental_correlated_subqueries}
Allow to execute correlated subqueries.
allow_experimental_database_glue_catalog {#allow_experimental_database_glue_catalog}
Allow experimental database engine DataLakeCatalog with catalog_type = 'glue'
allow_experimental_database_hms_catalog {#allow_experimental_database_hms_catalog}
Allow experimental database engine DataLakeCatalog with catalog_type = 'hms'
allow_experimental_database_iceberg {#allow_experimental_database_iceberg}
Allow experimental database engine DataLakeCatalog with catalog_type = 'iceberg'
allow_experimental_database_materialized_postgresql {#allow_experimental_database_materialized_postgresql}
Allow to create database with Engine=MaterializedPostgreSQL(...).
allow_experimental_database_unity_catalog {#allow_experimental_database_unity_catalog}
Allow experimental database engine DataLakeCatalog with catalog_type = 'unity'
allow_experimental_delta_kernel_rs {#allow_experimental_delta_kernel_rs}
Allow experimental delta-kernel-rs implementation.
allow_experimental_delta_lake_writes {#allow_experimental_delta_lake_writes}
Enables delta-kernel writes feature.
allow_experimental_full_text_index {#allow_experimental_full_text_index}
If set to true, allow using the experimental text index.
allow_experimental_funnel_functions {#allow_experimental_funnel_functions}
Enable experimental functions for funnel analysis.
allow_experimental_hash_functions {#allow_experimental_hash_functions}
Enable experimental hash functions
allow_experimental_iceberg_compaction {#allow_experimental_iceberg_compaction}
Allow to explicitly use 'OPTIMIZE' for iceberg tables.
allow_experimental_insert_into_iceberg {#allow_experimental_insert_into_iceberg}
Allow to execute
insert
queries into iceberg.
allow_experimental_join_right_table_sorting {#allow_experimental_join_right_table_sorting}
If it is set to true, and the conditions of `join_to_sort_minimum_perkey_rows` and `join_to_sort_maximum_table_rows` are met, rerange the right table by key to improve the performance in left or inner hash join.
allow_experimental_kafka_offsets_storage_in_keeper {#allow_experimental_kafka_offsets_storage_in_keeper}
Allow experimental feature to store Kafka related offsets in ClickHouse Keeper. When enabled a ClickHouse Keeper path and replica name can be specified to the Kafka table engine. As a result instead of the regular Kafka engine, a new type of storage engine will be used that stores the committed offsets primarily in ClickHouse Keeper
allow_experimental_kusto_dialect {#allow_experimental_kusto_dialect}
Enable Kusto Query Language (KQL) - an alternative to SQL.
allow_experimental_materialized_postgresql_table {#allow_experimental_materialized_postgresql_table}
Allows to use the MaterializedPostgreSQL table engine. Disabled by default, because this feature is experimental
allow_experimental_nlp_functions {#allow_experimental_nlp_functions}
Enable experimental functions for natural language processing.
allow_experimental_object_type {#allow_experimental_object_type}
Allow the obsolete Object data type
allow_experimental_parallel_reading_from_replicas {#allow_experimental_parallel_reading_from_replicas}
Use up to
max_parallel_replicas
the number of replicas from each shard for SELECT query execution. Reading is parallelized and coordinated dynamically. 0 - disabled, 1 - enabled, silently disable them in case of failure, 2 - enabled, throw an exception in case of failure
allow_experimental_prql_dialect {#allow_experimental_prql_dialect}
Enable PRQL - an alternative to SQL.
allow_experimental_qbit_type {#allow_experimental_qbit_type}
Allows creation of
QBit
data type.
allow_experimental_query_deduplication {#allow_experimental_query_deduplication}
Experimental data deduplication for SELECT queries based on part UUIDs
allow_experimental_statistics {#allow_experimental_statistics}
Allows defining columns with statistics and manipulate statistics.
allow_experimental_time_series_aggregate_functions {#allow_experimental_time_series_aggregate_functions}
Experimental timeSeries* aggregate functions for Prometheus-like timeseries resampling, rate, delta calculation. | {"source_file": "settings.md"} | [
allow_experimental_time_series_table {#allow_experimental_time_series_table}
Allows creation of tables with the `TimeSeries` table engine.

Possible values:
- 0 — the `TimeSeries` table engine is disabled.
- 1 — the `TimeSeries` table engine is enabled.
allow_experimental_time_time64_type {#allow_experimental_time_time64_type}
Allows creation of
Time
and
Time64
data types.
allow_experimental_window_view {#allow_experimental_window_view}
Enable WINDOW VIEW. Not mature enough.
allow_experimental_ytsaurus_dictionary_source {#allow_experimental_ytsaurus_dictionary_source}
Experimental dictionary source for integration with YTsaurus.
allow_experimental_ytsaurus_table_engine {#allow_experimental_ytsaurus_table_engine}
Experimental table engine for integration with YTsaurus.
allow_experimental_ytsaurus_table_function {#allow_experimental_ytsaurus_table_function}
Experimental table engine for integration with YTsaurus.
allow_general_join_planning {#allow_general_join_planning}
Allows a more general join planning algorithm that can handle more complex conditions, but only works with hash join. If hash join is not enabled, then the usual join planning algorithm is used regardless of the value of this setting.
allow_get_client_http_header {#allow_get_client_http_header}
Allow to use the function `getClientHTTPHeader`, which lets you obtain the value of a header of the current HTTP request. It is not enabled by default for security reasons, because some headers, such as `Cookie`, could contain sensitive info. Note that the `X-ClickHouse-*` and `Authentication` headers are always restricted and cannot be obtained with this function.
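For illustration only, a minimal sketch of using this function (the header name and session flow are assumptions; this presumes the query arrives over the HTTP interface):

```sql
-- Hypothetical HTTP session: enable the setting, then read a request header.
SET allow_get_client_http_header = 1;
SELECT getClientHTTPHeader('User-Agent');
```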
allow_hyperscan {#allow_hyperscan}
Allow functions that use Hyperscan library. Disable to avoid potentially long compilation times and excessive resource usage.
allow_introspection_functions {#allow_introspection_functions}
Enables or disables
introspection functions
for query profiling.
Possible values:
1 — Introspection functions enabled.
0 — Introspection functions disabled.
See Also
Sampling Query Profiler
System table
trace_log
allow_materialized_view_with_bad_select {#allow_materialized_view_with_bad_select}
Allow CREATE MATERIALIZED VIEW with SELECT query that references nonexistent tables or columns. It must still be syntactically valid. Doesn't apply to refreshable MVs. Doesn't apply if the MV schema needs to be inferred from the SELECT query (i.e. if the CREATE has no column list and no TO table). Can be used for creating MV before its source table.
allow_named_collection_override_by_default {#allow_named_collection_override_by_default}
Allow named collections' fields override by default.
allow_non_metadata_alters {#allow_non_metadata_alters}
allow_non_metadata_alters {#allow_non_metadata_alters}
Allow to execute alters which affect not only table metadata, but also data on disk.
allow_nonconst_timezone_arguments {#allow_nonconst_timezone_arguments}
Allow non-const timezone arguments in certain time-related functions like toTimeZone(), fromUnixTimestamp*(), snowflakeToDateTime*().
This setting exists only for compatibility reasons. In ClickHouse, the time zone is a property of the data type, respectively of the column.
Enabling this setting gives the wrong impression that different values within a column can have different timezones.
Therefore, please do not enable this setting.
allow_nondeterministic_mutations {#allow_nondeterministic_mutations}
User-level setting that allows mutations on replicated tables to make use of non-deterministic functions such as `dictGet`.

Given that, for example, dictionaries can be out of sync across nodes, mutations that pull values from them are disallowed on replicated tables by default. Enabling this setting allows this behavior, making it the user's responsibility to ensure that the data used is in sync across all nodes.
Example
```xml
<profiles>
    <default>
        <allow_nondeterministic_mutations>1</allow_nondeterministic_mutations>
        <!-- ... -->
    </default>
    <!-- ... -->
</profiles>
```
allow_nondeterministic_optimize_skip_unused_shards {#allow_nondeterministic_optimize_skip_unused_shards}
Allow nondeterministic (like `rand` or `dictGet`, since the latter has some caveats with updates) functions in sharding key.
Possible values:
0 — Disallowed.
1 — Allowed.
allow_not_comparable_types_in_comparison_functions {#allow_not_comparable_types_in_comparison_functions}
Allows or restricts using not comparable types (like JSON/Object/AggregateFunction) in comparison functions
equal/less/greater/etc
.
allow_not_comparable_types_in_order_by {#allow_not_comparable_types_in_order_by}
Allows or restricts using not comparable types (like JSON/Object/AggregateFunction) in ORDER BY keys.
allow_prefetched_read_pool_for_local_filesystem {#allow_prefetched_read_pool_for_local_filesystem}
Prefer prefetched threadpool if all parts are on local filesystem
allow_prefetched_read_pool_for_remote_filesystem {#allow_prefetched_read_pool_for_remote_filesystem}
Prefer prefetched threadpool if all parts are on remote filesystem
allow_push_predicate_ast_for_distributed_subqueries {#allow_push_predicate_ast_for_distributed_subqueries}
Allows push predicate on AST level for distributed subqueries with enabled analyzer.
allow_push_predicate_when_subquery_contains_with {#allow_push_predicate_when_subquery_contains_with}
Allows push predicate when subquery contains WITH clause
allow_reorder_prewhere_conditions {#allow_reorder_prewhere_conditions}
When moving conditions from WHERE to PREWHERE, allow reordering them to optimize filtering
allow_settings_after_format_in_insert {#allow_settings_after_format_in_insert}
allow_settings_after_format_in_insert {#allow_settings_after_format_in_insert}
Control whether `SETTINGS` after `FORMAT` in `INSERT` queries is allowed or not. It is not recommended to use this, since this may interpret part of `SETTINGS` as values.
Example:
```sql
INSERT INTO FUNCTION null('foo String') SETTINGS max_threads=1 VALUES ('bar');
```
But the following query will work only with `allow_settings_after_format_in_insert`:

```sql
SET allow_settings_after_format_in_insert=1;
INSERT INTO FUNCTION null('foo String') VALUES ('bar') SETTINGS max_threads=1;
```
Possible values:
0 — Disallow.
1 — Allow.
:::note
Use this setting only for backward compatibility if your use cases depend on old syntax.
:::
allow_simdjson {#allow_simdjson}
Allow using simdjson library in 'JSON*' functions if AVX2 instructions are available. If disabled rapidjson will be used.
allow_special_serialization_kinds_in_output_formats {#allow_special_serialization_kinds_in_output_formats}
Allows to output columns with special serialization kinds like Sparse and Replicated without converting them to full column representation.
It helps to avoid unnecessary data copy during formatting.
allow_statistics_optimize {#allow_statistics_optimize}
Allows using statistics to optimize queries.
allow_suspicious_codecs {#allow_suspicious_codecs}
If it is set to true, allow to specify meaningless compression codecs.
allow_suspicious_fixed_string_types {#allow_suspicious_fixed_string_types}
In CREATE TABLE statement allows creating columns of type FixedString(n) with n > 256. FixedString with length >= 256 is suspicious and most likely indicates a misuse
allow_suspicious_indices {#allow_suspicious_indices}
Reject primary/secondary indexes and sorting keys with identical expressions
allow_suspicious_low_cardinality_types {#allow_suspicious_low_cardinality_types}
Allows or restricts using `LowCardinality` with data types with a fixed size of 8 bytes or less: numeric data types and `FixedString(8_bytes_or_less)`.

For small fixed values using `LowCardinality` is usually inefficient, because ClickHouse stores a numeric index for each row. As a result:
- Disk space usage can rise.
- RAM consumption can be higher, depending on a dictionary size.
- Some functions can work slower due to extra coding/encoding operations.
- Merge times in `MergeTree`-engine tables can grow due to all the reasons described above.
Possible values:
1 — Usage of `LowCardinality` is not restricted.
0 — Usage of `LowCardinality` is restricted.
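As a rough illustration of the restriction (table and column names are hypothetical), creating a `LowCardinality` column over a 1-byte type is rejected unless the setting is enabled:

```sql
-- Throws by default, because UInt8 has a fixed size below 8 bytes:
CREATE TABLE lc_test (id LowCardinality(UInt8)) ENGINE = MergeTree ORDER BY id;

-- Succeeds once the restriction is lifted:
SET allow_suspicious_low_cardinality_types = 1;
CREATE TABLE lc_test (id LowCardinality(UInt8)) ENGINE = MergeTree ORDER BY id;
```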
allow_suspicious_primary_key {#allow_suspicious_primary_key}
Allow suspicious `PRIMARY KEY`/`ORDER BY` for MergeTree (i.e. SimpleAggregateFunction).
allow_suspicious_ttl_expressions {#allow_suspicious_ttl_expressions}
Reject TTL expressions that don't depend on any of table's columns. It indicates a user error most of the time.
allow_suspicious_types_in_group_by {#allow_suspicious_types_in_group_by}
Allows or restricts using `Variant` and `Dynamic` types in GROUP BY keys.
allow_suspicious_types_in_order_by {#allow_suspicious_types_in_order_by}
Allows or restricts using `Variant` and `Dynamic` types in ORDER BY keys.
allow_suspicious_variant_types {#allow_suspicious_variant_types}
In CREATE TABLE statement allows specifying Variant type with similar variant types (for example, with different numeric or date types). Enabling this setting may introduce some ambiguity when working with values with similar types.
allow_unrestricted_reads_from_keeper {#allow_unrestricted_reads_from_keeper}
Allow unrestricted (without condition on path) reads from system.zookeeper table, can be handy, but is not safe for zookeeper
alter_move_to_space_execute_async {#alter_move_to_space_execute_async}
Execute ALTER TABLE MOVE ... TO [DISK|VOLUME] asynchronously
alter_partition_verbose_result {#alter_partition_verbose_result}
Enables or disables the display of information about the parts to which the manipulation operations with partitions and parts have been successfully applied.
Applicable to `ATTACH PARTITION|PART` and to `FREEZE PARTITION`.
Possible values:
0 — disable verbosity.
1 — enable verbosity.
Example
```sql
CREATE TABLE test(a Int64, d Date, s String) ENGINE = MergeTree PARTITION BY toYYYYMM(d) ORDER BY a;
INSERT INTO test VALUES(1, '2021-01-01', '');
INSERT INTO test VALUES(1, '2021-01-01', '');
ALTER TABLE test DETACH PARTITION ID '202101';
ALTER TABLE test ATTACH PARTITION ID '202101' SETTINGS alter_partition_verbose_result = 1;

┌─command_type─────┬─partition_id─┬─part_name────┬─old_part_name─┐
│ ATTACH PARTITION │ 202101       │ 202101_7_7_0 │ 202101_5_5_0  │
│ ATTACH PARTITION │ 202101       │ 202101_8_8_0 │ 202101_6_6_0  │
└──────────────────┴──────────────┴──────────────┴───────────────┘

ALTER TABLE test FREEZE SETTINGS alter_partition_verbose_result = 1;

┌─command_type─┬─partition_id─┬─part_name────┬─backup_name─┬─backup_path───────────────────┬─part_backup_path─────────────────────────────────────────────┐
│ FREEZE ALL   │ 202101       │ 202101_7_7_0 │ 8           │ /var/lib/clickhouse/shadow/8/ │ /var/lib/clickhouse/shadow/8/data/default/test/202101_7_7_0  │
│ FREEZE ALL   │ 202101       │ 202101_8_8_0 │ 8           │ /var/lib/clickhouse/shadow/8/ │ /var/lib/clickhouse/shadow/8/data/default/test/202101_8_8_0  │
└──────────────┴──────────────┴──────────────┴─────────────┴───────────────────────────────┴───────────────────────────────────────────────────────────────┘
```
alter_sync {#alter_sync}
Allows to set up waiting for actions to be executed on replicas by `ALTER`, `OPTIMIZE` or `TRUNCATE` queries.

Possible values:
- `0` — Do not wait.
- `1` — Wait for own execution.
- `2` — Wait for everyone.

Cloud default value: `1`.
:::note
`alter_sync` is applicable to `Replicated` tables only; it does nothing to alters of non-`Replicated` tables.
:::
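For example, a sketch of making a replicated `ALTER` block until every replica has applied it (the table name and predicate are illustrative):

```sql
-- Wait for all replicas to execute the mutation before returning:
ALTER TABLE replicated_table DELETE WHERE x = 0 SETTINGS alter_sync = 2;
```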
alter_update_mode {#alter_update_mode}
A mode for `ALTER` queries that have the `UPDATE` commands.

Possible values:
- `heavy` - run regular mutation.
- `lightweight` - run lightweight update if possible, run regular mutation otherwise.
- `lightweight_force` - run lightweight update if possible, throw otherwise.
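A sketch of selecting the mode per query (table, column, and predicate are assumptions):

```sql
-- Prefer a lightweight update; fall back to a regular mutation if not possible:
ALTER TABLE t UPDATE value = value + 1 WHERE id = 42
SETTINGS alter_update_mode = 'lightweight';
```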
analyze_index_with_space_filling_curves {#analyze_index_with_space_filling_curves}
If a table has a space-filling curve in its index, e.g. `ORDER BY mortonEncode(x, y)` or `ORDER BY hilbertEncode(x, y)`, and the query has conditions on its arguments, e.g. `x >= 10 AND x <= 20 AND y >= 20 AND y <= 30`, use the space-filling curve for index analysis.
analyzer_compatibility_allow_compound_identifiers_in_unflatten_nested {#analyzer_compatibility_allow_compound_identifiers_in_unflatten_nested}
Allow to add compound identifiers to nested. This is a compatibility setting because it changes the query result. When disabled, `SELECT a.b.c FROM table ARRAY JOIN a` does not work, and `SELECT a FROM table` does not include the `a.b.c` column into the `Nested a` result.
analyzer_compatibility_join_using_top_level_identifier {#analyzer_compatibility_join_using_top_level_identifier}
Force to resolve identifier in JOIN USING from projection (for example, in `SELECT a + 1 AS b FROM t1 JOIN t2 USING (b)` the join will be performed by `t1.a + 1 = t2.b`, rather than `t1.b = t2.b`).
any_join_distinct_right_table_keys {#any_join_distinct_right_table_keys}
Enables legacy ClickHouse server behaviour in
ANY INNER|LEFT JOIN
operations.
:::note
Use this setting only for backward compatibility if your use cases depend on legacy
JOIN
behaviour.
:::
When the legacy behaviour is enabled:
Results of
t1 ANY LEFT JOIN t2
and
t2 ANY RIGHT JOIN t1
operations are not equal because ClickHouse uses the logic with many-to-one left-to-right table keys mapping.
Results of
ANY INNER JOIN
operations contain all rows from the left table like the
SEMI LEFT JOIN
operations do.
When the legacy behaviour is disabled:
Results of
t1 ANY LEFT JOIN t2
and
t2 ANY RIGHT JOIN t1
operations are equal because ClickHouse uses the logic which provides one-to-many keys mapping in
ANY RIGHT JOIN
operations.
Results of
ANY INNER JOIN
operations contain one row per key from both the left and right tables.
Possible values:
0 — Legacy behaviour is disabled.
1 — Legacy behaviour is enabled.
See also:
JOIN strictness
apply_deleted_mask {#apply_deleted_mask}
Enables filtering out rows deleted with lightweight DELETE. If disabled, a query will be able to read those rows. This is useful for debugging and "undelete" scenarios.
apply_mutations_on_fly {#apply_mutations_on_fly}
If true, mutations (UPDATEs and DELETEs) which are not materialized in data part will be applied on SELECTs.
apply_patch_parts {#apply_patch_parts}
If true, patch parts (that represent lightweight updates) are applied on SELECTs.
apply_patch_parts_join_cache_buckets {#apply_patch_parts_join_cache_buckets}
The number of buckets in the temporary cache for applying patch parts in Join mode.
apply_settings_from_server {#apply_settings_from_server}
Whether the client should accept settings from server.
This only affects operations performed on the client side, in particular parsing the INSERT input data and formatting the query result. Most of query execution happens on the server and is not affected by this setting.
Normally this setting should be set in user profile (users.xml or queries like
ALTER USER
), not through the client (client command line arguments,
SET
query, or
SETTINGS
section of
SELECT
query). Through the client it can be changed to false, but can't be changed to true (because the server won't send the settings if user profile has
apply_settings_from_server = false
).
Note that initially (24.12) there was a server setting (`send_settings_to_client`), but later it got replaced with this client setting, for better usability.
asterisk_include_alias_columns {#asterisk_include_alias_columns}
Include `ALIAS` columns for wildcard query (`SELECT *`).

Possible values:
- 0 - disabled
- 1 - enabled
asterisk_include_materialized_columns {#asterisk_include_materialized_columns}
Include `MATERIALIZED` columns for wildcard query (`SELECT *`).

Possible values:
- 0 - disabled
- 1 - enabled
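A minimal sketch combining both wildcard settings (table and column names are hypothetical):

```sql
CREATE TABLE t
(
    id UInt64,
    doubled UInt64 MATERIALIZED id * 2,
    copy UInt64 ALIAS id
)
ENGINE = MergeTree ORDER BY id;

-- By default SELECT * returns only `id`; with these settings enabled,
-- the MATERIALIZED and ALIAS columns are included as well:
SELECT *
FROM t
SETTINGS asterisk_include_materialized_columns = 1,
         asterisk_include_alias_columns = 1;
```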
async_insert {#async_insert}
If true, data from INSERT queries is stored in a queue and later flushed to the table in the background. If `wait_for_async_insert` is false, the INSERT query is processed almost instantly; otherwise the client will wait until the data is flushed to the table.
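A sketch of a fire-and-forget asynchronous insert (table name and values are illustrative):

```sql
-- Queue the rows server-side and return without waiting for the flush:
INSERT INTO events SETTINGS async_insert = 1, wait_for_async_insert = 0
VALUES (1, 'click'), (2, 'view');
```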
async_insert_busy_timeout_decrease_rate {#async_insert_busy_timeout_decrease_rate}
The exponential growth rate at which the adaptive asynchronous insert timeout decreases
async_insert_busy_timeout_increase_rate {#async_insert_busy_timeout_increase_rate}
The exponential growth rate at which the adaptive asynchronous insert timeout increases
async_insert_busy_timeout_max_ms {#async_insert_busy_timeout_max_ms}
Maximum time to wait before dumping collected data per query since the first data appeared.
async_insert_busy_timeout_min_ms {#async_insert_busy_timeout_min_ms}
If auto-adjusting is enabled through async_insert_use_adaptive_busy_timeout, minimum time to wait before dumping collected data per query since the first data appeared. It also serves as the initial value for the adaptive algorithm
async_insert_deduplicate {#async_insert_deduplicate}
For async INSERT queries in the replicated table, specifies that deduplication of inserted blocks should be performed.
async_insert_max_data_size {#async_insert_max_data_size}
Maximum size in bytes of unparsed data collected per query before being inserted
async_insert_max_query_number {#async_insert_max_query_number}
Maximum number of insert queries before being inserted.
Only takes effect if setting
async_insert_deduplicate
is 1.
async_insert_poll_timeout_ms {#async_insert_poll_timeout_ms}
Timeout for polling data from asynchronous insert queue
async_insert_use_adaptive_busy_timeout {#async_insert_use_adaptive_busy_timeout}
If it is set to true, use adaptive busy timeout for asynchronous inserts
async_query_sending_for_remote {#async_query_sending_for_remote}
Enables asynchronous connection creation and query sending while executing remote query.
Enabled by default.
async_socket_for_remote {#async_socket_for_remote}
Enables asynchronous read from socket while executing remote query.
Enabled by default.
azure_allow_parallel_part_upload {#azure_allow_parallel_part_upload}
Use multiple threads for azure multipart upload.
azure_check_objects_after_upload {#azure_check_objects_after_upload}
Check each uploaded object in azure blob storage to be sure that upload was successful
azure_connect_timeout_ms {#azure_connect_timeout_ms}
Connection timeout for host from azure disks.
azure_create_new_file_on_insert {#azure_create_new_file_on_insert}
Enables or disables creating a new file on each insert in azure engine tables
azure_ignore_file_doesnt_exist {#azure_ignore_file_doesnt_exist}
Ignore absence of file if it does not exist when reading certain keys.
Possible values:
- 1 — `SELECT` returns empty result.
- 0 — `SELECT` throws an exception.
azure_list_object_keys_size {#azure_list_object_keys_size}
Maximum number of files that could be returned in batch by ListObject request
azure_max_blocks_in_multipart_upload {#azure_max_blocks_in_multipart_upload}
Maximum number of blocks in multipart upload for Azure.
azure_max_get_burst {#azure_max_get_burst}
Max number of requests that can be issued simultaneously before hitting the request-per-second limit. By default (0) it equals the value of azure_max_get_rps.
azure_max_get_rps {#azure_max_get_rps}
Limit on Azure GET request per second rate before throttling. Zero means unlimited. | {"source_file": "settings.md"} | [
azure_max_inflight_parts_for_one_file {#azure_max_inflight_parts_for_one_file}
The maximum number of concurrently loaded parts in a multipart upload request. 0 means unlimited.
azure_max_put_burst {#azure_max_put_burst}
Max number of requests that can be issued simultaneously before hitting the request-per-second limit. By default (0) it equals the value of azure_max_put_rps.
azure_max_put_rps {#azure_max_put_rps}
Limit on Azure PUT request per second rate before throttling. Zero means unlimited.
azure_max_redirects {#azure_max_redirects}
Max number of azure redirects hops allowed.
azure_max_single_part_copy_size {#azure_max_single_part_copy_size}
The maximum size of object to copy using single part copy to Azure blob storage.
azure_max_single_part_upload_size {#azure_max_single_part_upload_size}
The maximum size of object to upload using singlepart upload to Azure blob storage.
azure_max_single_read_retries {#azure_max_single_read_retries}
The maximum number of retries during single Azure blob storage read.
azure_max_unexpected_write_error_retries {#azure_max_unexpected_write_error_retries}
The maximum number of retries in case of unexpected errors during Azure blob storage write
azure_max_upload_part_size {#azure_max_upload_part_size}
The maximum size of part to upload during multipart upload to Azure blob storage.
azure_min_upload_part_size {#azure_min_upload_part_size}
The minimum size of part to upload during multipart upload to Azure blob storage.
azure_request_timeout_ms {#azure_request_timeout_ms}
Idleness timeout for sending and receiving data to/from azure. Fail if a single TCP read or write call blocks for this long.
azure_sdk_max_retries {#azure_sdk_max_retries}
Maximum number of retries in azure sdk
azure_sdk_retry_initial_backoff_ms {#azure_sdk_retry_initial_backoff_ms}
Minimal backoff between retries in azure sdk
azure_sdk_retry_max_backoff_ms {#azure_sdk_retry_max_backoff_ms}
Maximal backoff between retries in azure sdk
azure_skip_empty_files {#azure_skip_empty_files}
Enables or disables skipping empty files in the Azure engine.
Possible values:
- 0 — SELECT throws an exception if an empty file is not compatible with the requested format.
- 1 — SELECT returns an empty result for an empty file.
azure_strict_upload_part_size {#azure_strict_upload_part_size}
The exact size of part to upload during multipart upload to Azure blob storage.
azure_throw_on_zero_files_match {#azure_throw_on_zero_files_match}
Throw an error if matched zero files according to glob expansion rules.
Possible values:
- 1 — SELECT throws an exception.
- 0 — SELECT returns an empty result.
azure_truncate_on_insert {#azure_truncate_on_insert}
Enables or disables truncate before insert in azure engine tables. | {"source_file": "settings.md"} | [
azure_upload_part_size_multiply_factor {#azure_upload_part_size_multiply_factor}
Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage.
azure_upload_part_size_multiply_parts_count_threshold {#azure_upload_part_size_multiply_parts_count_threshold}
Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor.
azure_use_adaptive_timeouts {#azure_use_adaptive_timeouts}
When set to true, the first two attempts of each Azure request are made with low send and receive timeouts.
When set to false, all attempts are made with identical timeouts.
backup_restore_batch_size_for_keeper_multi {#backup_restore_batch_size_for_keeper_multi}
Maximum size of batch for multi request to [Zoo]Keeper during backup or restore
backup_restore_batch_size_for_keeper_multiread {#backup_restore_batch_size_for_keeper_multiread}
Maximum size of batch for multiread request to [Zoo]Keeper during backup or restore
backup_restore_failure_after_host_disconnected_for_seconds {#backup_restore_failure_after_host_disconnected_for_seconds}
If a host during a BACKUP ON CLUSTER or RESTORE ON CLUSTER operation doesn't recreate its ephemeral 'alive' node in ZooKeeper for this amount of time then the whole backup or restore is considered as failed.
This value should be bigger than any reasonable time for a host to reconnect to ZooKeeper after a failure.
Zero means unlimited.
backup_restore_finish_timeout_after_error_sec {#backup_restore_finish_timeout_after_error_sec}
How long the initiator should wait for other host to react to the 'error' node and stop their work on the current BACKUP ON CLUSTER or RESTORE ON CLUSTER operation.
backup_restore_keeper_fault_injection_probability {#backup_restore_keeper_fault_injection_probability}
Approximate probability of failure for a keeper request during backup or restore. Valid value is in interval [0.0f, 1.0f]
backup_restore_keeper_fault_injection_seed {#backup_restore_keeper_fault_injection_seed}
0 - random seed, otherwise the setting value
backup_restore_keeper_max_retries {#backup_restore_keeper_max_retries}
Max retries for [Zoo]Keeper operations in the middle of a BACKUP or RESTORE operation.
Should be big enough so the whole operation won't fail because of a temporary [Zoo]Keeper failure.
backup_restore_keeper_max_retries_while_handling_error {#backup_restore_keeper_max_retries_while_handling_error}
Max retries for [Zoo]Keeper operations while handling an error of a BACKUP ON CLUSTER or RESTORE ON CLUSTER operation.
backup_restore_keeper_max_retries_while_initializing {#backup_restore_keeper_max_retries_while_initializing} | {"source_file": "settings.md"} | [
Max retries for [Zoo]Keeper operations during the initialization of a BACKUP ON CLUSTER or RESTORE ON CLUSTER operation.
backup_restore_keeper_retry_initial_backoff_ms {#backup_restore_keeper_retry_initial_backoff_ms}
Initial backoff timeout for [Zoo]Keeper operations during backup or restore
backup_restore_keeper_retry_max_backoff_ms {#backup_restore_keeper_retry_max_backoff_ms}
Max backoff timeout for [Zoo]Keeper operations during backup or restore
backup_restore_keeper_value_max_size {#backup_restore_keeper_value_max_size}
Maximum size of data of a [Zoo]Keeper's node during backup
backup_restore_s3_retry_attempts {#backup_restore_s3_retry_attempts}
Setting for Aws::Client::RetryStrategy; Aws::Client does retries itself, and 0 means no retries. Applies only to backup/restore.
backup_restore_s3_retry_initial_backoff_ms {#backup_restore_s3_retry_initial_backoff_ms}
Initial backoff delay in milliseconds before the first retry attempt during backup and restore. Each subsequent retry increases the delay exponentially, up to the maximum specified by `backup_restore_s3_retry_max_backoff_ms`
backup_restore_s3_retry_jitter_factor {#backup_restore_s3_retry_jitter_factor}
Jitter factor applied to the retry backoff delay in Aws::Client::RetryStrategy during backup and restore operations. The computed backoff delay is multiplied by a random factor in the range [1.0, 1.0 + jitter], up to the maximum `backup_restore_s3_retry_max_backoff_ms`. Must be in [0.0, 1.0] interval
backup_restore_s3_retry_max_backoff_ms {#backup_restore_s3_retry_max_backoff_ms}
Maximum delay in milliseconds between retries during backup and restore operations.
backup_slow_all_threads_after_retryable_s3_error {#backup_slow_all_threads_after_retryable_s3_error}
When set to true, all threads executing S3 requests to the same backup endpoint are slowed down after any single S3 request encounters a retryable S3 error, such as 'Slow Down'.
When set to false, each thread handles S3 request backoff independently of the others.
cache_warmer_threads {#cache_warmer_threads}
Only has an effect in ClickHouse Cloud. Number of background threads for speculatively downloading new data parts into file cache, when
cache_populated_by_fetch
is enabled. Zero to disable.
calculate_text_stack_trace {#calculate_text_stack_trace}
Calculate text stack trace in case of exceptions during query execution. This is the default. It requires symbol lookups that may slow down fuzzing tests when a huge amount of wrong queries are executed. In normal cases, you should not disable this option.
cancel_http_readonly_queries_on_client_close {#cancel_http_readonly_queries_on_client_close}
Cancels HTTP read-only queries (e.g. SELECT) when a client closes the connection without waiting for the response.
Cloud default value: 0.
cast_ipv4_ipv6_default_on_conversion_error {#cast_ipv4_ipv6_default_on_conversion_error}
If enabled, the CAST operator to IPv4 or IPv6 types and the toIPv4 and toIPv6 functions return a default value instead of throwing an exception on a conversion error.
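As a minimal sketch (the invalid input string below is hypothetical), with the setting enabled the conversion falls back to the type's default value:

```sql
SET cast_ipv4_ipv6_default_on_conversion_error = 1;
-- With the setting enabled, an unparsable address yields the IPv4 default
-- value (0.0.0.0) instead of throwing an exception:
SELECT toIPv4('not-an-address');
```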
cast_keep_nullable {#cast_keep_nullable}
Enables or disables keeping of the
Nullable
data type in
CAST
operations.
When the setting is enabled and the argument of
CAST
function is
Nullable
, the result is also transformed to
Nullable
type. When the setting is disabled, the result always has the destination type exactly.
Possible values:
0 — The CAST result has exactly the destination type specified.
1 — If the argument type is Nullable, the CAST result is transformed to Nullable(DestinationDataType).
Examples
The following query results in the destination data type exactly:
```sql
SET cast_keep_nullable = 0;
SELECT CAST(toNullable(toInt32(0)) AS Int32) as x, toTypeName(x);
```
Result:
```text
┌─x─┬─toTypeName(CAST(toNullable(toInt32(0)), 'Int32'))─┐
│ 0 │ Int32                                             │
└───┴───────────────────────────────────────────────────┘
```
The following query results in the
Nullable
modification on the destination data type:
```sql
SET cast_keep_nullable = 1;
SELECT CAST(toNullable(toInt32(0)) AS Int32) as x, toTypeName(x);
```
Result:
```text
┌─x─┬─toTypeName(CAST(toNullable(toInt32(0)), 'Int32'))─┐
│ 0 │ Nullable(Int32)                                   │
└───┴───────────────────────────────────────────────────┘
```
See Also
CAST function
cast_string_to_date_time_mode {#cast_string_to_date_time_mode}
Allows choosing a parser of the text representation of date and time during cast from String.
Possible values:
'best_effort' — Enables extended parsing.
ClickHouse can parse the basic YYYY-MM-DD HH:MM:SS format and all ISO 8601 date and time formats. For example, '2018-06-08T01:02:03.000Z'.
'best_effort_us' — Similar to best_effort (see the difference in parseDateTimeBestEffortUS).
'basic' — Use basic parser.
ClickHouse can parse only the basic YYYY-MM-DD HH:MM:SS or YYYY-MM-DD format. For example, 2019-08-20 10:18:56 or 2019-08-20.
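A quick way to compare the two parsers (a sketch; the exact behavior of the second query depends on the server version):

```sql
SET cast_string_to_date_time_mode = 'best_effort';
SELECT CAST('2018-06-08T01:02:03Z' AS DateTime);  -- ISO 8601 is accepted

SET cast_string_to_date_time_mode = 'basic';
-- The same string is expected to fail here, since the basic parser
-- accepts only YYYY-MM-DD HH:MM:SS or YYYY-MM-DD:
SELECT CAST('2018-06-08T01:02:03Z' AS DateTime);
```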
See also:
DateTime data type.
Functions for working with dates and times.
cast_string_to_dynamic_use_inference {#cast_string_to_dynamic_use_inference}
Use types inference during String to Dynamic conversion.
cast_string_to_variant_use_inference {#cast_string_to_variant_use_inference}
Use types inference during String to Variant conversion.
check_query_single_value_result {#check_query_single_value_result}
Defines the level of detail for the
CHECK TABLE
query result for
MergeTree
family engines .
Possible values: | {"source_file": "settings.md"} | [
0 — the query shows a check status for every individual data part of a table.
1 — the query shows the general table check status.
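As a sketch (my_table here is a hypothetical MergeTree table):

```sql
-- One summary row for the whole table:
CHECK TABLE my_table SETTINGS check_query_single_value_result = 1;

-- One row per data part:
CHECK TABLE my_table SETTINGS check_query_single_value_result = 0;
```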
check_referential_table_dependencies {#check_referential_table_dependencies}
Check that DDL query (such as DROP TABLE or RENAME) will not break referential dependencies
check_table_dependencies {#check_table_dependencies}
Check that DDL query (such as DROP TABLE or RENAME) will not break dependencies
checksum_on_read {#checksum_on_read}
Validate checksums on reading. It is enabled by default and should be always enabled in production. Please do not expect any benefits in disabling this setting. It may only be used for experiments and benchmarks. The setting is only applicable for tables of MergeTree family. Checksums are always validated for other table engines and when receiving data over the network.
cloud_mode {#cloud_mode}
Cloud mode
cloud_mode_database_engine {#cloud_mode_database_engine}
The database engine allowed in Cloud. 1 - rewrite DDLs to use Replicated database, 2 - rewrite DDLs to use Shared database
cloud_mode_engine {#cloud_mode_engine}
The engine family allowed in Cloud.
0 - allow everything
1 - rewrite DDLs to use *ReplicatedMergeTree
2 - rewrite DDLs to use SharedMergeTree
3 - rewrite DDLs to use SharedMergeTree except when an explicitly passed remote disk is specified
cluster_for_parallel_replicas {#cluster_for_parallel_replicas}
Cluster for a shard in which current server is located
cluster_function_process_archive_on_multiple_nodes {#cluster_function_process_archive_on_multiple_nodes}
If set to true, increases performance of processing archives in cluster functions. Should be set to false for compatibility and to avoid errors during upgrade to 25.7+ if using cluster functions with archives on earlier versions.
collect_hash_table_stats_during_aggregation {#collect_hash_table_stats_during_aggregation}
Enable collecting hash table statistics to optimize memory allocation.
collect_hash_table_stats_during_joins {#collect_hash_table_stats_during_joins}
Enable collecting hash table statistics to optimize memory allocation.
compatibility {#compatibility}
The compatibility setting causes ClickHouse to use the default settings of a previous version of ClickHouse, where the previous version is provided as the setting.
If settings are set to non-default values, then those settings are honored (only settings that have not been modified are affected by the compatibility setting).
This setting takes a ClickHouse version number as a string, like 22.3 or 22.8. An empty value means that this setting is disabled.
Disabled by default. | {"source_file": "settings.md"} | [
:::note
In ClickHouse Cloud, the service-level default compatibility setting must be set by ClickHouse Cloud support. Please
open a case
to have it set.
However, the compatibility setting can be overridden at the user, role, profile, query, or session level using standard ClickHouse setting mechanisms such as
SET compatibility = '22.3'
in a session or
SETTINGS compatibility = '22.3'
in a query.
:::
compatibility_ignore_auto_increment_in_create_table {#compatibility_ignore_auto_increment_in_create_table}
Ignore AUTO_INCREMENT keyword in column declaration if true, otherwise return error. It simplifies migration from MySQL
compatibility_ignore_collation_in_create_table {#compatibility_ignore_collation_in_create_table}
Compatibility ignore collation in create table
compile_aggregate_expressions {#compile_aggregate_expressions}
Enables or disables JIT-compilation of aggregate functions to native code. Enabling this setting can improve the performance.
Possible values:
0 — Aggregation is done without JIT compilation.
1 — Aggregation is done using JIT compilation.
See Also
min_count_to_compile_aggregate_expression
compile_expressions {#compile_expressions}
Compile some scalar functions and operators to native code.
compile_sort_description {#compile_sort_description}
Compile sort description to native code.
connect_timeout {#connect_timeout}
Connection timeout if there are no replicas.
connect_timeout_with_failover_ms {#connect_timeout_with_failover_ms}
The timeout in milliseconds for connecting to a remote server for a Distributed table engine, if the 'shard' and 'replica' sections are used in the cluster definition.
If unsuccessful, several attempts are made to connect to various replicas.
connect_timeout_with_failover_secure_ms {#connect_timeout_with_failover_secure_ms}
Connection timeout for selecting first healthy replica (for secure connections).
connection_pool_max_wait_ms {#connection_pool_max_wait_ms}
The wait time in milliseconds for a connection when the connection pool is full.
Possible values:
Positive integer.
0 — Infinite timeout.
connections_with_failover_max_tries {#connections_with_failover_max_tries}
The maximum number of connection attempts with each replica for the Distributed table engine.
convert_query_to_cnf {#convert_query_to_cnf}
When set to true, a SELECT query will be converted to conjunctive normal form (CNF). There are scenarios where rewriting a query in CNF may execute faster (view this GitHub issue for an explanation).
For example, notice how the following
SELECT
query is not modified (the default behavior): | {"source_file": "settings.md"} | [
```sql
EXPLAIN SYNTAX
SELECT *
FROM
(
    SELECT number AS x
    FROM numbers(20)
) AS a
WHERE ((x >= 1) AND (x <= 5)) OR ((x >= 10) AND (x <= 15))
SETTINGS convert_query_to_cnf = false;
```
The result is:
```response
┌─explain────────────────────────────────────────────────────────┐
│ SELECT x                                                       │
│ FROM                                                           │
│ (                                                              │
│     SELECT number AS x                                         │
│     FROM numbers(20)                                           │
│     WHERE ((x >= 1) AND (x <= 5)) OR ((x >= 10) AND (x <= 15)) │
│ ) AS a                                                         │
│ WHERE ((x >= 1) AND (x <= 5)) OR ((x >= 10) AND (x <= 15))     │
│ SETTINGS convert_query_to_cnf = 0                              │
└────────────────────────────────────────────────────────────────┘
```
Let's set
convert_query_to_cnf
to
true
and see what changes:
```sql
EXPLAIN SYNTAX
SELECT *
FROM
(
    SELECT number AS x
    FROM numbers(20)
) AS a
WHERE ((x >= 1) AND (x <= 5)) OR ((x >= 10) AND (x <= 15))
SETTINGS convert_query_to_cnf = true;
```
Notice the WHERE clause is rewritten in CNF, but the result set is identical: the Boolean logic is unchanged:
```response
┌─explain───────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ SELECT x                                                                                                              │
│ FROM                                                                                                                  │
│ (                                                                                                                     │
│     SELECT number AS x                                                                                                │
│     FROM numbers(20)                                                                                                  │
│     WHERE ((x <= 15) OR (x <= 5)) AND ((x <= 15) OR (x >= 1)) AND ((x >= 10) OR (x <= 5)) AND ((x >= 10) OR (x >= 1)) │
│ ) AS a                                                                                                                │
│ WHERE ((x >= 10) OR (x >= 1)) AND ((x >= 10) OR (x <= 5)) AND ((x <= 15) OR (x >= 1)) AND ((x <= 15) OR (x <= 5))     │
│ SETTINGS convert_query_to_cnf = 1                                                                                     │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
Possible values: true, false
correlated_subqueries_default_join_kind {#correlated_subqueries_default_join_kind} | {"source_file": "settings.md"} | [
Controls the kind of joins in the decorrelated query plan. The default value is right, which means that the decorrelated plan will contain RIGHT JOINs with the subquery input on the right side.
Possible values:
left - The decorrelation process will produce LEFT JOINs and the input table will appear on the left side.
right - The decorrelation process will produce RIGHT JOINs and the input table will appear on the right side.
correlated_subqueries_substitute_equivalent_expressions {#correlated_subqueries_substitute_equivalent_expressions}
Use filter expressions to infer equivalent expressions and substitute them instead of creating a CROSS JOIN.
count_distinct_implementation {#count_distinct_implementation}
Specifies which of the uniq* functions should be used to perform the COUNT(DISTINCT ...) construction.
Possible values:
uniq
uniqCombined
uniqCombined64
uniqHLL12
uniqExact
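For example (a sketch), the following makes COUNT(DISTINCT ...) run as the approximate uniqCombined variant rather than the exact one:

```sql
SET count_distinct_implementation = 'uniqCombined';
-- Executed as uniqCombined(number % 10) under the hood:
SELECT COUNT(DISTINCT number % 10) FROM numbers(1000);
```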
count_distinct_optimization {#count_distinct_optimization}
Rewrite COUNT(DISTINCT ...) to a subquery with GROUP BY.
count_matches_stop_at_empty_match {#count_matches_stop_at_empty_match}
Stop counting once a pattern matches zero-length in the
countMatches
function.
create_if_not_exists {#create_if_not_exists}
Enable IF NOT EXISTS for CREATE statements by default. If either this setting or IF NOT EXISTS is specified and a table with the provided name already exists, no exception will be thrown.
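A sketch with a hypothetical table t; without the setting, the second CREATE would throw because the table already exists:

```sql
SET create_if_not_exists = 1;
CREATE TABLE t (x UInt32) ENGINE = MergeTree ORDER BY x;
-- Repeating the statement is now a no-op instead of an error:
CREATE TABLE t (x UInt32) ENGINE = MergeTree ORDER BY x;
```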
create_index_ignore_unique {#create_index_ignore_unique}
Ignore UNIQUE keyword in CREATE UNIQUE INDEX. Made for SQL compatibility tests.
create_replicated_merge_tree_fault_injection_probability {#create_replicated_merge_tree_fault_injection_probability}
The probability of a fault injection during table creation after creating metadata in ZooKeeper
create_table_empty_primary_key_by_default {#create_table_empty_primary_key_by_default}
Allow to create *MergeTree tables with empty primary key when ORDER BY and PRIMARY KEY not specified
cross_join_min_bytes_to_compress {#cross_join_min_bytes_to_compress}
Minimal size of a block in bytes to compress in CROSS JOIN. A value of zero disables this threshold. A block is compressed when either of the two thresholds (by rows or by bytes) is reached.
cross_join_min_rows_to_compress {#cross_join_min_rows_to_compress}
Minimal count of rows to compress a block in CROSS JOIN. A value of zero disables this threshold. A block is compressed when either of the two thresholds (by rows or by bytes) is reached.
data_type_default_nullable {#data_type_default_nullable}
Allows data types without explicit NULL or NOT NULL modifiers in a column definition to be Nullable.
Possible values:
1 — The data types in column definitions are set to Nullable by default.
0 — The data types in column definitions are set to not Nullable by default.
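A sketch showing the effect on a column declared without NULL or NOT NULL (the table name is hypothetical):

```sql
SET data_type_default_nullable = 1;
CREATE TABLE t_nullable (x Int32) ENGINE = Memory;
-- The column type should now be reported as Nullable(Int32):
SHOW CREATE TABLE t_nullable;
```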
database_atomic_wait_for_drop_and_detach_synchronously {#database_atomic_wait_for_drop_and_detach_synchronously}
Adds a modifier SYNC to all DROP and DETACH queries.
Possible values:
0 — Queries will be executed with delay.
1 — Queries will be executed without delay.
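For example (t is a hypothetical table):

```sql
SET database_atomic_wait_for_drop_and_detach_synchronously = 1;
-- Now behaves like DROP TABLE t SYNC: returns only after the data is removed:
DROP TABLE t;
```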
database_replicated_allow_explicit_uuid {#database_replicated_allow_explicit_uuid}
0 - Don't allow to explicitly specify UUIDs for tables in Replicated databases. 1 - Allow. 2 - Allow, but ignore the specified UUID and generate a random one instead.
database_replicated_allow_heavy_create {#database_replicated_allow_heavy_create}
Allow long-running DDL queries (CREATE AS SELECT and POPULATE) in Replicated database engine. Note that it can block DDL queue for a long time.
database_replicated_allow_only_replicated_engine {#database_replicated_allow_only_replicated_engine}
Allow to create only Replicated tables in database with engine Replicated
database_replicated_allow_replicated_engine_arguments {#database_replicated_allow_replicated_engine_arguments}
0 - Don't allow to explicitly specify ZooKeeper path and replica name for *MergeTree tables in Replicated databases. 1 - Allow. 2 - Allow, but ignore the specified path and use default one instead. 3 - Allow and don't log a warning.
database_replicated_always_detach_permanently {#database_replicated_always_detach_permanently}
Execute DETACH TABLE as DETACH TABLE PERMANENTLY if database engine is Replicated
database_replicated_enforce_synchronous_settings {#database_replicated_enforce_synchronous_settings}
Enforces synchronous waiting for some queries (see also database_atomic_wait_for_drop_and_detach_synchronously, mutations_sync, alter_sync). Not recommended to enable these settings.
database_replicated_initial_query_timeout_sec {#database_replicated_initial_query_timeout_sec}
Sets how long an initial DDL query should wait for the Replicated database to process previous DDL queue entries, in seconds.
Possible values:
Positive integer.
0 β Unlimited.
database_shared_drop_table_delay_seconds {#database_shared_drop_table_delay_seconds}
The delay in seconds before a dropped table is actually removed from a Shared database. During this time the table can be recovered using the UNDROP TABLE statement.
decimal_check_overflow {#decimal_check_overflow}
Check overflow of decimal arithmetic/comparison operations
deduplicate_blocks_in_dependent_materialized_views {#deduplicate_blocks_in_dependent_materialized_views}
Enables or disables the deduplication check for materialized views that receive data from Replicated* tables.
Possible values:
0 — Disabled.
1 — Enabled.
When enabled, ClickHouse performs deduplication of blocks in materialized views that depend on Replicated* tables.
This setting is useful for ensuring that materialized views do not contain duplicate data when the insertion operation is being retried due to a failure.
See Also
NULL Processing in IN Operators
default_materialized_view_sql_security {#default_materialized_view_sql_security}
Allows setting a default value for the SQL SECURITY option when creating a materialized view.
More about SQL security
.
The default value is DEFINER.
default_max_bytes_in_join {#default_max_bytes_in_join}
Maximum size of right-side table if limit is required but
max_bytes_in_join
is not set.
default_normal_view_sql_security {#default_normal_view_sql_security}
Allows setting a default SQL SECURITY option when creating a normal view.
More about SQL security
.
The default value is INVOKER.
default_table_engine {#default_table_engine}
Default table engine to use when ENGINE is not set in a CREATE statement.
Possible values:
a string representing any valid table engine name
Cloud default value: SharedMergeTree.
Example
Query:
```sql
SET default_table_engine = 'Log';
SELECT name, value, changed FROM system.settings WHERE name = 'default_table_engine';
```
Result:
```response
┌─name─────────────────┬─value─┬─changed─┐
│ default_table_engine │ Log   │       1 │
└──────────────────────┴───────┴─────────┘
```
In this example, any new table that does not specify an
Engine
will use the
Log
table engine:
Query:
```sql
CREATE TABLE my_table (
x UInt32,
y UInt32
);
SHOW CREATE TABLE my_table;
```
Result:
```response
┌─statement────────────────────────────────────────────────────────────────┐
│ CREATE TABLE default.my_table
(
    `x` UInt32,
    `y` UInt32
)
ENGINE = Log │
└──────────────────────────────────────────────────────────────────────────┘
```
default_temporary_table_engine {#default_temporary_table_engine}
Same as
default_table_engine
but for temporary tables.
In this example, any new temporary table that does not specify an
Engine
will use the
Log
table engine:
Query:
```sql
SET default_temporary_table_engine = 'Log';
CREATE TEMPORARY TABLE my_table (
x UInt32,
y UInt32
);
SHOW CREATE TEMPORARY TABLE my_table;
```
Result:
```response
┌─statement────────────────────────────────────────────────────────────────┐
│ CREATE TEMPORARY TABLE default.my_table
(
    `x` UInt32,
    `y` UInt32
)
ENGINE = Log │
└──────────────────────────────────────────────────────────────────────────┘
```
default_view_definer {#default_view_definer}
Allows setting a default DEFINER option when creating a view.
More about SQL security
.
The default value is CURRENT_USER.
delta_lake_enable_engine_predicate {#delta_lake_enable_engine_predicate}
Enables delta-kernel internal data pruning.
delta_lake_enable_expression_visitor_logging {#delta_lake_enable_expression_visitor_logging}
Enables Test level logs of DeltaLake expression visitor. These logs can be too verbose even for test logging.
delta_lake_insert_max_bytes_in_data_file {#delta_lake_insert_max_bytes_in_data_file}
Defines a byte limit for a single inserted data file in Delta Lake.
delta_lake_insert_max_rows_in_data_file {#delta_lake_insert_max_rows_in_data_file}
Defines a row limit for a single inserted data file in Delta Lake.
delta_lake_log_metadata {#delta_lake_log_metadata}
Enables logging of Delta Lake metadata files into a system table.
delta_lake_snapshot_version {#delta_lake_snapshot_version}
Version of delta lake snapshot to read. Value -1 means to read latest version (value 0 is a valid snapshot version).
delta_lake_throw_on_engine_predicate_error {#delta_lake_throw_on_engine_predicate_error}
Enables throwing an exception if there was an error when analyzing scan predicate in delta-kernel.
describe_compact_output {#describe_compact_output}
If true, includes only column names and types in the result of a DESCRIBE query
describe_extend_object_types {#describe_extend_object_types}
Deduce concrete type of columns of type Object in DESCRIBE query
describe_include_subcolumns {#describe_include_subcolumns}
Enables describing subcolumns for a DESCRIBE query. For example, members of a Tuple or subcolumns of a Map, Nullable or Array data type.
Possible values:
0 β Subcolumns are not included in DESCRIBE queries.
1 β Subcolumns are included in DESCRIBE queries.
Example
See an example for the DESCRIBE statement.
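A minimal sketch of the effect, using an illustrative table with container types:

```sql
CREATE TABLE describe_demo
(
    id UInt64,
    tags Array(String),
    attrs Map(String, UInt32)
)
ENGINE = Memory;

-- With the setting enabled, DESCRIBE also lists subcolumns such as
-- tags.size0, attrs.keys and attrs.values.
DESCRIBE TABLE describe_demo
SETTINGS describe_include_subcolumns = 1;
```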
describe_include_virtual_columns {#describe_include_virtual_columns}
If true, virtual columns of the table will be included in the result of a DESCRIBE query
dialect {#dialect}
Which dialect will be used to parse the query
dictionary_validate_primary_key_type {#dictionary_validate_primary_key_type}
Validate primary key type for dictionaries. By default, the id type for simple layouts will be implicitly converted to UInt64.
distinct_overflow_mode {#distinct_overflow_mode}
Sets what happens when the amount of data exceeds one of the limits.
Possible values:
- throw: throw an exception (default).
- break: stop executing the query and return the partial result, as if the source data ran out.
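The DISTINCT limit itself comes from separate settings such as max_rows_in_distinct; a hedged sketch combining them (the table and column are illustrative):

```sql
-- Stop collecting DISTINCT values after 1000 rows and return the
-- partial result instead of throwing an exception.
SELECT DISTINCT user_id
FROM events
SETTINGS max_rows_in_distinct = 1000,
         distinct_overflow_mode = 'break';
```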
distributed_aggregation_memory_efficient {#distributed_aggregation_memory_efficient}
Enables the memory-saving mode of distributed aggregation.
distributed_background_insert_batch {#distributed_background_insert_batch}
Enables/disables sending inserted data in batches.
When batch sending is enabled, the Distributed table engine tries to send multiple files of inserted data in one operation instead of sending them separately. Batch sending improves cluster performance by better utilizing server and network resources.
Possible values:
1 β Enabled.
0 β Disabled.
distributed_background_insert_max_sleep_time_ms {#distributed_background_insert_max_sleep_time_ms}
Maximum interval for the Distributed table engine to send data. Limits the exponential growth of the interval set in the distributed_background_insert_sleep_time_ms setting.
Possible values:
A positive integer number of milliseconds.
distributed_background_insert_sleep_time_ms {#distributed_background_insert_sleep_time_ms}
Base interval for the Distributed table engine to send data. The actual interval grows exponentially in the event of errors.
Possible values:
A positive integer number of milliseconds.
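These background-insert settings are typically tuned together; the values below are illustrative, not recommendations:

```sql
SET distributed_background_insert_batch = 1;                 -- send files in batches
SET distributed_background_insert_sleep_time_ms = 100;       -- base retry interval
SET distributed_background_insert_max_sleep_time_ms = 30000; -- cap for the backoff
```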
distributed_background_insert_split_batch_on_failure {#distributed_background_insert_split_batch_on_failure}
Enables/disables splitting batches on failures.
Sometimes sending a particular batch to the remote shard may fail because of a complex downstream pipeline (e.g. a MATERIALIZED VIEW with GROUP BY) hitting Memory limit exceeded or similar errors. In this case, retrying will not help (and it will get distributed sends for the table stuck), but sending the files from that batch one by one may allow the INSERT to succeed.
Setting this to 1 disables batching for such batches (i.e. it temporarily disables distributed_background_insert_batch for failed batches).
Possible values:
1 β Enabled.
0 β Disabled.
:::note
This setting also affects broken batches (which may appear because of abnormal server (machine) termination when fsync_after_insert / fsync_directories are not enabled for the Distributed table engine).
:::
:::note
You should not rely on automatic batch splitting, since this may hurt performance.
:::
distributed_background_insert_timeout {#distributed_background_insert_timeout}
Timeout for INSERT queries into a Distributed table. The setting is used only with insert_distributed_sync enabled. A zero value means no timeout.
distributed_cache_alignment {#distributed_cache_alignment}
Only has an effect in ClickHouse Cloud. A setting for testing purposes, do not change it
distributed_cache_bypass_connection_pool {#distributed_cache_bypass_connection_pool}
Only has an effect in ClickHouse Cloud. Allow to bypass distributed cache connection pool
distributed_cache_connect_backoff_max_ms {#distributed_cache_connect_backoff_max_ms}
Only has an effect in ClickHouse Cloud. Maximum backoff milliseconds for distributed cache connection creation.
distributed_cache_connect_backoff_min_ms {#distributed_cache_connect_backoff_min_ms}
Only has an effect in ClickHouse Cloud. Minimum backoff milliseconds for distributed cache connection creation.
distributed_cache_connect_max_tries {#distributed_cache_connect_max_tries}
Only has an effect in ClickHouse Cloud. Number of tries to connect to distributed cache if unsuccessful
distributed_cache_connect_timeout_ms {#distributed_cache_connect_timeout_ms}
Only has an effect in ClickHouse Cloud. Connection timeout when connecting to distributed cache server.
distributed_cache_credentials_refresh_period_seconds {#distributed_cache_credentials_refresh_period_seconds}
Only has an effect in ClickHouse Cloud. A period of credentials refresh.
distributed_cache_data_packet_ack_window {#distributed_cache_data_packet_ack_window}
Only has an effect in ClickHouse Cloud. A window for sending ACK for DataPacket sequence in a single distributed cache read request
distributed_cache_discard_connection_if_unread_data {#distributed_cache_discard_connection_if_unread_data}
Only has an effect in ClickHouse Cloud. Discard connection if some data is unread.
distributed_cache_fetch_metrics_only_from_current_az {#distributed_cache_fetch_metrics_only_from_current_az}
Only has an effect in ClickHouse Cloud. Fetch metrics only from current availability zone in system.distributed_cache_metrics, system.distributed_cache_events
distributed_cache_log_mode {#distributed_cache_log_mode}
Only has an effect in ClickHouse Cloud. Mode for writing to system.distributed_cache_log
distributed_cache_max_unacked_inflight_packets {#distributed_cache_max_unacked_inflight_packets}
Only has an effect in ClickHouse Cloud. A maximum number of unacknowledged in-flight packets in a single distributed cache read request
distributed_cache_min_bytes_for_seek {#distributed_cache_min_bytes_for_seek}
Only has an effect in ClickHouse Cloud. Minimum number of bytes to do seek in distributed cache.
distributed_cache_pool_behaviour_on_limit {#distributed_cache_pool_behaviour_on_limit}
Only has an effect in ClickHouse Cloud. Identifies behaviour of distributed cache connection on pool limit reached
distributed_cache_prefer_bigger_buffer_size {#distributed_cache_prefer_bigger_buffer_size}
Only has an effect in ClickHouse Cloud. Same as filesystem_cache_prefer_bigger_buffer_size, but for distributed cache.
distributed_cache_read_only_from_current_az {#distributed_cache_read_only_from_current_az}
Only has an effect in ClickHouse Cloud. Allow to read only from current availability zone. If disabled, will read from all cache servers in all availability zones.
distributed_cache_read_request_max_tries {#distributed_cache_read_request_max_tries}
Only has an effect in ClickHouse Cloud. Number of tries to do distributed cache request if unsuccessful
distributed_cache_receive_response_wait_milliseconds {#distributed_cache_receive_response_wait_milliseconds}
Only has an effect in ClickHouse Cloud. Wait time in milliseconds to receive data for request from distributed cache
distributed_cache_receive_timeout_milliseconds {#distributed_cache_receive_timeout_milliseconds}
Only has an effect in ClickHouse Cloud. Wait time in milliseconds to receive any kind of response from distributed cache
distributed_cache_receive_timeout_ms {#distributed_cache_receive_timeout_ms}
Only has an effect in ClickHouse Cloud. Timeout for receiving data from distributed cache server, in milliseconds. If no bytes were received in this interval, the exception is thrown.
distributed_cache_send_timeout_ms {#distributed_cache_send_timeout_ms}
Only has an effect in ClickHouse Cloud. Timeout for sending data to the distributed cache server, in milliseconds. If a client needs to send some data but is not able to send any bytes in this interval, an exception is thrown.
distributed_cache_tcp_keep_alive_timeout_ms {#distributed_cache_tcp_keep_alive_timeout_ms}
Only has an effect in ClickHouse Cloud. The time in milliseconds the connection to distributed cache server needs to remain idle before TCP starts sending keepalive probes.
distributed_cache_throw_on_error {#distributed_cache_throw_on_error}
Only has an effect in ClickHouse Cloud. Rethrow an exception that happened during communication with the distributed cache, or an exception received from the distributed cache. Otherwise, fall back to skipping the distributed cache on error.
distributed_cache_wait_connection_from_pool_milliseconds {#distributed_cache_wait_connection_from_pool_milliseconds}
Only has an effect in ClickHouse Cloud. Wait time in milliseconds to receive connection from connection pool if distributed_cache_pool_behaviour_on_limit is wait
distributed_connections_pool_size {#distributed_connections_pool_size}
The maximum number of simultaneous connections with remote servers for distributed processing of all queries to a single Distributed table. We recommend setting a value no less than the number of servers in the cluster.
distributed_ddl_entry_format_version {#distributed_ddl_entry_format_version}
Compatibility version of distributed DDL (ON CLUSTER) queries
distributed_ddl_output_mode {#distributed_ddl_output_mode}
Sets the format of distributed DDL query results.
Possible values:
throw
β Returns a result set with the query execution status for all hosts where the query finished. If the query failed on some hosts, it rethrows the first exception. If the query has not finished on some hosts and distributed_ddl_task_timeout is exceeded, it throws a TIMEOUT_EXCEEDED
exception.
none
β Similar to throw, but the distributed DDL query returns no result set.
null_status_on_timeout
β Returns NULL as the execution status in some rows of the result set instead of throwing TIMEOUT_EXCEEDED if the query is not finished on the corresponding hosts.
never_throw
β Does not throw TIMEOUT_EXCEEDED and does not rethrow exceptions if the query failed on some hosts.
none_only_active
β Similar to none, but does not wait for inactive replicas of the Replicated database. Note: with this mode it is impossible to tell that the query was not executed on some replica; it will be executed in the background.
null_status_on_timeout_only_active
β Similar to null_status_on_timeout, but does not wait for inactive replicas of the Replicated database.
throw_only_active
β Similar to throw, but does not wait for inactive replicas of the Replicated database.
Cloud default value: throw.
distributed_ddl_task_timeout {#distributed_ddl_task_timeout}
Sets the timeout for DDL query responses from all hosts in the cluster. If a DDL request has not been performed on all hosts, the response will contain a timeout error and the request will be executed in async mode. A negative value means infinite.
Possible values:
Positive integer.
0 β Async mode.
Negative integer β infinite timeout.
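A sketch of a DDL query run with an explicit timeout; the cluster and table names are hypothetical:

```sql
SET distributed_ddl_task_timeout = 30;                       -- wait up to 30 s
SET distributed_ddl_output_mode = 'null_status_on_timeout';  -- NULL instead of throwing

CREATE TABLE ddl_demo ON CLUSTER my_cluster
(
    x UInt32
)
ENGINE = MergeTree
ORDER BY x;
```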
distributed_foreground_insert {#distributed_foreground_insert}
Enables or disables synchronous data insertion into a Distributed table.
By default, when inserting data into a Distributed table, the ClickHouse server sends data to cluster nodes in background mode. When distributed_foreground_insert=1, the data is processed synchronously, and the INSERT operation succeeds only after all the data is saved on all shards (at least one replica for each shard if internal_replication is true).
Possible values:
0 β Data is inserted in background mode.
1 β Data is inserted in synchronous mode.
Cloud default value: 0.
See Also
Distributed Table Engine
Managing Distributed Tables
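A minimal sketch of a synchronous insert (the table name is illustrative):

```sql
-- The INSERT returns only after the data is saved on all shards.
INSERT INTO dist_events
SETTINGS distributed_foreground_insert = 1
VALUES (1, 'a'), (2, 'b');
```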
distributed_group_by_no_merge {#distributed_group_by_no_merge}
Do not merge aggregation states from different servers for distributed query processing. Use this when you are certain that different shards contain different keys.
Possible values:
0 β Disabled (final query processing is done on the initiator node).
1 β Do not merge aggregation states from different servers (the query is processed completely on each shard and the initiator only proxies the data). Can be used when you are certain that different shards contain different keys.
2 β Same as 1, but applies ORDER BY and LIMIT on the initiator (this is not possible when the query is processed completely on the remote node, as with distributed_group_by_no_merge=1). Useful for queries with ORDER BY and/or LIMIT.
Example
```sql
SELECT *
FROM remote('127.0.0.{2,3}', system.one)
GROUP BY dummy
LIMIT 1
SETTINGS distributed_group_by_no_merge = 1
FORMAT PrettyCompactMonoBlock
```

```text
ββdummyββ
β     0 β
β     0 β
βββββββββ
```
```sql
SELECT *
FROM remote('127.0.0.{2,3}', system.one)
GROUP BY dummy
LIMIT 1
SETTINGS distributed_group_by_no_merge = 2
FORMAT PrettyCompactMonoBlock
```

```text
ββdummyββ
β     0 β
βββββββββ
```
distributed_insert_skip_read_only_replicas {#distributed_insert_skip_read_only_replicas}
Enables skipping read-only replicas for INSERT queries into Distributed.
Possible values:
0 β INSERT proceeds as usual; if it goes to a read-only replica, it fails.
1 β The initiator skips read-only replicas before sending data to shards.
distributed_plan_default_reader_bucket_count {#distributed_plan_default_reader_bucket_count}
Default number of tasks for parallel reading in a distributed query. Tasks are spread across replicas.
distributed_plan_default_shuffle_join_bucket_count {#distributed_plan_default_shuffle_join_bucket_count}
Default number of buckets for distributed shuffle-hash-join.
distributed_plan_execute_locally {#distributed_plan_execute_locally}
Run all tasks of a distributed query plan locally. Useful for testing and debugging.
distributed_plan_force_exchange_kind {#distributed_plan_force_exchange_kind}
Force specified kind of Exchange operators between distributed query stages.
Possible values:
'' - do not force any kind of Exchange operators, let the optimizer choose,
'Persisted' - use temporary files in object storage,
'Streaming' - stream exchange data over network.
distributed_plan_force_shuffle_aggregation {#distributed_plan_force_shuffle_aggregation}
Use Shuffle aggregation strategy instead of PartialAggregation + Merge in distributed query plan.
distributed_plan_max_rows_to_broadcast {#distributed_plan_max_rows_to_broadcast}
Maximum rows to use broadcast join instead of shuffle join in distributed query plan.
distributed_plan_optimize_exchanges {#distributed_plan_optimize_exchanges}
Removes unnecessary exchanges in distributed query plan. Disable it for debugging.
distributed_product_mode {#distributed_product_mode}
Changes the behaviour of distributed subqueries.
ClickHouse applies this setting when the query contains the product of distributed tables, i.e. when the query for a distributed table contains a non-GLOBAL subquery for the distributed table.
Restrictions:
Only applied for IN and JOIN subqueries.
Only if the FROM section uses a distributed table containing more than one shard.
If the subquery concerns a distributed table containing more than one shard.
Not used for a table-valued remote function.
Possible values:
deny
β Default value. Prohibits using these types of subqueries (returns the "Double-distributed in/JOIN subqueries is denied" exception).
local
β Replaces the database and table in the subquery with local ones for the destination server (shard), leaving the normal IN/JOIN.
global
β Replaces the IN/JOIN query with GLOBAL IN/GLOBAL JOIN.
allow
β Allows the use of these types of subqueries.
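A hedged sketch of what the global mode does (the table and column names are illustrative):

```sql
SET distributed_product_mode = 'global';

-- This non-GLOBAL subquery over a distributed table is now executed as if
-- it were written with GLOBAL IN: the subquery runs once on the initiator
-- and its result is shipped to every shard.
SELECT count()
FROM dist_events
WHERE user_id IN (SELECT user_id FROM dist_events WHERE flag = 1);
```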
distributed_push_down_limit {#distributed_push_down_limit}
Enables or disables applying LIMIT on each shard separately.
This allows avoiding:
- Sending extra rows over the network;
- Processing rows behind the limit on the initiator.
Starting from version 21.9 you can no longer get inaccurate results, since distributed_push_down_limit changes query execution only if at least one of the following conditions is met:
- distributed_group_by_no_merge > 0.
- The query does not have GROUP BY/DISTINCT/LIMIT BY, but it has ORDER BY/LIMIT.
- The query has GROUP BY/DISTINCT/LIMIT BY with ORDER BY/LIMIT and:
  - optimize_skip_unused_shards is enabled.
  - optimize_distributed_group_by_sharding_key is enabled.
Possible values:
0 β Disabled.
1 β Enabled.
See also:
distributed_group_by_no_merge
optimize_skip_unused_shards
optimize_distributed_group_by_sharding_key
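A sketch of a query that benefits from the setting (the table name is illustrative): each shard applies the LIMIT locally before sending rows to the initiator.

```sql
SELECT event_time, user_id
FROM dist_events
ORDER BY event_time DESC
LIMIT 10
SETTINGS distributed_push_down_limit = 1;
```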
distributed_replica_error_cap {#distributed_replica_error_cap}
Type: unsigned int
Default value: 1000
The error count of each replica is capped at this value, preventing a single replica from accumulating too many errors.
See also:
load_balancing
Table engine Distributed
distributed_replica_error_half_life
distributed_replica_max_ignored_errors
distributed_replica_error_half_life {#distributed_replica_error_half_life}
Type: seconds
Default value: 60 seconds
Controls how fast errors in distributed tables are zeroed. If a replica is unavailable for some time, accumulates 5 errors, and distributed_replica_error_half_life is set to 1 second, then the replica is considered normal 3 seconds after the last error.
See also:
load_balancing
Table engine Distributed
distributed_replica_error_cap
distributed_replica_max_ignored_errors
distributed_replica_max_ignored_errors {#distributed_replica_max_ignored_errors}
Type: unsigned int
Default value: 0
The number of errors that will be ignored while choosing replicas (according to the load_balancing algorithm).
See also:
load_balancing
Table engine Distributed
distributed_replica_error_cap
distributed_replica_error_half_life
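A hedged sketch combining this setting with a load-balancing policy (the values are illustrative):

```sql
-- Ignore up to 2 accumulated errors per replica when choosing one.
SET distributed_replica_max_ignored_errors = 2;
SET load_balancing = 'nearest_hostname';
```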
do_not_merge_across_partitions_select_final {#do_not_merge_across_partitions_select_final}
Merge parts only within one partition during SELECT FINAL.
empty_result_for_aggregation_by_constant_keys_on_empty_set {#empty_result_for_aggregation_by_constant_keys_on_empty_set}
Return empty result when aggregating by constant keys on empty set.
empty_result_for_aggregation_by_empty_set {#empty_result_for_aggregation_by_empty_set}
Return empty result when aggregating without keys on empty set.
enable_adaptive_memory_spill_scheduler {#enable_adaptive_memory_spill_scheduler}
Triggers processors to spill data into external storage adaptively. Grace join is supported at present.
enable_add_distinct_to_in_subqueries {#enable_add_distinct_to_in_subqueries}
Enable DISTINCT in IN subqueries. This is a trade-off setting: enabling it can greatly reduce the size of temporary tables transferred for distributed IN subqueries and significantly speed up data transfer between shards, by ensuring only unique values are sent.
However, enabling this setting adds extra merging effort on each node, as deduplication (DISTINCT) must be performed. Use this setting when network transfer is a bottleneck and the additional merging cost is acceptable.
enable_blob_storage_log {#enable_blob_storage_log}
Write information about blob storage operations to system.blob_storage_log table
enable_deflate_qpl_codec {#enable_deflate_qpl_codec}
If turned on, the DEFLATE_QPL codec may be used to compress columns.
enable_early_constant_folding {#enable_early_constant_folding}
Enable a query optimization that analyzes function and subquery results and rewrites the query if constants are found there
enable_extended_results_for_datetime_functions {#enable_extended_results_for_datetime_functions}
Enables or disables returning results of type Date32 with extended range (compared to type Date) or DateTime64 with extended range (compared to type DateTime).
Possible values:
0 β Functions return Date or DateTime for all types of arguments.
1 β Functions return Date32 or DateTime64 for Date32 or DateTime64 arguments, and Date or DateTime otherwise.
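A small illustration of the extended mode; the date is chosen inside the Date32 range but outside the Date range:

```sql
-- With the setting enabled the result stays Date32 instead of being
-- clamped to the narrower Date range.
SELECT toStartOfMonth(toDate32('2299-06-15'))
SETTINGS enable_extended_results_for_datetime_functions = 1;
```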
The table below shows the behavior of this setting for various date-time functions.
| Function | enable_extended_results_for_datetime_functions = 0 | enable_extended_results_for_datetime_functions = 1 |
|----------|---------------------------------------------------|---------------------------------------------------|
| toStartOfYear | Returns Date or DateTime | Returns Date/DateTime for Date/DateTime input; Date32/DateTime64 for Date32/DateTime64 input |
| toStartOfISOYear | Returns Date or DateTime | Returns Date/DateTime for Date/DateTime input; Date32/DateTime64 for Date32/DateTime64 input |
| toStartOfQuarter | Returns Date or DateTime | Returns Date/DateTime for Date/DateTime input; Date32/DateTime64 for Date32/DateTime64 input |
| toStartOfMonth | Returns Date or DateTime | Returns Date/DateTime for Date/DateTime input; Date32/DateTime64 for Date32/DateTime64 input |
| toStartOfWeek | Returns Date or DateTime | Returns Date/DateTime for Date/DateTime input; Date32/DateTime64 for Date32/DateTime64 input |
| toLastDayOfWeek | Returns Date or DateTime | Returns Date/DateTime for Date/DateTime input; Date32/DateTime64 for Date32/DateTime64 input |
| toLastDayOfMonth | Returns Date or DateTime | Returns Date/DateTime for Date/DateTime input; Date32/DateTime64 for Date32/DateTime64 input |
| toMonday | Returns Date or DateTime | Returns Date/DateTime for Date/DateTime input; Date32/DateTime64 for Date32/DateTime64 input |
| toStartOfDay | Returns DateTime. Note: wrong results for values outside the 1970-2149 range | Returns DateTime for Date/DateTime input; DateTime64 for Date32/DateTime64 input |
| toStartOfHour | Returns DateTime. Note: wrong results for values outside the 1970-2149 range | Returns DateTime for Date/DateTime input; DateTime64 for Date32/DateTime64 input |
| toStartOfFifteenMinutes | Returns DateTime. Note: wrong results for values outside the 1970-2149 range | Returns DateTime for Date/DateTime input; DateTime64 for Date32/DateTime64 input |
| toStartOfTenMinutes | Returns DateTime. Note: wrong results for values outside the 1970-2149 range | Returns DateTime for Date/DateTime input; DateTime64 for Date32/DateTime64 input |
| toStartOfFiveMinutes | Returns DateTime. Note: wrong results for values outside the 1970-2149 range | Returns DateTime for Date/DateTime input; DateTime64 for Date32/DateTime64 input |
| toStartOfMinute | Returns DateTime. Note: wrong results for values outside the 1970-2149 range | Returns DateTime for Date/DateTime input; DateTime64 for Date32/DateTime64 input |
| timeSlot | Returns DateTime. Note: wrong results for values outside the 1970-2149 range | Returns DateTime for Date/DateTime input; DateTime64 for Date32/DateTime64 input |
enable_filesystem_cache {#enable_filesystem_cache}
Use cache for remote filesystem. This setting does not turn the cache on or off for disks (that must be done via disk config), but allows bypassing the cache for some queries if intended
enable_filesystem_cache_log {#enable_filesystem_cache_log}
Allows recording the filesystem caching log for each query
enable_filesystem_cache_on_write_operations {#enable_filesystem_cache_on_write_operations}
Enables or disables write-through cache. If set to false, the write-through cache is disabled for write operations. If set to true, the write-through cache is enabled as long as cache_on_write_operations is turned on in the server config's cache disk configuration section.
See "Using local cache" for more details.
enable_filesystem_read_prefetches_log {#enable_filesystem_read_prefetches_log}
Log to system.filesystem_prefetch_log during the query. Should be used only for testing or debugging; not recommended to be turned on by default
enable_global_with_statement {#enable_global_with_statement}
Propagate WITH statements to UNION queries and all subqueries
enable_hdfs_pread {#enable_hdfs_pread}
Enables or disables pread for HDFS files. By default, hdfsPread is used. If disabled, hdfsRead and hdfsSeek will be used to read HDFS files.
enable_http_compression {#enable_http_compression}
Enables or disables data compression in the response to an HTTP request.
For more information, read the HTTP interface description.
Possible values:
0 β Disabled.
1 β Enabled.
enable_job_stack_trace {#enable_job_stack_trace}
Output stack trace of a job creator when job results in exception. Disabled by default to avoid performance overhead.
enable_join_runtime_filters {#enable_join_runtime_filters}
Filter left side by set of JOIN keys collected from the right side at runtime.
enable_lazy_columns_replication {#enable_lazy_columns_replication}
Enables lazy columns replication in JOIN and ARRAY JOIN, it allows to avoid unnecessary copy of the same rows multiple times in memory.
enable_lightweight_delete {#enable_lightweight_delete}
Enable lightweight DELETE mutations for MergeTree tables.
enable_lightweight_update {#enable_lightweight_update}
Allow to use lightweight updates.
enable_memory_bound_merging_of_aggregation_results {#enable_memory_bound_merging_of_aggregation_results}
Enable memory bound merging strategy for aggregation.
enable_multiple_prewhere_read_steps {#enable_multiple_prewhere_read_steps}
Move more conditions from WHERE to PREWHERE and do reads from disk and filtering in multiple steps if there are multiple conditions combined with AND
enable_named_columns_in_function_tuple {#enable_named_columns_in_function_tuple}
Generate named tuples in function tuple() when all names are unique and can be treated as unquoted identifiers.
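A minimal sketch (the exact type output may vary by version):

```sql
SET enable_named_columns_in_function_tuple = 1;

-- Every element has a unique alias, so tuple() yields a named tuple,
-- e.g. Tuple(a UInt8, b String).
SELECT tuple(1 AS a, 'x' AS b) AS t, toTypeName(t);
```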
enable_optimize_predicate_expression {#enable_optimize_predicate_expression}
Turns on predicate pushdown in SELECT queries.
Predicate pushdown may significantly reduce network traffic for distributed queries.
Possible values:
0 β Disabled.
1 β Enabled.
Usage
Consider the following queries:
SELECT count() FROM test_table WHERE date = '2018-10-10'
SELECT count() FROM (SELECT * FROM test_table) WHERE date = '2018-10-10'
If enable_optimize_predicate_expression = 1, then the execution time of these queries is equal because ClickHouse applies WHERE to the subquery when processing it.
If enable_optimize_predicate_expression = 0, then the execution time of the second query is much longer because the WHERE clause applies to all the data after the subquery finishes.
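One way to observe the rewrite is EXPLAIN SYNTAX, which prints the query after optimizations (test_table is the illustrative table from the text):

```sql
EXPLAIN SYNTAX
SELECT count()
FROM (SELECT * FROM test_table)
WHERE date = '2018-10-10'
SETTINGS enable_optimize_predicate_expression = 1;
```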
enable_optimize_predicate_expression_to_final_subquery {#enable_optimize_predicate_expression_to_final_subquery}
Allow pushing the predicate to the final subquery.
enable_order_by_all {#enable_order_by_all}
Enables or disables sorting with ORDER BY ALL syntax; see ORDER BY.
Possible values:
0 β Disable ORDER BY ALL.
1 β Enable ORDER BY ALL.
Example
Query:
```sql
CREATE TABLE TAB(C1 Int, C2 Int, ALL Int) ENGINE=Memory();
INSERT INTO TAB VALUES (10, 20, 30), (20, 20, 10), (30, 10, 20);
SELECT * FROM TAB ORDER BY ALL; -- returns an error that ALL is ambiguous
SELECT * FROM TAB ORDER BY ALL SETTINGS enable_order_by_all = 0;
```
Result:
```text
┌─C1─┬─C2─┬─ALL─┐
│ 20 │ 20 │  10 │
│ 30 │ 10 │  20 │
│ 10 │ 20 │  30 │
└────┴────┴─────┘
```
enable_parallel_blocks_marshalling {#enable_parallel_blocks_marshalling}
Affects only distributed queries. If enabled, blocks will be (de)serialized and (de)compressed on pipeline threads (i.e. with higher parallelism than what we have by default) before/after sending to the initiator.
enable_parsing_to_custom_serialization {#enable_parsing_to_custom_serialization}
If true then data can be parsed directly to columns with custom serialization (e.g. Sparse) according to hints for serialization got from the table.
enable_positional_arguments {#enable_positional_arguments}
Enables or disables supporting positional arguments for GROUP BY, LIMIT BY, ORDER BY statements.
Possible values:
0 — Positional arguments aren't supported.
1 — Positional arguments are supported: column numbers can be used instead of column names.
Example
Query:
```sql
CREATE TABLE positional_arguments(one Int, two Int, three Int) ENGINE=Memory();
INSERT INTO positional_arguments VALUES (10, 20, 30), (20, 20, 10), (30, 10, 20);
SELECT * FROM positional_arguments ORDER BY 2,3;
```
Result:
```text
┌─one─┬─two─┬─three─┐
│  30 │  10 │    20 │
│  20 │  20 │    10 │
│  10 │  20 │    30 │
└─────┴─────┴───────┘
```
enable_producing_buckets_out_of_order_in_aggregation {#enable_producing_buckets_out_of_order_in_aggregation}
Allow memory-efficient aggregation (see distributed_aggregation_memory_efficient) to produce buckets out of order.
It may improve performance when aggregation bucket sizes are skewed, by letting a replica send buckets with higher ids to the initiator while it is still processing some heavy buckets with lower ids.
The downside is potentially higher memory usage.
enable_reads_from_query_cache {#enable_reads_from_query_cache}
If turned on, results of SELECT queries are retrieved from the query cache.
Possible values:
0 - Disabled
1 - Enabled
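For instance, the read and write sides of the cache can be toggled independently. A short sketch, assuming the umbrella setting use_query_cache is also enabled:

```sql
-- Compute and store the result, but never serve it from the cache
-- (e.g. to warm the cache):
SELECT count() FROM numbers(1000000)
SETTINGS use_query_cache = 1, enable_reads_from_query_cache = 0;

-- Serve from the cache if an entry exists, but do not write new entries:
SELECT count() FROM numbers(1000000)
SETTINGS use_query_cache = 1, enable_writes_to_query_cache = 0;
```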
enable_s3_requests_logging {#enable_s3_requests_logging}
Enable very explicit logging of S3 requests. Makes sense for debug only.
enable_scalar_subquery_optimization {#enable_scalar_subquery_optimization}
If it is set to true, prevent scalar subqueries from (de)serializing large scalar values and possibly avoid running the same subquery more than once.
enable_scopes_for_with_statement {#enable_scopes_for_with_statement}
If disabled, declarations in parent WITH clauses behave as if they were declared in the current scope.
Note that this is a compatibility setting for new analyzer to allow running some invalid queries that old analyzer could execute.
enable_shared_storage_snapshot_in_query {#enable_shared_storage_snapshot_in_query}
If enabled, all subqueries within a single query will share the same StorageSnapshot for each table.
This ensures a consistent view of the data across the entire query, even if the same table is accessed multiple times.
This is required for queries where internal consistency of data parts is important. Example:
```sql
SELECT
    count()
FROM events
WHERE (_part, _part_offset) IN (
    SELECT _part, _part_offset
    FROM events
    WHERE user_id = 42
)
```
Without this setting, the outer and inner queries may operate on different data snapshots, leading to incorrect results.
:::note
Enabling this setting disables the optimization which removes unnecessary data parts from snapshots once the planning stage is complete.
As a result, long-running queries may hold onto obsolete parts for their entire duration, delaying part cleanup and increasing storage pressure.
This setting currently applies only to tables from the MergeTree family.
:::
Possible values:
0 - Disabled
1 - Enabled
enable_sharing_sets_for_mutations {#enable_sharing_sets_for_mutations}
Allow sharing set objects built for IN subqueries between different tasks of the same mutation. This reduces memory usage and CPU consumption.
enable_software_prefetch_in_aggregation {#enable_software_prefetch_in_aggregation}
Enable use of software prefetch in aggregation.
enable_unaligned_array_join {#enable_unaligned_array_join}
Allow ARRAY JOIN with multiple arrays that have different sizes. When this setting is enabled, arrays will be resized to the longest one.
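A short sketch of the behavior; the assumption here is that the shorter array is padded with the element type's default value (0 for integers):

```sql
SET enable_unaligned_array_join = 1;

SELECT a, b
FROM (SELECT [1, 2, 3] AS arr1, [10, 20] AS arr2)
ARRAY JOIN arr1 AS a, arr2 AS b;
-- arr2 is resized to 3 elements, so the pairs are
-- (1, 10), (2, 20), (3, 0)
```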
enable_url_encoding {#enable_url_encoding}
Enables or disables decoding/encoding of the path in the URI in URL engine tables.
Disabled by default.
enable_vertical_final {#enable_vertical_final}
If enabled, removes duplicate rows during FINAL by marking rows as deleted and filtering them out later, instead of merging rows.
enable_writes_to_query_cache {#enable_writes_to_query_cache}
If turned on, results of SELECT queries are stored in the query cache.
Possible values:
0 - Disabled
1 - Enabled
enable_zstd_qat_codec {#enable_zstd_qat_codec}
If turned on, the ZSTD_QAT codec may be used to compress columns.
enforce_strict_identifier_format {#enforce_strict_identifier_format}
If enabled, only allow identifiers containing alphanumeric characters and underscores.
engine_file_allow_create_multiple_files {#engine_file_allow_create_multiple_files}
Enables or disables creating a new file on each insert in file engine tables if the format has the suffix (JSON, ORC, Parquet, etc.). If enabled, on each insert a new file will be created with a name following this pattern:
data.Parquet -> data.1.Parquet -> data.2.Parquet, etc.
Possible values:
- 0 — INSERT query appends new data to the end of the file.
- 1 — INSERT query creates a new file.
engine_file_empty_if_not_exists {#engine_file_empty_if_not_exists}
Allows selecting data from a file engine table when the underlying file does not exist.
Possible values:
- 0 — SELECT throws an exception.
- 1 — SELECT returns an empty result.
engine_file_skip_empty_files {#engine_file_skip_empty_files}
Enables or disables skipping empty files in File engine tables.
Possible values:
- 0 — SELECT throws an exception if an empty file is not compatible with the requested format.
- 1 — SELECT returns an empty result for an empty file.
engine_file_truncate_on_insert {#engine_file_truncate_on_insert}
Enables or disables truncate before insert in File engine tables.
Possible values:
- 0 — INSERT query appends new data to the end of the file.
- 1 — INSERT query replaces existing content of the file with the new data.
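A sketch of the difference; the table name file_demo is hypothetical:

```sql
CREATE TABLE file_demo (x Int32) ENGINE = File(CSV);

SET engine_file_truncate_on_insert = 0;
INSERT INTO file_demo VALUES (1);
INSERT INTO file_demo VALUES (2);
SELECT * FROM file_demo; -- both rows: 1 and 2

SET engine_file_truncate_on_insert = 1;
INSERT INTO file_demo VALUES (3);
SELECT * FROM file_demo; -- only the last insert: 3
```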
engine_url_skip_empty_files {#engine_url_skip_empty_files}
Enables or disables skipping empty files in URL engine tables.
Possible values:
- 0 — SELECT throws an exception if an empty file is not compatible with the requested format.
- 1 — SELECT returns an empty result for an empty file.
except_default_mode {#except_default_mode}
Set default mode in EXCEPT query. Possible values: empty string, 'ALL', 'DISTINCT'. If empty, a query without a mode will throw an exception.
exclude_materialize_skip_indexes_on_insert {#exclude_materialize_skip_indexes_on_insert}
Excludes specified skip indexes from being built and stored during INSERTs. The excluded skip indexes will still be built and stored during merges or by an explicit MATERIALIZE INDEX query.
Has no effect if materialize_skip_indexes_on_insert is false.
Example:
```sql
CREATE TABLE tab
(
a UInt64,
b UInt64,
INDEX idx_a a TYPE minmax,
INDEX idx_b b TYPE set(3)
)
ENGINE = MergeTree ORDER BY tuple();
SET exclude_materialize_skip_indexes_on_insert='idx_a'; -- idx_a will not be updated upon insert
--SET exclude_materialize_skip_indexes_on_insert='idx_a, idx_b'; -- neither index would be updated on insert
INSERT INTO tab SELECT number, number / 50 FROM numbers(100); -- only idx_b is updated
-- since it is a session setting it can be set on a per-query level
INSERT INTO tab SELECT number, number / 50 FROM numbers(100, 100) SETTINGS exclude_materialize_skip_indexes_on_insert='idx_b';
ALTER TABLE tab MATERIALIZE INDEX idx_a; -- this query can be used to explicitly materialize the index
SET exclude_materialize_skip_indexes_on_insert = DEFAULT; -- reset setting to default
```
execute_exists_as_scalar_subquery {#execute_exists_as_scalar_subquery}
Execute non-correlated EXISTS subqueries as scalar subqueries. As for scalar subqueries, the cache is used, and the constant folding applies to the result.
external_storage_connect_timeout_sec {#external_storage_connect_timeout_sec}
Connect timeout in seconds. Now supported only for MySQL
external_storage_max_read_bytes {#external_storage_max_read_bytes}
Limit maximum number of bytes when table with external engine should flush history data. Now supported only for MySQL table engine, database engine, and dictionary. If equal to 0, this setting is disabled
external_storage_max_read_rows {#external_storage_max_read_rows}
Limit maximum number of rows when table with external engine should flush history data. Now supported only for MySQL table engine, database engine, and dictionary. If equal to 0, this setting is disabled
external_storage_rw_timeout_sec {#external_storage_rw_timeout_sec}
Read/write timeout in seconds. Now supported only for MySQL
external_table_functions_use_nulls {#external_table_functions_use_nulls}
Defines how mysql, postgresql and odbc table functions use Nullable columns.
Possible values:
0 — The table function explicitly uses Nullable columns.
1 — The table function implicitly uses Nullable columns.
Usage
If the setting is set to 0, the table function does not make Nullable columns and inserts default values instead of NULL. This is also applicable for NULL values inside arrays.
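A hedged sketch, assuming a reachable MySQL server and a table db.t with a nullable integer column (host, credentials, and names are hypothetical):

```sql
-- 1: a nullable MySQL column maps to Nullable(Int32),
-- and missing values come back as NULL.
SELECT * FROM mysql('mysql-host:3306', 'db', 't', 'user', 'password')
SETTINGS external_table_functions_use_nulls = 1;

-- 0: the same column maps to plain Int32, and NULLs are replaced
-- by the type's default value (0).
SELECT * FROM mysql('mysql-host:3306', 'db', 't', 'user', 'password')
SETTINGS external_table_functions_use_nulls = 0;
```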
external_table_strict_query {#external_table_strict_query}
If it is set to true, transforming expression to local filter is forbidden for queries to external tables.
extract_key_value_pairs_max_pairs_per_row {#extract_key_value_pairs_max_pairs_per_row}
Max number of pairs that can be produced by the extractKeyValuePairs function. Used as a safeguard against consuming too much memory.
extremes {#extremes}
Whether to count extreme values (the minimums and maximums in columns of a query result). Accepts 0 or 1. By default, 0 (disabled).
For more information, see the section "Extreme values".
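A minimal sketch of the effect (how the block is rendered depends on the output format):

```sql
SET extremes = 1;
SELECT number FROM numbers(5);
-- In addition to the 5 result rows, the output carries an "extremes"
-- block with the column's minimum (0) and maximum (4).
```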
fallback_to_stale_replicas_for_distributed_queries {#fallback_to_stale_replicas_for_distributed_queries}
Forces a query to an out-of-date replica if updated data is not available. See Replication.
ClickHouse selects the most relevant from the outdated replicas of the table.
Used when performing SELECT from a distributed table that points to replicated tables.
By default, 1 (enabled).
filesystem_cache_boundary_alignment {#filesystem_cache_boundary_alignment}
Filesystem cache boundary alignment. This setting is applied only for non-disk read (e.g. for cache of remote table engines / table functions, but not for storage configuration of MergeTree tables). Value 0 means no alignment.
filesystem_cache_enable_background_download_during_fetch {#filesystem_cache_enable_background_download_during_fetch}
Only has an effect in ClickHouse Cloud. Wait time to lock cache for space reservation in filesystem cache
filesystem_cache_enable_background_download_for_metadata_files_in_packed_storage {#filesystem_cache_enable_background_download_for_metadata_files_in_packed_storage}
Only has an effect in ClickHouse Cloud. Wait time to lock cache for space reservation in filesystem cache
filesystem_cache_max_download_size {#filesystem_cache_max_download_size}
Max remote filesystem cache size that can be downloaded by a single query
filesystem_cache_name {#filesystem_cache_name}
Filesystem cache name to use for stateless table engines or data lakes
filesystem_cache_prefer_bigger_buffer_size {#filesystem_cache_prefer_bigger_buffer_size}
Prefer bigger buffer size if filesystem cache is enabled to avoid writing small file segments which deteriorate cache performance. On the other hand, enabling this setting might increase memory usage.
filesystem_cache_reserve_space_wait_lock_timeout_milliseconds {#filesystem_cache_reserve_space_wait_lock_timeout_milliseconds}
Wait time to lock cache for space reservation in filesystem cache
filesystem_cache_segments_batch_size {#filesystem_cache_segments_batch_size}
Limit on size of a single batch of file segments that a read buffer can request from cache. Too low value will lead to excessive requests to cache, too large may slow down eviction from cache
filesystem_cache_skip_download_if_exceeds_per_query_cache_write_limit {#filesystem_cache_skip_download_if_exceeds_per_query_cache_write_limit}
Skip download from remote filesystem if exceeds query cache size
filesystem_prefetch_max_memory_usage {#filesystem_prefetch_max_memory_usage}
Maximum memory usage for prefetches.
filesystem_prefetch_step_bytes {#filesystem_prefetch_step_bytes}
Prefetch step in bytes. Zero means auto - approximately the best prefetch step will be auto deduced, but might not be 100% the best. The actual value might be different because of setting filesystem_prefetch_min_bytes_for_single_read_task
filesystem_prefetch_step_marks {#filesystem_prefetch_step_marks}
Prefetch step in marks. Zero means auto - approximately the best prefetch step will be auto deduced, but might not be 100% the best. The actual value might be different because of setting filesystem_prefetch_min_bytes_for_single_read_task
filesystem_prefetches_limit {#filesystem_prefetches_limit}
Maximum number of prefetches. Zero means unlimited. A setting filesystem_prefetches_max_memory_usage is more recommended if you want to limit the number of prefetches
final {#final}
Automatically applies FINAL modifier to all tables in a query, to tables where FINAL is applicable, including joined tables and tables in sub-queries, and distributed tables.
Possible values:
0 - disabled
1 - enabled
Example:
```sql
CREATE TABLE test
(
key Int64,
some String
)
ENGINE = ReplacingMergeTree
ORDER BY key;
INSERT INTO test FORMAT Values (1, 'first');
INSERT INTO test FORMAT Values (1, 'second');
SELECT * FROM test;
┌─key─┬─some───┐
│   1 │ second │
└─────┴────────┘
┌─key─┬─some──┐
│   1 │ first │
└─────┴───────┘

SELECT * FROM test SETTINGS final = 1;

┌─key─┬─some───┐
│   1 │ second │
└─────┴────────┘

SET final = 1;
SELECT * FROM test;

┌─key─┬─some───┐
│   1 │ second │
└─────┴────────┘
```
flatten_nested {#flatten_nested}
Sets the data format of nested columns.
Possible values:
1 — Nested column is flattened to separate arrays.
0 — Nested column stays a single array of tuples.
Usage
If the setting is set to 0, it is possible to use an arbitrary level of nesting.
Examples
Query:
```sql
SET flatten_nested = 1;
CREATE TABLE t_nest (`n` Nested(a UInt32, b UInt32)) ENGINE = MergeTree ORDER BY tuple();
SHOW CREATE TABLE t_nest;
```
Result:
```text
┌─statement──────────────────────────────────────────────────────────────────────┐
│ CREATE TABLE default.t_nest
(
    `n.a` Array(UInt32),
    `n.b` Array(UInt32)
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS index_granularity = 8192 │
└────────────────────────────────────────────────────────────────────────────────┘
```
Query:
```sql
SET flatten_nested = 0;
CREATE TABLE t_nest (`n` Nested(a UInt32, b UInt32)) ENGINE = MergeTree ORDER BY tuple();
SHOW CREATE TABLE t_nest;
```
Result:
```text
┌─statement──────────────────────────────────────────────────────┐
│ CREATE TABLE default.t_nest
(
    `n` Nested(a UInt32, b UInt32)
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS index_granularity = 8192 │
└────────────────────────────────────────────────────────────────┘
```
force_aggregate_partitions_independently {#force_aggregate_partitions_independently}
Force the use of the optimization when it is applicable, even if heuristics decided not to use it.
force_aggregation_in_order {#force_aggregation_in_order}
The setting is used by the server itself to support distributed queries. Do not change it manually, because it will break normal operations. (Forces use of aggregation in order on remote nodes during distributed aggregation).
force_data_skipping_indices {#force_data_skipping_indices}
Disables query execution if the passed data-skipping indices weren't used.
Consider the following example:
```sql
CREATE TABLE data_01515
(
    key Int,
    d1 Int,
    d1_null Nullable(Int),
    INDEX d1_idx d1 TYPE minmax GRANULARITY 1,
    INDEX d1_null_idx assumeNotNull(d1_null) TYPE minmax GRANULARITY 1
)
Engine=MergeTree()
ORDER BY key;

SELECT * FROM data_01515;
SELECT * FROM data_01515 SETTINGS force_data_skipping_indices=''; -- query will produce CANNOT_PARSE_TEXT error.
SELECT * FROM data_01515 SETTINGS force_data_skipping_indices='d1_idx'; -- query will produce INDEX_NOT_USED error.
SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_idx'; -- Ok.
SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='`d1_idx`'; -- Ok (example of full featured parser).
SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='`d1_idx`, d1_null_idx'; -- query will produce INDEX_NOT_USED error, since d1_null_idx is not used.
SELECT * FROM data_01515 WHERE d1 = 0 AND assumeNotNull(d1_null) = 0 SETTINGS force_data_skipping_indices='`d1_idx`, d1_null_idx'; -- Ok.
```
force_grouping_standard_compatibility {#force_grouping_standard_compatibility}
Makes the GROUPING function return 1 when an argument is not used as an aggregation key.
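A sketch of where this shows up; under the standard-compatible behavior, grouping(k) reports 1 in the super-aggregate (total) row produced by ROLLUP, where k is not used as an aggregation key:

```sql
SELECT number % 2 AS k, grouping(k) AS g, count() AS c
FROM numbers(10)
GROUP BY ROLLUP(k);
```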
force_index_by_date {#force_index_by_date}
Disables query execution if the index can't be used by date.
Works with tables in the MergeTree family.
If force_index_by_date=1, ClickHouse checks whether the query has a date key condition that can be used for restricting data ranges. If there is no suitable condition, it throws an exception. However, it does not check whether the condition reduces the amount of data to read. For example, the condition Date != '2000-01-01' is acceptable even when it matches all the data in the table (i.e., running the query requires a full scan). For more information about ranges of data in MergeTree tables, see MergeTree.
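A minimal sketch; the table and column names are hypothetical:

```sql
CREATE TABLE visits (EventDate Date, id UInt64)
ENGINE = MergeTree PARTITION BY toYYYYMM(EventDate) ORDER BY id;

SET force_index_by_date = 1;
SELECT count() FROM visits;                           -- throws: no condition on the date key
SELECT count() FROM visits WHERE EventDate = today(); -- Ok
```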
force_optimize_projection {#force_optimize_projection}
Enables or disables the obligatory use of projections in SELECT queries, when projection optimization is enabled (see the optimize_use_projections setting).
Possible values:
0 — Projection optimization is not obligatory.
1 — Projection optimization is obligatory.
force_optimize_projection_name {#force_optimize_projection_name}
If it is set to a non-empty string, check that this projection is used in the query at least once.
Possible values:
string: name of the projection that is used in a query
force_optimize_skip_unused_shards {#force_optimize_skip_unused_shards}
Enables or disables query execution if optimize_skip_unused_shards is enabled and skipping of unused shards is not possible. If the skipping is not possible and the setting is enabled, an exception will be thrown.
Possible values:
0 — Disabled. ClickHouse does not throw an exception.
1 — Enabled. Query execution is disabled only if the table has a sharding key.
2 — Enabled. Query execution is disabled regardless of whether a sharding key is defined for the table.
force_optimize_skip_unused_shards_nesting {#force_optimize_skip_unused_shards_nesting}
Controls force_optimize_skip_unused_shards (hence still requires force_optimize_skip_unused_shards) depending on the nesting level of the distributed query (the case when you have a Distributed table that looks into another Distributed table).
Possible values:
0 — Disabled, force_optimize_skip_unused_shards always works.
1 — Enables force_optimize_skip_unused_shards only for the first level.
2 — Enables force_optimize_skip_unused_shards up to the second level.
force_primary_key {#force_primary_key}
Disables query execution if indexing by the primary key is not possible.
Works with tables in the MergeTree family.
If force_primary_key=1, ClickHouse checks to see if the query has a primary key condition that can be used for restricting data ranges. If there is no suitable condition, it throws an exception. However, it does not check whether the condition reduces the amount of data to read. For more information about data ranges in MergeTree tables, see MergeTree.
force_remove_data_recursively_on_drop {#force_remove_data_recursively_on_drop}
Recursively remove data on DROP query. Avoids 'Directory not empty' error, but may silently remove detached data
formatdatetime_e_with_space_padding {#formatdatetime_e_with_space_padding}
Formatter '%e' in function 'formatDateTime' prints single-digit days with a leading space, e.g. ' 2' instead of '2'.
formatdatetime_f_prints_scale_number_of_digits {#formatdatetime_f_prints_scale_number_of_digits}
Formatter '%f' in function 'formatDateTime' prints only the scale amount of digits for a DateTime64 instead of fixed 6 digits.
formatdatetime_f_prints_single_zero {#formatdatetime_f_prints_single_zero}
Formatter '%f' in function 'formatDateTime' prints a single zero instead of six zeros if the formatted value has no fractional seconds.
formatdatetime_format_without_leading_zeros {#formatdatetime_format_without_leading_zeros}
Formatters '%c', '%l' and '%k' in function 'formatDateTime' print months and hours without leading zeros.
formatdatetime_parsedatetime_m_is_month_name {#formatdatetime_parsedatetime_m_is_month_name}
Formatter '%M' in functions 'formatDateTime' and 'parseDateTime' print/parse the month name instead of minutes.
fsync_metadata {#fsync_metadata}
Enables or disables fsync when writing .sql files. Enabled by default.
It makes sense to disable it if the server has millions of tiny tables that are constantly being created and destroyed.
function_date_trunc_return_type_behavior {#function_date_trunc_return_type_behavior}
Allows changing the behaviour of the result type of the dateTrunc function.
Possible values:
0 - When the second argument is DateTime64/Date32 the return type will be DateTime64/Date32 regardless of the time unit in the first argument.
1 - For Date32 the result is always Date. For DateTime64 the result is DateTime for time units second and higher.
function_implementation {#function_implementation}
Choose function implementation for specific target or variant (experimental). If empty enable all of them.
function_json_value_return_type_allow_complex {#function_json_value_return_type_allow_complex}
Controls whether to allow returning complex types (such as struct, array, map) for the json_value function.
```sql
SELECT JSON_VALUE('{"hello":{"world":"!"}}', '$.hello') settings function_json_value_return_type_allow_complex=true
┌─JSON_VALUE('{"hello":{"world":"!"}}', '$.hello')─┐
│ {"world":"!"}                                    │
└──────────────────────────────────────────────────┘
1 row in set. Elapsed: 0.001 sec.
```
Possible values:
true — Allow.
false — Disallow.
function_json_value_return_type_allow_nullable {#function_json_value_return_type_allow_nullable}
Controls whether to allow returning NULL when the value does not exist for the JSON_VALUE function.
```sql
SELECT JSON_VALUE('{"hello":"world"}', '$.b') settings function_json_value_return_type_allow_nullable=true;
┌─JSON_VALUE('{"hello":"world"}', '$.b')─┐
│ ᴺᵁᴸᴸ                                   │
└────────────────────────────────────────┘
1 row in set. Elapsed: 0.001 sec.
```
Possible values:
true — Allow.
false — Disallow.
function_locate_has_mysql_compatible_argument_order {#function_locate_has_mysql_compatible_argument_order}
Controls the order of arguments in function locate.
Possible values:
0 — Function locate accepts arguments (haystack, needle[, start_pos]).
1 — Function locate accepts arguments (needle, haystack[, start_pos]) (MySQL-compatible behavior)
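A sketch of the two argument orders, searching for 'world' in 'Hello, world!' (1-based position 8 in both cases):

```sql
-- 0: haystack first
SELECT locate('Hello, world!', 'world')
SETTINGS function_locate_has_mysql_compatible_argument_order = 0;

-- 1: needle first (MySQL-compatible)
SELECT locate('world', 'Hello, world!')
SETTINGS function_locate_has_mysql_compatible_argument_order = 1;
```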
function_range_max_elements_in_block {#function_range_max_elements_in_block}
Sets the safety threshold for data volume generated by function range. Defines the maximum number of values generated by the function per block of data (sum of array sizes for every row in a block).
Possible values:
Positive integer.
See Also
max_block_size
min_insert_block_size_rows
function_sleep_max_microseconds_per_block {#function_sleep_max_microseconds_per_block}
Maximum number of microseconds the function sleep is allowed to sleep for each block. If a user calls it with a larger value, an exception is thrown. It is a safety threshold.
function_visible_width_behavior {#function_visible_width_behavior}
The version of visibleWidth behavior. 0 - only count the number of code points; 1 - correctly count zero-width and combining characters, count full-width characters as two, estimate the tab width, count delete characters.
geo_distance_returns_float64_on_float64_arguments {#geo_distance_returns_float64_on_float64_arguments}
If all four arguments to the geoDistance, greatCircleDistance, greatCircleAngle functions are Float64, return Float64 and use double precision for internal calculations. In previous ClickHouse versions, the functions always returned Float32.
geotoh3_argument_order {#geotoh3_argument_order}
Function 'geoToH3' accepts (lon, lat) if set to 'lon_lat' and (lat, lon) if set to 'lat_lon'.
glob_expansion_max_elements {#glob_expansion_max_elements}
Maximum number of allowed addresses (For external storages, table functions, etc).
grace_hash_join_initial_buckets {#grace_hash_join_initial_buckets}
Initial number of grace hash join buckets
grace_hash_join_max_buckets {#grace_hash_join_max_buckets}
Limit on the number of grace hash join buckets
group_by_overflow_mode {#group_by_overflow_mode}
Sets what happens when the number of unique keys for aggregation exceeds the limit:
- throw: throw an exception
- break: stop executing the query and return the partial result
- any: continue aggregation for the keys that got into the set, but do not add new keys to the set.
Using the 'any' value lets you run an approximation of GROUP BY. The quality of
this approximation depends on the statistical nature of the data.
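A sketch of the 'any' mode, assuming max_rows_to_group_by as the companion limit:

```sql
SET max_rows_to_group_by = 3, group_by_overflow_mode = 'any';

SELECT number % 10 AS k, count() AS c
FROM numbers(1000)
GROUP BY k;
-- Once 3 distinct keys are in the set, rows carrying new keys are ignored,
-- while rows for the already-seen keys continue to be aggregated.
```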
group_by_two_level_threshold {#group_by_two_level_threshold}
From what number of keys, a two-level aggregation starts. 0 - the threshold is not set.
group_by_two_level_threshold_bytes {#group_by_two_level_threshold_bytes}
From what size of the aggregation state in bytes, a two-level aggregation begins to be used. 0 - the threshold is not set. Two-level aggregation is used when at least one of the thresholds is triggered.
group_by_use_nulls {#group_by_use_nulls}
Changes the way the GROUP BY clause treats the types of aggregation keys.
When the ROLLUP, CUBE, or GROUPING SETS specifiers are used, some aggregation keys may not be used to produce some result rows. Columns for these keys are filled with either the default value or NULL in corresponding rows, depending on this setting.
Possible values:
0 β The default value for the aggregation key type is used to produce missing values.
1 β ClickHouse executes
GROUP BY
the same way as the SQL standard says. The types of aggregation keys are converted to
Nullable
. Columns for corresponding aggregation keys are filled with
NULL
for rows that didn't use it.
See also:
GROUP BY clause
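A minimal sketch of the difference, using the numbers table function (the query shape is illustrative):

```sql
-- With group_by_use_nulls = 1, the ROLLUP subtotal row shows the key as
-- NULL rather than the key type's default value (0 for integers).
SELECT number % 2 AS key, count()
FROM numbers(10)
GROUP BY ROLLUP(key)
SETTINGS group_by_use_nulls = 1;
```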
h3togeo_lon_lat_result_order {#h3togeo_lon_lat_result_order}
Function 'h3ToGeo' returns (lon, lat) if true, otherwise (lat, lon).
handshake_timeout_ms {#handshake_timeout_ms}
Timeout in milliseconds for receiving Hello packet from replicas during handshake.
hdfs_create_new_file_on_insert {#hdfs_create_new_file_on_insert}
Enables or disables creating a new file on each insert in HDFS engine tables. If enabled, on each insert a new HDFS file will be created with a name similar to this pattern:
initial: data.Parquet.gz -> data.1.Parquet.gz -> data.2.Parquet.gz, etc.
Possible values:
- 0 — INSERT query appends new data to the end of the file.
- 1 — INSERT query creates a new file.
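A hedged sketch of how the setting might be applied with the hdfs table function; the URI and column structure below are made up:

```sql
-- Hypothetical HDFS URI and structure; with the setting enabled, each
-- INSERT creates data.1.Parquet.gz, data.2.Parquet.gz, and so on.
INSERT INTO TABLE FUNCTION hdfs('hdfs://namenode:9000/dir/data.Parquet.gz', 'Parquet', 'x Int32')
SELECT 1
SETTINGS hdfs_create_new_file_on_insert = 1;
```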
hdfs_ignore_file_doesnt_exist {#hdfs_ignore_file_doesnt_exist}
Ignore absence of file if it does not exist when reading certain keys.
Possible values:
- 1 — SELECT returns empty result.
- 0 — SELECT throws an exception.
hdfs_replication {#hdfs_replication}
The actual number of replications can be specified when the hdfs file is created.
hdfs_skip_empty_files {#hdfs_skip_empty_files}
Enables or disables skipping empty files in HDFS engine tables.
Possible values:
- 0 — SELECT throws an exception if an empty file is not compatible with the requested format.
- 1 — SELECT returns an empty result for an empty file.
hdfs_throw_on_zero_files_match {#hdfs_throw_on_zero_files_match}
Throws an error if zero files matched according to glob expansion rules.
Possible values:
- 1 — SELECT throws an exception.
- 0 — SELECT returns empty result.
hdfs_truncate_on_insert {#hdfs_truncate_on_insert}
Enables or disables truncation before an insert in hdfs engine tables. If disabled, an exception will be thrown on an attempt to insert if a file in HDFS already exists.
Possible values:
- 0 — INSERT query appends new data to the end of the file.
- 1 — INSERT query replaces existing content of the file with the new data.
hedged_connection_timeout_ms {#hedged_connection_timeout_ms}
Connection timeout for establishing connection with replica for Hedged requests
hnsw_candidate_list_size_for_search {#hnsw_candidate_list_size_for_search}
The size of the dynamic candidate list when searching the vector similarity index, also known as 'ef_search'.
hsts_max_age {#hsts_max_age}
Expired time for HSTS. 0 means disable HSTS.
http_connection_timeout {#http_connection_timeout}
HTTP connection timeout (in seconds).
Possible values:
Any positive integer.
0 - Disabled (infinite timeout).
http_headers_progress_interval_ms {#http_headers_progress_interval_ms}
Do not send HTTP headers X-ClickHouse-Progress more frequently than at each specified interval.
http_make_head_request {#http_make_head_request}
The
http_make_head_request
setting allows the execution of a
HEAD
request while reading data from HTTP to retrieve information about the file to be read, such as its size. Since it's enabled by default, it may be desirable to disable this setting in cases where the server does not support
HEAD
requests.
http_max_field_name_size {#http_max_field_name_size}
Maximum length of field name in HTTP header
http_max_field_value_size {#http_max_field_value_size}
Maximum length of field value in HTTP header
http_max_fields {#http_max_fields}
Maximum number of fields in HTTP header
http_max_multipart_form_data_size {#http_max_multipart_form_data_size}
Limit on size of multipart/form-data content. This setting cannot be parsed from URL parameters and should be set in a user profile. Note that content is parsed and external tables are created in memory before the start of query execution. And this is the only limit that has an effect on that stage (limits on max memory usage and max execution time have no effect while reading HTTP form data).
http_max_request_param_data_size {#http_max_request_param_data_size}
Limit on size of request data used as a query parameter in predefined HTTP requests.
http_max_tries {#http_max_tries}
Max attempts to read via http.
http_max_uri_size {#http_max_uri_size}
Sets the maximum URI length of an HTTP request.
Possible values:
Positive integer.
http_native_compression_disable_checksumming_on_decompress {#http_native_compression_disable_checksumming_on_decompress}
Enables or disables checksum verification when decompressing the HTTP POST data from the client. Used only for ClickHouse native compression format (not used with gzip or deflate).
For more information, read the HTTP interface description.
Possible values:
0 — Disabled.
1 — Enabled.
http_receive_timeout {#http_receive_timeout}
HTTP receive timeout (in seconds).
Possible values:
Any positive integer.
0 - Disabled (infinite timeout).
http_response_buffer_size {#http_response_buffer_size}
The number of bytes to buffer in the server memory before sending an HTTP response to the client or flushing to disk (when http_wait_end_of_query is enabled).
http_response_headers {#http_response_headers}
Allows adding or overriding HTTP headers which the server will return in the response with a successful query result.
This only affects the HTTP interface.
If the header is already set by default, the provided value will override it.
If the header was not set by default, it will be added to the list of headers.
Headers that are set by the server by default and not overridden by this setting, will remain.
The setting allows you to set a header to a constant value. Currently there is no way to set a header to a dynamically calculated value.
Neither names nor values can contain ASCII control characters.
If you implement a UI application which allows users to modify settings but at the same time makes decisions based on the returned headers, it is recommended to restrict this setting to readonly.
Example:
SET http_response_headers = '{"Content-Type": "image/png"}'
http_retry_initial_backoff_ms {#http_retry_initial_backoff_ms}
Min milliseconds for backoff, when retrying read via http
http_retry_max_backoff_ms {#http_retry_max_backoff_ms}
Max milliseconds for backoff, when retrying read via http
http_send_timeout {#http_send_timeout}
HTTP send timeout (in seconds).
Possible values:
Any positive integer.
0 - Disabled (infinite timeout).
:::note
It's applicable only to the default profile. A server reboot is required for the changes to take effect.
:::
http_skip_not_found_url_for_globs {#http_skip_not_found_url_for_globs}
Skip URLs for globs with HTTP_NOT_FOUND error
http_wait_end_of_query {#http_wait_end_of_query}
Enable HTTP response buffering on the server-side.
http_write_exception_in_output_format {#http_write_exception_in_output_format}
Write exception in output format to produce valid output. Works with JSON and XML formats.
http_zlib_compression_level {#http_zlib_compression_level}
Sets the level of data compression in the response to an HTTP request if enable_http_compression = 1.
Possible values: Numbers from 1 to 9.
iceberg_delete_data_on_drop {#iceberg_delete_data_on_drop}
Whether to delete all iceberg files on drop or not.
iceberg_insert_max_bytes_in_data_file {#iceberg_insert_max_bytes_in_data_file}
Max bytes of iceberg parquet data file on insert operation.
iceberg_insert_max_rows_in_data_file {#iceberg_insert_max_rows_in_data_file}
Max rows of iceberg parquet data file on insert operation.
iceberg_metadata_compression_method {#iceberg_metadata_compression_method}
Method to compress the .metadata.json file.
iceberg_metadata_log_level {#iceberg_metadata_log_level}
Controls the level of metadata logging for Iceberg tables to system.iceberg_metadata_log.
Usually this setting can be modified for debugging purposes.
Possible values:
- none - No metadata log.
- metadata - Root metadata.json file.
- manifest_list_metadata - Everything above + metadata from avro manifest list which corresponds to a snapshot.
- manifest_list_entry - Everything above + avro manifest list entries.
- manifest_file_metadata - Everything above + metadata from traversed avro manifest files.
- manifest_file_entry - Everything above + traversed avro manifest files entries.
iceberg_snapshot_id {#iceberg_snapshot_id}
Query Iceberg table using the specific snapshot id.
iceberg_timestamp_ms {#iceberg_timestamp_ms}
Query Iceberg table using the snapshot that was current at a specific timestamp.
idle_connection_timeout {#idle_connection_timeout}
Timeout to close idle TCP connections after specified number of seconds.
Possible values:
Positive integer (0 - close immediately, after 0 seconds).
ignore_cold_parts_seconds {#ignore_cold_parts_seconds}
Only has an effect in ClickHouse Cloud. Exclude new data parts from SELECT queries until they're either pre-warmed (see cache_populated_by_fetch) or this many seconds old. Only for Replicated-/SharedMergeTree.
ignore_data_skipping_indices {#ignore_data_skipping_indices}
Ignores the specified skipping indexes if they are used by the query.
Consider the following example:
```sql
CREATE TABLE data
(
key Int,
x Int,
y Int,
INDEX x_idx x TYPE minmax GRANULARITY 1,
INDEX y_idx y TYPE minmax GRANULARITY 1,
INDEX xy_idx (x,y) TYPE minmax GRANULARITY 1
)
Engine=MergeTree()
ORDER BY key;
INSERT INTO data VALUES (1, 2, 3);
SELECT * FROM data;
SELECT * FROM data SETTINGS ignore_data_skipping_indices=''; -- query will produce CANNOT_PARSE_TEXT error.
SELECT * FROM data SETTINGS ignore_data_skipping_indices='x_idx'; -- Ok.
SELECT * FROM data SETTINGS ignore_data_skipping_indices='na_idx'; -- Ok.
SELECT * FROM data WHERE x = 1 AND y = 1 SETTINGS ignore_data_skipping_indices='xy_idx',force_data_skipping_indices='xy_idx' ; -- query will produce INDEX_NOT_USED error, since xy_idx is explicitly ignored.
SELECT * FROM data WHERE x = 1 AND y = 2 SETTINGS ignore_data_skipping_indices='xy_idx';
```
The query without ignoring any indexes:
```sql
EXPLAIN indexes = 1 SELECT * FROM data WHERE x = 1 AND y = 2;
Expression ((Projection + Before ORDER BY))
Filter (WHERE)
ReadFromMergeTree (default.data)
Indexes:
PrimaryKey
Condition: true
Parts: 1/1
Granules: 1/1
Skip
Name: x_idx
Description: minmax GRANULARITY 1
Parts: 0/1
Granules: 0/1
Skip
Name: y_idx
Description: minmax GRANULARITY 1
Parts: 0/0
Granules: 0/0
Skip
Name: xy_idx
Description: minmax GRANULARITY 1
Parts: 0/0
Granules: 0/0
```
Ignoring the xy_idx index:
```sql
EXPLAIN indexes = 1 SELECT * FROM data WHERE x = 1 AND y = 2 SETTINGS ignore_data_skipping_indices='xy_idx';
Expression ((Projection + Before ORDER BY))
Filter (WHERE)
ReadFromMergeTree (default.data)
Indexes:
PrimaryKey
Condition: true
Parts: 1/1
Granules: 1/1
Skip
Name: x_idx
Description: minmax GRANULARITY 1
Parts: 0/1
Granules: 0/1
Skip
Name: y_idx
Description: minmax GRANULARITY 1
Parts: 0/0
Granules: 0/0
```
Works with tables in the MergeTree family.
ignore_drop_queries_probability {#ignore_drop_queries_probability}
If enabled, server will ignore all DROP table queries with specified probability (for Memory and JOIN engines it will replace DROP to TRUNCATE). Used for testing purposes
ignore_materialized_views_with_dropped_target_table {#ignore_materialized_views_with_dropped_target_table}
Ignore MVs with dropped target table during pushing to views
ignore_on_cluster_for_replicated_access_entities_queries {#ignore_on_cluster_for_replicated_access_entities_queries}
Ignore ON CLUSTER clause for replicated access entities management queries.
ignore_on_cluster_for_replicated_named_collections_queries {#ignore_on_cluster_for_replicated_named_collections_queries}
Ignore ON CLUSTER clause for replicated named collections management queries.
ignore_on_cluster_for_replicated_udf_queries {#ignore_on_cluster_for_replicated_udf_queries}
Ignore ON CLUSTER clause for replicated UDF management queries.
implicit_select {#implicit_select}
Allows writing simple SELECT queries without the leading SELECT keyword, which makes it simple for calculator-style usage, e.g. 1 + 2 becomes a valid query.
In clickhouse-local it is enabled by default and can be explicitly disabled.
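A minimal sketch of calculator-style usage once the setting is enabled:

```sql
SET implicit_select = 1;
-- A bare expression is now parsed as if it were SELECT 1 + 2.
1 + 2;
```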
implicit_table_at_top_level {#implicit_table_at_top_level}
If not empty, queries without FROM at the top level will read from this table instead of system.one.
This is used in clickhouse-local for input data processing.
The setting could be set explicitly by a user but is not intended for this type of usage.
Subqueries are not affected by this setting (neither scalar, FROM, or IN subqueries).
SELECTs at the top level of UNION, INTERSECT, EXCEPT chains are treated uniformly and affected by this setting, regardless of their grouping in parentheses.
It is unspecified how this setting affects views and distributed queries.
The setting accepts a table name (then the table is resolved from the current database) or a qualified name in the form of 'database.table'.
Both database and table names have to be unquoted - only simple identifiers are allowed.
implicit_transaction {#implicit_transaction}
If enabled and not already inside a transaction, wraps the query inside a full transaction (begin + commit or rollback)
inject_random_order_for_select_without_order_by {#inject_random_order_for_select_without_order_by}
If enabled, injects 'ORDER BY rand()' into SELECT queries without ORDER BY clause.
Applied only for subquery depth = 0. Subqueries and INSERT INTO ... SELECT are not affected.
If the top-level construct is UNION, 'ORDER BY rand()' is injected into all children independently.
Only useful for testing and development (missing ORDER BY is a source of non-deterministic query results).
input_format_parallel_parsing {#input_format_parallel_parsing}
Enables or disables order-preserving parallel parsing of data formats. Supported only for TabSeparated (TSV), TSKV, CSV and JSONEachRow formats.
Possible values:
1 — Enabled.
0 — Disabled.
insert_allow_materialized_columns {#insert_allow_materialized_columns}
If the setting is enabled, allow materialized columns in INSERT.
insert_deduplicate {#insert_deduplicate}
Enables or disables block deduplication of INSERT (for Replicated* tables).
Possible values:
0 — Disabled.
1 — Enabled.
By default, blocks inserted into replicated tables by the INSERT statement are deduplicated (see Data Replication).
For replicated tables, by default only the 100 most recent blocks for each partition are deduplicated (see replicated_deduplication_window, replicated_deduplication_window_seconds).
For non-replicated tables see non_replicated_deduplication_window.
insert_deduplication_token {#insert_deduplication_token}
The setting allows a user to provide their own deduplication semantics in MergeTree/ReplicatedMergeTree.
For example, by providing a unique value for the setting in each INSERT statement, the user can avoid the same inserted data being deduplicated.
Possible values:
Any string
insert_deduplication_token is used for deduplication only when not empty.
For replicated tables, by default only the 100 most recent inserts for each partition are deduplicated (see replicated_deduplication_window, replicated_deduplication_window_seconds).
For non-replicated tables see non_replicated_deduplication_window.
:::note
insert_deduplication_token works on a partition level (the same as insert_deduplication checksum). Multiple partitions can have the same insert_deduplication_token.
:::
Example:
```sql
CREATE TABLE test_table
( A Int64 )
ENGINE = MergeTree
ORDER BY A
SETTINGS non_replicated_deduplication_window = 100;
INSERT INTO test_table SETTINGS insert_deduplication_token = 'test' VALUES (1);
-- the next insert won't be deduplicated because insert_deduplication_token is different
INSERT INTO test_table SETTINGS insert_deduplication_token = 'test1' VALUES (1);
-- the next insert will be deduplicated because insert_deduplication_token
-- is the same as one of the previous
INSERT INTO test_table SETTINGS insert_deduplication_token = 'test' VALUES (2);
SELECT * FROM test_table
┌─A─┐
│ 1 │
└───┘
┌─A─┐
│ 1 │
└───┘
```
insert_keeper_fault_injection_probability {#insert_keeper_fault_injection_probability}
Approximate probability of failure for a keeper request during insert. Valid value is in interval [0.0f, 1.0f]
insert_keeper_fault_injection_seed {#insert_keeper_fault_injection_seed}
0 - random seed, otherwise the setting value
insert_keeper_max_retries {#insert_keeper_max_retries}
The setting sets the maximum number of retries for ClickHouse Keeper (or ZooKeeper) requests during insert into replicated MergeTree. Only Keeper requests which failed due to network error, Keeper session timeout, or request timeout are considered for retries.
Possible values:
Positive integer.
0 — Retries are disabled.
Cloud default value: 20.
Keeper request retries are done after some timeout. The timeout is controlled by the following settings: insert_keeper_retry_initial_backoff_ms, insert_keeper_retry_max_backoff_ms.
The first retry is done after an insert_keeper_retry_initial_backoff_ms timeout. The consequent timeouts will be calculated as follows:
timeout = min(insert_keeper_retry_max_backoff_ms, latest_timeout * 2)
For example, if insert_keeper_retry_initial_backoff_ms=100, insert_keeper_retry_max_backoff_ms=10000 and insert_keeper_max_retries=8, then the timeouts will be 100, 200, 400, 800, 1600, 3200, 6400, 10000.
Apart from fault tolerance, the retries aim to provide a better user experience: they avoid returning an error during INSERT execution if Keeper is restarted, for example, due to an upgrade.
insert_keeper_retry_initial_backoff_ms {#insert_keeper_retry_initial_backoff_ms}
Initial timeout (in milliseconds) to retry a failed Keeper request during INSERT query execution.
Possible values:
Positive integer.
0 — No timeout.
insert_keeper_retry_max_backoff_ms {#insert_keeper_retry_max_backoff_ms}
Maximum timeout (in milliseconds) to retry a failed Keeper request during INSERT query execution.
Possible values:
Positive integer.
0 — Maximum timeout is not limited.
insert_null_as_default {#insert_null_as_default}
Enables or disables the insertion of default values instead of NULL into columns with a non-nullable data type.
If the column type is not nullable and this setting is disabled, then inserting NULL causes an exception. If the column type is nullable, then NULL values are inserted as is, regardless of this setting.
This setting is applicable to INSERT ... SELECT queries. Note that SELECT subqueries may be concatenated with the UNION ALL clause.
Possible values:
0 — Inserting NULL into a not nullable column causes an exception.
1 — Default column value is inserted instead of NULL.
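A small sketch, assuming a throwaway Memory table, of how an inserted NULL is replaced when the setting is enabled:

```sql
-- v is not Nullable, so with the setting enabled the NULL from the
-- SELECT is replaced by the column's default value (0 for Int32).
CREATE TABLE insert_null_demo (v Int32) ENGINE = Memory;
INSERT INTO insert_null_demo SELECT NULL
SETTINGS insert_null_as_default = 1;
```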
insert_quorum {#insert_quorum}
:::note
This setting is not applicable to SharedMergeTree, see SharedMergeTree consistency for more information.
:::
Enables the quorum writes.
If insert_quorum < 2, the quorum writes are disabled.
If insert_quorum >= 2, the quorum writes are enabled.
If insert_quorum = 'auto', use the majority number (number_of_replicas / 2 + 1) as the quorum number.
Quorum writes
INSERT succeeds only when ClickHouse manages to correctly write data to the insert_quorum of replicas during the insert_quorum_timeout. If for any reason the number of replicas with successful writes does not reach the insert_quorum, the write is considered failed and ClickHouse will delete the inserted block from all the replicas where data has already been written.
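A hedged sketch of a quorum write; the table name and timeout value below are illustrative:

```sql
-- Hypothetical Replicated* table; the INSERT succeeds only if a majority
-- of replicas confirm the write within insert_quorum_timeout (ms).
INSERT INTO replicated_table VALUES (1)
SETTINGS insert_quorum = 'auto', insert_quorum_timeout = 60000;
```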
When insert_quorum_parallel is disabled, all replicas in the quorum are consistent, i.e. they contain data from all previous INSERT queries (the INSERT sequence is linearized). When reading data written using insert_quorum and insert_quorum_parallel is disabled, you can turn on sequential consistency for SELECT queries using select_sequential_consistency.
ClickHouse generates an exception:
If the number of available replicas at the time of the query is less than the insert_quorum.
When insert_quorum_parallel is disabled and an attempt to write data is made when the previous block has not yet been inserted in insert_quorum of replicas. This situation may occur if the user tries to perform another INSERT query to the same table before the previous one with insert_quorum is completed.
See also:
insert_quorum_timeout
insert_quorum_parallel
select_sequential_consistency
insert_quorum_parallel {#insert_quorum_parallel}
:::note
This setting is not applicable to SharedMergeTree, see
SharedMergeTree consistency
for more information.
:::
Enables or disables parallelism for quorum INSERT queries. If enabled, additional INSERT queries can be sent while previous queries have not yet finished. If disabled, additional writes to the same table will be rejected.
Possible values:
0 — Disabled.
1 — Enabled.
See also:
insert_quorum
insert_quorum_timeout
select_sequential_consistency
insert_quorum_timeout {#insert_quorum_timeout}
Write to a quorum timeout in milliseconds. If the timeout has passed and no write has taken place yet, ClickHouse will generate an exception and the client must repeat the query to write the same block to the same or any other replica.
See also:
insert_quorum
insert_quorum_parallel
select_sequential_consistency
insert_shard_id {#insert_shard_id}
If not 0, specifies the shard of a Distributed table into which the data will be inserted synchronously.
If the insert_shard_id value is incorrect, the server will throw an exception.
To get the number of shards on requested_cluster, you can check the server config or use this query:
```sql
SELECT uniq(shard_num) FROM system.clusters WHERE cluster = 'requested_cluster';
```
Possible values:
0 — Disabled.
Any number from 1 to shards_num of the corresponding Distributed table.
Example
Query:
```sql
CREATE TABLE x AS system.numbers ENGINE = MergeTree ORDER BY number;
CREATE TABLE x_dist AS x ENGINE = Distributed('test_cluster_two_shards_localhost', currentDatabase(), x);
INSERT INTO x_dist SELECT * FROM numbers(5) SETTINGS insert_shard_id = 1;
SELECT * FROM x_dist ORDER BY number ASC;
```
Result:
```text
┌─number─┐
│      0 │
│      0 │
│      1 │
│      1 │
│      2 │
│      2 │
│      3 │
│      3 │
│      4 │
│      4 │
└────────┘
```
interactive_delay {#interactive_delay}
The interval in microseconds for checking whether request execution has been canceled and sending the progress.
intersect_default_mode {#intersect_default_mode}
Sets the default mode in an INTERSECT query. Possible values: empty string, 'ALL', 'DISTINCT'. If empty, a query without a mode will throw an exception.
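A minimal sketch of setting the default mode so that a bare INTERSECT is valid:

```sql
SET intersect_default_mode = 'DISTINCT';
-- Without the setting (or an explicit ALL/DISTINCT), this INTERSECT
-- would throw an exception.
SELECT number FROM numbers(5)
INTERSECT
SELECT number FROM numbers(3, 5);
```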
jemalloc_collect_profile_samples_in_trace_log {#jemalloc_collect_profile_samples_in_trace_log}
Collect jemalloc allocation and deallocation samples in trace log.
jemalloc_enable_profiler {#jemalloc_enable_profiler}
Enable jemalloc profiler for the query. Jemalloc will sample allocations and all deallocations for sampled allocations.
Profiles can be flushed using SYSTEM JEMALLOC FLUSH PROFILE which can be used for allocation analysis.
Samples can also be stored in system.trace_log using config jemalloc_collect_global_profile_samples_in_trace_log or with query setting jemalloc_collect_profile_samples_in_trace_log.
See Allocation Profiling.
join_algorithm {#join_algorithm}
Specifies which JOIN algorithm is used.
Several algorithms can be specified, and an available one would be chosen for a particular query based on kind/strictness and table engine.
Possible values:
grace_hash
Grace hash join is used. Grace hash provides an algorithm option that provides performant complex joins while limiting memory use.
The first phase of a grace join reads the right table and splits it into N buckets depending on the hash value of key columns (initially, N is grace_hash_join_initial_buckets). This is done in a way to ensure that each bucket can be processed independently. Rows from the first bucket are added to an in-memory hash table while the others are saved to disk. If the hash table grows beyond the memory limit (e.g., as set by max_bytes_in_join), the number of buckets is increased and the bucket assignment is recomputed for each row. Any rows which don't belong to the current bucket are flushed and reassigned.
Supports INNER/LEFT/RIGHT/FULL ALL/ANY JOIN.
hash
Hash join algorithm is used. The most generic implementation that supports all combinations of kind and strictness and multiple join keys that are combined with OR in the JOIN ON section.
When using the hash algorithm, the right part of JOIN is uploaded into RAM.
parallel_hash
A variation of hash join that splits the data into buckets and builds several hash tables instead of one concurrently to speed up this process.
When using the parallel_hash algorithm, the right part of JOIN is uploaded into RAM.
partial_merge
A variation of the sort-merge algorithm, where only the right table is fully sorted.
The RIGHT JOIN and FULL JOIN are supported only with ALL strictness (SEMI, ANTI, ANY, and ASOF are not supported).
When using the partial_merge algorithm, ClickHouse sorts the data and dumps it to the disk. The partial_merge algorithm in ClickHouse differs slightly from the classic realization. First, ClickHouse sorts the right table by joining keys in blocks and creates a min-max index for sorted blocks. Then it sorts parts of the left table by the join key and joins them over the right table. The min-max index is also used to skip unneeded right table blocks.
direct
This algorithm can be applied when the storage for the right table supports key-value requests.
The direct algorithm performs a lookup in the right table using rows from the left table as keys. It's supported only by special storage such as Dictionary or EmbeddedRocksDB and only the LEFT and INNER JOINs.
auto
When set to auto, hash join is tried first, and the algorithm is switched on the fly to another algorithm if the memory limit is violated.
full_sorting_merge
Sort-merge algorithm with full sorting of joined tables before joining.
prefer_partial_merge
ClickHouse always tries to use partial_merge join if possible, otherwise, it uses hash. Deprecated, same as partial_merge,hash.
default (deprecated)
Legacy value, please don't use anymore.
Same as direct,hash, i.e. try to use direct join and hash join (in this order).
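A brief sketch of specifying several algorithms, letting the server choose an applicable one per query:

```sql
-- grace_hash is tried where applicable; hash serves as the fallback.
SET join_algorithm = 'grace_hash,hash';
SELECT count()
FROM numbers(1000000) AS t1
INNER JOIN numbers(1000000) AS t2 ON t1.number = t2.number;
```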
join_any_take_last_row {#join_any_take_last_row}
Changes the behaviour of join operations with ANY strictness.
:::note
This setting applies only for JOIN operations with Join engine tables.
:::
Possible values:
0 — If the right table has more than one matching row, only the first one found is joined.
1 — If the right table has more than one matching row, only the last one found is joined.
See also:
JOIN clause
Join table engine
join_default_strictness
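A hedged sketch with a Join engine table (the table and column names are made up); note the setting is applied at table creation here:

```sql
CREATE TABLE any_left_join (k UInt32, v String)
ENGINE = Join(ANY, LEFT, k)
SETTINGS join_any_take_last_row = 1;
INSERT INTO any_left_join VALUES (1, 'first'), (1, 'second');
-- With the setting enabled, the lookup reflects the last matching row.
SELECT joinGet('any_left_join', 'v', toUInt32(1));
```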
join_default_strictness {#join_default_strictness}
Sets default strictness for
JOIN clauses
.
Possible values:
ALL
β If the right table has several matching rows, ClickHouse creates a
Cartesian product
from matching rows. This is the normal
JOIN
behaviour from standard SQL.
ANY
— If the right table has several matching rows, only the first one found is joined. If the right table has only one matching row, the results of
ANY
and
ALL
are the same.
ASOF
— For joining sequences with an uncertain match.
Empty string
— If
ALL
or
ANY
is not specified in the query, ClickHouse throws an exception.
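For example (a sketch; t1 and t2 are hypothetical tables sharing an id column):

```sql
-- With an empty default strictness, a bare JOIN raises an exception:
SET join_default_strictness = '';
-- SELECT * FROM t1 JOIN t2 USING (id);  -- throws: strictness not specified

-- With ALL, a plain JOIN behaves like standard SQL:
SET join_default_strictness = 'ALL';
SELECT * FROM t1 JOIN t2 USING (id);
```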
join_on_disk_max_files_to_merge {#join_on_disk_max_files_to_merge}
Limits the number of files allowed for parallel sorting in MergeJoin operations when they are executed on disk.
The bigger the value of the setting, the more RAM is used and the less disk I/O is needed.
Possible values:
Any positive integer, starting from 2.
join_output_by_rowlist_perkey_rows_threshold {#join_output_by_rowlist_perkey_rows_threshold}
The lower limit of per-key average rows in the right table to determine whether to output by row list in hash join.
join_overflow_mode {#join_overflow_mode}
Defines what action ClickHouse performs when any of the following join limits is reached:
max_bytes_in_join
max_rows_in_join
Possible values:
THROW
— ClickHouse throws an exception and breaks operation.
BREAK
— ClickHouse breaks operation and does not throw an exception.
Default value:
THROW
.
See Also
JOIN clause
Join table engine
join_runtime_bloom_filter_bytes {#join_runtime_bloom_filter_bytes}
Size in bytes of a bloom filter used as JOIN runtime filter (see enable_join_runtime_filters setting).
join_runtime_bloom_filter_hash_functions {#join_runtime_bloom_filter_hash_functions}
Number of hash functions in a bloom filter used as JOIN runtime filter (see enable_join_runtime_filters setting).
join_runtime_filter_exact_values_limit {#join_runtime_filter_exact_values_limit}
Maximum number of elements in a runtime filter that are stored as-is in a set; when this threshold is exceeded, it switches to a bloom filter.
join_to_sort_maximum_table_rows {#join_to_sort_maximum_table_rows}
The maximum number of rows in the right table to determine whether to rerange the right table by key in left or inner join.
join_to_sort_minimum_perkey_rows {#join_to_sort_minimum_perkey_rows}
The lower limit of per-key average rows in the right table to determine whether to rerange the right table by key in left or inner join. This setting ensures that the optimization is not applied for sparse table keys.
join_use_nulls {#join_use_nulls}
Sets the type of
JOIN
behaviour. When merging tables, empty cells may appear. ClickHouse fills them differently based on this setting.
Possible values:
0 — The empty cells are filled with the default value of the corresponding field type.
1 —
JOIN
behaves the same way as in standard SQL. The type of the corresponding field is converted to
Nullable
, and empty cells are filled with
NULL
.
joined_block_split_single_row {#joined_block_split_single_row}
Allows chunking the hash join result by rows corresponding to a single row from the left table.
This may reduce memory usage when a row has many matches in the right table, but may increase CPU usage.
Note that
max_joined_block_size_rows != 0
is mandatory for this setting to have effect.
The
max_joined_block_size_bytes
combined with this setting is helpful to avoid excessive memory usage in case of skewed data with some large rows having many matches in right table.
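A sketch of combining these settings for skewed data (the table names and values are illustrative):

```sql
SELECT *
FROM big_left AS l
JOIN skewed_right AS r ON l.id = r.id
SETTINGS
    joined_block_split_single_row = 1,
    -- must be non-zero for the setting above to take effect
    max_joined_block_size_rows = 65536,
    -- additionally caps the byte size of each joined block
    max_joined_block_size_bytes = 67108864;
```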
joined_subquery_requires_alias {#joined_subquery_requires_alias}
Force joined subqueries and table functions to have aliases for correct name qualification.
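For instance (a minimal sketch):

```sql
SET joined_subquery_requires_alias = 1;

-- Fails: the joined subqueries carry no aliases.
-- SELECT * FROM (SELECT 1 AS x) JOIN (SELECT 1 AS x) USING (x);

-- Works: both sides are aliased, so column names can be qualified.
SELECT a.x
FROM (SELECT 1 AS x) AS a
JOIN (SELECT 1 AS x) AS b USING (x);
```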
kafka_disable_num_consumers_limit {#kafka_disable_num_consumers_limit}
Disable limit on kafka_num_consumers that depends on the number of available CPU cores.
kafka_max_wait_ms {#kafka_max_wait_ms}
The wait time in milliseconds for reading messages from
Kafka
before retry.
Possible values:
Positive integer.
0 — Infinite timeout.
See also:
Apache Kafka
keeper_map_strict_mode {#keeper_map_strict_mode}
Enforce additional checks during operations on KeeperMap. E.g. throw an exception on an insert for an already existing key.
keeper_max_retries {#keeper_max_retries}
Max retries for general keeper operations
keeper_retry_initial_backoff_ms {#keeper_retry_initial_backoff_ms}
Initial backoff timeout for general keeper operations
keeper_retry_max_backoff_ms {#keeper_retry_max_backoff_ms}
Max backoff timeout for general keeper operations
least_greatest_legacy_null_behavior {#least_greatest_legacy_null_behavior}
If enabled, functions 'least' and 'greatest' return NULL if one of their arguments is NULL.
legacy_column_name_of_tuple_literal {#legacy_column_name_of_tuple_literal}
List all names of elements of large tuple literals in their column names instead of a hash. This setting exists only for compatibility reasons. It makes sense to set it to 'true' while doing a rolling update of a cluster from a version lower than 21.7 to a higher one.
lightweight_delete_mode {#lightweight_delete_mode}
A mode of internal update query that is executed as a part of lightweight delete.
Possible values:
-
alter_update
- run
ALTER UPDATE
query that creates a heavyweight mutation.
-
lightweight_update
- run lightweight update if possible, run
ALTER UPDATE
otherwise.
-
lightweight_update_force
- run lightweight update if possible, throw otherwise.
lightweight_deletes_sync {#lightweight_deletes_sync}
The same as
mutations_sync
, but controls only execution of lightweight deletes.
Possible values:
| Value | Description |
|-------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
|
0
| Mutations execute asynchronously. |
|
1
| The query waits for the lightweight deletes to complete on the current server. |
|
2
| The query waits for the lightweight deletes to complete on all replicas (if they exist). |
|
3
| The query waits only for active replicas. Supported only for
SharedMergeTree
. For
ReplicatedMergeTree
it behaves the same as
mutations_sync = 2
.|
See Also
Synchronicity of ALTER Queries
Mutations
limit {#limit}
Sets the maximum number of rows to get from the query result. It adjusts the value set by the
LIMIT
clause, so that the limit specified in the query cannot exceed the limit set by this setting.
Possible values:
0 — The number of rows is not limited.
Positive integer.
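A sketch of how the setting caps an explicit LIMIT clause:

```sql
SET limit = 5;

-- Returns 5 rows: the query's LIMIT 100 cannot exceed the setting.
SELECT number FROM numbers(1000) LIMIT 100;

-- Returns 3 rows: a smaller LIMIT is left untouched.
SELECT number FROM numbers(1000) LIMIT 3;
```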
load_balancing {#load_balancing}
Specifies the algorithm of replicas selection that is used for distributed query processing.
ClickHouse supports the following algorithms of choosing replicas:
Random
(by default)
Nearest hostname
Hostname levenshtein distance
In order
First or random
Round robin
See also:
distributed_replica_max_ignored_errors
Random (by Default) {#load_balancing-random}
```sql
load_balancing = random
```
The number of errors is counted for each replica. The query is sent to the replica with the fewest errors, and if there are several of these, to any one of them.
Disadvantages: Server proximity is not accounted for; if the replicas have different data, you will also get different data.
Nearest Hostname {#load_balancing-nearest_hostname}
```sql
load_balancing = nearest_hostname
```
The number of errors is counted for each replica. Every 5 minutes, the number of errors is integrally divided by 2. Thus, the number of errors is calculated for a recent time with exponential smoothing. If there is one replica with a minimal number of errors (i.e. errors occurred recently on the other replicas), the query is sent to it. If there are multiple replicas with the same minimal number of errors, the query is sent to the replica with a hostname that is most similar to the server's hostname in the config file (for the number of different characters in identical positions, up to the minimum length of both hostnames).
For instance, example01-01-1 and example01-01-2 are different in one position, while example01-01-1 and example01-02-2 differ in two places.
This method might seem primitive, but it does not require external data about network topology, and it does not compare IP addresses, which would be complicated for our IPv6 addresses.
Thus, if there are equivalent replicas, the closest one by name is preferred.
We can also assume that when sending a query to the same server, in the absence of failures, a distributed query will also go to the same servers. So even if different data is placed on the replicas, the query will return mostly the same results.
Hostname levenshtein distance {#load_balancing-hostname_levenshtein_distance}
```sql
load_balancing = hostname_levenshtein_distance
```
Just like
nearest_hostname
, but it compares hostname in a
levenshtein distance
manner. For example:
```text
example-clickhouse-0-0 ample-clickhouse-0-0
1
example-clickhouse-0-0 example-clickhouse-1-10
2
example-clickhouse-0-0 example-clickhouse-12-0
3
```
In Order {#load_balancing-in_order}
```sql
load_balancing = in_order
```
Replicas with the same number of errors are accessed in the same order as they are specified in the configuration.
This method is appropriate when you know exactly which replica is preferable.
First or Random {#load_balancing-first_or_random}
```sql
load_balancing = first_or_random
```
This algorithm chooses the first replica in the set or a random replica if the first is unavailable. It's effective in cross-replication topology setups, but useless in other configurations.
The
first_or_random
algorithm solves the problem of the
in_order
algorithm. With
in_order
, if one replica goes down, the next one gets a double load while the remaining replicas handle the usual amount of traffic. When using the
first_or_random
algorithm, the load is evenly distributed among replicas that are still available.
It's possible to explicitly define what the first replica is by using the setting
load_balancing_first_offset
. This gives more control to rebalance query workloads among replicas.
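For example, different client groups can pin different preferred replicas (a sketch; the offset value is illustrative):

```sql
SET load_balancing = 'first_or_random';
-- Treat a different replica from the configuration as the "first" one;
-- if it is unavailable, a random remaining replica is chosen.
SET load_balancing_first_offset = 1;
```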
Round Robin {#load_balancing-round_robin}
```sql
load_balancing = round_robin
```
This algorithm uses a round-robin policy across replicas with the same number of errors (only the queries with
round_robin
policy are counted).
load_balancing_first_offset {#load_balancing_first_offset}
Which replica to preferably send a query to when the FIRST_OR_RANDOM load balancing strategy is used.
load_marks_asynchronously {#load_marks_asynchronously}
Load MergeTree marks asynchronously
local_filesystem_read_method {#local_filesystem_read_method}
Method of reading data from local filesystem, one of: read, pread, mmap, io_uring, pread_threadpool.
The 'io_uring' method is experimental and does not work for Log, TinyLog, StripeLog, File, Set and Join, and other tables with append-able files in presence of concurrent reads and writes.
If you read various articles about 'io_uring' on the Internet, don't be blinded by them. It is not a better method of reading files except in the case of a large number of small IO requests, which is not typical for ClickHouse. There is no reason to enable 'io_uring'.
local_filesystem_read_prefetch {#local_filesystem_read_prefetch}
Should use prefetching when reading data from local filesystem.
lock_acquire_timeout {#lock_acquire_timeout}
Defines how many seconds a locking request waits before failing.
Locking timeout is used to protect from deadlocks while executing read/write operations with tables. When the timeout expires and the locking request fails, the ClickHouse server throws an exception "Locking attempt timed out! Possible deadlock avoided. Client should retry." with error code
DEADLOCK_AVOIDED
.
Possible values:
Positive integer (in seconds).
0 — No locking timeout.
log_comment {#log_comment}
Specifies the value for the
log_comment
field of the
system.query_log
table and comment text for the server log.
It can be used to improve the readability of server logs. Additionally, it helps to select queries related to the test from the
system.query_log
after running
clickhouse-test
.
Possible values:
Any string no longer than
max_query_size
. If the max_query_size is exceeded, the server throws an exception.
Example
Query:
```sql
SET log_comment = 'log_comment test', log_queries = 1;
SELECT 1;
SYSTEM FLUSH LOGS;
SELECT type, query FROM system.query_log WHERE log_comment = 'log_comment test' AND event_date >= yesterday() ORDER BY event_time DESC LIMIT 2;
```
Result:
```text
┌─type────────┬─query─────┐
│ QueryStart  │ SELECT 1; │
│ QueryFinish │ SELECT 1; │
└─────────────┴───────────┘
```
log_formatted_queries {#log_formatted_queries}
Allows to log formatted queries to the
system.query_log
system table (populates
formatted_query
column in the
system.query_log
).
Possible values:
0 — Formatted queries are not logged in the system table.
1 — Formatted queries are logged in the system table.
log_processors_profiles {#log_processors_profiles}
Write time that processor spent during execution/waiting for data to
system.processors_profile_log
table.
See also:
system.processors_profile_log
EXPLAIN PIPELINE
log_profile_events {#log_profile_events}
Log query performance statistics into the query_log, query_thread_log and query_views_log.
log_queries {#log_queries}
Setting up query logging.
Queries sent to ClickHouse with this setup are logged according to the rules in the
query_log
server configuration parameter.
Example:
```text
log_queries=1
```
log_queries_cut_to_length {#log_queries_cut_to_length}
If the query length is greater than the specified threshold (in bytes), the query is truncated when written to the query log. This also limits the length of the printed query in the ordinary text log.
log_queries_min_query_duration_ms {#log_queries_min_query_duration_ms}
If enabled (non-zero), queries faster than the value of this setting will not be logged (you can think about this as a
long_query_time
for
MySQL Slow Query Log
), and this basically means that you will not find them in the following tables:
system.query_log
system.query_thread_log
Only the queries with the following type will get to the log:
QUERY_FINISH
EXCEPTION_WHILE_PROCESSING
Type: milliseconds
Default value: 0 (any query)
log_queries_min_type {#log_queries_min_type}
query_log
minimal type to log.
Possible values:
-
QUERY_START
(
=1
)
-
QUERY_FINISH
(
=2
)
-
EXCEPTION_BEFORE_START
(
=3
)
-
EXCEPTION_WHILE_PROCESSING
(
=4
)
Can be used to limit which entities will go to
query_log
, say you are interested only in errors, then you can use
EXCEPTION_WHILE_PROCESSING
:
```text
log_queries_min_type='EXCEPTION_WHILE_PROCESSING'
```
log_queries_probability {#log_queries_probability}
Allows a user to write to
query_log
,
query_thread_log
, and
query_views_log
system tables only a sample of queries selected randomly with the specified probability. It helps to reduce the load with a large volume of queries in a second.
Possible values:
0 — Queries are not logged in the system tables.
Positive floating-point number in the range [0..1]. For example, if the setting value is
0.5
, about half of the queries are logged in the system tables.
1 — All queries are logged in the system tables.
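For example, to sample roughly every tenth query into the logs (a sketch):

```sql
SET log_queries = 1;
-- About 10% of queries end up in system.query_log and the related tables.
SET log_queries_probability = 0.1;
```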
log_query_settings {#log_query_settings}
Log query settings into the query_log and OpenTelemetry span log.
log_query_threads {#log_query_threads}
Setting up query threads logging.
Query threads log into the
system.query_thread_log
table. This setting has effect only when
log_queries
is true. Queries' threads run by ClickHouse with this setup are logged according to the rules in the
query_thread_log
server configuration parameter.
Possible values:
0 — Disabled.
1 — Enabled.
Example
```text
log_query_threads=1
```
log_query_views {#log_query_views}
Setting up query views logging.
When a query run by ClickHouse with this setting enabled has associated views (materialized or live views), they are logged in the
query_views_log
server configuration parameter.
Example:
```text
log_query_views=1
```
low_cardinality_allow_in_native_format {#low_cardinality_allow_in_native_format}
Allows or restricts using the
LowCardinality
data type with the
Native
format.
If usage of
LowCardinality
is restricted, ClickHouse server converts
LowCardinality
-columns to ordinary ones for
SELECT
queries, and convert ordinary columns to
LowCardinality
-columns for
INSERT
queries.
This setting is required mainly for third-party clients which do not support
LowCardinality
data type.
Possible values:
1 — Usage of
LowCardinality
is not restricted.
0 — Usage of
LowCardinality
is restricted.
low_cardinality_max_dictionary_size {#low_cardinality_max_dictionary_size}
Sets a maximum size in rows of a shared global dictionary for the
LowCardinality
data type that can be written to a storage file system. This setting prevents issues with RAM in case of unlimited dictionary growth. Any data that can't be encoded due to the maximum dictionary size limitation is written by ClickHouse in the ordinary way.
Possible values:
Any positive integer.
low_cardinality_use_single_dictionary_for_part {#low_cardinality_use_single_dictionary_for_part}
Turns on or off the use of a single dictionary for the data part.
By default, the ClickHouse server monitors the size of dictionaries and if a dictionary overflows then the server starts to write the next one. To prohibit creating several dictionaries set
low_cardinality_use_single_dictionary_for_part = 1
.
Possible values:
1 — Creating several dictionaries for the data part is prohibited.
0 — Creating several dictionaries for the data part is not prohibited.
low_priority_query_wait_time_ms {#low_priority_query_wait_time_ms}
When the query prioritization mechanism is employed (see setting
priority
), low-priority queries wait for higher-priority queries to finish. This setting specifies the duration of waiting.
make_distributed_plan {#make_distributed_plan}
Make distributed query plan.
materialize_skip_indexes_on_insert {#materialize_skip_indexes_on_insert}
If INSERTs build and store skip indexes. If disabled, skip indexes will only be built and stored
during merges
or by explicit
MATERIALIZE INDEX
.
See also
exclude_materialize_skip_indexes_on_insert
.
materialize_statistics_on_insert {#materialize_statistics_on_insert}
If INSERTs build and insert statistics. If disabled, statistics will be built and stored during merges or by an explicit MATERIALIZE STATISTICS.
materialize_ttl_after_modify {#materialize_ttl_after_modify}
Apply TTL for old data, after ALTER MODIFY TTL query
materialized_views_ignore_errors {#materialized_views_ignore_errors}
Allows ignoring errors for MATERIALIZED VIEWs and delivering the original block to the table regardless of the MVs.
materialized_views_squash_parallel_inserts {#materialized_views_squash_parallel_inserts}
Squash parallel inserts of a single INSERT query into the materialized views' destination table to reduce the number of generated parts.
If set to false and
parallel_view_processing
is enabled, the INSERT query will generate a part in the destination table for each
max_insert_thread
.
max_analyze_depth {#max_analyze_depth}
Maximum number of analyses performed by interpreter.
max_ast_depth {#max_ast_depth}
The maximum nesting depth of a query syntactic tree. If exceeded, an exception is thrown.
:::note
At this time, it isn't checked during parsing, but only after parsing the query.
This means that a syntactic tree that is too deep can be created during parsing,
but the query will fail.
:::
max_ast_elements {#max_ast_elements}
The maximum number of elements in a query syntactic tree. If exceeded, an exception is thrown.
:::note
At this time, it isn't checked during parsing, but only after parsing the query.
This means that a syntactic tree that is too deep can be created during parsing,
but the query will fail.
:::
max_autoincrement_series {#max_autoincrement_series}
The limit on the number of series created by the
generateSerialID
function.
As each series represents a node in Keeper, it is recommended to have no more than a couple million of them.
max_backup_bandwidth {#max_backup_bandwidth}
The maximum read speed in bytes per second for particular backup on server. Zero means unlimited.
max_block_size {#max_block_size}
In ClickHouse, data is processed by blocks, which are sets of column parts. The internal processing cycles for a single block are efficient but there are noticeable costs when processing each block.
The
max_block_size
setting indicates the recommended maximum number of rows to include in a single block when loading data from tables. Blocks the size of
max_block_size
are not always loaded from the table: if ClickHouse determines that less data needs to be retrieved, a smaller block is processed.
The block size should not be too small to avoid noticeable costs when processing each block. It should also not be too large to ensure that queries with a LIMIT clause execute quickly after processing the first block. When setting
max_block_size
, the goal should be to avoid consuming too much memory when extracting a large number of columns in multiple threads and to preserve at least some cache locality.
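The setting can also be applied per query; a sketch:

```sql
-- Ask the reading step to produce blocks of at most 4096 rows for this query.
SELECT count()
FROM numbers(1000000)
SETTINGS max_block_size = 4096;
```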
max_bytes_before_external_group_by {#max_bytes_before_external_group_by}
Cloud default value: half the memory amount per replica.
Enables or disables execution of
GROUP BY
clauses in external memory.
(See
GROUP BY in external memory
)
Possible values:
Maximum volume of RAM (in bytes) that can be used by the single
GROUP BY
operation.
0
—
GROUP BY
in external memory disabled.
:::note
If memory usage during GROUP BY operations is exceeding this threshold in bytes,
activate the 'external aggregation' mode (spill data to disk).
The recommended value is half of the available system memory.
:::
max_bytes_before_external_sort {#max_bytes_before_external_sort}
Cloud default value: half the memory amount per replica.
Enables or disables execution of
ORDER BY
clauses in external memory. See
ORDER BY Implementation Details
If memory usage during ORDER BY operation exceeds this threshold in bytes, the 'external sorting' mode (spill data to disk) is activated.
Possible values:
Maximum volume of RAM (in bytes) that can be used by the single
ORDER BY
operation.
The recommended value is half of available system memory
0
—
ORDER BY
in external memory disabled.
max_bytes_before_remerge_sort {#max_bytes_before_remerge_sort}
In the case of ORDER BY with LIMIT, when memory usage is higher than the specified threshold, perform additional steps of merging blocks before the final merge to keep just the top LIMIT rows.
max_bytes_in_distinct {#max_bytes_in_distinct}
The maximum number of bytes of the state (in uncompressed bytes) in memory, which
is used by a hash table when using DISTINCT.
max_bytes_in_join {#max_bytes_in_join}
The maximum size in number of bytes of the hash table used when joining tables.
This setting applies to
SELECT ... JOIN
operations and the
Join table engine
.
If the query contains joins, ClickHouse checks this setting for every intermediate result.
ClickHouse can proceed with different actions when the limit is reached. Use
the
join_overflow_mode
settings to choose the action.
Possible values:
Positive integer.
0 — Memory control is disabled.
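A sketch of bounding join memory together with the overflow action (the table names and limit value are illustrative):

```sql
SELECT *
FROM left_table AS l
JOIN right_table AS r ON l.id = r.id
SETTINGS
    max_bytes_in_join = 1000000000,  -- cap the join hash table at ~1 GB
    join_overflow_mode = 'break';    -- return a partial result instead of throwing
```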
max_bytes_in_set {#max_bytes_in_set}
The maximum number of bytes (of uncompressed data) used by a set in the IN clause
created from a subquery.
max_bytes_ratio_before_external_group_by {#max_bytes_ratio_before_external_group_by}
The ratio of available memory that is allowed for
GROUP BY
. Once reached,
external memory is used for aggregation.
For example, if set to
0.6
,
GROUP BY
will allow using 60% of the available memory
(to server/user/merges) at the beginning of the execution, after that, it will
start using external aggregation.
max_bytes_ratio_before_external_sort {#max_bytes_ratio_before_external_sort}
The ratio of available memory that is allowed for
ORDER BY
. Once reached, external sort is used.
For example, if set to
0.6
,
ORDER BY
will allow using
60%
of available memory (to server/user/merges) at the beginning of the execution, after that, it will start using external sort. | {"source_file": "settings.md"} | [
-0.0015338611556217074,
-0.0621892511844635,
-0.010220425203442574,
0.010871177539229393,
-0.03413422778248787,
-0.0269502941519022,
-0.034369707107543945,
0.06300380080938339,
0.053407516330480576,
0.0863099992275238,
0.014273076318204403,
0.11816142499446869,
0.023173972964286804,
-0.034... |
d62e068d-dec0-4374-82e3-ef2853d52157 | For example, if set to
0.6
,
ORDER BY
will allow using
60%
of available memory (to server/user/merges) at the beginning of the execution, after that, it will start using external sort.
Note, that
max_bytes_before_external_sort
is still respected, spilling to disk will be done only if the sorting block is bigger than
max_bytes_before_external_sort
.
max_bytes_to_read {#max_bytes_to_read}
The maximum number of bytes (of uncompressed data) that can be read from a table when running a query.
The restriction is checked for each processed chunk of data, applied only to the
deepest table expression and when reading from a remote server, checked only on
the remote server.
max_bytes_to_read_leaf {#max_bytes_to_read_leaf}
The maximum number of bytes (of uncompressed data) that can be read from a local
table on a leaf node when running a distributed query. While distributed queries
can issue multiple sub-queries to each shard (leaf), this limit will
be checked only on the read stage on the leaf nodes and will be ignored on the
merging of results stage on the root node.
For example, a cluster consists of 2 shards and each shard contains a table with
100 bytes of data. A distributed query which is supposed to read all the data
from both tables with setting
max_bytes_to_read=150
will fail as in total it
will be 200 bytes. A query with
max_bytes_to_read_leaf=150
will succeed since
leaf nodes will read 100 bytes at max.
The restriction is checked for each processed chunk of data.
:::note
This setting is unstable with
prefer_localhost_replica=1
.
:::
max_bytes_to_sort {#max_bytes_to_sort}
The maximum number of bytes before sorting. If more than the specified amount of
uncompressed bytes have to be processed for ORDER BY operation, the behavior will
be determined by the
sort_overflow_mode
which by default is set to
throw
.
max_bytes_to_transfer {#max_bytes_to_transfer}
The maximum number of bytes (uncompressed data) that can be passed to a remote
server or saved in a temporary table when the GLOBAL IN/JOIN section is executed.
max_columns_to_read {#max_columns_to_read}
The maximum number of columns that can be read from a table in a single query.
If a query requires reading more than the specified number of columns, an exception
is thrown.
:::tip
This setting is useful for preventing overly complex queries.
:::
0
value means unlimited.
max_compress_block_size {#max_compress_block_size}
The maximum size of blocks of uncompressed data before compressing for writing to a table. By default, 1,048,576 (1 MiB). Specifying a smaller block size generally leads to slightly reduced compression ratio, the compression and decompression speed increases slightly due to cache locality, and memory consumption is reduced.
:::note
This is an expert-level setting, and you shouldn't change it if you're just getting started with ClickHouse.
::: | {"source_file": "settings.md"} | [
0.01907772570848465,
-0.062192171812057495,
-0.006276830565184355,
0.0419502891600132,
-0.012622877024114132,
-0.05900115892291069,
-0.05108623579144478,
0.06157126650214195,
0.040395691990852356,
0.08173349499702454,
-0.010186368599534035,
0.15359443426132202,
0.0626664012670517,
-0.04724... |
a96a7df6-44d3-4514-8614-f86dbb98d662 | :::note
This is an expert-level setting, and you shouldn't change it if you're just getting started with ClickHouse.
:::
Don't confuse blocks for compression (a chunk of memory consisting of bytes) with blocks for query processing (a set of rows from a table).
max_concurrent_queries_for_all_users {#max_concurrent_queries_for_all_users}
Throw an exception if the value of this setting is less than or equal to the current number of simultaneously processed queries.
Example:
max_concurrent_queries_for_all_users
can be set to 99 for all users and database administrator can set it to 100 for itself to run queries for investigation even when the server is overloaded.
Modifying the setting for one query or user does not affect other queries.
Possible values:
Positive integer.
0 — No limit.
Example
xml
<max_concurrent_queries_for_all_users>99</max_concurrent_queries_for_all_users>
See Also
max_concurrent_queries
max_concurrent_queries_for_user {#max_concurrent_queries_for_user}
The maximum number of simultaneously processed queries per user.
Possible values:
Positive integer.
0 — No limit.
Example
xml
<max_concurrent_queries_for_user>5</max_concurrent_queries_for_user>
max_distributed_connections {#max_distributed_connections}
The maximum number of simultaneous connections with remote servers for distributed processing of a single query to a single Distributed table. We recommend setting a value no less than the number of servers in the cluster.
The following parameters are only used when creating Distributed tables (and when launching a server), so there is no reason to change them at runtime.
max_distributed_depth {#max_distributed_depth}
Limits the maximum depth of recursive queries for
Distributed
tables.
If the value is exceeded, the server throws an exception.
Possible values:
Positive integer.
0 — Unlimited depth.
max_download_buffer_size {#max_download_buffer_size}
The maximal size of the buffer for parallel downloading (e.g. for the URL engine) per thread.
max_download_threads {#max_download_threads}
The maximum number of threads to download data (e.g. for URL engine).
max_estimated_execution_time {#max_estimated_execution_time}
Maximum estimated query execution time in seconds. Checked on every data block when timeout_before_checking_execution_speed expires.
max_execution_speed {#max_execution_speed}
The maximum number of execution rows per second. Checked on every data block when
timeout_before_checking_execution_speed
expires. If the execution speed is high, the execution speed will be reduced.
max_execution_speed_bytes {#max_execution_speed_bytes}
The maximum number of execution bytes per second. Checked on every data block when
timeout_before_checking_execution_speed
expires. If the execution speed is high, the execution speed will be reduced.
max_execution_time {#max_execution_time}
The maximum query execution time in seconds.
The max_execution_time parameter can be a bit tricky to understand. It operates based on interpolation relative to the current query execution speed (this behaviour is controlled by timeout_before_checking_execution_speed). ClickHouse will interrupt a query if the projected execution time exceeds the specified max_execution_time. By default, timeout_before_checking_execution_speed is set to 10 seconds. This means that after 10 seconds of query execution, ClickHouse will begin estimating the total execution time. If, for example, max_execution_time is set to 3600 seconds (1 hour), ClickHouse will terminate the query if the estimated time exceeds this 3600-second limit. If you set timeout_before_checking_execution_speed to 0, ClickHouse will use the clock time as the basis for max_execution_time.
If query runtime exceeds the specified number of seconds, the behavior will be determined by the timeout_overflow_mode, which by default is set to throw.
:::note
The timeout is checked and the query can stop only in designated places during data processing.
It currently cannot stop during merging of aggregation states or during query analysis,
and the actual run time will be higher than the value of this setting.
:::
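The behavior described above can be sketched as follows (the table name is hypothetical):

```sql
-- Interrupt the query if the projected total runtime exceeds 1 hour
SELECT count() FROM big_table SETTINGS max_execution_time = 3600;

-- Use wall-clock time only, disabling the speed-based projection
SELECT count() FROM big_table
SETTINGS max_execution_time = 3600, timeout_before_checking_execution_speed = 0;
```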
max_execution_time_leaf {#max_execution_time_leaf}
Similar semantically to max_execution_time but only applied on leaf nodes for distributed or remote queries.
For example, if we want to limit the execution time on a leaf node to 10s but have no limit on the initial node, instead of having max_execution_time in the nested subquery settings:
sql
SELECT count()
FROM cluster(cluster, view(SELECT * FROM t SETTINGS max_execution_time = 10));
We can use max_execution_time_leaf as the query settings:
sql
SELECT count()
FROM cluster(cluster, view(SELECT * FROM t)) SETTINGS max_execution_time_leaf = 10;
max_expanded_ast_elements {#max_expanded_ast_elements}
Maximum size of query syntax tree in number of nodes after expansion of aliases and the asterisk.
max_fetch_partition_retries_count {#max_fetch_partition_retries_count}
Amount of retries while fetching partition from another host.
max_final_threads {#max_final_threads}
Sets the maximum number of parallel threads for the SELECT query data read phase with the FINAL modifier.
Possible values:
Positive integer.
0 or 1 — Disabled. SELECT queries are executed in a single thread.
max_http_get_redirects {#max_http_get_redirects}
Max number of HTTP GET redirect hops allowed. Ensures additional security measures are in place to prevent a malicious server from redirecting your requests to unexpected services.
Consider the case when an external server redirects to another address that appears to be internal to the company's infrastructure; by sending an HTTP request to an internal server, you could request an internal API from the internal network, bypassing the auth, or even query other services, such as Redis or Memcached. When you don't have an internal infrastructure (including something running on your localhost), or you trust the server, it is safe to allow redirects. Keep in mind, though, that if the URL uses HTTP instead of HTTPS, you will have to trust not only the remote server but also your ISP and every network in the middle.
max_hyperscan_regexp_length {#max_hyperscan_regexp_length}
Defines the maximum length for each regular expression in the hyperscan multi-match functions.
Possible values:
Positive integer.
0 - The length is not limited.
Example
Query:
sql
SELECT multiMatchAny('abcd', ['ab','bcd','c','d']) SETTINGS max_hyperscan_regexp_length = 3;
Result:
text
┌─multiMatchAny('abcd', ['ab', 'bcd', 'c', 'd'])─┐
│                                              1 │
└────────────────────────────────────────────────┘
Query:
sql
SELECT multiMatchAny('abcd', ['ab','bcd','c','d']) SETTINGS max_hyperscan_regexp_length = 2;
Result:
text
Exception: Regexp length too large.
See Also
max_hyperscan_regexp_total_length
max_hyperscan_regexp_total_length {#max_hyperscan_regexp_total_length}
Sets the maximum total length of all regular expressions in each hyperscan multi-match function.
Possible values:
Positive integer.
0 - The length is not limited.
Example
Query:
sql
SELECT multiMatchAny('abcd', ['a','b','c','d']) SETTINGS max_hyperscan_regexp_total_length = 5;
Result:
text
┌─multiMatchAny('abcd', ['a', 'b', 'c', 'd'])─┐
│                                           1 │
└─────────────────────────────────────────────┘
Query:
sql
SELECT multiMatchAny('abcd', ['ab','bc','c','d']) SETTINGS max_hyperscan_regexp_total_length = 5;
Result:
text
Exception: Total regexp lengths too large.
See Also
max_hyperscan_regexp_length
max_insert_block_size {#max_insert_block_size}
The size of blocks (in a count of rows) to form for insertion into a table.
This setting only applies in cases when the server forms the blocks.
For example, for an INSERT via the HTTP interface, the server parses the data format and forms blocks of the specified size.
But when using clickhouse-client, the client parses the data itself, and the 'max_insert_block_size' setting on the server does not affect the size of the inserted blocks.
The setting also does not have a purpose when using INSERT SELECT, since data is inserted using the same blocks that are formed after SELECT.
The default is slightly more than max_block_size. The reason for this is that certain table engines (*MergeTree) form a data part on the disk for each inserted block, which is a fairly large entity. Similarly, *MergeTree tables sort data during insertion, and a large enough block size allows sorting more data in RAM.
max_insert_delayed_streams_for_parallel_write {#max_insert_delayed_streams_for_parallel_write}
The maximum number of streams (columns) to delay final part flush. Default: auto (100 if the underlying storage supports parallel write, for example S3; disabled otherwise).
max_insert_threads {#max_insert_threads}
The maximum number of threads to execute the INSERT SELECT query.
Possible values:
0 (or 1) — INSERT SELECT is executed without parallelism.
Positive integer bigger than 1.
Cloud default value:
- 1 for nodes with 8 GiB memory
- 2 for nodes with 16 GiB memory
- 4 for larger nodes
Parallel INSERT SELECT has an effect only if the SELECT part is executed in parallel, see the max_threads setting.
Higher values will lead to higher memory usage.
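A minimal sketch (table names are hypothetical); note that the SELECT part must itself run in parallel for the insert parallelism to take effect:

```sql
INSERT INTO target SELECT * FROM source
SETTINGS max_threads = 4, max_insert_threads = 4;
```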
max_joined_block_size_bytes {#max_joined_block_size_bytes}
Maximum block size in bytes for JOIN result (if join algorithm supports it). 0 means unlimited.
max_joined_block_size_rows {#max_joined_block_size_rows}
Maximum block size for JOIN result (if join algorithm supports it). 0 means unlimited.
max_limit_for_vector_search_queries {#max_limit_for_vector_search_queries}
SELECT queries with LIMIT bigger than this setting cannot use vector similarity indices. Helps to prevent memory overflows in vector similarity indices.
max_local_read_bandwidth {#max_local_read_bandwidth}
The maximum speed of local reads in bytes per second.
max_local_write_bandwidth {#max_local_write_bandwidth}
The maximum speed of local writes in bytes per second.
max_memory_usage {#max_memory_usage}
Cloud default value: depends on the amount of RAM on the replica.
The maximum amount of RAM to use for running a query on a single server.
A value of 0 means unlimited.
This setting does not consider the volume of available memory or the total volume
of memory on the machine. The restriction applies to a single query within a
single server.
You can use SHOW PROCESSLIST to see the current memory consumption for each query.
Peak memory consumption is tracked for each query and written to the log.
Memory usage is not fully tracked for states of the following aggregate functions from String and Array arguments:
- min
- max
- any
- anyLast
- argMin
- argMax
Memory consumption is also restricted by the parameters max_memory_usage_for_user and max_server_memory_usage.
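A hedged example (the table name is hypothetical); the limit is specified in bytes and applies to this query on a single server:

```sql
-- Cap a single aggregation query at roughly 10 GiB of RAM
SELECT user_id, count() FROM events GROUP BY user_id
SETTINGS max_memory_usage = 10000000000;
```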
max_memory_usage_for_user {#max_memory_usage_for_user}
The maximum amount of RAM to use for running a user's queries on a single server. Zero means unlimited.
By default, the amount is not restricted (max_memory_usage_for_user = 0).
Also see the description of max_memory_usage.
For example, if you want to set max_memory_usage_for_user to 1000 bytes for a user named clickhouse_read, you can use the statement:
sql
ALTER USER clickhouse_read SETTINGS max_memory_usage_for_user = 1000;
You can verify it worked by logging out of your client, logging back in, then using the getSetting function:
sql
SELECT getSetting('max_memory_usage_for_user');
max_network_bandwidth {#max_network_bandwidth}
Limits the speed of the data exchange over the network in bytes per second. This setting applies to every query.
Possible values:
Positive integer.
0 — Bandwidth control is disabled.
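A minimal sketch (the table name is hypothetical):

```sql
-- Throttle this query's network exchange to roughly 100 MB/s
SELECT * FROM distributed_table SETTINGS max_network_bandwidth = 100000000;
```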
max_network_bandwidth_for_all_users {#max_network_bandwidth_for_all_users}
Limits the speed that data is exchanged at over the network in bytes per second. This setting applies to all concurrently running queries on the server.
Possible values:
Positive integer.
0 — Control of the data speed is disabled.
max_network_bandwidth_for_user {#max_network_bandwidth_for_user}
Limits the speed of the data exchange over the network in bytes per second. This setting applies to all concurrently running queries performed by a single user.
Possible values:
Positive integer.
0 — Control of the data speed is disabled.
max_network_bytes {#max_network_bytes}
Limits the data volume (in bytes) that is received or transmitted over the network when executing a query. This setting applies to every individual query.
Possible values:
Positive integer.
0 — Data volume control is disabled.
max_number_of_partitions_for_independent_aggregation {#max_number_of_partitions_for_independent_aggregation}
Maximal number of partitions in a table to apply optimization.
max_os_cpu_wait_time_ratio_to_throw {#max_os_cpu_wait_time_ratio_to_throw}
Max ratio between OS CPU wait (OSCPUWaitMicroseconds metric) and busy (OSCPUVirtualTimeMicroseconds metric) times to consider rejecting queries. Linear interpolation between the min and max ratio is used to calculate the probability; the probability is 1 at this point.
max_parallel_replicas {#max_parallel_replicas}
The maximum number of replicas for each shard when executing a query.
Possible values:
Positive integer.
Additional Info
This option will produce different results depending on the settings used.
:::note
This setting will produce incorrect results when joins or subqueries are involved, and not all tables meet certain requirements. See Distributed Subqueries and max_parallel_replicas for more details.
:::
Parallel processing using SAMPLE key
A query may be processed faster if it is executed on several servers in parallel. But the query performance may degrade in the following cases:
The position of the sampling key in the partitioning key does not allow efficient range scans.
Adding a sampling key to the table makes filtering by other columns less efficient.
The sampling key is an expression that is expensive to calculate.
The cluster latency distribution has a long tail, so that querying more servers increases the query overall latency.
Parallel processing using parallel_replicas_custom_key
This setting is useful for any replicated table.
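A hedged sketch (the table name is hypothetical; for the SAMPLE-key mode, the underlying replicated table must declare a sampling key):

```sql
-- Spread the read across up to 3 replicas of each shard
SELECT count() FROM distributed_table SETTINGS max_parallel_replicas = 3;
```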
max_parser_backtracks {#max_parser_backtracks}
Maximum parser backtracking (how many times it tries different alternatives in the recursive descent parsing process).
max_parser_depth {#max_parser_depth}
Limits maximum recursion depth in the recursive descent parser. Allows controlling the stack size.
Possible values:
Positive integer.
0 — Recursion depth is unlimited.
max_parsing_threads {#max_parsing_threads}
The maximum number of threads to parse data in input formats that support parallel parsing. By default, it is determined automatically.
max_partition_size_to_drop {#max_partition_size_to_drop}
Restriction on dropping partitions at query time. The value 0 means that you can drop partitions without any restrictions.
Cloud default value: 1 TB.
:::note
This query setting overwrites its server setting equivalent, see max_partition_size_to_drop.
:::
max_partitions_per_insert_block {#max_partitions_per_insert_block}
Limits the maximum number of partitions in a single inserted block
and an exception is thrown if the block contains too many partitions.
Positive integer.
0
β Unlimited number of partitions.
Details
When inserting data, ClickHouse calculates the number of partitions in the
inserted block. If the number of partitions is more than
max_partitions_per_insert_block, ClickHouse either logs a warning or throws an exception based on throw_on_max_partitions_per_insert_block. Exceptions have the following text:
"Too many partitions for a single INSERT block (partitions_count partitions, limit is " + toString(max_partitions) + "). The limit is controlled by the 'max_partitions_per_insert_block' setting. A large number of partitions is a common misconception. It will lead to severe negative performance impact, including slow server startup, slow INSERT queries and slow SELECT queries. Recommended total number of partitions for a table is under 1000..10000. Please note, that partitioning is not intended to speed up SELECT queries (ORDER BY key is sufficient to make range queries fast). Partitions are intended for data manipulation (DROP PARTITION, etc)."
:::note
This setting is a safety threshold because using a large number of partitions is a common misconception.
:::
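A minimal sketch (table names are hypothetical) of raising the threshold for a session performing a bulk backfill:

```sql
SET max_partitions_per_insert_block = 500;
INSERT INTO events SELECT * FROM staging_events;
```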
max_partitions_to_read {#max_partitions_to_read}
Limits the maximum number of partitions that can be accessed in a single query.
The setting value specified when the table is created can be overridden via query-level setting.
Possible values:
Positive integer.
-1 — unlimited (default).
:::note
You can also specify the MergeTree setting max_partitions_to_read in the table's settings.
:::
max_parts_to_move {#max_parts_to_move}
Limits the number of parts that can be moved in one query. Zero means unlimited.
max_projection_rows_to_use_projection_index {#max_projection_rows_to_use_projection_index}
If the number of rows to read from the projection index is less than or equal to this threshold, ClickHouse will try to apply the projection index during query execution.
max_query_size {#max_query_size}
The maximum number of bytes of a query string parsed by the SQL parser.
Data in the VALUES clause of INSERT queries is processed by a separate stream parser (that consumes O(1) RAM) and not affected by this restriction.
:::note
max_query_size cannot be set within an SQL query (e.g., SELECT now() SETTINGS max_query_size=10000) because ClickHouse needs to allocate a buffer to parse the query, and this buffer size is determined by the max_query_size setting, which must be configured before the query is executed.
:::
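Since the setting cannot be applied via a SETTINGS clause, one workaround (a sketch) is to set it at the session level before sending the large query; it can also be passed as a client or HTTP parameter:

```sql
SET max_query_size = 1048576; -- takes effect for subsequent queries in this session
```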
max_read_buffer_size {#max_read_buffer_size}
The maximum size of the buffer to read from the filesystem.
max_read_buffer_size_local_fs {#max_read_buffer_size_local_fs}
The maximum size of the buffer to read from local filesystem. If set to 0 then max_read_buffer_size will be used.
max_read_buffer_size_remote_fs {#max_read_buffer_size_remote_fs}
The maximum size of the buffer to read from remote filesystem. If set to 0 then max_read_buffer_size will be used.
max_recursive_cte_evaluation_depth {#max_recursive_cte_evaluation_depth}
Maximum limit on recursive CTE evaluation depth
max_remote_read_network_bandwidth {#max_remote_read_network_bandwidth}
The maximum speed of data exchange over the network in bytes per second for read.
max_remote_write_network_bandwidth {#max_remote_write_network_bandwidth}
The maximum speed of data exchange over the network in bytes per second for write.
max_replica_delay_for_distributed_queries {#max_replica_delay_for_distributed_queries}
Disables lagging replicas for distributed queries. See Replication.
Sets the time in seconds. If a replica's lag is greater than or equal to the set value, this replica is not used.
Possible values:
Positive integer.
0 — Replica lags are not checked.
To prevent the use of any replica with a non-zero lag, set this parameter to 1.
Used when performing SELECT from a distributed table that points to replicated tables.
max_result_bytes {#max_result_bytes}
Limits the result size in bytes (uncompressed). The query will stop after processing a block of data if the threshold is met,
but it will not cut the last block of the result, therefore the result size can be larger than the threshold.
Caveats
The result size in memory is taken into account for this threshold.
Even if the result size is small, it can reference larger data structures in memory,
representing dictionaries of LowCardinality columns, and Arenas of AggregateFunction columns,
so the threshold can be exceeded despite the small result size.
:::warning
The setting is fairly low level and should be used with caution
:::
max_result_rows {#max_result_rows}
Cloud default value: 0.
Limits the number of rows in the result. Also checked for subqueries, and on remote servers when running parts of a distributed query.
No limit is applied when the value is 0.
The query will stop after processing a block of data if the threshold is met, but
it will not cut the last block of the result, therefore the result size can be
larger than the threshold.
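Combined with result_overflow_mode, the limit can truncate the result instead of failing; a hedged sketch:

```sql
-- Stop after roughly 100 result rows instead of throwing;
-- the last block is not cut, so slightly more rows may be returned
SELECT number FROM system.numbers
SETTINGS max_result_rows = 100, result_overflow_mode = 'break';
```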
max_rows_in_distinct {#max_rows_in_distinct}
The maximum number of different rows when using DISTINCT.
max_rows_in_join {#max_rows_in_join}
Limits the number of rows in the hash table that is used when joining tables.
This setting applies to SELECT ... JOIN operations and the Join table engine.
If a query contains multiple joins, ClickHouse checks this setting for every intermediate result.
ClickHouse can proceed with different actions when the limit is reached. Use the join_overflow_mode setting to choose the action.
Possible values:
Positive integer.
0 — Unlimited number of rows.
max_rows_in_set {#max_rows_in_set}
The maximum number of rows for a data set in the IN clause created from a subquery.
max_rows_in_set_to_optimize_join {#max_rows_in_set_to_optimize_join}
Maximal size of the set to filter joined tables by each other's row sets before joining.
Possible values:
0 — Disable.
Any positive integer.
max_rows_to_group_by {#max_rows_to_group_by}
The maximum number of unique keys received from aggregation. This setting lets you limit memory consumption when aggregating.
If aggregation during GROUP BY is generating more than the specified number of rows (unique GROUP BY keys), the behavior will be determined by the group_by_overflow_mode setting, which by default is throw, but can also be switched to an approximate GROUP BY mode.
max_rows_to_read {#max_rows_to_read}
The maximum number of rows that can be read from a table when running a query.
The restriction is checked for each processed chunk of data, applied only to the
deepest table expression and when reading from a remote server, checked only on
the remote server.
max_rows_to_read_leaf {#max_rows_to_read_leaf}
The maximum number of rows that can be read from a local table on a leaf node when
running a distributed query. While distributed queries can issue multiple sub-queries
to each shard (leaf) - this limit will be checked only on the read stage on the
leaf nodes and ignored on the merging of results stage on the root node.
For example, a cluster consists of 2 shards and each shard contains a table with 100 rows. A distributed query which is supposed to read all the data from both tables with the setting max_rows_to_read=150 will fail, as in total there will be 200 rows. A query with max_rows_to_read_leaf=150 will succeed, since leaf nodes will read at most 100 rows.
The restriction is checked for each processed chunk of data.
:::note
This setting is unstable with prefer_localhost_replica=1.
:::
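The 2-shard example above can be sketched as follows (the distributed table name is hypothetical):

```sql
-- Fails: the two leaves read 100 rows each, 200 in total
SELECT count() FROM dist_table SETTINGS max_rows_to_read = 150;

-- Succeeds: each leaf reads only 100 rows, under the per-leaf limit
SELECT count() FROM dist_table SETTINGS max_rows_to_read_leaf = 150;
```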
max_rows_to_sort {#max_rows_to_sort}
The maximum number of rows before sorting. This allows you to limit memory consumption when sorting.
If more than the specified amount of records have to be processed for the ORDER BY operation, the behavior will be determined by the sort_overflow_mode, which by default is set to throw.
max_rows_to_transfer {#max_rows_to_transfer}
Maximum size (in rows) that can be passed to a remote server or saved in a
temporary table when the GLOBAL IN/JOIN section is executed.
max_sessions_for_user {#max_sessions_for_user}
Maximum number of simultaneous sessions per authenticated user to the ClickHouse server.
Example:
xml
<profiles>
<single_session_profile>
<max_sessions_for_user>1</max_sessions_for_user>
</single_session_profile>
<two_sessions_profile>
<max_sessions_for_user>2</max_sessions_for_user>
</two_sessions_profile>
<unlimited_sessions_profile>
<max_sessions_for_user>0</max_sessions_for_user>
</unlimited_sessions_profile>
</profiles>
<users>
<!-- User Alice can connect to a ClickHouse server no more than once at a time. -->
<Alice>
        <profile>single_session_profile</profile>
</Alice>
<!-- User Bob can use 2 simultaneous sessions. -->
<Bob>
<profile>two_sessions_profile</profile>
</Bob>
    <!-- User Charles can use arbitrarily many simultaneous sessions. -->
<Charles>
<profile>unlimited_sessions_profile</profile>
</Charles>
</users>
Possible values:
- Positive integer
- 0 — infinite count of simultaneous sessions (default)
max_size_to_preallocate_for_aggregation {#max_size_to_preallocate_for_aggregation}
For how many elements it is allowed to preallocate space in all hash tables in total before aggregation.
max_size_to_preallocate_for_joins {#max_size_to_preallocate_for_joins}
For how many elements it is allowed to preallocate space in all hash tables in total before a join.
max_streams_for_merge_tree_reading {#max_streams_for_merge_tree_reading}
If it is not zero, limits the number of reading streams for a MergeTree table.
max_streams_multiplier_for_merge_tables {#max_streams_multiplier_for_merge_tables}
Ask more streams when reading from Merge table. Streams will be spread across tables that Merge table will use. This allows more even distribution of work across threads and is especially helpful when merged tables differ in size.
max_streams_to_max_threads_ratio {#max_streams_to_max_threads_ratio}
Allows you to use more sources than the number of threads - to more evenly distribute work across threads. It is assumed that this is a temporary solution since it will be possible in the future to make the number of sources equal to the number of threads, but for each source to dynamically select available work for itself.
max_subquery_depth {#max_subquery_depth}
If a query has more than the specified number of nested subqueries, an exception is thrown.
:::tip
This allows you to have a sanity check to protect the users of your cluster from writing overly complex queries.
:::
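A minimal sketch:

```sql
SET max_subquery_depth = 2;
-- More than 2 nested subqueries: an exception is thrown
SELECT * FROM (SELECT * FROM (SELECT * FROM (SELECT 1)));
```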
max_table_size_to_drop {#max_table_size_to_drop}
Restriction on deleting tables at query time. The value 0 means that you can delete all tables without any restrictions.
Cloud default value: 1 TB.
:::note
This query setting overwrites its server setting equivalent, see max_table_size_to_drop.
:::
max_temporary_columns {#max_temporary_columns}