```sh
cd <repository>/tests/config
sudo ./install.sh
```
:::note
Tests should:
- be minimal: only create the minimally needed tables, columns, and complexity,
- be fast: not take longer than a few seconds (better: sub-seconds),
- be correct and deterministic: fail if and only if the feature-under-test is not working,
- be isolated/stateless: not rely on environment and timing,
- be exhaustive: cover corner cases like zeros, nulls, empty sets, and exceptions (for negative tests, use the syntax `-- { serverError xyz }` and `-- { clientError xyz }`),
- clean up tables at the end of the test (in case of leftovers),
- make sure that other tests don't test the same stuff (i.e. grep first).
:::
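Taken together, a minimal stateless test following these guidelines might look like the sketch below. The table name and the error code are illustrative, not taken from an actual test; like the other SQL fragments in this document, it needs a running ClickHouse server.

```sql
-- Hypothetical minimal test: one tiny deterministic table, a corner case,
-- a negative check via an error-code annotation, and cleanup at the end.
DROP TABLE IF EXISTS t_example;
CREATE TABLE t_example (x UInt64) ENGINE = MergeTree ORDER BY x;

INSERT INTO t_example VALUES (0), (1), (2);

SELECT count() FROM t_example;
SELECT y FROM t_example; -- { serverError 47 }

DROP TABLE t_example;
```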
## Restricting test runs {#restricting-test-runs}
A test can have zero or more tags specifying restrictions for the contexts in which the test runs in CI.

For `.sql` tests, tags are placed on the first line as a SQL comment:
```sql
-- Tags: no-fasttest, no-replicated-database
-- no-fasttest:
-- no-replicated-database:
SELECT 1
```
For `.sh` tests, tags are written as a comment on the second line:
```bash
#!/usr/bin/env bash
# Tags: no-fasttest, no-replicated-database
# - no-fasttest:
# - no-replicated-database:
```
List of available tags:

| Tag name | What it does | Usage example |
|---|---|---|
| `disabled` | Test is not run | |
| `long` | Test's execution time is extended from 1 to 10 minutes | |
| `deadlock` | Test is run in a loop for a long time | |
| `race` | Same as `deadlock`. Prefer `deadlock` | |
| `shard` | Server is required to listen to `127.0.0.*` | |
| `distributed` | Same as `shard`. Prefer `shard` | |
| `global` | Same as `shard`. Prefer `shard` | |
| `zookeeper` | Test requires ZooKeeper or ClickHouse Keeper to run | Test uses `ReplicatedMergeTree` |
| `replica` | Same as `zookeeper`. Prefer `zookeeper` | |
| `no-fasttest` | Test is not run under Fast test | Test uses the `MySQL` table engine, which is disabled in Fast test |
| `fasttest-only` | Test is only run under Fast test | |
| `no-[asan, tsan, msan, ubsan]` | Disables tests in builds with sanitizers | Test is run under QEMU, which doesn't work with sanitizers |
| `no-replicated-database` | | |
| `no-ordinary-database` | | |
| `no-parallel` | Disables running other tests in parallel with this one | Test reads from `system` tables and invariants may be broken |
| `no-parallel-replicas` | | |
| `no-debug` | | |
| `no-stress` | | |
| `no-polymorphic-parts` | | |
| `no-random-settings` | | |
| `no-random-merge-tree-settings` | | |
| `no-backward-compatibility-check` | | |
| `no-cpu-x86_64` | | |
| `no-cpu-aarch64` | | |
| `no-cpu-ppc64le` | | |
| `no-s3-storage` | | |
In addition to the above settings, you can use `USE_*` flags from `system.build_options` to define usage of particular ClickHouse features. For example, if your test uses a MySQL table, you should add the tag `use-mysql`.
## Specifying limits for random settings {#specifying-limits-for-random-settings}
A test can specify minimum and maximum allowed values for settings that can be randomized during test run.
For `.sh` tests, limits are written as a comment on the line next to the tags, or on the second line if no tags are specified:
```bash
#!/usr/bin/env bash
# Tags: no-fasttest
# Random settings limits: max_block_size=(1000, 10000); index_granularity=(100, None)
```
For `.sql` tests, limits are placed as a SQL comment on the line next to the tags, or on the first line if no tags are specified:

```sql
-- Tags: no-fasttest
-- Random settings limits: max_block_size=(1000, 10000); index_granularity=(100, None)
SELECT 1
```
If you need to specify only one limit, you can use `None` for the other one.
## Choosing the test name {#choosing-the-test-name}
The name of the test starts with a five-digit prefix followed by a descriptive name, such as `00422_hash_function_constexpr.sql`.
To choose the prefix, find the largest prefix already present in the directory, and increment it by one:

```sh
ls tests/queries/0_stateless/[0-9]*.reference | tail -n 1
```

In the meantime, some other tests might be added with the same numeric prefix. This is OK and does not lead to any problems; you don't have to change the prefix later.
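As a sketch of the increment step (the `02457` value is a made-up stand-in for the last prefix reported by the `ls ... | tail -n 1` command above):

```bash
# Hypothetical sketch: "02457" stands in for the last existing prefix.
last="02457"
# Force base-10 arithmetic so the leading zero is not parsed as octal.
next=$(printf "%05d" $((10#$last + 1)))
echo "$next"   # prints 02458
```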
## Checking for an error that must occur {#checking-for-an-error-that-must-occur}
Sometimes you want to test that a server error occurs for an incorrect query. We support special annotations for this in SQL tests, in the following form:
```sql
SELECT x; -- { serverError 49 }
```
This test ensures that the server returns an error with code 49 about the unknown column `x`.
If there is no error, or the error is different, the test will fail.
If you want to ensure that an error occurs on the client side, use the `clientError` annotation instead.
Do not check for a particular wording of the error message; it may change in the future, and the test will needlessly break.
Check only the error code.
If the existing error code is not precise enough for your needs, consider adding a new one.
## Testing a distributed query {#testing-a-distributed-query}
If you want to use distributed queries in functional tests, you can leverage the `remote` table function with `127.0.0.{1..2}` addresses for the server to query itself; or you can use predefined test clusters in the server configuration file, like `test_shard_localhost`.
Remember to add the words `shard` or `distributed` to the test name, so that it is run in CI in the correct configurations, where the server is configured to support distributed queries.
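As an illustrative sketch (not taken from an actual test), a query through `remote` could look like:

```sql
-- Tags: shard

-- The server queries itself on both loopback addresses, emulating a
-- two-shard cluster; system.one contributes a single row per shard.
SELECT count() FROM remote('127.0.0.{1,2}', system.one);
```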
## Working with temporary files {#working-with-temporary-files}
Sometimes in a shell test you may need to create a file on the fly to work with.
Keep in mind that some CI checks run tests in parallel, so if you create or remove a temporary file in your script without a unique name, this can cause some of the CI checks, such as Flaky, to fail.
To get around this, use the environment variable `$CLICKHOUSE_TEST_UNIQUE_NAME` to give temporary files a name unique to the test that is running.
That way you can be sure that the file you create during setup, or remove during cleanup, is only in use by that test and not by some other test running in parallel.
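A sketch of this pattern (the default value and the file contents are illustrative; in a real test run, the CI harness sets `$CLICKHOUSE_TEST_UNIQUE_NAME` for you):

```bash
#!/usr/bin/env bash
# Default the variable only for this standalone sketch; in CI the test
# harness provides a value unique to the running test.
CLICKHOUSE_TEST_UNIQUE_NAME="${CLICKHOUSE_TEST_UNIQUE_NAME:-02000_example_test}"
TMP_FILE="/tmp/${CLICKHOUSE_TEST_UNIQUE_NAME}.tsv"

printf '1\t2\n' > "$TMP_FILE"   # setup: an input file owned by this test only
cat "$TMP_FILE"                 # a real test would feed this to clickhouse-client
rm -f "$TMP_FILE"               # cleanup: no clashes with tests running in parallel
```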
## Known bugs {#known-bugs}
If we know of some bugs that can be easily reproduced by functional tests, we place the prepared functional tests in the `tests/queries/bugs` directory. These tests will be moved to `tests/queries/0_stateless` when the bugs are fixed.
## Integration tests {#integration-tests}
Integration tests allow testing ClickHouse in a clustered configuration and ClickHouse's interaction with other servers like MySQL, Postgres, and MongoDB.
They are useful to emulate network splits, packet drops, etc.
These tests are run under Docker and create multiple containers with various software.
See `tests/integration/README.md` on how to run these tests.
Note that integration of ClickHouse with third-party drivers is not tested.
Also, we currently do not have integration tests with our JDBC and ODBC drivers.
## Unit tests {#unit-tests}
Unit tests are useful when you want to test not ClickHouse as a whole, but a single isolated library or class.
You can enable or disable the build of tests with the `ENABLE_TESTS` CMake option.
Unit tests (and other test programs) are located in `tests` subdirectories across the code.
To run unit tests, type `ninja test`.
Some tests use `gtest`, but some are just programs that return a non-zero exit code on test failure.
It's not necessary to have unit tests if the code is already covered by functional tests (and functional tests are usually much simpler to use).
You can run individual gtest checks by calling the executable directly, for example:
```bash
$ ./src/unit_tests_dbms --gtest_filter=LocalAddress*
```
## Performance tests {#performance-tests}
Performance tests allow measuring and comparing the performance of some isolated part of ClickHouse on synthetic queries.
Performance tests are located at `tests/performance/`.
Each test is represented by an `.xml` file with a description of the test case.
Tests are run with the `docker/test/performance-comparison` tool. See the readme file for invocation.
Each test runs one or multiple queries (possibly with combinations of parameters) in a loop.
If you want to improve performance of ClickHouse in some scenario, and if improvements can be observed on simple queries, it is highly recommended to write a performance test.
Also, it is recommended to write performance tests when you add or modify SQL functions which are relatively isolated and not too obscure.
It always makes sense to use `perf top` or other `perf` tools during your tests.
## Test tools and scripts {#test-tools-and-scripts}
Some programs in the `tests` directory are not prepared tests but test tools.
For example, for `Lexer` there is a tool `src/Parsers/tests/lexer` that just does tokenization of stdin and writes the colorized result to stdout.
You can use these kinds of tools as code examples and for exploration and manual testing.
## Miscellaneous tests {#miscellaneous-tests}
There are tests for machine-learned models in `tests/external_models`.
These tests are not updated and must be transferred to integration tests.

There is a separate test for quorum inserts.
This test runs a ClickHouse cluster on separate servers and emulates various failure cases: network split, packet drop (between ClickHouse nodes, between ClickHouse and ZooKeeper, between ClickHouse server and client, etc.), `kill -9`, `kill -STOP` and `kill -CONT`, like Jepsen. Then the test checks that all acknowledged inserts were written and all rejected inserts were not.

The quorum test was written by a separate team before ClickHouse was open-sourced.
This team no longer works with ClickHouse.
The test was accidentally written in Java.
For these reasons, the quorum test must be rewritten and moved to integration tests.
## Manual Testing {#manual-testing}
When you develop a new feature, it is reasonable to also test it manually.
You can do it with the following steps:
Build ClickHouse. Run ClickHouse from the terminal: change directory to `programs/clickhouse-server` and run it with `./clickhouse-server`. It will use configuration (`config.xml`, `users.xml`, and files within the `config.d` and `users.d` directories) from the current directory by default. To connect to the ClickHouse server, run `programs/clickhouse-client/clickhouse-client`.
Note that all ClickHouse tools (server, client, etc.) are just symlinks to a single binary named `clickhouse`. You can find this binary at `programs/clickhouse`. All tools can also be invoked as `clickhouse tool` instead of `clickhouse-tool`.
Alternatively, you can install the ClickHouse package: either a stable release from the ClickHouse repository, or you can build a package yourself with `./release` in the ClickHouse sources root.
Then start the server with `sudo clickhouse start` (or `sudo clickhouse stop` to stop it).
Look for logs at `/etc/clickhouse-server/clickhouse-server.log`.
When ClickHouse is already installed on your system, you can build a new `clickhouse` binary and replace the existing binary:
```bash
$ sudo clickhouse stop
$ sudo cp ./clickhouse /usr/bin/
$ sudo clickhouse start
```
You can also stop the system clickhouse-server and run your own binary with the same configuration but with logging to the terminal:
```bash
$ sudo clickhouse stop
$ sudo -u clickhouse /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
```
Example with gdb:
```bash
$ sudo -u clickhouse gdb --args /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
```
If the system clickhouse-server is already running and you do not want to stop it, you can change the port numbers in your `config.xml` (or override them in a file in the `config.d` directory), provide an appropriate data path, and run it.

The `clickhouse` binary has almost no dependencies and works across a wide range of Linux distributions. For a quick and dirty test of your changes on a server, you can simply `scp` your freshly built `clickhouse` binary to your server and then run it as in the examples above.
## Build tests {#build-tests}
Build tests allow checking that the build is not broken on various alternative configurations and on some foreign systems.
These tests are automated as well.
Examples:
- cross-compile for Darwin x86_64 (macOS)
- cross-compile for FreeBSD x86_64
- cross-compile for Linux AArch64
- build on Ubuntu with libraries from system packages (discouraged)
- build with shared linking of libraries (discouraged)
For example, a build with system packages is bad practice, because we cannot guarantee what exact version of packages a system will have. But this is really needed by Debian maintainers. For this reason, we at least have to support this variant of the build. Another example: shared linking is a common source of trouble, but it is needed for some enthusiasts.

Though we cannot run all tests on all variants of builds, we want to check at least that the various build variants are not broken. For this purpose, we use build tests.
We also test that there are no translation units that are too long to compile or require too much RAM.
We also test that there are no overly large stack frames.
## Testing for protocol compatibility {#testing-for-protocol-compatibility}
When we extend ClickHouse network protocol, we test manually that old clickhouse-client works with new clickhouse-server and new clickhouse-client works with old clickhouse-server (simply by running binaries from corresponding packages).
We also test some cases automatically with integration tests:
- whether data written by an old version of ClickHouse can be successfully read by the new version;
- whether distributed queries work in a cluster with different ClickHouse versions.
## Help from the Compiler {#help-from-the-compiler}
Main ClickHouse code (located in the `src` directory) is built with `-Wall -Wextra -Werror` and with some additional warnings enabled. These options are not enabled for third-party libraries, though.

Clang has even more useful warnings; you can look for them with `-Weverything` and pick something for the default build.

We always use clang to build ClickHouse, both for development and production. You can build in debug mode on your own machine (to save the battery of your laptop), but please note that the compiler is able to generate more warnings with `-O3` due to better control flow and inter-procedural analysis. When building with clang in debug mode, the debug version of `libc++` is used, which allows catching more errors at runtime.
## Sanitizers {#sanitizers}
:::note
If the process (ClickHouse server or client) crashes at startup when running it locally, you might need to disable address space layout randomization: `sudo sysctl kernel.randomize_va_space=0`.
:::
### Address sanitizer {#address-sanitizer}
We run functional, integration, stress and unit tests under ASan on a per-commit basis.
### Thread sanitizer {#thread-sanitizer}
We run functional, integration, stress and unit tests under TSan on a per-commit basis.
### Memory sanitizer {#memory-sanitizer}
We run functional, integration, stress and unit tests under MSan on a per-commit basis.
### Undefined behaviour sanitizer {#undefined-behaviour-sanitizer}
We run functional, integration, stress and unit tests under UBSan on a per-commit basis.
The code of some third-party libraries is not sanitized for UB.
### Valgrind (memcheck) {#valgrind-memcheck}
We used to run functional tests under Valgrind overnight, but we don't do it anymore. It takes multiple hours. Currently, there is one known false positive in the `re2` library; see this article.
## Fuzzing {#fuzzing}
ClickHouse fuzzing is implemented both using libFuzzer and random SQL queries.
All fuzz testing should be performed with sanitizers (Address and Undefined).

libFuzzer is used for isolated fuzz testing of library code.
Fuzzers are implemented as part of the test code and have "_fuzzer" name postfixes.
A fuzzer example can be found at `src/Parsers/fuzzers/lexer_fuzzer.cpp`.
LibFuzzer-specific configs, dictionaries, and corpora are stored at `tests/fuzz`.
We encourage you to write fuzz tests for every functionality that handles user input.

Fuzzers are not built by default.
To build fuzzers, both the `-DENABLE_FUZZING=1` and `-DENABLE_TESTS=1` options should be set.
We recommend disabling jemalloc while building fuzzers.
The configuration used to integrate ClickHouse fuzzing into Google OSS-Fuzz can be found at `docker/fuzz`.

We also use a simple fuzz test to generate random SQL queries and to check that the server does not die executing them.
You can find it in `00746_sql_fuzzy.pl`.
This test should be run continuously (overnight and longer).
We also use a sophisticated AST-based query fuzzer that is able to find a huge number of corner cases.
It does random permutations and substitutions in the query AST. It remembers AST nodes from previous tests to use them for fuzzing of subsequent tests, while processing them in random order. You can learn more about this fuzzer in this blog article.
## Stress test {#stress-test}
Stress tests are another case of fuzzing. They run all functional tests in parallel in random order with a single server. The results of the tests are not checked.

It is checked that:
- the server does not crash, and no debug or sanitizer traps are triggered;
- there are no deadlocks;
- the database structure is consistent;
- the server can successfully stop after the test and start again without exceptions.

There are five variants (Debug, ASan, TSan, MSan, UBSan).
## Thread fuzzer {#thread-fuzzer}
Thread Fuzzer (not to be confused with Thread Sanitizer) is another kind of fuzzing that allows randomizing the thread order of execution. It helps to find even more special cases.
## Security audit {#security-audit}
Our security team did a basic review of ClickHouse capabilities from a security standpoint.
## Static analyzers {#static-analyzers}
We run `clang-tidy` on a per-commit basis. `clang-static-analyzer` checks are also enabled. `clang-tidy` is also used for some style checks.

We have evaluated `clang-tidy`, Coverity, cppcheck, PVS-Studio, tscancode, and CodeQL. You will find instructions for usage in the `tests/instructions/` directory.

If you use CLion as an IDE, you can leverage some `clang-tidy` checks out of the box.

We also use `shellcheck` for static analysis of shell scripts.
## Hardening {#hardening}
In the debug build, we use a custom allocator that does ASLR of user-level allocations. We also manually protect memory regions that are expected to be read-only after allocation. In the debug build, we also involve a customization of libc that ensures that no "harmful" (obsolete, insecure, not thread-safe) functions are called.

Debug assertions are used extensively. In the debug build, if an exception with a "logical error" code (which implies a bug) is thrown, the program is terminated prematurely. This allows using exceptions in the release build while making them assertions in the debug build.

The debug version of jemalloc is used for debug builds. The debug version of libc++ is used for debug builds.
## Runtime integrity checks {#runtime-integrity-checks}
Data stored on disk is checksummed.
Data in MergeTree tables is checksummed in three ways simultaneously* (compressed data blocks, uncompressed data blocks, the total checksum across blocks).
Data transferred over network between client and server or between servers is also checksummed.
Replication ensures bit-identical data on replicas.
It is required to protect from faulty hardware (bit rot on storage media, bit flips in RAM on the server, bit flips in RAM of the network controller, bit flips in RAM of the network switch, bit flips in RAM of the client, bit flips on the wire).
Note that bit flips are common and likely to occur even with ECC RAM and in the presence of TCP checksums (if you manage to run thousands of servers processing petabytes of data each day). See the video (in Russian).

ClickHouse provides diagnostics that will help ops engineers find faulty hardware.
\* and it is not slow.
## Code style {#code-style}
Code style rules are described here.

To check for some common style violations, you can use the `utils/check-style` script.

To force the proper style of your code, you can use `clang-format`. The file `.clang-format` is located at the sources root. It mostly corresponds to our actual code style. But it's not recommended to apply `clang-format` to existing files because it makes the formatting worse. You can use the `clang-format-diff` tool that you can find in the clang source repository.

Alternatively, you can try the `uncrustify` tool to reformat your code. The configuration is in `uncrustify.cfg` in the sources root. It is less tested than `clang-format`.

CLion has its own code formatter that has to be tuned for our code style.

We also use `codespell` to find typos in code. It is automated as well.
## Test coverage {#test-coverage}
We also track test coverage, but only for functional tests and only for clickhouse-server. It is performed on a daily basis.
## Tests for tests {#tests-for-tests}
There is an automated check for flaky tests. It runs all new tests 100 times (for functional tests) or 10 times (for integration tests). If the test fails at least once, it is considered flaky.
## Test automation {#test-automation}
We run tests with GitHub Actions.

Build jobs and tests are run in Sandbox on a per-commit basis. The resulting packages and test results are published on GitHub and can be downloaded via direct links. Artifacts are stored for several months. When you send a pull request on GitHub, we tag it as "can be tested", and our CI system will build ClickHouse packages (release, debug, with address sanitizer, etc.) for you.
---
description: 'Page describing ClickHouse third-party usage and how to add and maintain third-party libraries.'
sidebar_label: 'Third-Party Libraries'
sidebar_position: 60
slug: /development/contrib
title: 'Third-Party Libraries'
doc_type: 'reference'
---
# Third-Party Libraries
ClickHouse utilizes third-party libraries for different purposes, e.g., to connect to other databases, to decode/encode data during load/save from/to disk, or to implement certain specialized SQL functions.
To be independent of the available libraries in the target system, each third-party library is imported as a Git submodule into ClickHouse's source tree and compiled and linked with ClickHouse.
A list of third-party libraries and their licenses can be obtained by the following query:
```sql
SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en';
```
Note that the listed libraries are the ones located in the `contrib/` directory of the ClickHouse repository. Depending on the build options, some of the libraries may not have been compiled, and, as a result, their functionality may not be available at runtime.
## Adding and maintaining third-party libraries {#adding-and-maintaining-third-party-libraries}
Each third-party library must reside in a dedicated directory under the `contrib/` directory of the ClickHouse repository. Avoid dumping copies of external code into the library directory. Instead, create a Git submodule to pull the third-party code from an external upstream repository. All submodules used by ClickHouse are listed in the `.gitmodules` file.
- If the library can be used as-is (the default case), you can reference the upstream repository directly.
- If the library needs patching, create a fork of the upstream repository in the ClickHouse organization on GitHub.

In the latter case, we aim to isolate custom patches as much as possible from upstream commits.
To that end, create a branch with prefix `ClickHouse/` from the branch or tag you want to integrate, e.g. `ClickHouse/2024_2` (for branch `2024_2`) or `ClickHouse/release/vX.Y.Z` (for tag `release/vX.Y.Z`).
Avoid following upstream development branches `master`/`main`/`dev` (i.e., prefix branches `ClickHouse/master`/`ClickHouse/main`/`ClickHouse/dev` in the fork repository).
Such branches are moving targets which make proper versioning harder.
"Prefix branches" ensure that pulls from the upstream repository into the fork will leave custom `ClickHouse/` branches unaffected.
Submodules in `contrib/` must only track `ClickHouse/` branches of forked third-party repositories.
Patches are only applied against `ClickHouse/` branches of external libraries.
There are two ways to do that:
- You would like to make a new fix against a `ClickHouse/`-prefix branch in the forked repository, e.g. a sanitizer fix. In that case, push the fix as a branch with the `ClickHouse/` prefix, e.g. `ClickHouse/fix-sanitizer-disaster`. Then create a PR from the new branch against the custom tracking branch, e.g. `ClickHouse/2024_2 <-- ClickHouse/fix-sanitizer-disaster`, and merge the PR.
- You update the submodule and need to re-apply earlier patches. In this case, re-creating old PRs is overkill. Instead, simply cherry-pick older commits into the new `ClickHouse/` branch (corresponding to the new version). Feel free to squash commits of PRs that had multiple commits. In the best case, we did contribute custom patches back to upstream and can omit the patches in the new version.

Once the submodule has been updated, bump the submodule in ClickHouse to point to the new hash in the fork.
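The cherry-pick flow can be sketched in a local sandbox like this (the repository, branch names, and commit messages are all made up for illustration; a real update would operate on the actual fork):

```bash
# Sandbox sketch: an "upstream" repo, a ClickHouse/v1 tracking branch carrying
# a custom fix, and the fix cherry-picked onto the next version's branch.
set -e
G="git -c user.name=ci -c user.email=ci@example.com"   # identity for the sandbox only
dir=$(mktemp -d) && cd "$dir"
git init -q lib && cd lib
main=$(git symbolic-ref --short HEAD)                  # default branch plays "upstream"
$G commit -q --allow-empty -m "upstream v1"
git checkout -q -b ClickHouse/v1                       # tracking branch for version v1
$G commit -q --allow-empty -m "fix sanitizer disaster"
fix=$(git rev-parse HEAD)
git checkout -q "$main"
$G commit -q --allow-empty -m "upstream v2"
git checkout -q -b ClickHouse/v2                       # tracking branch for version v2
$G cherry-pick --allow-empty "$fix"                    # re-apply the earlier patch
git log --format='%s'                                  # the fix sits on top of upstream v2
```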
Create patches of third-party libraries with the official repository in mind, and consider contributing the patch back to the upstream repository. This makes sure that others will also benefit from the patch, and it will not be a maintenance burden for the ClickHouse team.
---
description: 'Index page for Development and Contributing'
slug: /development/
title: 'Development and Contributing'
doc_type: 'landing-page'
---

In this section of the docs you will find the following pages:
---
description: 'Coding style guidelines for ClickHouse C++ development'
sidebar_label: 'C++ Style Guide'
sidebar_position: 70
slug: /development/style
title: 'C++ Style Guide'
doc_type: 'guide'
---

# C++ style guide
## General recommendations {#general-recommendations}
The following are recommendations, not requirements.
If you are editing code, it makes sense to follow the formatting of the existing code.
Code style is needed for consistency. Consistency makes it easier to read the code, and it also makes it easier to search the code.
Many of the rules do not have logical reasons; they are dictated by established practices.
## Formatting {#formatting}
**1.** Most of the formatting is done automatically by `clang-format`.

**2.** The indent is 4 spaces. Configure your development environment so that a tab adds four spaces.

**3.** Opening and closing curly brackets must be on a separate line.

```cpp
inline void readBoolText(bool & x, ReadBuffer & buf)
{
    char tmp = '0';
    readChar(tmp, buf);
    x = tmp != '0';
}
```
**4.** If the entire function body is a single `statement`, it can be placed on a single line. Place spaces around curly braces (besides the space at the end of the line).

```cpp
inline size_t mask() const { return buf_size() - 1; }
inline size_t place(HashValue x) const { return x & mask(); }
```
**5.** For functions, don't put spaces around brackets.

```cpp
void reinsert(const Value & x)
```

```cpp
memcpy(&buf[place_value], &x, sizeof(x));
```
**6.** In `if`, `for`, `while` and other expressions, a space is inserted in front of the opening bracket (as opposed to function calls).

```cpp
for (size_t i = 0; i < rows; i += storage.index_granularity)
```
**7.** Add spaces around binary operators (`+`, `-`, `*`, `/`, `%`, ...) and the ternary operator `?:`.

```cpp
UInt16 year = (s[0] - '0') * 1000 + (s[1] - '0') * 100 + (s[2] - '0') * 10 + (s[3] - '0');
UInt8 month = (s[5] - '0') * 10 + (s[6] - '0');
UInt8 day = (s[8] - '0') * 10 + (s[9] - '0');
```
**8.** If a line feed is entered, put the operator on a new line and increase the indent before it.

```cpp
if (elapsed_ns)
    message << " ("
        << rows_read_on_server * 1000000000 / elapsed_ns << " rows/s., "
        << bytes_read_on_server * 1000.0 / elapsed_ns << " MB/s.) ";
```
9.
You can use spaces for alignment within a line, if desired.
```cpp
dst.ClickLogID         = click.LogID;
dst.ClickEventID       = click.EventID;
dst.ClickGoodEvent     = click.GoodEvent;
```
10.
Don't use spaces around the `.` and `->` operators.
If necessary, the operator can be wrapped to the next line. In this case, the offset in front of it is increased.
11.
Do not use a space to separate unary operators (`--`, `++`, `*`, `&`, ...) from the argument.
12.
Put a space after a comma, but not before it. The same rule goes for a semicolon inside a `for` expression.
13.
Do not use spaces to separate the `[]` operator.
14.
In a `template <...>` expression, use a space between `template` and `<`; no spaces after `<` or before `>`.
```cpp
template <typename TKey, typename TValue>
struct AggregatedStatElement
{}
```
15.
In classes and structures, write `public`, `private`, and `protected` on the same level as `class/struct`, and indent the rest of the code.
```cpp
template <typename T>
class MultiVersion
{
public:
    /// Version of object for usage. shared_ptr manage lifetime of version.
    using Version = std::shared_ptr<const T>;
    ...
}
```
16.
If the same `namespace` is used for the entire file, and there isn't anything else significant, an offset is not necessary inside `namespace`.
17.
If the block for an `if`, `for`, `while`, or other expression consists of a single `statement`, the curly brackets are optional. Place the `statement` on a separate line, instead. This rule is also valid for nested `if`, `for`, `while`, ...
But if the inner `statement` contains curly brackets or `else`, the external block should be written in curly brackets.
```cpp
/// Finish write.
for (auto & stream : streams)
    stream.second->finalize();
```
18.
There shouldn't be any spaces at the ends of lines.
19.
Source files are UTF-8 encoded.
20.
Non-ASCII characters can be used in string literals.
```cpp
<< ", " << (timer.elapsed() / chunks_stats.hits) << " μsec/hit.";
```
21.
Do not write multiple expressions in a single line.
22.
Group sections of code inside functions and separate them with no more than one empty line.
23.
Separate functions, classes, and so on with one or two empty lines.
24.
A `const` (related to a value) must be written before the type name.
```cpp
//correct
const char * pos
const std::string & s
//incorrect
char const * pos
```
25.
When declaring a pointer or reference, the `*` and `&` symbols should be separated by spaces on both sides.
```cpp
//correct
const char * pos
//incorrect
const char* pos
const char *pos
```
26.
When using template types, alias them with the `using` keyword (except in the simplest cases).
In other words, the template parameters are specified only in `using` and aren't repeated in the code.
`using` can be declared locally, such as inside a function.
```cpp
//correct
using FileStreams = std::map<std::string, std::shared_ptr<Stream>>;
FileStreams streams;
//incorrect
std::map<std::string, std::shared_ptr<Stream>> streams;
```
27.
Do not declare several variables of different types in one statement.
```cpp
//incorrect
int x, *y;
```
28.
Do not use C-style casts.
```cpp
//incorrect
std::cerr << (int)c << std::endl;
//correct
std::cerr << static_cast<int>(c) << std::endl;
```
29.
In classes and structs, group members and functions separately inside each visibility scope.
30.
For small classes and structs, it is not necessary to separate the method declaration from the implementation.
The same is true for small methods in any classes or structs.
For template classes and structs, do not separate the method declarations from the implementation (because otherwise they must be defined in the same translation unit).
31.
You can wrap lines at 140 characters, instead of 80.
32.
Always use the prefix increment/decrement operators if postfix is not required.
```cpp
for (Names::const_iterator it = column_names.begin(); it != column_names.end(); ++it)
```
Comments {#comments}
1.
Be sure to add comments for all non-trivial parts of code.
This is very important. Writing the comment might help you realize that the code isn't necessary, or that it is designed wrong.
```cpp
/** Part of piece of memory, that can be used.
  * For example, if internal_buffer is 1MB, and there was only 10 bytes loaded to buffer from file for reading,
  * then working_buffer will have size of only 10 bytes
  * (working_buffer.end() will point to position right after those 10 bytes available for read).
  */
```
2.
Comments can be as detailed as necessary.
3.
Place comments before the code they describe. In rare cases, comments can come after the code, on the same line.
```cpp
/** Parses and executes the query.
  */
void executeQuery(
    ReadBuffer & istr, /// Where to read the query from (and data for INSERT, if applicable)
    WriteBuffer & ostr, /// Where to write the result
    Context & context, /// DB, tables, data types, engines, functions, aggregate functions...
    BlockInputStreamPtr & query_plan, /// Here could be written the description on how query was executed
    QueryProcessingStage::Enum stage = QueryProcessingStage::Complete /// Up to which stage process the SELECT query
)
```
4.
Comments should be written in English only.
5.
If you are writing a library, include detailed comments explaining it in the main header file.
6.
Do not add comments that do not provide additional information. In particular, do not leave empty comments like this:
```cpp
/*
* Procedure Name:
* Original procedure name:
* Author:
* Date of creation:
* Dates of modification:
* Modification authors:
* Original file name:
* Purpose:
* Intent:
* Designation:
* Classes used:
* Constants:
* Local variables:
* Parameters:
* Date of creation:
* Purpose:
*/
```
The example is borrowed from the resource http://home.tamk.fi/~jaalto/course/coding-style/doc/unmaintainable-code/.
7.
Do not write garbage comments (author, creation date, ...) at the beginning of each file.
8.
Single-line comments begin with three slashes: `///` and multi-line comments begin with `/**`. These comments are considered "documentation".
Note: You can use Doxygen to generate documentation from these comments. But Doxygen is not generally used because it is more convenient to navigate the code in the IDE.
9.
Multi-line comments must not have empty lines at the beginning and end (except the line that closes a multi-line comment).
10.
For commenting out code, use basic comments, not "documenting" comments.
11.
Delete the commented out parts of the code before committing.
12.
Do not use profanity in comments or code.
13.
Do not use uppercase letters. Do not use excessive punctuation.
```cpp
/// WHAT THE FAIL???
```
14.
Do not use comments to make delimiters.
```cpp
///******************************************************
```
15.
Do not start discussions in comments.
```cpp
/// Why did you do this stuff?
```
16.
There's no need to write a comment at the end of a block describing what it was about.
```cpp
/// for
```
Names {#names}
1.
Use lowercase letters with underscores in the names of variables and class members.
```cpp
size_t max_block_size;
```
2.
For the names of functions (methods), use camelCase beginning with a lowercase letter.
```cpp
std::string getName() const override { return "Memory"; }
```
3.
For the names of classes (structs), use CamelCase beginning with an uppercase letter. Prefixes other than I are not used for interfaces.
```cpp
class StorageMemory : public IStorage
```
4.
`using` aliases are named the same way as classes.
5.
Names of template type arguments: in simple cases, use `T`; `T`, `U`; `T1`, `T2`.
For more complex cases, either follow the rules for class names, or add the prefix `T`.
```cpp
template <typename TKey, typename TValue>
struct AggregatedStatElement
```
6.
Names of template constant arguments: either follow the rules for variable names, or use `N` in simple cases.
```cpp
template <bool without_www>
struct ExtractDomain
```
7.
For abstract classes (interfaces) you can add the `I` prefix.
```cpp
class IProcessor
```
8.
If you use a variable locally, you can use the short name.
In all other cases, use a name that describes the meaning.
```cpp
bool info_successfully_loaded = false;
```
9.
Names of `define`s and global constants use ALL_CAPS with underscores.
```cpp
#define MAX_SRC_TABLE_NAMES_TO_STORE 1000
```
10.
File names should use the same style as their contents.
If a file contains a single class, name the file the same way as the class (CamelCase).
If the file contains a single function, name the file the same way as the function (camelCase).
11.
If the name contains an abbreviation, then:
For variable names, the abbreviation should use lowercase letters: `mysql_connection` (not `mySQL_connection`).
For names of classes and functions, keep the uppercase letters in the abbreviation: `MySQLConnection` (not `MySqlConnection`).
12.
Constructor arguments that are used just to initialize the class members should be named the same way as the class members, but with an underscore at the end.
```cpp
FileQueueProcessor(
    const std::string & path_,
    const std::string & prefix_,
    std::shared_ptr<FileHandler> handler_)
    : path(path_),
    prefix(prefix_),
    handler(handler_),
    log(&Logger::get("FileQueueProcessor"))
{
}
```
The underscore suffix can be omitted if the argument is not used in the constructor body.
13.
There is no difference in the names of local variables and class members (no prefixes required).
```cpp
timer (not m_timer)
```
14.
For the constants in an `enum`, use CamelCase with a capital letter. ALL_CAPS is also acceptable. If the `enum` is non-local, use an `enum class`.
```cpp
enum class CompressionMethod
{
    QuickLZ = 0,
    LZ4 = 1,
};
```
15.
All names must be in English. Transliteration of Hebrew words is not allowed.
not T_PAAMAYIM_NEKUDOTAYIM
16.
Abbreviations are acceptable if they are well known (when you can easily find the meaning of the abbreviation in Wikipedia or in a search engine).
`AST`, `SQL`.
Not `NVDH` (some random letters)
Incomplete words are acceptable if the shortened version is common use.
You can also use an abbreviation if the full name is included next to it in the comments.
17.
File names with C++ source code must have the `.cpp` extension. Header files must have the `.h` extension.
How to write code {#how-to-write-code}
1.
Memory management.
Manual memory deallocation (`delete`) can only be used in library code.
In library code, the `delete` operator can only be used in destructors.
In application code, memory must be freed by the object that owns it.
Examples:
The easiest way is to place an object on the stack, or make it a member of another class.
For a large number of small objects, use containers.
For automatic deallocation of a small number of objects that reside in the heap, use `shared_ptr/unique_ptr`.
2.
Resource management.
Use `RAII` and see above.
3.
Error handling.
Use exceptions. In most cases, you only need to throw an exception, and do not need to catch it (because of `RAII`).
In offline data processing applications, it's often acceptable to not catch exceptions.
In servers that handle user requests, it's usually enough to catch exceptions at the top level of the connection handler.
In thread functions, you should catch and keep all exceptions to rethrow them in the main thread after `join`.
```cpp
/// If there weren't any calculations yet, calculate the first block synchronously
if (!started)
{
    calculate();
    started = true;
}
else /// If calculations are already in progress, wait for the result
    pool.wait();

if (exception)
    exception->rethrow();
```
Never hide exceptions without handling. Never just blindly put all exceptions to log.
```cpp
//Not correct
catch (...) {}
```
If you need to ignore some exceptions, do so only for specific ones and rethrow the rest.
```cpp
catch (const DB::Exception & e)
{
    if (e.code() == ErrorCodes::UNKNOWN_AGGREGATE_FUNCTION)
        return nullptr;
    else
        throw;
}
```
When using functions with response codes or `errno`, always check the result and throw an exception in case of error.
```cpp
if (0 != close(fd))
    throw ErrnoException(ErrorCodes::CANNOT_CLOSE_FILE, "Cannot close file {}", file_name);
```
You can use assert to check invariants in code.
4.
Exception types.
There is no need to use a complex exception hierarchy in application code. The exception text should be understandable to a system administrator.
5.
Throwing exceptions from destructors.
This is not recommended, but it is allowed.
Use the following options:
Create a function (`done()` or `finalize()`) that will do all the work in advance that might lead to an exception. If that function was called, there should be no exceptions in the destructor later.
Tasks that are too complex (such as sending messages over the network) can be put in a separate method that the class user will have to call before destruction.
If there is an exception in the destructor, it's better to log it than to hide it (if the logger is available).
In simple applications, it is acceptable to rely on `std::terminate` (for cases of `noexcept` by default in C++11) to handle exceptions.
6.
Anonymous code blocks.
You can create a separate code block inside a single function in order to make certain variables local, so that the destructors are called when exiting the block.
```cpp
Block block = data.in->read();

{
    std::lock_guard<std::mutex> lock(mutex);
    data.ready = true;
    data.block = block;
}

ready_any.set();
```
7.
Multithreading.
In offline data processing programs:
Try to get the best possible performance on a single CPU core. You can then parallelize your code if necessary.
In server applications:
Use the thread pool to process requests. At this point, we haven't had any tasks that required userspace context switching.
Fork is not used for parallelization.
8.
Syncing threads.
Often it is possible to make different threads use different memory cells (even better: different cache lines), and to not use any thread synchronization (except `joinAll`).
If synchronization is required, in most cases it is sufficient to use a mutex under `lock_guard`.
In other cases use system synchronization primitives. Do not use busy waiting.
Atomic operations should be used only in the simplest cases.
Do not try to implement lock-free data structures unless it is your primary area of expertise.
9.
Pointers vs references.
In most cases, prefer references.
10.
`const`.
Use constant references, pointers to constants, `const_iterator`, and `const` methods.
Consider `const` to be the default and use non-`const` only when necessary.
When passing variables by value, using `const` usually does not make sense.
11.
unsigned.
Use `unsigned` if necessary.
12.
Numeric types.
Use the types `UInt8`, `UInt16`, `UInt32`, `UInt64`, `Int8`, `Int16`, `Int32`, and `Int64`, as well as `size_t`, `ssize_t`, and `ptrdiff_t`.
Don't use these types for numbers: `signed/unsigned long`, `long long`, `short`, `signed/unsigned char`, `char`.
13.
Passing arguments.
Pass complex values by value if they are going to be moved and use `std::move`; pass by reference if you want to update a value in a loop.
If a function captures ownership of an object created in the heap, make the argument type `shared_ptr` or `unique_ptr`.
14.
Return values.
In most cases, just use `return`. Do not write `return std::move(res)`.
If the function allocates an object on heap and returns it, use `shared_ptr` or `unique_ptr`.
In rare cases (updating a value in a loop) you might need to return the value via an argument. In this case, the argument should be a reference.
```cpp
using AggregateFunctionPtr = std::shared_ptr<IAggregateFunction>;

/** Allows creating an aggregate function by its name.
  */
class AggregateFunctionFactory
{
public:
    AggregateFunctionFactory();
    AggregateFunctionPtr get(const String & name, const DataTypes & argument_types) const;
```
15.
`namespace`.
There is no need to use a separate `namespace` for application code.
Small libraries do not need this, either.
For medium to large libraries, put everything in a `namespace`.
In the library's `.h` file, you can use `namespace detail` to hide implementation details not needed for the application code.
In a `.cpp` file, you can use a `static` or anonymous `namespace` to hide symbols.
Also, a `namespace` can be used for an `enum` to prevent the corresponding names from falling into an external `namespace` (but it's better to use an `enum class`).
16.
Deferred initialization.
If arguments are required for initialization, then you normally shouldn't write a default constructor.
If later you'll need to delay initialization, you can add a default constructor that will create an invalid object. Or, for a small number of objects, you can use `shared_ptr/unique_ptr`.
```cpp
Loader(DB::Connection * connection_, const std::string & query, size_t max_block_size_);

/// For deferred initialization
Loader() {}
```
17.
Virtual functions.
If the class is not intended for polymorphic use, you do not need to make functions virtual. This also applies to the destructor.
18.
Encodings.
Use UTF-8 everywhere. Use `std::string` and `char *`. Do not use `std::wstring` and `wchar_t`.
19.
Logging.
See the examples everywhere in the code.
Before committing, delete all meaningless and debug logging, and any other types of debug output.
Logging in cycles should be avoided, even on the Trace level.
Logs must be readable at any logging level.
Logging should only be used in application code, for the most part.
Log messages must be written in English.
The log should preferably be understandable for the system administrator.
Do not use profanity in the log.
Use UTF-8 encoding in the log. In rare cases you can use non-ASCII characters in the log.
20.
Input-output.
Don't use `iostreams` in internal cycles that are critical for application performance (and never use `stringstream`).
Use the `DB/IO` library instead.
21.
Date and time.
See the `DateLUT` library.
22.
include.
Always use `#pragma once` instead of include guards.
23.
using.
`using namespace` is not used. You can use `using` with something specific. But make it local inside a class or function.
24.
Do not use `trailing return type` for functions unless necessary.
```cpp
auto f() -> void
```
25.
Declaration and initialization of variables.
```cpp
//right way
std::string s = "Hello";
std::string s{"Hello"};
//wrong way
auto s = std::string{"Hello"};
```
26.
For virtual functions, write `virtual` in the base class, but write `override` instead of `virtual` in descendent classes.
Unused features of C++ {#unused-features-of-c}
1.
Virtual inheritance is not used.
2.
Constructs which have convenient syntactic sugar in modern C++, e.g. (the template arguments below were lost in extraction and are reconstructed here to make the example well-formed):
```cpp
// Traditional way without syntactic sugar
template <typename T, typename = std::enable_if_t<std::is_integral<T>::value, void>> // SFINAE via std::enable_if, usage of ::value
std::pair<int, int> func(const E<T> & e) // explicitly specified return type
{
    if (elements.count(e)) // .count() membership test
    {
        // ...
    }

    elements.erase(
        std::remove_if(
            elements.begin(), elements.end(),
            [&](const auto x){
                return x == 1;
            }),
        elements.end()); // remove-erase idiom

    return std::make_pair(1, 2); // create pair via make_pair()
}

// With syntactic sugar (C++14/17/20)
template <typename T>
requires std::integral<T> // SFINAE via C++20 concept, usage of C++14 template alias
auto func(const E<T> & e) // auto return type (C++14)
{
    if (elements.contains(e)) // C++20 .contains membership test
    {
        // ...
    }

    elements.erase_if(
        elements,
        [&](const auto x){
            return x == 1;
        }); // C++20 std::erase_if

    return {1, 2}; // or: return std::pair(1, 2); // create pair via initialization list or value initialization (C++17)
}
```
Platform {#platform}
1.
We write code for a specific platform.
But other things being equal, cross-platform or portable code is preferred.
2.
Language: C++20 (see the list of available C++20 features).
3.
Compiler: `clang`. At the time of writing (March 2025), the code is compiled using clang version >= 19.
The standard library is used (`libc++`).
4.
OS: Linux Ubuntu, not older than Precise.
5.
Code is written for x86_64 CPU architecture.
The CPU instruction set is the minimum supported set among our servers. Currently, it is SSE 4.2.
6.
Use `-Wall -Wextra -Werror -Weverything` compilation flags with a few exceptions.
7.
Use static linking with all libraries except those that are difficult to connect to statically (see the output of the `ldd` command).
8.
Code is developed and debugged with release settings.
Tools {#tools}
1.
KDevelop is a good IDE.
2.
For debugging, use `gdb`, `valgrind` (`memcheck`), `strace`, `-fsanitize=...`, or `tcmalloc_minimal_debug`.
3.
For profiling, use `Linux Perf`, `valgrind` (`callgrind`), or `strace -cf`.
4.
Sources are in Git.
5.
Builds use `CMake`.
6.
Programs are released using `deb` packages.
7.
Commits to master must not break the build.
Though only selected revisions are considered workable.
8.
Make commits as often as possible, even if the code is only partially ready.
Use branches for this purpose.
If your code in the `master` branch is not buildable yet, exclude it from the build before the `push`. You'll need to finish it or remove it within a few days.
9.
For non-trivial changes, use branches and publish them on the server.
10.
Unused code is removed from the repository.
Libraries {#libraries}
1.
The C++20 standard library is used (experimental extensions are allowed), as well as the `boost` and `Poco` frameworks.
2.
It is not allowed to use libraries from OS packages. It is also not allowed to use pre-installed libraries. All libraries should be placed in the form of source code in the `contrib` directory and built with ClickHouse. See Guidelines for adding new third-party libraries for details.
3.
Preference is always given to libraries that are already in use.
General recommendations {#general-recommendations-1}
1.
Write as little code as possible.
2.
Try the simplest solution.
3.
Don't write code until you know how it's going to work and how the inner loop will function.
4.
In the simplest cases, use `using` instead of classes or structs.
5.
If possible, do not write copy constructors, assignment operators, destructors (other than a virtual one, if the class contains at least one virtual function), move constructors or move assignment operators. In other words, the compiler-generated functions must work correctly. You can use
default
.
6.
Code simplification is encouraged. Reduce the size of your code where possible.
Additional recommendations {#additional-recommendations}
1.
Explicitly specifying `std::` for types from `stddef.h` is not recommended. In other words, we recommend writing `size_t` instead of `std::size_t`, because it's shorter.
It is acceptable to add `std::`.
2.
Explicitly specifying `std::` for functions from the standard C library is not recommended. In other words, write `memcpy` instead of `std::memcpy`.
The reason is that there are similar non-standard functions, such as `memmem`. We do use these functions on occasion. These functions do not exist in `namespace std`.
If you write `std::memcpy` instead of `memcpy` everywhere, then `memmem` without `std::` will look strange.
Nevertheless, you can still use `std::` if you prefer it.
3.
Using functions from C when the same ones are available in the standard C++ library.
This is acceptable if it is more efficient.
For example, use `memcpy` instead of `std::copy` for copying large chunks of memory.
4.
Multiline function arguments.
Any of the following wrapping styles are allowed:
```cpp
function(
    T1 x1,
    T2 x2)
```
```cpp
function(
    size_t left, size_t right,
    const & RangesInDataParts ranges,
    size_t limit)
```
```cpp
function(size_t left, size_t right,
    const & RangesInDataParts ranges,
    size_t limit)
```
```cpp
function(size_t left, size_t right,
        const & RangesInDataParts ranges,
        size_t limit)
```
```cpp
function(
        size_t left,
        size_t right,
        const & RangesInDataParts ranges,
        size_t limit)
```
description: 'Guide for building ClickHouse from source for the RISC-V 64 architecture'
sidebar_label: 'Build on Linux for RISC-V 64'
sidebar_position: 30
slug: /development/build-cross-riscv
title: 'How to Build ClickHouse on Linux for RISC-V 64'
doc_type: 'guide'
How to Build ClickHouse on Linux for RISC-V 64
ClickHouse has experimental support for RISC-V. Not all features can be enabled.
Build ClickHouse {#build-clickhouse}
To cross-compile for RISC-V on a non-RISC-V machine:
```bash
cd ClickHouse
mkdir build-riscv64
CC=clang-19 CXX=clang++-19 cmake . -Bbuild-riscv64 -G Ninja -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-riscv64.cmake -DGLIBC_COMPATIBILITY=OFF -DENABLE_LDAP=OFF -DOPENSSL_NO_ASM=ON -DENABLE_JEMALLOC=ON -DENABLE_PARQUET=OFF -DENABLE_GRPC=OFF -DENABLE_HDFS=OFF -DENABLE_MYSQL=OFF
ninja -C build-riscv64
```
The resulting binary will run only on Linux with the RISC-V 64 CPU architecture.
description: 'Guide for building ClickHouse from source for the s390x architecture'
sidebar_label: 'Build on Linux for s390x (zLinux)'
sidebar_position: 30
slug: /development/build-cross-s390x
title: 'Build on Linux for s390x (zLinux)'
doc_type: 'guide'
Build on Linux for s390x (zLinux)
ClickHouse has experimental support for s390x.
Building ClickHouse for s390x {#building-clickhouse-for-s390x}
s390x has two OpenSSL-related build options:
- By default, OpenSSL is built on s390x as a shared library. This is different from all other platforms, where OpenSSL is built as a static library.
- To build OpenSSL as a static library regardless, pass `-DENABLE_OPENSSL_DYNAMIC=0` to CMake.
These instructions assume that the host machine is x86_64 and has all the tooling required to build natively based on the
build instructions
. It also assumes that the host is Ubuntu 22.04 but the following instructions should also work on Ubuntu 20.04.
In addition to installing the tooling used to build natively, the following additional packages need to be installed:
```bash
apt-get install binutils-s390x-linux-gnu libc6-dev-s390x-cross gcc-s390x-linux-gnu binfmt-support qemu-user-static
```
If you wish to cross-compile Rust code, install the Rust cross-compile target for s390x:
```bash
rustup target add s390x-unknown-linux-gnu
```
The s390x build uses the mold linker. Download it from https://github.com/rui314/mold/releases/download/v2.0.0/mold-2.0.0-x86_64-linux.tar.gz and place it into your `$PATH`.
To build for s390x:
```bash
cmake -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-s390x.cmake ..
ninja
```
Running {#running}
Once built, the binary can be run with, e.g.:
```bash
qemu-s390x-static -L /usr/s390x-linux-gnu ./clickhouse
```
Debugging {#debugging}
Install LLDB:
```bash
apt-get install lldb-15
```
To debug a s390x executable, run clickhouse using QEMU in debug mode:
```bash
qemu-s390x-static -g 31338 -L /usr/s390x-linux-gnu ./clickhouse
```
In another shell, run LLDB and attach. Replace `<Clickhouse Parent Directory>` and `<build directory>` with the values corresponding to your environment.
```bash
lldb-15
(lldb) target create ./clickhouse
Current executable set to '/<Clickhouse Parent Directory>/ClickHouse/<build directory>/programs/clickhouse' (s390x).
(lldb) settings set target.source-map <build directory> /<Clickhouse Parent Directory>/ClickHouse
(lldb) gdb-remote 31338
Process 1 stopped
* thread #1, stop reason = signal SIGTRAP
    frame #0: 0x0000004020e74cd0
->  0x4020e74cd0: lgr    %r2, %r15
    0x4020e74cd4: aghi   %r15, -160
    0x4020e74cd8: xc     0(8,%r15), 0(%r15)
    0x4020e74cde: brasl  %r14, 275429939040
(lldb) b main
Breakpoint 1: 9 locations.
(lldb) c
Process 1 resuming
Process 1 stopped
* thread #1, stop reason = breakpoint 1.1
    frame #0: 0x0000004005cd9fc0 clickhouse`main(argc_=1, argv_=0x0000004020e594a8) at main.cpp:450:17
   447  #if !defined(FUZZING_MODE)
   448  int main(int argc_, char ** argv_)
   449  {
-> 450      inside_main = true;
   451      SCOPE_EXIT({ inside_main = false; });
   452
   453      /// PHDR cache is required for query profiler to work reliably
```
Visual Studio Code integration {#visual-studio-code-integration}
CodeLLDB
extension is required for visual debugging.
Command Variable
extension can help with dynamic launches if using
CMake Variants
.
Make sure to set the backend to your LLVM installation, e.g.
"lldb.library": "/usr/lib/x86_64-linux-gnu/liblldb-15.so"
Make sure to run the clickhouse executable in debug mode prior to launch. (It is also possible to create a
preLaunchTask
that automates this)
Example configurations {#example-configurations}
cmake-variants.yaml {#cmake-variantsyaml}
```yaml
buildType:
default: relwithdebinfo
choices:
debug:
short: Debug
long: Emit debug information
buildType: Debug
release:
short: Release
long: Optimize generated code
buildType: Release
relwithdebinfo:
short: RelWithDebInfo
long: Release with Debug Info
buildType: RelWithDebInfo
tsan:
short: MinSizeRel
long: Minimum Size Release
buildType: MinSizeRel
toolchain:
default: default
description: Select toolchain
choices:
default:
short: x86_64
long: x86_64
s390x:
short: s390x
long: s390x
settings:
CMAKE_TOOLCHAIN_FILE: cmake/linux/toolchain-s390x.cmake
```
launch.json {#launchjson}
```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "lldb",
            "request": "custom",
            "name": "(lldb) Launch s390x with qemu",
            "targetCreateCommands": ["target create ${command:cmake.launchTargetPath}"],
            "processCreateCommands": ["gdb-remote 2159"],
            "preLaunchTask": "Run ClickHouse"
        }
    ]
}
```
settings.json {#settingsjson}
This would also put different builds under different subfolders of the
build
folder.
```json
{
    "cmake.buildDirectory": "${workspaceFolder}/build/${buildKitVendor}-${buildKitVersion}-${variant:toolchain}-${variant:buildType}",
    "lldb.library": "/usr/lib/x86_64-linux-gnu/liblldb-15.so"
}
```
run-debug.sh {#run-debugsh}
```sh
#! /bin/sh
echo 'Starting debugger session'
cd $1
qemu-s390x-static -g 2159 -L /usr/s390x-linux-gnu $2 $3 $4
```
tasks.json {#tasksjson}
Defines a task to run the compiled executable in
server
mode under a
tmp
folder next to the binaries, with configuration from under
programs/server/config.xml
.
```json
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Run ClickHouse",
            "type": "shell",
            "isBackground": true,
            "command": "${workspaceFolder}/.vscode/run-debug.sh",
            "args": [
                "${command:cmake.launchTargetDirectory}/tmp",
                "${command:cmake.launchTargetPath}",
                "server",
                "--config-file=${workspaceFolder}/programs/server/config.xml"
            ],
            "problemMatcher": [
                {
                    "pattern": [
                        {
                            "regexp": ".",
                            "file": 1,
                            "location": 2,
                            "message": 3
                        }
                    ],
                    "background": {
                        "activeOnStart": true,
                        "beginsPattern": "^Starting debugger session",
                        "endsPattern": ".*"
                    }
                }
            ]
        }
    ]
}
```

description: 'Step-by-step guide for building ClickHouse from source on Linux systems'
sidebar_label: 'Build on Linux'
sidebar_position: 10
slug: /development/build
title: 'How to Build ClickHouse on Linux'
doc_type: 'guide'
How to Build ClickHouse on Linux
:::info You don't have to build ClickHouse yourself!
You can install pre-built ClickHouse as described in
Quick Start
.
:::
ClickHouse can be built on the following platforms:
- x86_64
- AArch64
- PowerPC 64 LE (experimental)
- s390/x (experimental)
- RISC-V 64 (experimental)
Assumptions {#assumptions}
The following tutorial is based on Ubuntu Linux but it should also work on any other Linux distribution with appropriate changes.
The minimum recommended Ubuntu version for development is 24.04 LTS.
The tutorial assumes that you have the ClickHouse repository and all submodules locally checked out.
Install prerequisites {#install-prerequisites}
First, see the generic
prerequisites documentation
.
ClickHouse uses CMake and Ninja for building.
You can optionally install ccache to let the build reuse already compiled object files.
```bash
sudo apt-get update
sudo apt-get install build-essential git cmake ccache python3 ninja-build nasm yasm gawk lsb-release wget software-properties-common gnupg
```
Install the Clang compiler {#install-the-clang-compiler}
To install Clang on Ubuntu/Debian, use LLVM's automatic installation script from
here
.
```bash
sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"
```
For other Linux distributions, check if you can install any of LLVM's
prebuilt packages
.
As of March 2025, Clang 19 or higher is required.
GCC or other compilers are not supported.
Install the Rust compiler (optional) {#install-the-rust-compiler-optional}
:::note
Rust is an optional dependency of ClickHouse.
If Rust is not installed, some features of ClickHouse will be omitted from compilation.
:::
First, follow the steps in the official
Rust documentation
to install
rustup
.
As with C++ dependencies, ClickHouse uses vendoring to control exactly what's installed and avoid depending on third party services (like the
crates.io
registry).
Although in release mode any modern rustup toolchain version should work with these dependencies, if you plan to enable sanitizers you must use a version that matches the exact same
std
as the one used in CI (for which we vendor the crates):
```bash
rustup toolchain install nightly-2025-07-07
rustup default nightly-2025-07-07
rustup component add rust-src
```
Build ClickHouse {#build-clickhouse}
We recommend creating a separate directory
build
inside
ClickHouse
which contains all build artifacts:
```sh
mkdir build
cd build
```
You can have several different directories (e.g.
build_release
,
build_debug
, etc.) for different build types.
Optional: If you have multiple compiler versions installed, you can specify the exact compiler to use.
```sh
export CC=clang-19
export CXX=clang++-19
```
For development purposes, debug builds are recommended.
Compared to release builds, they have a lower compiler optimization level (
-O
) which provides a better debugging experience.
Also, internal exceptions of type
LOGICAL_ERROR
crash immediately instead of failing gracefully.
```sh
cmake -D CMAKE_BUILD_TYPE=Debug ..
```
:::note
If you wish to use a debugger such as gdb, add
-D DEBUG_O_LEVEL="0"
to the above command to remove all compiler optimizations, which can interfere with gdb's ability to view/access variables.
:::
Run ninja to build:
```sh
ninja clickhouse
```
If you want to build all the binaries (utilities and tests), run ninja without parameters:

```sh
ninja
```
You can control the number of parallel build jobs with the -j parameter:

```sh
ninja -j 1 clickhouse-server clickhouse-client
```
:::tip
CMake provides shortcuts for the above commands:

```sh
cmake -S . -B build # configure build, run from repository top-level directory
cmake --build build # compile
```
:::
Running the ClickHouse Executable {#running-the-clickhouse-executable}
After the build has completed successfully, you can find the executable in
ClickHouse/<build_dir>/programs/
:
The ClickHouse server tries to find a configuration file
config.xml
in the current directory.
You can alternatively specify a configuration file on the command line via
-C
.
To connect to the ClickHouse server with
clickhouse-client
, open another terminal, navigate to
ClickHouse/build/programs/
and run
./clickhouse client
.
If you get
Connection refused
message on macOS or FreeBSD, try specifying host address 127.0.0.1:
```bash
clickhouse client --host 127.0.0.1
```
Advanced options {#advanced-options}
Minimal Build {#minimal-build}
If you don't need functionality provided by third-party libraries, you can speed up the build further:

```sh
cmake -DENABLE_LIBRARIES=OFF
```
In case of problems, you are on your own ...
Rust requires an internet connection. To disable Rust support:
```sh
cmake -DENABLE_RUST=OFF
```
Running the ClickHouse Executable {#running-the-clickhouse-executable-1}
You can replace the production version of ClickHouse binary installed in your system with the compiled ClickHouse binary.
To do that, install ClickHouse on your machine following the instructions from the official website.
Next, run:
```bash
sudo service clickhouse-server stop
sudo cp ClickHouse/build/programs/clickhouse /usr/bin/
sudo service clickhouse-server start
```
Note that
clickhouse-client
,
clickhouse-server
and others are symlinks to the commonly shared
clickhouse
binary.
You can also run your custom-built ClickHouse binary with the config file from the ClickHouse package installed on your system:
```bash
sudo service clickhouse-server stop
sudo -u clickhouse ClickHouse/build/programs/clickhouse server --config-file /etc/clickhouse-server/config.xml
```
Building on Any Linux {#building-on-any-linux}
Install prerequisites on OpenSUSE Tumbleweed:
```bash
sudo zypper install git cmake ninja clang-c++ python lld nasm yasm gawk
git clone --recursive https://github.com/ClickHouse/ClickHouse.git
mkdir build
cmake -S . -B build
cmake --build build
```
Install prerequisites on Fedora Rawhide:
```bash
sudo yum update
sudo yum --nogpg install git cmake make clang python3 ccache lld nasm yasm gawk
git clone --recursive https://github.com/ClickHouse/ClickHouse.git
mkdir build
cmake -S . -B build
cmake --build build
```
Building in docker {#building-in-docker}
You can run any build locally in an environment similar to CI using:
```bash
python -m ci.praktika "BUILD_JOB_NAME"
```

where BUILD_JOB_NAME is the job name as shown in the CI report, e.g., "Build (arm_release)" or "Build (amd_debug)".
This command pulls the appropriate Docker image
clickhouse/binary-builder
with all required dependencies,
and runs the build script inside it:
./ci/jobs/build_clickhouse.py
The build output will be placed in
./ci/tmp/
.
It works on both AMD and ARM architectures and requires no additional dependencies other than Docker.

description: 'Guide for cross-compiling ClickHouse from Linux for macOS systems'
sidebar_label: 'Build on Linux for macOS'
sidebar_position: 20
slug: /development/build-cross-osx
title: 'Build on Linux for macOS'
doc_type: 'guide'
How to Build ClickHouse on Linux for macOS
This is for the case when you have a Linux machine and want to use it to build a
clickhouse
binary that will run on OS X.
The main use case is continuous integration checks which run on Linux machines.
If you want to build ClickHouse directly on macOS, proceed with the
native build instructions
.
The cross-build for macOS is based on the
Build instructions
, follow them first.
The following sections provide a walk-through for building ClickHouse for
x86_64
macOS.
If you're targeting ARM architecture, simply substitute all occurrences of
x86_64
with
aarch64
.
For example, replace
x86_64-apple-darwin
with
aarch64-apple-darwin
throughout the steps.
Install cross-compilation toolset {#install-cross-compilation-toolset}
Let's remember the path where we install
cctools
as
${CCTOOLS}
```bash
mkdir ~/cctools
export CCTOOLS=$(cd ~/cctools && pwd)
cd ${CCTOOLS}
git clone https://github.com/tpoechtrager/apple-libtapi.git
cd apple-libtapi
git checkout 15dfc2a8c9a2a89d06ff227560a69f5265b692f9
INSTALLPREFIX=${CCTOOLS} ./build.sh
./install.sh
cd ..
git clone https://github.com/tpoechtrager/cctools-port.git
cd cctools-port/cctools
git checkout 2a3e1c2a6ff54a30f898b70cfb9ba1692a55fad7
./configure --prefix=$(readlink -f ${CCTOOLS}) --with-libtapi=$(readlink -f ${CCTOOLS}) --target=x86_64-apple-darwin
make install
```
Also, we need to download macOS X SDK into the working tree.
```bash
cd ClickHouse/cmake/toolchain/darwin-x86_64
curl -L 'https://github.com/phracker/MacOSX-SDKs/releases/download/11.3/MacOSX11.0.sdk.tar.xz' | tar xJ --strip-components=1
```
Build ClickHouse {#build-clickhouse}
```bash
cd ClickHouse
mkdir build-darwin
cd build-darwin
CC=clang-19 CXX=clang++-19 cmake -DCMAKE_AR:FILEPATH=${CCTOOLS}/bin/x86_64-apple-darwin-ar -DCMAKE_INSTALL_NAME_TOOL=${CCTOOLS}/bin/x86_64-apple-darwin-install_name_tool -DCMAKE_RANLIB:FILEPATH=${CCTOOLS}/bin/x86_64-apple-darwin-ranlib -DLINKER_NAME=${CCTOOLS}/bin/x86_64-apple-darwin-ld -DCMAKE_TOOLCHAIN_FILE=cmake/darwin/toolchain-x86_64.cmake ..
ninja
```
The resulting binary will have a Mach-O executable format and can't be run on Linux.

description: 'Overview of the ClickHouse continuous integration system'
sidebar_label: 'Continuous Integration (CI)'
sidebar_position: 55
slug: /development/continuous-integration
title: 'Continuous Integration (CI)'
doc_type: 'reference'
Continuous Integration (CI)
When you submit a pull request, some automated checks are run for your code by the ClickHouse
continuous integration (CI) system
.
This happens after a repository maintainer (someone from ClickHouse team) has screened your code and added the
can be tested
label to your pull request.
The results of the checks are listed on the GitHub pull request page as described in the
GitHub checks documentation
.
If a check is failing, you might be required to fix it.
This page gives an overview of checks you may encounter, and what you can do to fix them.
If it looks like the check failure is not related to your changes, it may be some transient failure or an infrastructure problem.
Push an empty commit to the pull request to restart the CI checks:
```shell
git reset
git commit --allow-empty
git push
```
If you are not sure what to do, ask a maintainer for help.
Merge with master {#merge-with-master}
Verifies that the PR can be merged to master.
If not, it will fail with a message
Cannot fetch mergecommit
.
To fix this check, resolve the conflict as described in the
GitHub documentation
, or merge the
master
branch to your pull request branch using git.
Docs check {#docs-check}
Tries to build the ClickHouse documentation website.
It can fail if you changed something in the documentation.
Most probable reason is that some cross-link in the documentation is wrong.
Go to the check report and look for
ERROR
and
WARNING
messages.
Description check {#description-check}
Check that the description of your pull request conforms to the template
PULL_REQUEST_TEMPLATE.md
.
You have to specify a changelog category for your change (e.g., Bug Fix), and write a user-readable message describing the change for
CHANGELOG.md.
Docker image {#docker-image}
Builds the ClickHouse server and keeper Docker images to verify that they build correctly.
Official docker library tests {#official-docker-library-tests}
Runs the tests from the
official Docker library
to verify that the
clickhouse/clickhouse-server
Docker image works correctly.
To add new tests, create a directory
ci/jobs/scripts/docker_server/tests/$test_name
and the script
run.sh
there.
Additional details about the tests can be found in the
CI jobs scripts documentation
.
Marker check {#marker-check}
This check means that the CI system started to process the pull request.
When it has 'pending' status, it means that not all checks have been started yet.
After all checks have been started, it changes status to 'success'.
Style check {#style-check}
Performs various style checks on the code base.
Basic checks in the Style Check job:
cpp {#cpp}
Performs simple regex-based code style checks using the
ci/jobs/scripts/check_style/check_cpp.sh
script (which can also be run locally).
If it fails, fix the style issues according to the
code style guide
.
codespell, aspell {#codespell}
Check for grammatical mistakes and typos.
mypy {#mypy}
Performs static type checking for Python code.
Running the style check job locally {#running-style-check-locally}
The entire
Style Check
job can be run locally in a Docker container with:
```sh
python -m ci.praktika run "Style check"
```
To run a specific check (e.g.,
cpp
check):
```sh
python -m ci.praktika run "Style check" --test cpp
```
These commands pull the
clickhouse/style-test
Docker image and run the job in a containerized environment.
No dependencies other than Python 3 and Docker are required.
Fast test {#fast-test}
Normally this is the first check that is run for a PR.
It builds ClickHouse and runs most of
stateless functional tests
, omitting some.
If it fails, further checks are not started until it is fixed.
Look at the report to see which tests fail, then reproduce the failure locally as described
here
.
Running fast test locally: {#running-fast-test-locally}
```sh
python -m ci.praktika run "Fast test" [--test some_test_name]
```
These commands pull the
clickhouse/fast-test
Docker image and run the job in a containerized environment.
No dependencies other than Python 3 and Docker are required.
Build check {#build-check}
Builds ClickHouse in various configurations for use in further steps.
Running Builds Locally {#running-builds-locally}
The build can be run locally in a CI-like environment using:
```bash
python -m ci.praktika run "<BUILD_JOB_NAME>"
```
No dependencies other than Python 3 and Docker are required.
Available Build Jobs {#available-build-jobs}
The build job names are exactly as they appear in the CI Report:
AMD64 Builds:
- Build (amd_debug) - Debug build with symbols
- Build (amd_release) - Optimized release build
- Build (amd_asan) - Address Sanitizer build
- Build (amd_tsan) - Thread Sanitizer build
- Build (amd_msan) - Memory Sanitizer build
- Build (amd_ubsan) - Undefined Behavior Sanitizer build
- Build (amd_binary) - Quick release build without Thin LTO
- Build (amd_compat) - Compatibility build for older systems
- Build (amd_musl) - Build with musl libc
- Build (amd_darwin) - macOS build
- Build (amd_freebsd) - FreeBSD build

ARM64 Builds:
- Build (arm_release) - ARM64 optimized release build
- Build (arm_asan) - ARM64 Address Sanitizer build
- Build (arm_coverage) - ARM64 build with coverage instrumentation
- Build (arm_binary) - ARM64 quick release build without Thin LTO
- Build (arm_darwin) - macOS ARM64 build
- Build (arm_v80compat) - ARMv8.0 compatibility build
Other Architectures:
- Build (ppc64le) - PowerPC 64-bit Little Endian
- Build (riscv64) - RISC-V 64-bit
- Build (s390x) - IBM System/390 64-bit
- Build (loongarch64) - LoongArch 64-bit
If the job succeeds, build results will be available in the
<repo_root>/ci/tmp/build
directory.
Note:
For builds not in the "Other Architectures" category (which use cross-compilation), your local machine architecture must match the build type to produce the build as requested by
BUILD_JOB_NAME
.
Example {#example-run-local}
To run a local debug build:
```bash
python -m ci.praktika run "Build (amd_debug)"
```
If the above approach does not work for you, use the cmake options from the build log and follow the
general build process
.
Functional stateless tests {#functional-stateless-tests}
Runs
stateless functional tests
for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc.
Look at the report to see which tests fail, then reproduce the failure locally as described
here
.
Note that you have to use the correct build configuration to reproduce -- a test might fail under AddressSanitizer but pass in Debug.
Download the binary from
CI build checks page
, or build it locally.
Integration tests {#integration-tests}
Runs
integration tests
.
Bugfix validate check {#bugfix-validate-check}
Checks that the pull request either adds a new test (functional or integration), or changes existing tests in a way that fails with the binary built on the master branch.
This check is triggered when the pull request has the "pr-bugfix" label.
Stress test {#stress-test}
Runs stateless functional tests concurrently from several clients to detect concurrency-related errors. If it fails:
* Fix all other test failures first;
* Look at the report to find the server logs and check them for possible causes
of error.
Compatibility check {#compatibility-check}
Checks that
clickhouse
binary runs on distributions with old libc versions.
If it fails, ask a maintainer for help.
AST fuzzer {#ast-fuzzer}
Runs randomly generated queries to catch program errors.
If it fails, ask a maintainer for help.
Performance tests {#performance-tests}
Measure changes in query performance.
This is the longest check that takes just below 6 hours to run.
The performance test report is described in detail
here
.

description: 'How to build Clickhouse and run benchmark with DEFLATE_QPL Codec'
sidebar_label: 'Building and Benchmarking DEFLATE_QPL'
sidebar_position: 73
slug: /development/building_and_benchmarking_deflate_qpl
title: 'Build Clickhouse with DEFLATE_QPL'
doc_type: 'guide'
Build Clickhouse with DEFLATE_QPL
Make sure your host machine meets the QPL required
prerequisites
deflate_qpl is enabled by default during the cmake build. In case you accidentally changed it, please double-check the build flag: ENABLE_QPL=1
For generic requirements, please refer to Clickhouse generic
build instructions
Run Benchmark with DEFLATE_QPL
Files list {#files-list}
The folder
benchmark_sample
under
qpl-cmake
gives an example of running the benchmark with Python scripts:
client_scripts
contains Python scripts for running a typical benchmark, for example:
-
client_stressing_test.py
: The python script for query stress test with [1~4] server instances.
-
queries_ssb.sql
: The file lists all queries for
Star Schema Benchmark
-
allin1_ssb.sh
: This shell script executes benchmark workflow all in one automatically.
database_files
stores the database files for the lz4/deflate/zstd codecs.
Run benchmark automatically for Star Schema: {#run-benchmark-automatically-for-star-schema}
```bash
$ cd ./benchmark_sample/client_scripts
$ sh run_ssb.sh
```
After it completes, please check all the results in the folder ./output/.
In case you run into failures, please run the benchmark manually as described in the sections below.
Definition {#definition}
[CLICKHOUSE_EXE] means the path of the clickhouse executable program.
Environment {#environment}
CPU: Sapphire Rapids
OS Requirements refer to
System Requirements for QPL
IAA Setup refer to
Accelerator Configuration
Install python modules:
```bash
pip3 install clickhouse_driver numpy
```
[Self-check for IAA]
```bash
$ accel-config list | grep -P 'iax|state'
```
Expected output like this:
```bash
"dev":"iax1",
"state":"enabled",
"state":"enabled",
```
If there is no output, it means IAA is not ready to work. Please check the IAA setup again.
Generate raw data {#generate-raw-data}
```bash
$ cd ./benchmark_sample
$ mkdir rawdata_dir && cd rawdata_dir
```
Use
dbgen
to generate 100 million rows of data with the parameter:
-s 20
The files like
*.tbl
are expected to be output under
./benchmark_sample/rawdata_dir/ssb-dbgen
:
Database setup {#database-setup}
Set up database with LZ4 codec
```bash
$ cd ./database_dir/lz4
$ [CLICKHOUSE_EXE] server -C config_lz4.xml >&/dev/null&
$ [CLICKHOUSE_EXE] client
```
Here you should see the message
Connected to ClickHouse server
on the console, which means the client successfully set up a connection with the server.
Complete below three steps mentioned in
Star Schema Benchmark
- Creating tables in ClickHouse
- Inserting data. Here should use
./benchmark_sample/rawdata_dir/ssb-dbgen/*.tbl
as input data.
- Converting "star schema" to de-normalized "flat schema" | {"source_file": "building_and_benchmarking_deflate_qpl.md"} | [
-0.07334959506988525,
0.01235208660364151,
-0.0007732462836429477,
0.00967178214341402,
-0.025814665481448174,
-0.05055709183216095,
-0.039633460342884064,
0.06659016013145447,
-0.11133880168199539,
-0.04070073738694191,
-0.01898861862719059,
-0.05784216895699501,
0.0023899979423731565,
0.... |
31481a46-91c8-4ca1-8c84-2242d9af078e | - Creating tables in ClickHouse
- Inserting data. Here should use
./benchmark_sample/rawdata_dir/ssb-dbgen/*.tbl
as input data.
- Converting "star schema" to de-normalized "flat schema"
Set up database with IAA Deflate codec
```bash
$ cd ./database_dir/deflate
$ [CLICKHOUSE_EXE] server -C config_deflate.xml >&/dev/null&
$ [CLICKHOUSE_EXE] client
```
Complete the same three steps as for lz4 above
Set up database with ZSTD codec
```bash
$ cd ./database_dir/zstd
$ [CLICKHOUSE_EXE] server -C config_zstd.xml >&/dev/null&
$ [CLICKHOUSE_EXE] client
```
Complete the same three steps as for lz4 above
[self-check]
For each codec (lz4/zstd/deflate), please execute the query below to make sure the databases were created successfully:
```sql
SELECT count() FROM lineorder_flat
```
You are expected to see the output below:

```sql
ββcount()ββββ
β 119994608 β
βββββββββββββ
```
[Self-check for IAA Deflate codec]
The first time you execute an insertion or query from the client, the clickhouse server console is expected to print this log:
```text
Hardware-assisted DeflateQpl codec is ready!
```
If you never see this, but instead see the log below:

```text
Initialization of hardware-assisted DeflateQpl codec failed
```

That means the IAA devices are not ready, and you need to check the IAA setup again.
Benchmark with single instance {#benchmark-with-single-instance}
Before starting the benchmark, please disable C6 and set the CPU frequency governor to performance:

```bash
$ cpupower idle-set -d 3
$ cpupower frequency-set -g performance
```
To eliminate the impact of memory-bound access across sockets, we use
numactl
to bind the server to one socket and the client to another socket.
Single instance means a single server connected to a single client.
Now run benchmark for LZ4/Deflate/ZSTD respectively:
LZ4:
```bash
$ cd ./database_dir/lz4
$ numactl -m 0 -N 0 [CLICKHOUSE_EXE] server -C config_lz4.xml >&/dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 1 > lz4.log
```
IAA deflate:
```bash
$ cd ./database_dir/deflate
$ numactl -m 0 -N 0 [CLICKHOUSE_EXE] server -C config_deflate.xml >&/dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 1 > deflate.log
```
ZSTD:
```bash
$ cd ./database_dir/zstd
$ numactl -m 0 -N 0 [CLICKHOUSE_EXE] server -C config_zstd.xml >&/dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 1 > zstd.log
```
Now three logs should be output as expected:
```text
lz4.log
deflate.log
zstd.log
```
How to check performance metrics:
We focus on QPS. Please search for the keyword QPS_Final and collect the statistics.
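Collecting those lines can be scripted, for example as below. The exact format of the QPS_Final line inside the logs is an assumption here; only the keyword itself comes from the text above:

```bash
# Pull the first QPS_Final line out of a benchmark log.
qps_of() { grep -m1 'QPS_Final' "$1"; }

# Summarize the three logs produced above, skipping any that are missing.
for log in lz4.log deflate.log zstd.log; do
    if [ -f "$log" ]; then
        printf '%s: %s\n' "$log" "$(qps_of "$log")"
    fi
done
```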
Benchmark with multi-instances {#benchmark-with-multi-instances}
To reduce the impact of memory-bound access with too many threads, we recommend running the benchmark with multiple instances.
Multi-instance means multiple (2 or 4) servers connected with their respective clients.
The cores of one socket need to be divided equally and assigned to the servers respectively.
For multiple instances, you must create a new folder for each codec and insert the dataset by following steps similar to those for a single instance.
There are 2 differences:
- For the client side, you need to launch clickhouse with the assigned port during table creation and data insertion.
- For the server side, you need to launch clickhouse with the specific xml config file in which the port has been assigned. All customized xml config files for multi-instances have been provided under ./server_config.
Here we assume there are 60 cores per socket and take 2 instances as an example.
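The core split used in the commands below can be sketched as a small helper. The assumptions here are taken from the example: 2 sockets with 60 physical cores each, and hyperthread siblings offset by 120; for other topologies, adjust the constants:

```bash
# Print the numactl -C core range for instance i out of n on socket 0,
# assuming 60 physical cores per socket and hyperthread siblings at +120.
core_range() {
    i=$1; n=$2; per_socket=60; ht_offset=120
    chunk=$(( per_socket / n ))
    lo=$(( i * chunk )); hi=$(( lo + chunk - 1 ))
    echo "${lo}-${hi},$(( lo + ht_offset ))-$(( hi + ht_offset ))"
}

core_range 0 2   # first instance:  0-29,120-149
core_range 1 2   # second instance: 30-59,150-179
```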
Launch server for first instance
LZ4:
```bash
$ cd ./database_dir/lz4
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_lz4.xml >&/dev/null&
```
ZSTD:
```bash
$ cd ./database_dir/zstd
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_zstd.xml >&/dev/null&
```
IAA Deflate:
```bash
$ cd ./database_dir/deflate
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_deflate.xml >&/dev/null&
```
[Launch server for second instance]
LZ4:
```bash
$ cd ./database_dir && mkdir lz4_s2 && cd lz4_s2
$ cp ../../server_config/config_lz4_s2.xml ./
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_lz4_s2.xml >&/dev/null&
```
ZSTD:
bash
$ cd ./database_dir && mkdir zstd_s2 && cd zstd_s2
$ cp ../../server_config/config_zstd_s2.xml ./
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_zstd_s2.xml >&/dev/null&
IAA Deflate:
bash
$ cd ./database_dir && mkdir deflate_s2 && cd deflate_s2
$ cp ../../server_config/config_deflate_s2.xml ./
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_deflate_s2.xml >&/dev/null&
Creating tables and inserting data for the second instance

Creating tables:

```bash
$ [CLICKHOUSE_EXE] client -m --port=9001
```

Inserting data:

```bash
$ [CLICKHOUSE_EXE] client --query "INSERT INTO [TBL_FILE_NAME] FORMAT CSV" < [TBL_FILE_NAME].tbl --port=9001
```
`[TBL_FILE_NAME]` represents the name of a file matching the pattern `*.tbl` under `./benchmark_sample/rawdata_dir/ssb-dbgen`.
`--port=9001` stands for the port assigned to the server instance, which is also defined in config_lz4_s2.xml/config_zstd_s2.xml/config_deflate_s2.xml. For even more instances, replace it with 9002/9003, which stand for the s3/s4 instances respectively. If you don't assign it, the port defaults to 9000, which is already used by the first instance.
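As a small illustration of the port scheme just described (first instance on the default 9000, s2/s3/s4 on 9001/9002/9003):

```shell
# Illustration of the instance-to-port mapping described above:
# instance 1 uses the default port 9000, s2/s3/s4 use 9001/9002/9003
for i in 1 2 3 4; do
  echo "instance $i -> port $((8999 + i))"
done
```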
Benchmarking with 2 instances
LZ4:

```bash
$ cd ./database_dir/lz4
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_lz4.xml >&/dev/null&
$ cd ./database_dir/lz4_s2
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_lz4_s2.xml >&/dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 2 > lz4_2insts.log
```
ZSTD: | {"source_file": "building_and_benchmarking_deflate_qpl.md"} | [
0.017215760424733162,
-0.11502241343259811,
-0.1126081794500351,
-0.016037093475461006,
-0.05893741548061371,
-0.04897760599851608,
-0.03336910158395767,
0.03632263466715813,
-0.03674113005399704,
0.023372692987322807,
0.03206954523921013,
-0.06819319725036621,
0.07059145718812943,
-0.0607... |
29ebdbac-e943-4fa3-92b1-419d5032f7ed | ZSTD:
```bash
$ cd ./database_dir/zstd
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_zstd.xml >&/dev/null&
$ cd ./database_dir/zstd_s2
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_zstd_s2.xml >&/dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 2 > zstd_2insts.log
```

IAA Deflate:

```bash
$ cd ./database_dir/deflate
$ numactl -C 0-29,120-149 [CLICKHOUSE_EXE] server -C config_deflate.xml >&/dev/null&
$ cd ./database_dir/deflate_s2
$ numactl -C 30-59,150-179 [CLICKHOUSE_EXE] server -C config_deflate_s2.xml >&/dev/null&
$ cd ./client_scripts
$ numactl -m 1 -N 1 python3 client_stressing_test.py queries_ssb.sql 2 > deflate_2insts.log
```
Here the last argument `2` of client_stressing_test.py stands for the number of instances. For more instances, replace it with 3 or 4. This script supports up to 4 instances.
Now three logs should be output as expected:

```text
lz4_2insts.log
deflate_2insts.log
zstd_2insts.log
```
How to check performance metrics:
We focus on QPS. Please search for the keyword `QPS_Final` and collect the statistics.
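A minimal shell sketch for collecting these lines (the log file names are the ones produced in the steps above; adjust if yours differ):

```shell
# Collect the QPS_Final lines from each benchmark log, skipping missing files
for log in lz4_2insts.log zstd_2insts.log deflate_2insts.log; do
  if [ -f "$log" ]; then
    grep -H "QPS_Final" "$log"
  fi
done
```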
The benchmark setup for 4 instances is similar to the 2-instance setup above.
We recommend using the 2-instance benchmark data as the final report for review.
Tips {#tips}
Each time before launching a new clickhouse server, please make sure no background clickhouse process is running; check and kill any old one:

```bash
$ ps -aux | grep clickhouse
$ kill -9 [PID]
```
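A non-destructive way to check for leftovers first — a sketch assuming procps `pgrep` is available (`-f` matches against the full command line, `-a` also prints it):

```shell
# List any clickhouse server processes that are still alive;
# prints a notice and exits cleanly when none are found
pgrep -af 'clickhouse.*server' || echo "no clickhouse server running"
```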
By comparing the query list in ./client_scripts/queries_ssb.sql with the official
Star Schema Benchmark
, you will find that 3 queries are not included: Q1.2/Q1.3/Q3.4. This is because the CPU utilization is very low (< 10%) for these queries, which means they cannot demonstrate performance differences. | {"source_file": "building_and_benchmarking_deflate_qpl.md"} | [
-0.041614796966314316,
0.02240162342786789,
-0.10020294785499573,
0.037973515689373016,
0.018672030419111252,
-0.1305612474679947,
0.08127935230731964,
0.10637998580932617,
-0.03756805881857872,
0.016489917412400246,
0.029892979189753532,
-0.04493353143334389,
0.11053086072206497,
0.010595... |
e9e2ed1c-1920-4fc8-a731-4c6e0e0ba5b2 | description: 'Guide for building ClickHouse from source for the AARCH64 architecture'
sidebar_label: 'Build on Linux for AARCH64'
sidebar_position: 25
slug: /development/build-cross-arm
title: 'How to Build ClickHouse on Linux for AARCH64'
doc_type: 'guide'
How to Build ClickHouse on Linux for AARCH64
No special steps are required to build ClickHouse for AArch64 on an AArch64 machine.
To cross compile ClickHouse for AArch64 on an x86 Linux machine, pass the following flag to `cmake`:
-DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-aarch64.cmake | {"source_file": "build-cross-arm.md"} | [
0.004566616844385862,
0.005092530976980925,
-0.021628201007843018,
-0.03310106694698334,
-0.05781039223074913,
-0.023269688710570335,
-0.07175125181674957,
-0.02795659750699997,
-0.0946747288107872,
0.00528256269171834,
0.05408915877342224,
-0.09571950137615204,
-0.03488896042108536,
-0.05... |
13918e38-345f-4935-9834-d48e85ca23a2 | description: 'Guide for building ClickHouse from source for the LoongArch64 architecture'
sidebar_label: 'Build on Linux for LoongArch64'
sidebar_position: 35
slug: /development/build-cross-loongarch
title: 'Build on Linux for LoongArch64'
doc_type: 'guide'
Build on Linux for LoongArch64
ClickHouse has experimental support for LoongArch64.
Build ClickHouse {#build-clickhouse}
The LLVM version required for building must be greater than or equal to 19.1.0.
```bash
cd ClickHouse
mkdir build-loongarch64
CC=clang-19 CXX=clang++-19 cmake . -Bbuild-loongarch64 -G Ninja -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-loongarch64.cmake
ninja -C build-loongarch64
```
The resulting binary will run only on Linux with the LoongArch64 CPU architecture. | {"source_file": "build-cross-loongarch.md"} | [
0.013106372207403183,
-0.03229231387376785,
-0.01124595571309328,
-0.08181121200323105,
0.006000048480927944,
-0.06404227018356323,
-0.11345386505126953,
-0.012763690203428268,
-0.08391419798135757,
-0.06868387758731842,
0.05190100148320198,
-0.09283075481653214,
-0.027594102546572685,
-0.... |
14c0de4b-94f0-442f-a244-1fd5e0fc3478 | slug: /concepts/faq
title: 'FAQ'
description: 'Landing page for FAQ'
pagination_prev: null
pagination_next: null
doc_type: 'landing-page'
keywords: ['FAQ', 'questions', 'answers']
| Page | Description |
|------|-------------|
| General Questions about ClickHouse | General questions we get about ClickHouse. |
| Why not use something like MapReduce? | Explainer on why MapReduce implementations are not appropriate for the OLAP scenario. |
| What does "не тормозит" mean | Explainer on what "не тормозит" means, which you may have seen on ClickHouse t-shirts. |
| What is OLAP | Explainer on what Online Analytical Processing is. |
| Who is using ClickHouse | Learn about who is using ClickHouse. | | {"source_file": "index.md"} | [
0.02274765446782112,
0.030920108780264854,
0.013499456457793713,
-0.02442934550344944,
0.007373974658548832,
0.005816450342535973,
0.00468234671279788,
0.02529699169099331,
-0.04263091832399368,
0.03299309313297272,
0.015591343864798546,
-0.021876946091651917,
0.0411921851336956,
-0.038963... |
cd63e4d6-c869-45c9-be03-fd9e499bfb1b | slug: /starter-guides
title: 'Starter Guides'
description: 'Landing page for starter guides'
pagination_prev: null
pagination_next: null
doc_type: 'landing-page'
keywords: ['beginner', 'tutorial', 'create table', 'insert data', 'select data', 'update data', 'delete data']
In this section of the docs you'll find starter guides for common SQL queries: `CREATE`, `INSERT`, `SELECT`, and the mutations `UPDATE` and `DELETE`.
| Page | Description |
|------|-------------|
| Create Tables | Starter guide on how to create a table. |
| Insert Data | Starter guide on how to insert data into a table. |
| Select Data | Starter guide on how to select data from a table. |
| Update and Delete Data | Starter guide on mutations - updating and deleting data in ClickHouse. | | {"source_file": "index.md"} | [
0.0037561417557299137,
-0.012612177059054375,
-0.02614196203649044,
0.01229914091527462,
-0.06891751289367676,
0.0010825973004102707,
-0.029469627887010574,
-0.0046620056964457035,
-0.009296274743974209,
0.058286070823669434,
0.08535141497850418,
0.021214699372649193,
0.07841794937849045,
... |
206eae59-800f-491d-aa92-7248cb74af61 | slug: /troubleshooting
sidebar_label: 'Troubleshooting'
doc_type: 'guide'
keywords: [
'clickhouse troubleshooting',
'clickhouse errors',
'database troubleshooting',
'clickhouse connection issues',
'memory limit exceeded',
'clickhouse performance problems',
'database error messages',
'clickhouse configuration issues',
'connection refused error',
'clickhouse debugging',
'database connection problems',
'troubleshooting guide'
]
title: 'Troubleshooting Common Issues'
description: 'Find solutions to the most common ClickHouse problems including slow queries, memory errors, connection issues, and configuration problems.'
Troubleshooting common issues {#troubleshooting-common-issues}
Having problems with ClickHouse? Find the solutions to common issues here.
Performance and errors {#performance-and-errors}
Queries running slowly, timeouts, or getting specific error messages like "Memory limit exceeded" or "Connection refused."
Show performance and error solutions
### Query performance {#query-performance}
- [Find which queries are using the most resources](/knowledgebase/find-expensive-queries)
- [Complete query optimization guide](/docs/optimize/query-optimization)
- [Optimize JOIN operations](/docs/best-practices/minimize-optimize-joins)
- [Run diagnostic queries to find bottlenecks](/docs/knowledgebase/useful-queries-for-troubleshooting)
### Data insertion performance {#data-insertion-performance}
- [Speed up data insertion](/docs/optimize/bulk-inserts)
- [Set up asynchronous inserts](/docs/optimize/asynchronous-inserts)
### Advanced analysis tools {#advanced-analysis-tools}
- [Check what processes are running](/docs/knowledgebase/which-processes-are-currently-running)
- [Monitor system performance](/docs/operations/system-tables/processes)
### Error messages {#error-messages}
- **"Memory limit exceeded"** β [Debug memory limit errors](/docs/guides/developer/debugging-memory-issues)
- **"Connection refused"** β [Fix connection problems](#connections-and-authentication)
- **"Login failures"** β [Set up users, roles, and permissions](/docs/operations/access-rights)
- **"SSL certificate errors"** β [Fix certificate problems](/docs/knowledgebase/certificate_verify_failed_error)
- **"Table/database errors"** β [Database creation guide](/docs/sql-reference/statements/create/database) | [Table UUID problems](/docs/engines/database-engines/atomic)
- **"Network timeouts"** β [Network troubleshooting](/docs/interfaces/http)
- **Other issues** β [Track errors across your cluster](/docs/operations/system-tables/errors)
Memory and resources {#memory-and-resources}
High memory usage, out-of-memory crashes, or need help sizing your ClickHouse deployment.
Show memory solutions
### Memory debugging and monitoring: {#memory-debugging-and-monitoring} | {"source_file": "index.md"} | [
0.046762846410274506,
-0.008601749315857887,
-0.027472419664263725,
0.07271156460046768,
-0.0160597562789917,
-0.060426369309425354,
-0.03695301711559296,
-0.017609504982829094,
-0.11311908066272736,
0.028099671006202698,
0.052685417234897614,
0.039203181862831116,
0.01863103359937668,
0.0... |
f009848b-f447-4c3c-8265-1bee0479ca7d | High memory usage, out-of-memory crashes, or need help sizing your ClickHouse deployment.
Show memory solutions
### Memory debugging and monitoring: {#memory-debugging-and-monitoring}
- [Identify what's using memory](/docs/guides/developer/debugging-memory-issues)
- [Check current memory usage](/docs/operations/system-tables/processes)
- [Memory allocation profiling](/docs/operations/allocation-profiling)
- [Analyze memory usage patterns](/docs/operations/system-tables/query_log)
### Memory configuration: {#memory-configuration}
- [Configure memory limits](/docs/operations/settings/memory-overcommit)
- [Server memory settings](/docs/operations/server-configuration-parameters/settings)
- [Session memory settings](/docs/operations/settings/settings)
### Scaling and sizing: {#scaling-and-sizing}
- [Right-size your service](/docs/operations/tips)
- [Configure automatic scaling](/docs/manage/scaling)
Connections and authentication {#connections-and-authentication}
Can't connect to ClickHouse, authentication failures, SSL certificate errors, or client setup issues.
Show connection solutions
### Basic Connection issues {#basic-connection-issues}
- [Fix HTTP interface issues](/docs/interfaces/http)
- [Handle SSL certificate problems](/docs/knowledgebase/certificate_verify_failed_error)
- [User authentication setup](/docs/operations/access-rights)
### Client interfaces {#client-interfaces}
- [Native ClickHouse clients](/docs/interfaces/natives-clients-and-interfaces)
- [MySQL interface problems](/docs/interfaces/mysql)
- [PostgreSQL interface issues](/docs/interfaces/postgresql)
- [gRPC interface configuration](/docs/interfaces/grpc)
- [SSH interface setup](/docs/interfaces/ssh)
### Network and data {#network-and-data}
- [Network security settings](/docs/operations/server-configuration-parameters/settings)
- [Data format parsing issues](/docs/interfaces/formats)
Setup and configuration {#setup-and-configuration}
Initial installation, server configuration, database creation, data ingestion issues, or replication setup.
Show setup and configuration solutions
### Initial setup {#initial-setup}
- [Configure server settings](/docs/operations/server-configuration-parameters/settings)
- [Set up security and access control](/docs/operations/access-rights)
- [Configure hardware properly](/docs/operations/tips)
### Database management {#database-management}
- [Create and manage databases](/docs/sql-reference/statements/create/database)
- [Choose the right table engine](/docs/engines/table-engines)
### Data operations {#data-operations}
- [Optimize bulk data insertion](/docs/optimize/bulk-inserts)
- [Handle data format problems](/docs/interfaces/formats)
- [Set up streaming data pipelines](/docs/optimize/asynchronous-inserts)
- [Improve S3 integration performance](/docs/integrations/s3/performance) | {"source_file": "index.md"} | [
0.06235187128186226,
-0.025930188596248627,
-0.07218223065137863,
-0.01023120991885662,
-0.06681859493255615,
-0.03416651114821434,
0.015336578711867332,
0.06105875223875046,
-0.11366093903779984,
0.0707755759358406,
0.021044692024588585,
0.03738109767436981,
-0.016836240887641907,
0.01399... |
0dfe6f56-0d7b-47db-b70b-ab2e366f7f99 | ### Advanced configuration {#advanced-configuration}
- [Set up data replication](/docs/engines/table-engines/mergetree-family/replication)
- [Configure distributed tables](/docs/engines/table-engines/special/distributed)
- [Set up backup and recovery](/docs/operations/backup)
- [Configure monitoring](/docs/operations/system-tables/overview)
Still need help? {#still-need-help}
If you can't find a solution:
- Ask AI - Ask AI for instant answers.
- Check system tables - Overview
- Review server logs - Look for error messages in your ClickHouse logs
- Ask the community - Join Our Community Slack, GitHub Discussions
- Get professional support -
ClickHouse Cloud support | {"source_file": "index.md"} | [
0.00005128472184878774,
-0.13195234537124634,
-0.019156862050294876,
0.0074497428722679615,
0.03043944016098976,
-0.08445245772600174,
-0.022125303745269775,
-0.005811998154968023,
-0.07008235156536102,
0.1082419604063034,
0.024461574852466583,
-0.018814442679286003,
0.06696509569883347,
-... |
8c6a1d39-8a86-40d5-b36f-3297f63b2b82 | description: 'Guide to configuring secure SSL/TLS communication between ClickHouse
and ZooKeeper'
sidebar_label: 'Secured Communication with Zookeeper'
sidebar_position: 45
slug: /operations/ssl-zookeeper
title: 'Optional secured communication between ClickHouse and Zookeeper'
doc_type: 'guide'
Optional secured communication between ClickHouse and Zookeeper
import SelfManaged from '@site/docs/_snippets/_self_managed_only_automated.md';
You should specify `ssl.keyStore.location`, `ssl.keyStore.password`, `ssl.trustStore.location`, and `ssl.trustStore.password` for communication with the ClickHouse client over SSL. These options are available from ZooKeeper version 3.5.2.
You can add
zookeeper.crt
to trusted certificates.
```bash
sudo cp zookeeper.crt /usr/local/share/ca-certificates/zookeeper.crt
sudo update-ca-certificates
```
The client section in `config.xml` will look like this:

```xml
<client>
    <certificateFile>/etc/clickhouse-server/client.crt</certificateFile>
    <privateKeyFile>/etc/clickhouse-server/client.key</privateKeyFile>
    <loadDefaultCAFile>true</loadDefaultCAFile>
    <cacheSessions>true</cacheSessions>
    <disableProtocols>sslv2,sslv3</disableProtocols>
    <preferServerCiphers>true</preferServerCiphers>
    <invalidCertificateHandler>
        <name>RejectCertificateHandler</name>
    </invalidCertificateHandler>
</client>
```
Add ZooKeeper to the ClickHouse config with some cluster and macros:

```xml
<clickhouse>
    <zookeeper>
        <node>
            <host>localhost</host>
            <port>2281</port>
            <secure>1</secure>
        </node>
    </zookeeper>
</clickhouse>
```
Start `clickhouse-server`. In the logs you should see:

```text
<Trace> ZooKeeper: initialized, hosts: secure://localhost:2281
```
The prefix `secure://` indicates that the connection is secured by SSL.
To ensure traffic is encrypted, run `tcpdump` on the secured port:

```bash
tcpdump -i any dst port 2281 -nnXS
```
And run a query in `clickhouse-client`:

```sql
SELECT * FROM system.zookeeper WHERE path = '/';
```
On an unencrypted connection you will see something like this in the `tcpdump` output:

```text
..../zookeeper/quota.
```
On encrypted connection you should not see this. | {"source_file": "ssl-zookeeper.md"} | [
-0.028924018144607544,
0.035711146891117096,
-0.05781378597021103,
-0.0056667206808924675,
-0.03105120174586773,
-0.05940753221511841,
-0.0023028748109936714,
-0.03520891070365906,
-0.017102954909205437,
0.008703779429197311,
-0.006922111846506596,
-0.08192409574985504,
0.08218076825141907,
... |
e3a6d433-c553-4664-a27c-63bd3d1b3ed6 | description: 'Documentation for Named collections'
sidebar_label: 'Named collections'
sidebar_position: 69
slug: /operations/named-collections
title: 'Named collections'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
Named collections provide a way to store collections of key-value pairs to be
used to configure integrations with external sources. You can use named collections with
dictionaries, tables, table functions, and object storage.
Named collections can be configured with DDL or in configuration files and are applied
when ClickHouse starts. They simplify the creation of objects and the hiding of credentials
from users without administrative access.
The keys in a named collection must match the parameter names of the corresponding
function, table engine, database, etc. In the examples below the parameter list is
linked to for each type.
Parameters set in a named collection can be overridden in SQL; this is shown in the examples below. This ability can be limited using the `[NOT] OVERRIDABLE` keywords and XML attributes and/or the configuration option `allow_named_collection_override_by_default`.
:::warning
If override is allowed, it may be possible for users without administrative access to figure out the credentials that you are trying to hide.
If you are using named collections with that purpose, you should disable `allow_named_collection_override_by_default` (which is enabled by default).
:::
Storing named collections in the system database {#storing-named-collections-in-the-system-database}
DDL example {#ddl-example}
```sql
CREATE NAMED COLLECTION name AS
key_1 = 'value' OVERRIDABLE,
key_2 = 'value2' NOT OVERRIDABLE,
url = 'https://connection.url/'
```
In the above example:

- `key_1` can always be overridden.
- `key_2` can never be overridden.
- `url` can be overridden or not depending on the value of `allow_named_collection_override_by_default`.
Permissions to create named collections with DDL {#permissions-to-create-named-collections-with-ddl}
To manage named collections with DDL a user must have the `named_collection_control` privilege. This can be assigned by adding a file to `/etc/clickhouse-server/users.d/`. The example gives the user `default` both the `access_management` and `named_collection_control` privileges:

```xml title='/etc/clickhouse-server/users.d/user_default.xml'
<clickhouse>
    <users>
        <default>
            <password_sha256_hex replace=true>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
            <access_management>1</access_management>
            <!-- highlight-start -->
            <named_collection_control>1</named_collection_control>
            <!-- highlight-end -->
        </default>
    </users>
</clickhouse>
```
| {"source_file": "named-collections.md"} | [
-0.10798317939043045,
-0.04212656989693642,
-0.10803069174289703,
0.01667007803916931,
-0.07523291558027267,
0.02193501964211464,
0.024205291643738747,
-0.05733872205018997,
0.04052869975566864,
0.015514103695750237,
0.03927929326891899,
-0.024720966815948486,
0.09458564221858978,
-0.05613... |
8f080fad-26b3-45d6-80ea-a817b1922f8a | :::tip
In the above example the
password_sha256_hex
value is the hexadecimal representation of the SHA256 hash of the password. This configuration for the user
default
has the attribute
replace=true
as in the default configuration has a plain text
password
set, and it is not possible to have both plain text and sha256 hex passwords set for a user.
:::
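One way to produce such a value is with `sha256sum` from coreutils (a sketch; `my_password` is a placeholder, not a real credential):

```shell
# Compute the hex SHA256 of a password for use as password_sha256_hex
# ('my_password' is a placeholder - replace it with the real password)
printf '%s' 'my_password' | sha256sum | cut -d ' ' -f 1
```

Note the `printf '%s'` rather than `echo`, which would append a newline and change the hash.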
Storage for named collections {#storage-for-named-collections}
Named collections can either be stored on local disk or in ZooKeeper/Keeper. By default local storage is used. They can also be stored using encryption with the same algorithms used for disk encryption, where `aes_128_ctr` is used by default.

To configure named collections storage you need to specify a `type`. This can be either `local` or `keeper`/`zookeeper`. For encrypted storage, you can use `local_encrypted` or `keeper_encrypted`/`zookeeper_encrypted`.
To use ZooKeeper/Keeper we also need to set up a `path` (the path in ZooKeeper/Keeper where named collections will be stored) in the `named_collections_storage` section of the configuration file. The following example uses encryption and ZooKeeper/Keeper:
```xml
<clickhouse>
    <named_collections_storage>
        <type>zookeeper_encrypted</type>
        <key_hex>bebec0cabebec0cabebec0cabebec0ca</key_hex>
        <algorithm>aes_128_ctr</algorithm>
        <path>/named_collections_path/</path>
        <update_timeout_ms>1000</update_timeout_ms>
    </named_collections_storage>
</clickhouse>
```
The optional configuration parameter `update_timeout_ms` is equal to `5000` by default.
Storing named collections in configuration files {#storing-named-collections-in-configuration-files}
XML example {#xml-example}
```xml title='/etc/clickhouse-server/config.d/named_collections.xml'
<clickhouse>
    <named_collections>
        <name>
            <key_1 overridable="true">value</key_1>
            <key_2 overridable="false">value_2</key_2>
            <url>https://connection.url/</url>
        </name>
    </named_collections>
</clickhouse>
```
In the above example:

- `key_1` can always be overridden.
- `key_2` can never be overridden.
- `url` can be overridden or not depending on the value of `allow_named_collection_override_by_default`.
Modifying named collections {#modifying-named-collections}
Named collections that are created with DDL queries can be altered or dropped with DDL. Named collections created with XML files can be managed by editing or deleting the corresponding XML.
Alter a DDL named collection {#alter-a-ddl-named-collection}
Change or add the keys `key1` and `key3` of the collection `collection2` (this will not change the value of the `overridable` flag for those keys):

```sql
ALTER NAMED COLLECTION collection2 SET key1=4, key3='value3'
```
Change or add the key `key1` and allow it to be always overridden:

```sql
ALTER NAMED COLLECTION collection2 SET key1=4 OVERRIDABLE
```
Remove the key `key2` from `collection2`:

```sql
ALTER NAMED COLLECTION collection2 DELETE key2
```
| {"source_file": "named-collections.md"} | [
-0.008140146732330322,
0.033550746738910675,
-0.11836976557970047,
-0.026002535596489906,
-0.061981040984392166,
-0.010685653425753117,
-0.002491650404408574,
-0.016199536621570587,
0.030228974297642708,
0.04001641646027565,
0.005298612639307976,
-0.030484672635793686,
0.051916010677814484,
... |
986f7728-077e-478b-a694-9bc155397a9f |

```sql
ALTER NAMED COLLECTION collection2 SET key1=4 OVERRIDABLE
```

Remove the key `key2` from `collection2`:

```sql
ALTER NAMED COLLECTION collection2 DELETE key2
```
Change or add the key `key1` and delete the key `key3` of the collection `collection2`:

```sql
ALTER NAMED COLLECTION collection2 SET key1=4, DELETE key3
```
To force a key to use the default settings for the `overridable` flag, you have to remove and re-add the key.

```sql
ALTER NAMED COLLECTION collection2 DELETE key1;
ALTER NAMED COLLECTION collection2 SET key1=4;
```
Drop the DDL named collection `collection2`: {#drop-the-ddl-named-collection-collection2}

```sql
DROP NAMED COLLECTION collection2
```
Named collections for accessing S3 {#named-collections-for-accessing-s3}
For a description of the parameters, see s3 Table Function.
DDL example {#ddl-example-1}
```sql
CREATE NAMED COLLECTION s3_mydata AS
access_key_id = 'AKIAIOSFODNN7EXAMPLE',
secret_access_key = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
format = 'CSV',
url = 'https://s3.us-east-1.amazonaws.com/yourbucket/mydata/'
```
XML example {#xml-example-1}
```xml
<clickhouse>
    <named_collections>
        <s3_mydata>
            <access_key_id>AKIAIOSFODNN7EXAMPLE</access_key_id>
            <secret_access_key>wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY</secret_access_key>
            <format>CSV</format>
            <url>https://s3.us-east-1.amazonaws.com/yourbucket/mydata/</url>
        </s3_mydata>
    </named_collections>
</clickhouse>
```
s3() function and S3 Table named collection examples {#s3-function-and-s3-table-named-collection-examples}
Both of the following examples use the same named collection `s3_mydata`:
s3() function {#s3-function}
```sql
INSERT INTO FUNCTION s3(s3_mydata, filename = 'test_file.tsv.gz',
    format = 'TSV', structure = 'number UInt64', compression_method = 'gzip')
SELECT * FROM numbers(10000);
```
:::tip
The first argument to the `s3()` function above is the name of the collection, `s3_mydata`. Without named collections, the access key ID, secret, format, and URL would all be passed in every call to the `s3()` function.
:::
S3 table {#s3-table}
```sql
CREATE TABLE s3_engine_table (number Int64)
ENGINE=S3(s3_mydata, url='https://s3.us-east-1.amazonaws.com/yourbucket/mydata/test_file.tsv.gz', format = 'TSV')
SETTINGS input_format_with_names_use_header = 0;
SELECT * FROM s3_engine_table LIMIT 3;
┌─number─┐
│      0 │
│      1 │
│      2 │
└────────┘
```
Named collections for accessing MySQL database {#named-collections-for-accessing-mysql-database}
For a description of the parameters, see mysql.
DDL example {#ddl-example-2}
```sql
CREATE NAMED COLLECTION mymysql AS
user = 'myuser',
password = 'mypass',
host = '127.0.0.1',
port = 3306,
database = 'test',
connection_pool_size = 8,
replace_query = 1
```
XML example {#xml-example-2} | {"source_file": "named-collections.md"} | [
-0.04065796360373497,
-0.047514356672763824,
-0.09104202687740326,
-0.02201993018388748,
-0.11578798294067383,
-0.0373288169503212,
0.017468664795160294,
-0.07918491959571838,
-0.002034455304965377,
0.045297302305698395,
0.0606791153550148,
-0.031017368659377098,
0.07117071747779846,
-0.10... |
edc19648-d89f-410a-8bfc-240458233301 | XML example {#xml-example-2}
```xml
<clickhouse>
    <named_collections>
        <mymysql>
            <user>myuser</user>
            <password>mypass</password>
            <host>127.0.0.1</host>
            <port>3306</port>
            <database>test</database>
            <connection_pool_size>8</connection_pool_size>
            <replace_query>1</replace_query>
        </mymysql>
    </named_collections>
</clickhouse>
```
mysql() function, MySQL table, MySQL database, and Dictionary named collection examples {#mysql-function-mysql-table-mysql-database-and-dictionary-named-collection-examples}
The following four examples use the same named collection `mymysql`:
mysql() function {#mysql-function}
```sql
SELECT count() FROM mysql(mymysql, table = 'test');

┌─count()─┐
│       3 │
└─────────┘
```

:::note
The named collection does not specify the `table` parameter, so it is specified in the function call as `table = 'test'`.
:::
MySQL table {#mysql-table}
```sql
CREATE TABLE mytable(A Int64) ENGINE = MySQL(mymysql, table = 'test', connection_pool_size=3, replace_query=0);
SELECT count() FROM mytable;
┌─count()─┐
│       3 │
└─────────┘
```
:::note
The DDL overrides the named collection setting for `connection_pool_size`.
:::
MySQL database {#mysql-database}
```sql
CREATE DATABASE mydatabase ENGINE = MySQL(mymysql);
SHOW TABLES FROM mydatabase;
┌─name───┐
│ source │
│ test   │
└────────┘
```
MySQL Dictionary {#mysql-dictionary}
```sql
CREATE DICTIONARY dict (A Int64, B String)
PRIMARY KEY A
SOURCE(MYSQL(NAME mymysql TABLE 'source'))
LIFETIME(MIN 1 MAX 2)
LAYOUT(HASHED());
SELECT dictGet('dict', 'B', 2);
┌─dictGet('dict', 'B', 2)─┐
│ two                     │
└─────────────────────────┘
```
Named collections for accessing PostgreSQL database {#named-collections-for-accessing-postgresql-database}
For a description of the parameters, see postgresql. Additionally, there are aliases:

- `username` for `user`
- `db` for `database`
The parameter `addresses_expr` is used in a collection instead of `host:port`. The parameter is optional, because there are other optional ones: `host`, `hostname`, `port`. The following pseudo code explains the priority:
```sql
CASE
    WHEN collection['addresses_expr'] != '' THEN collection['addresses_expr']
    WHEN collection['host'] != '' THEN collection['host'] || ':' || if(collection['port'] != '', collection['port'], '5432')
    WHEN collection['hostname'] != '' THEN collection['hostname'] || ':' || if(collection['port'] != '', collection['port'], '5432')
END
```
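The same fallback can be sketched in shell for illustration (the variable values below are examples, not part of the collection API):

```shell
# Illustration of the priority above: addresses_expr wins, then host, then hostname;
# the port falls back to PostgreSQL's default 5432 when unset
addresses_expr=''
host='127.0.0.1'
hostname=''
port=''
if [ -n "$addresses_expr" ]; then
  addr="$addresses_expr"
elif [ -n "$host" ]; then
  addr="$host:${port:-5432}"
elif [ -n "$hostname" ]; then
  addr="$hostname:${port:-5432}"
fi
echo "$addr"   # 127.0.0.1:5432
```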
Example of creation:
```sql
CREATE NAMED COLLECTION mypg AS
user = 'pguser',
password = 'jw8s0F4',
host = '127.0.0.1',
port = 5432,
database = 'test',
schema = 'test_schema'
```
Example of configuration: | {"source_file": "named-collections.md"} | [
0.022528117522597313,
-0.020548848435282707,
-0.09381487220525742,
0.0011359634809195995,
-0.12437162548303604,
-0.046417590230703354,
0.019698994234204292,
0.04830782860517502,
0.0018294380279257894,
-0.03186924383044243,
0.0736277773976326,
-0.0753447636961937,
0.14607003331184387,
-0.09... |
1d052ea1-be8e-4498-afeb-bdf4c6d335f2 |

```sql
CREATE NAMED COLLECTION mypg AS
user = 'pguser',
password = 'jw8s0F4',
host = '127.0.0.1',
port = 5432,
database = 'test',
schema = 'test_schema'
```
Example of configuration:
```xml
<clickhouse>
    <named_collections>
        <mypg>
            <user>pguser</user>
            <password>jw8s0F4</password>
            <host>127.0.0.1</host>
            <port>5432</port>
            <database>test</database>
            <schema>test_schema</schema>
        </mypg>
    </named_collections>
</clickhouse>
```
Example of using named collections with the postgresql function {#example-of-using-named-collections-with-the-postgresql-function}
```sql
SELECT * FROM postgresql(mypg, table = 'test');
┌─a─┬─b───┐
│ 2 │ two │
│ 1 │ one │
└───┴─────┘
SELECT * FROM postgresql(mypg, table = 'test', schema = 'public');
┌─a─┐
│ 1 │
│ 2 │
│ 3 │
└───┘
```
Example of using named collections with database with engine PostgreSQL {#example-of-using-named-collections-with-database-with-engine-postgresql}
```sql
CREATE TABLE mypgtable (a Int64) ENGINE = PostgreSQL(mypg, table = 'test', schema = 'public');
SELECT * FROM mypgtable;
┌─a─┐
│ 1 │
│ 2 │
│ 3 │
└───┘
```
:::note
PostgreSQL copies data from the named collection when the table is being created. A change in the collection does not affect the existing tables.
:::
Example of using named collections with database with engine PostgreSQL {#example-of-using-named-collections-with-database-with-engine-postgresql-1}
```sql
CREATE DATABASE mydatabase ENGINE = PostgreSQL(mypg);
SHOW TABLES FROM mydatabase
┌─name─┐
│ test │
└──────┘
```
Example of using named collections with a dictionary with source POSTGRESQL {#example-of-using-named-collections-with-a-dictionary-with-source-postgresql}
```sql
CREATE DICTIONARY dict (a Int64, b String)
PRIMARY KEY a
SOURCE(POSTGRESQL(NAME mypg TABLE test))
LIFETIME(MIN 1 MAX 2)
LAYOUT(HASHED());
SELECT dictGet('dict', 'b', 2);
┌─dictGet('dict', 'b', 2)─┐
│ two                     │
└─────────────────────────┘
```
Named collections for accessing a remote ClickHouse database {#named-collections-for-accessing-a-remote-clickhouse-database}
For the description of parameters, see
remote
.
Example of configuration:
```sql
CREATE NAMED COLLECTION remote1 AS
host = 'remote_host',
port = 9000,
database = 'system',
user = 'foo',
password = 'secret',
secure = 1
```
```xml
<clickhouse>
    <named_collections>
        <remote1>
            <host>remote_host</host>
            <port>9000</port>
            <database>system</database>
            <user>foo</user>
            <password>secret</password>
            <secure>1</secure>
        </remote1>
    </named_collections>
</clickhouse>
```
secure
is not needed for connection because of
remoteSecure
, but it can be used for dictionaries.
Example of using named collections with the remote/remoteSecure functions {#example-of-using-named-collections-with-the-remoteremotesecure-functions}
```sql
SELECT * FROM remote(remote1, table = one);
┌─dummy─┐
│     0 │
└───────┘

SELECT * FROM remote(remote1, database = merge(system, '^one'));

┌─dummy─┐
│     0 │
└───────┘
INSERT INTO FUNCTION remote(remote1, database = default, table = test) VALUES (1,'a');
SELECT * FROM remote(remote1, database = default, table = test);
┌─a─┬─b─┐
│ 1 │ a │
└───┴───┘
```
Example of using named collections with a dictionary with source ClickHouse {#example-of-using-named-collections-with-a-dictionary-with-source-clickhouse}
```sql
CREATE DICTIONARY dict(a Int64, b String)
PRIMARY KEY a
SOURCE(CLICKHOUSE(NAME remote1 TABLE test DB default))
LIFETIME(MIN 1 MAX 2)
LAYOUT(HASHED());
SELECT dictGet('dict', 'b', 1);
┌─dictGet('dict', 'b', 1)─┐
│ a                       │
└─────────────────────────┘
```
Named collections for accessing Kafka {#named-collections-for-accessing-kafka}
For the description of parameters, see
Kafka
.
DDL example {#ddl-example-3}
```sql
CREATE NAMED COLLECTION my_kafka_cluster AS
kafka_broker_list = 'localhost:9092',
kafka_topic_list = 'kafka_topic',
kafka_group_name = 'consumer_group',
kafka_format = 'JSONEachRow',
kafka_max_block_size = '1048576';
```
XML example {#xml-example-3}
```xml
<clickhouse>
    <named_collections>
        <my_kafka_cluster>
            <kafka_broker_list>localhost:9092</kafka_broker_list>
            <kafka_topic_list>kafka_topic</kafka_topic_list>
            <kafka_group_name>consumer_group</kafka_group_name>
            <kafka_format>JSONEachRow</kafka_format>
            <kafka_max_block_size>1048576</kafka_max_block_size>
        </my_kafka_cluster>
    </named_collections>
</clickhouse>
```
Example of using named collections with a Kafka table {#example-of-using-named-collections-with-a-kafka-table}
Both of the following examples use the same named collection
my_kafka_cluster
:
```sql
CREATE TABLE queue
(
timestamp UInt64,
level String,
message String
)
ENGINE = Kafka(my_kafka_cluster);

CREATE TABLE queue
(
timestamp UInt64,
level String,
message String
)
ENGINE = Kafka(my_kafka_cluster)
SETTINGS kafka_num_consumers = 4,
kafka_thread_per_consumer = 1;
```
Named collections for backups {#named-collections-for-backups}
For the description of parameters, see
Backup and Restore
.
DDL example {#ddl-example-4}
```sql
BACKUP TABLE default.test TO S3(named_collection_s3_backups, 'directory')
```
XML example {#xml-example-4}
```xml
<clickhouse>
    <named_collections>
        <named_collection_s3_backups>
            <url>https://my-s3-bucket.s3.amazonaws.com/backup-S3/</url>
            <access_key_id>ABC123</access_key_id>
            <secret_access_key>Abc+123</secret_access_key>
        </named_collection_s3_backups>
    </named_collections>
</clickhouse>
```
Named collections for accessing MongoDB Table and Dictionary {#named-collections-for-accessing-mongodb-table-and-dictionary}
For the description of parameters, see
mongodb
.
DDL example {#ddl-example-5}
```sql
CREATE NAMED COLLECTION mymongo AS
user = '',
password = '',
host = '127.0.0.1',
port = 27017,
database = 'test',
collection = 'my_collection',
options = 'connectTimeoutMS=10000'
```
XML example {#xml-example-5}
```xml
<clickhouse>
    <named_collections>
        <mymongo>
            <user></user>
            <password></password>
            <host>127.0.0.1</host>
            <port>27017</port>
            <database>test</database>
            <collection>my_collection</collection>
            <options>connectTimeoutMS=10000</options>
        </mymongo>
    </named_collections>
</clickhouse>
```
MongoDB table {#mongodb-table}
```sql
CREATE TABLE mytable(log_type VARCHAR, host VARCHAR, command VARCHAR) ENGINE = MongoDB(mymongo, options='connectTimeoutMS=10000&compressors=zstd')
SELECT count() FROM mytable;
┌─count()─┐
│       2 │
└─────────┘
```
:::note
The DDL overrides the named collection setting for options.
:::
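The override rule in the note above can be pictured as a simple precedence: a value given in the DDL statement wins over the value stored in the named collection. The sketch below is illustrative shell only, not ClickHouse code, with the option strings copied from the examples above:

```bash
# Value stored in the named collection (see mymongo above).
collection_options="connectTimeoutMS=10000"
# Value supplied in the DDL statement.
ddl_options="connectTimeoutMS=10000&compressors=zstd"

# The DDL value, when present, replaces the collection value.
effective_options="${ddl_options:-$collection_options}"
echo "$effective_options"
```

For these inputs the effective value is the DDL one, matching the behaviour shown by the MongoDB table example.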
MongoDB Dictionary {#mongodb-dictionary}
```sql
CREATE DICTIONARY dict
(
    `a` Int64,
    `b` String
)
PRIMARY KEY a
SOURCE(MONGODB(NAME mymongo COLLECTION my_dict))
LIFETIME(MIN 1 MAX 2)
LAYOUT(HASHED());

SELECT dictGet('dict', 'b', 2);

┌─dictGet('dict', 'b', 2)─┐
│ two                     │
└─────────────────────────┘
```
:::note
The named collection specifies
my_collection
for the collection name. In the function call it is overwritten by
collection = 'my_dict'
to select another collection.
:::
---
description: 'Page describing usage recommendations for open-source ClickHouse'
sidebar_label: 'OSS usage recommendations'
sidebar_position: 58
slug: /operations/tips
title: 'OSS usage recommendations'
doc_type: 'guide'
---
import SelfManaged from '@site/docs/_snippets/_self_managed_only_automated.md';
CPU Scaling Governor {#cpu-scaling-governor}
Always use the
performance
scaling governor. The
on-demand
scaling governor works much worse with constantly high demand.
```bash
$ echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```
CPU Limitations {#cpu-limitations}
Processors can overheat. Use
dmesg
to see if the CPU's clock rate was limited due to overheating.
The restriction can also be set externally at the datacenter level. You can use
turbostat
to monitor it under a load.
RAM {#ram}
For small amounts of data (up to ~200 GB compressed), it is best to use as much memory as the volume of data.
For large amounts of data and when processing interactive (online) queries, you should use a reasonable amount of RAM (128 GB or more) so the hot data subset will fit in the cache of pages.
Even for data volumes of ~50 TB per server, using 128 GB of RAM significantly improves query performance compared to 64 GB.
Do not disable overcommit. The value
cat /proc/sys/vm/overcommit_memory
should be 0 or 1. Run
```bash
$ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
```
Use
perf top
to watch the time spent in the kernel for memory management.
Permanent huge pages also do not need to be allocated.
Using less than 16GB of RAM {#using-less-than-16gb-of-ram}
The recommended amount of RAM is 32 GB or more.
If your system has less than 16 GB of RAM, you may experience various memory exceptions because default settings do not match this amount of memory. You can use ClickHouse in a system with a small amount of RAM (as low as 2 GB), but these setups require additional tuning and can only ingest at a low rate.
When using ClickHouse with less than 16GB of RAM, we recommend the following:
- Lower the size of the mark cache in the `config.xml`. It can be set as low as 500 MB, but it cannot be set to zero.
- Lower the number of query processing threads down to `1`.
- Lower the `max_block_size` to `8192`. Values as low as `1024` can still be practical.
- Lower `max_download_threads` to `1`.
- Set `input_format_parallel_parsing` and `output_format_parallel_formatting` to `0`.
- Disable writing to log tables, as it keeps a background merge task reserving RAM to perform merges of log tables. Disable `asynchronous_metric_log`, `metric_log`, `text_log`, `trace_log`.
Additional notes:
- To flush the memory cached by the memory allocator, you can run the
SYSTEM JEMALLOC PURGE
command.
- We do not recommend using S3 or Kafka integrations on low-memory machines because they require significant memory for buffers.
Storage Subsystem {#storage-subsystem}
If your budget allows you to use SSD, use SSD.
If not, use HDD. SATA HDDs 7200 RPM will do.
Give preference to a lot of servers with local hard drives over a smaller number of servers with attached disk shelves.
But for storing archives with rare queries, shelves will work.
RAID {#raid}
When using HDDs, you can combine them in RAID-10, RAID-5, RAID-6 or RAID-50.
For Linux, software RAID is better (with
mdadm
).
When creating RAID-10, select the
far
layout.
If your budget allows, choose RAID-10.
LVM by itself (without RAID or
mdadm
) is ok, but making RAID with it or combining it with
mdadm
is a less explored option, and there will be more chances for mistakes
(selecting a wrong chunk size; misalignment of chunks; choosing a wrong RAID type; forgetting to clean up disks). If you are confident
in using LVM, there is nothing against using it.
If you have more than 4 disks, use RAID-6 (preferred) or RAID-50, instead of RAID-5.
When using RAID-5, RAID-6 or RAID-50, always increase stripe_cache_size, since the default value is usually not the best choice.
```bash
$ echo 4096 | sudo tee /sys/block/md2/md/stripe_cache_size
```
Calculate the exact number from the number of devices and the block size, using the formula: `2 * num_devices * chunk_size_in_bytes / 4096`.
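As a quick sanity check, the formula can be evaluated directly in the shell. The device count and chunk size below are hypothetical placeholders; substitute the values of your own array:

```bash
# Hypothetical array: 8 devices with a 1 MiB (1048576-byte) chunk size.
num_devices=8
chunk_size_in_bytes=1048576

# stripe_cache_size is measured in 4 KiB pages, hence the division by 4096.
stripe_cache_size=$(( 2 * num_devices * chunk_size_in_bytes / 4096 ))
echo "$stripe_cache_size"
```

For these example values the result is 4096 pages, which happens to match the value used in the `tee` command above.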
A block size of 64 KB is sufficient for most RAID configurations. The average clickhouse-server write size is approximately 1 MB (1024 KB), and thus the recommended stripe size is also 1 MB. The block size can be optimized if needed when set to 1 MB divided by the number of non-parity disks in the RAID array, such that each write is parallelized across all available non-parity disks.
Never set the block size too small or too large.
You can use RAID-0 on SSD.
Regardless of RAID use, always use replication for data security.
Enable NCQ with a long queue. For HDD, choose the mq-deadline or CFQ scheduler, and for SSD, choose noop. Don't reduce the 'readahead' setting.
For HDD, enable the write cache.
Make sure that
fstrim
is enabled for NVME and SSD disks in your OS (usually it's implemented using a cronjob or systemd service).
File System {#file-system}
Ext4 is the most reliable option. Set the mount option
noatime
. XFS works well too.
Most other file systems should also work fine.
FAT-32 and exFAT are not supported due to lack of hard links.
Do not use compressed filesystems, because ClickHouse does compression on its own and does it better.
It's not recommended to use encrypted filesystems, because you can use built-in encryption in ClickHouse, which is better.
While ClickHouse can work over NFS, it is not the best idea.
Linux Kernel {#linux-kernel}
Don't use an outdated Linux kernel.
Network {#network}
If you are using IPv6, increase the size of the route cache.
Linux kernels prior to 3.2 had a multitude of problems with the IPv6 implementation.
Use at least a 10 GB network, if possible. 1 Gb will also work, but it will be much worse for patching replicas with tens of terabytes of data, or for processing distributed queries with a large amount of intermediate data.
Huge Pages {#huge-pages}
If you are using an old Linux kernel, disable transparent huge pages. They interfere with the memory allocator, which leads to significant performance degradation.
On newer Linux kernels, transparent huge pages are alright.
```bash
$ echo 'madvise' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```
If you want to modify the transparent huge pages setting permanently, edit
/etc/default/grub
to add
transparent_hugepage=madvise
to the
GRUB_CMDLINE_LINUX_DEFAULT
option:
```bash
$ GRUB_CMDLINE_LINUX_DEFAULT="transparent_hugepage=madvise ..."
```
After that, run the
sudo update-grub
command, then reboot for the change to take effect.
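To verify which mode is currently active, the kernel reports the selected value in brackets. A small parsing sketch — the sample string below is a stand-in for the actual contents of `/sys/kernel/mm/transparent_hugepage/enabled`:

```bash
# Stand-in for: cat /sys/kernel/mm/transparent_hugepage/enabled
thp_status="always [madvise] never"

# The active mode is the bracketed entry; extract it with sed.
active=$(echo "$thp_status" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "$active"
```

For the sample string this prints `madvise`, confirming the setting took effect.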
Hypervisor configuration {#hypervisor-configuration}
If you are using OpenStack, set
```ini
cpu_mode=host-passthrough
```
in nova.conf.
If you are using libvirt, set
```xml
<cpu mode='host-passthrough'/>
```
in XML configuration.
This is important for ClickHouse to be able to get correct information with the cpuid instruction.
Otherwise you may get Illegal instruction crashes when the hypervisor is run on old CPU models.
ClickHouse Keeper and ZooKeeper {#zookeeper}
ClickHouse Keeper is recommended to replace ZooKeeper for ClickHouse clusters. See the documentation for
ClickHouse Keeper.
If you would like to continue using ZooKeeper, then it is best to use a fresh version of ZooKeeper – 3.4.9 or later. The version in stable Linux distributions may be outdated.
You should never use manually written scripts to transfer data between different ZooKeeper clusters, because the result will be incorrect for sequential nodes. Never use the "zkcopy" utility for the same reason: https://github.com/ksprojects/zkcopy/issues/15
If you want to divide an existing ZooKeeper cluster into two, the correct way is to increase the number of its replicas and then reconfigure it as two independent clusters.
You can run ClickHouse Keeper on the same server as ClickHouse in test environments, or in environments with low ingestion rate.
For production environments, we suggest using separate servers for ClickHouse and ZooKeeper/Keeper, or placing ClickHouse files and Keeper files on separate disks, because ZooKeeper/Keeper is very sensitive to disk latency and ClickHouse may utilize all available system resources.
You can have ZooKeeper observers in an ensemble but ClickHouse servers should not interact with observers.
Do not change the
minSessionTimeout
setting; large values may affect ClickHouse restart stability.
With the default settings, ZooKeeper is a time bomb:
The ZooKeeper server won't delete files from old snapshots and logs when using the default configuration (see
autopurge
), and this is the responsibility of the operator.
This bomb must be defused.
The ZooKeeper (3.5.1) configuration below is used in a large production environment:
zoo.cfg:
```bash
# http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# This value is not quite motivated
initLimit=300
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=10

maxClientCnxns=2000

# It is the maximum value that client may request and the server will accept.
# It is Ok to have high maxSessionTimeout on server to allow clients to work with high session timeout if they want.
# But we request session timeout of 30 seconds by default (you can change it with session_timeout_ms in ClickHouse config).
maxSessionTimeout=60000000

# the directory where the snapshot is stored.
dataDir=/opt/zookeeper/{{ '{{' }} cluster['name'] {{ '}}' }}/data
# Place the dataLogDir to a separate physical disc for better performance
dataLogDir=/opt/zookeeper/{{ '{{' }} cluster['name'] {{ '}}' }}/logs

autopurge.snapRetainCount=10
autopurge.purgeInterval=1

# To avoid seeks ZooKeeper allocates space in the transaction log file in
# blocks of preAllocSize kilobytes. The default block size is 64M. One reason
# for changing the size of the blocks is to reduce the block size if snapshots
# are taken more often. (Also, see snapCount).
preAllocSize=131072

# Clients can submit requests faster than ZooKeeper can process them,
# especially if there are a lot of clients. To prevent ZooKeeper from running
# out of memory due to queued requests, ZooKeeper will throttle clients so that
# there is no more than globalOutstandingLimit outstanding requests in the
# system. The default limit is 1000.
globalOutstandingLimit=1000

# ZooKeeper logs transactions to a transaction log. After snapCount transactions
# are written to a log file a snapshot is started and a new transaction log file
# is started. The default snapCount is 100000.
snapCount=3000000

# If this option is defined, requests will be logged to a trace file named
# traceFile.year.month.day.
#traceFile=

# Leader accepts client connections. Default value is "yes". The leader machine
# coordinates updates. For higher update throughput at the slight expense of
# read throughput the leader can be configured to not accept clients and focus
# on coordination.
leaderServes=yes

standaloneEnabled=false
dynamicConfigFile=/etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/zoo.cfg.dynamic
```
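The tick-based limits above translate into wall-clock windows, which is easy to derive with a little shell arithmetic (values copied from the zoo.cfg example):

```bash
# Values from the zoo.cfg example above.
tickTime=2000   # milliseconds per tick
initLimit=300   # ticks allowed for the initial synchronization phase
syncLimit=10    # ticks allowed between a request and its acknowledgement

echo "initial sync window: $(( tickTime * initLimit / 1000 )) s"
echo "request/ack window:  $(( tickTime * syncLimit / 1000 )) s"
```

So followers get ten minutes (600 s) to complete the initial sync, but only 20 seconds for routine acknowledgements.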
Java version:
```text
openjdk 11.0.5-shenandoah 2019-10-15
OpenJDK Runtime Environment (build 11.0.5-shenandoah+10-adhoc.heretic.src)
OpenJDK 64-Bit Server VM (build 11.0.5-shenandoah+10-adhoc.heretic.src, mixed mode)
```
JVM parameters:
```bash
NAME=zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}
ZOOCFGDIR=/etc/$NAME/conf
# TODO this is really ugly
# How to find out, which jars are needed?
# seems, that log4j requires the log4j.properties file to be in the classpath
CLASSPATH="$ZOOCFGDIR:/usr/build/classes:/usr/build/lib/*.jar:/usr/share/zookeeper-3.6.2/lib/audience-annotations-0.5.0.jar:/usr/share/zookeeper-3.6.2/lib/commons-cli-1.2.jar:/usr/share/zookeeper-3.6.2/lib/commons-lang-2.6.jar:/usr/share/zookeeper-3.6.2/lib/jackson-annotations-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/jackson-core-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/jackson-databind-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/javax.servlet-api-3.1.0.jar:/usr/share/zookeeper-3.6.2/lib/jetty-http-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-io-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-security-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-server-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-servlet-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-util-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jline-2.14.6.jar:/usr/share/zookeeper-3.6.2/lib/json-simple-1.1.1.jar:/usr/share/zookeeper-3.6.2/lib/log4j-1.2.17.jar:/usr/share/zookeeper-3.6.2/lib/metrics-core-3.2.5.jar:/usr/share/zookeeper-3.6.2/lib/netty-buffer-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-codec-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-common-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-handler-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-resolver-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-native-epoll-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-native-unix-common-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_common-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_hotspot-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_servlet-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/slf4j-api-1.7.25.jar:/usr/share/zookeeper-3.6.2/lib/slf4j-log4j12-1.7.25.jar:/usr/share/zookeeper-3.6.2/lib/snappy-java-1.1.7.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-3.6.2.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-jute-3.6.2.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-prometheus-metrics-3.6.2.jar:/usr/share/zookeeper-3.6.2/etc"
ZOOCFG="$ZOOCFGDIR/zoo.cfg"
ZOO_LOG_DIR=/var/log/$NAME
USER=zookeeper
GROUP=zookeeper
PIDDIR=/var/run/$NAME
PIDFILE=$PIDDIR/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
JAVA=/usr/local/jdk-11/bin/java
ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
JMXLOCALONLY=false
JAVA_OPTS="-Xms{{ '{{' }} cluster.get('xms','128M') {{ '}}' }} \
-Xmx{{ '{{' }} cluster.get('xmx','1G') {{ '}}' }} \
    -Xlog:safepoint,gc*=info,age*=debug:file=/var/log/$NAME/zookeeper-gc.log:time,level,tags:filecount=16,filesize=16M \
    -verbose:gc \
-XX:+UseG1GC \
-Djute.maxbuffer=8388608 \
-XX:MaxGCPauseMillis=50"
```
Salt initialization:
```text
description "zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }} centralized coordination service"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
limit nofile 8192 8192
pre-start script
[ -r "/etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/environment" ] || exit 0
. /etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/environment
[ -d $ZOO_LOG_DIR ] || mkdir -p $ZOO_LOG_DIR
chown $USER:$GROUP $ZOO_LOG_DIR
end script
script
. /etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/environment
[ -r /etc/default/zookeeper ] && . /etc/default/zookeeper
if [ -z "$JMXDISABLE" ]; then
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=$JMXLOCALONLY"
fi
exec start-stop-daemon --start -c $USER --exec $JAVA --name zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }} \
-- -cp $CLASSPATH $JAVA_OPTS -Dzookeeper.log.dir=${ZOO_LOG_DIR} \
-Dzookeeper.root.logger=${ZOO_LOG4J_PROP} $ZOOMAIN $ZOOCFG
end script
```
Antivirus software {#antivirus-software}
If you use antivirus software, configure it to skip folders with ClickHouse data files (
/var/lib/clickhouse
), otherwise performance may be reduced and you may experience unexpected errors during data ingestion and background merges.
Related Content {#related-content}
Getting started with ClickHouse? Here are 13 "Deadly Sins" and how to avoid them
---
description: 'Guide to using and configuring the query cache feature in ClickHouse'
sidebar_label: 'Query cache'
sidebar_position: 65
slug: /operations/query-cache
title: 'Query cache'
doc_type: 'guide'
---
Query cache
The query cache allows computing
SELECT
queries just once and serving further executions of the same query directly from the cache.
Depending on the type of the queries, this can dramatically reduce latency and resource consumption of the ClickHouse server.
Background, design and limitations {#background-design-and-limitations}
Query caches can generally be viewed as transactionally consistent or inconsistent.
In transactionally consistent caches, the database invalidates (discards) cached query results if the result of the
SELECT
query changes
or potentially changes. In ClickHouse, operations which change the data include inserts/updates/deletes in/of/from tables or collapsing
merges. Transactionally consistent caching is especially suitable for OLTP databases, for example
MySQL
(which removed query cache after v8.0) and
Oracle
.
In transactionally inconsistent caches, slight inaccuracies in query results are accepted under the assumption that all cache entries are
assigned a validity period after which they expire (e.g. 1 minute) and that the underlying data changes only a little during this period.
This approach is overall more suitable for OLAP databases. As an example where transactionally inconsistent caching is sufficient,
consider an hourly sales report in a reporting tool which is simultaneously accessed by multiple users. Sales data changes typically
slowly enough that the database only needs to compute the report once (represented by the first
SELECT
query). Further queries can be
served directly from the query cache. In this example, a reasonable validity period could be 30 min.
Transactionally inconsistent caching is traditionally provided by client tools or proxy packages (e.g.
chproxy
) interacting with the database. As a result, the same caching logic and
configuration is often duplicated. With ClickHouse's query cache, the caching logic moves to the server side. This reduces maintenance
effort and avoids redundancy.
Configuration settings and usage {#configuration-settings-and-usage}
:::note
In ClickHouse Cloud, you must use
query level settings
to edit query cache settings. Editing
config level settings
is currently not supported.
:::
:::note
clickhouse-local
runs a single query at a time. Since query result caching does not make sense, the query
result cache is disabled in clickhouse-local.
:::
Setting
use_query_cache
can be used to control whether a specific query or all queries of the
current session should utilize the query cache. For example, the first execution of query
```sql
SELECT some_expensive_calculation(column_1, column_2)
FROM table
SETTINGS use_query_cache = true;
```
will store the query result in the query cache. Subsequent executions of the same query (also with parameter
use_query_cache = true
) will
read the computed result from the cache and return it immediately.
:::note
Setting
use_query_cache
and all other query-cache-related settings only take effect on stand-alone
SELECT
statements. In particular,
the results of
SELECT
s to views created by
CREATE VIEW AS SELECT [...] SETTINGS use_query_cache = true
are not cached unless the
SELECT
statement runs with
SETTINGS use_query_cache = true
.
:::
The way the cache is utilized can be configured in more detail using settings
enable_writes_to_query_cache
and
enable_reads_from_query_cache
(both
true
by default). The former setting
controls whether query results are stored in the cache, whereas the latter setting determines if the database should try to retrieve query
results from the cache. For example, the following query will use the cache only passively, i.e. attempt to read from it but not store its
result in it:
```sql
SELECT some_expensive_calculation(column_1, column_2)
FROM table
SETTINGS use_query_cache = true, enable_writes_to_query_cache = false;
```
For maximum control, it is generally recommended to provide settings
use_query_cache
,
enable_writes_to_query_cache
and
enable_reads_from_query_cache
only with specific queries. It is also possible to enable caching at user or profile level (e.g. via
SET
use_query_cache = true
) but one should keep in mind that all
SELECT
queries may return cached results then.
The query cache can be cleared using statement
SYSTEM DROP QUERY CACHE
. The content of the query cache is displayed in system table
system.query_cache
. The number of query cache hits and misses since database start are shown as events
"QueryCacheHits" and "QueryCacheMisses" in system table
system.events
. Both counters are only updated for
SELECT
queries which run with setting
use_query_cache = true
, other queries do not affect "QueryCacheMisses". Field
query_cache_usage
in system table
system.query_log
shows for each executed query whether the query result was written into or
read from the query cache. Metrics
QueryCacheEntries
and
QueryCacheBytes
in system table
system.metrics
show how many entries / bytes the query cache currently contains.
The query cache exists once per ClickHouse server process. However, cache results are by default not shared between users. This can be
changed (see below) but doing so is not recommended for security reasons.
Query results are referenced in the query cache by the
Abstract Syntax Tree (AST)
of
their query. This means that caching is agnostic to upper/lowercase, for example
SELECT 1
and
select 1
are treated as the same query. To
make the matching more natural, all query-level settings related to the query cache are removed from the AST.
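As a rough illustration of why textual differences do not matter, consider a toy normalization in shell. This is not the server's actual mechanism (ClickHouse keys the cache on the parsed AST, not on query text); the sketch only mimics the observable effect for the simple case of case and whitespace differences:

```bash
# Toy stand-in for AST-based keying: lowercase and collapse whitespace.
normalize() { printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -s ' '; }

key1=$(normalize "SELECT 1")
key2=$(normalize "select   1")

# Both spellings collapse to the same key, so they would hit the same cache entry.
[ "$key1" = "$key2" ] && echo "same cache entry"
```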
If the query was aborted due to an exception or user cancellation, no entry is written into the query cache.
The size of the query cache in bytes, the maximum number of cache entries and the maximum size of individual cache entries (in bytes and in
records) can be configured using different
server configuration options
.
```xml
<query_cache>
    <max_size_in_bytes>1073741824</max_size_in_bytes>
    <max_entries>1024</max_entries>
    <max_entry_size_in_bytes>1048576</max_entry_size_in_bytes>
    <max_entry_size_in_rows>30000000</max_entry_size_in_rows>
</query_cache>
```
It is also possible to limit the cache usage of individual users using settings profiles and settings constraints. More specifically, you can restrict the maximum amount of memory (in bytes) a user may allocate in the query cache and the maximum number of stored query results. For that, first provide configurations `query_cache_max_size_in_bytes` and `query_cache_max_entries` in a user profile in `users.xml`, then make both settings readonly:
```xml
<profiles>
    <default>
        <!-- The maximum cache size in bytes for user/profile 'default' -->
        <query_cache_max_size_in_bytes>10000</query_cache_max_size_in_bytes>
        <!-- The maximum number of SELECT query results stored in the cache for user/profile 'default' -->
        <query_cache_max_entries>100</query_cache_max_entries>
        <!-- Make both settings read-only so the user cannot change them -->
        <constraints>
            <query_cache_max_size_in_bytes>
                <readonly/>
            </query_cache_max_size_in_bytes>
            <query_cache_max_entries>
                <readonly/>
            </query_cache_max_entries>
        </constraints>
    </default>
</profiles>
```
To define how long a query must run at least such that its result can be cached, you can use setting `query_cache_min_query_duration`. For example, the result of query

```sql
SELECT some_expensive_calculation(column_1, column_2)
FROM table
SETTINGS use_query_cache = true, query_cache_min_query_duration = 5000;
```

is only cached if the query runs longer than 5 seconds. It is also possible to specify how often a query needs to run until its result is cached - for that use setting `query_cache_min_query_runs`.
Entries in the query cache become stale after a certain time period (time-to-live). By default, this period is 60 seconds but a different value can be specified at session, profile or query level using setting `query_cache_ttl`. The query cache evicts entries "lazily", i.e. when an entry becomes stale, it is not immediately removed from the cache. Instead, when a new entry is to be inserted into the query cache, the database checks whether the cache has enough free space for the new entry. If this is not the case, the database tries to remove all stale entries. If the cache still does not have enough free space, the new entry is not inserted.
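The lazy eviction described above can be sketched in a few lines. This is a conceptual model under assumed simplifications (capacity counted in entries rather than bytes; the class and method names are hypothetical):

```python
import time

class QueryCacheSketch:
    """Lazy TTL eviction: stale entries are only purged when an insert
    finds the cache full; if purging does not free enough space, the
    new entry is simply not inserted."""
    def __init__(self, max_entries: int, ttl: float):
        self.max_entries = max_entries
        self.ttl = ttl
        self.entries = {}  # key -> (result, insert_time)

    def put(self, key, result, now=None) -> bool:
        now = time.monotonic() if now is None else now
        if len(self.entries) >= self.max_entries:
            # Cache is full: try to drop all stale entries first.
            self.entries = {k: (r, t) for k, (r, t) in self.entries.items()
                            if now - t < self.ttl}
        if len(self.entries) >= self.max_entries:
            return False  # still not enough free space: not inserted
        self.entries[key] = (result, now)
        return True
```

Note that a stale entry can sit in the cache indefinitely if no insert ever needs its slot — exactly the "lazy" behavior described above.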
Entries in the query cache are compressed by default. This reduces the overall memory consumption at the cost of slower writes into / reads from the query cache. To disable compression, use setting `query_cache_compress_entries`.

Sometimes it is useful to keep multiple results for the same query cached. This can be achieved using setting `query_cache_tag` that acts as a label (or namespace) for query cache entries. The query cache considers results of the same query with different tags different.
Example for creating three different query cache entries for the same query:

```sql
SELECT 1 SETTINGS use_query_cache = true; -- query_cache_tag is implicitly '' (empty string)
SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'tag 1';
SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'tag 2';
```
To remove only entries with tag `tag` from the query cache, you can use statement `SYSTEM DROP QUERY CACHE TAG 'tag'`.
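Conceptually, the tag is just part of the cache key, so the same query text under different tags yields distinct entries. A minimal sketch (the function name `tagged_key` is an assumption for illustration):

```python
def tagged_key(query: str, tag: str = "") -> tuple:
    """query_cache_tag acts as a namespace: identical queries with
    different tags map to different cache entries."""
    return (query.strip().lower(), tag)

# The three statements from the example above produce three entries.
cache = {}
for tag in ["", "tag 1", "tag 2"]:
    cache[tagged_key("SELECT 1", tag)] = f"result under tag {tag!r}"
```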
ClickHouse reads table data in blocks of `max_block_size` rows. Due to filtering, aggregation, etc., result blocks are typically much smaller than `max_block_size` but there are also cases where they are much bigger. Setting `query_cache_squash_partial_results` (enabled by default) controls if result blocks are squashed (if they are tiny) or split (if they are large) into blocks of `max_block_size` size before insertion into the query result cache. This reduces performance of writes into the query cache but improves compression rate of cache entries and provides more natural block granularity when query results are later served from the query cache. As a result, the query cache stores for each query multiple (partial) result blocks. While this behavior is a good default, it can be suppressed using setting `query_cache_squash_partial_results`.
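The squash/split step amounts to re-chunking the stream of result blocks into blocks of exactly `max_block_size` rows (the last block may be smaller). A sketch of that transformation, with hypothetical names:

```python
def squash_blocks(blocks, max_block_size):
    """Concatenate incoming result blocks and re-chunk them into
    blocks of max_block_size rows: tiny blocks get squashed together,
    oversized blocks get split."""
    rows = [row for block in blocks for row in block]
    return [rows[i:i + max_block_size]
            for i in range(0, len(rows), max_block_size)]
```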
Also, results of queries with non-deterministic functions are not cached by default. Such functions include
- functions for accessing dictionaries: `dictGet()` etc.,
- user-defined functions without tag `<deterministic>true</deterministic>` in their XML definition,
- functions which return the current date or time: `now()`, `today()`, `yesterday()` etc.,
- functions which return random values: `randomString()`, `fuzzBits()` etc.,
- functions whose result depends on the size and order of the internal chunks used for query processing: `nowInBlock()`, `rowNumberInBlock()`, `runningDifference()`, `blockSize()` etc.,
- functions which depend on the environment: `currentUser()`, `queryID()`, `getMacro()` etc.
To force caching of results of queries with non-deterministic functions regardless, use setting `query_cache_nondeterministic_function_handling`.
Results of queries that involve system tables (e.g. `system.processes` or `information_schema.tables`) are not cached by default. To force caching of results of queries with system tables regardless, use setting `query_cache_system_table_handling`.
Finally, entries in the query cache are not shared between users due to security reasons. For example, user A must not be able to bypass a row policy on a table by running the same query as another user B for whom no such policy exists. However, if necessary, cache entries can be marked accessible by other users (i.e. shared) by supplying setting `query_cache_share_between_users`.

Related content {#related-content}

Blog: Introducing the ClickHouse Query Cache
---
description: 'Page detailing allocation profiling in ClickHouse'
sidebar_label: 'Allocation profiling'
slug: /operations/allocation-profiling
title: 'Allocation profiling'
doc_type: 'guide'
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

Allocation profiling

ClickHouse uses jemalloc as its global allocator. Jemalloc comes with some tools for allocation sampling and profiling.
To make allocation profiling more convenient, ClickHouse and Keeper allow you to control sampling using configs, query settings, `SYSTEM` commands and four letter word (4LW) commands in Keeper.
Additionally, samples can be collected into the `system.trace_log` table under the `JemallocSample` type.

:::note
This guide is applicable for versions 25.9+.
For older versions, please check allocation profiling for versions before 25.9.
:::
Sampling allocations {#sampling-allocations}

If you want to sample and profile allocations in jemalloc, you need to start ClickHouse/Keeper with the config `jemalloc_enable_global_profiler` enabled:

```xml
<clickhouse>
    <jemalloc_enable_global_profiler>1</jemalloc_enable_global_profiler>
</clickhouse>
```

jemalloc will sample allocations and store the information internally.
You can also enable allocation sampling per query by using the `jemalloc_enable_profiler` setting.

:::warning Warning
Because ClickHouse is an allocation-heavy application, jemalloc sampling may incur performance overhead.
:::
Storing jemalloc samples in `system.trace_log` {#storing-jemalloc-samples-in-system-trace-log}

You can store all the jemalloc samples in `system.trace_log` under the `JemallocSample` type.
To enable it globally you can use the config `jemalloc_collect_global_profile_samples_in_trace_log`:

```xml
<clickhouse>
    <jemalloc_collect_global_profile_samples_in_trace_log>1</jemalloc_collect_global_profile_samples_in_trace_log>
</clickhouse>
```

:::warning Warning
Because ClickHouse is an allocation-heavy application, collecting all samples in system.trace_log may incur high load.
:::

You can also enable it per query by using the `jemalloc_collect_profile_samples_in_trace_log` setting.

Example of analyzing memory usage of a query using `system.trace_log` {#example-analyzing-memory-usage-trace-log}

First, we need to run the query with the jemalloc profiler enabled and collect the samples for it into `system.trace_log`:
```sql
SELECT *
FROM numbers(1000000)
ORDER BY number DESC
FORMAT Null
SETTINGS max_bytes_ratio_before_external_sort = 0, jemalloc_enable_profiler = 1, jemalloc_collect_profile_samples_in_trace_log = 1

Query id: 8678d8fe-62c5-48b8-b0cd-26851c62dd75

Ok.

0 rows in set. Elapsed: 0.009 sec. Processed 1.00 million rows, 8.00 MB (108.58 million rows/s., 868.61 MB/s.)
Peak memory usage: 12.65 MiB.
```

{"source_file": "allocation-profiling.md"}
:::note
If ClickHouse was started with `jemalloc_enable_global_profiler`, you don't have to enable `jemalloc_enable_profiler`.
The same is true for `jemalloc_collect_global_profile_samples_in_trace_log` and `jemalloc_collect_profile_samples_in_trace_log`.
:::

We will flush the `system.trace_log`:

```sql
SYSTEM FLUSH LOGS trace_log
```

and query it to get the memory usage of the query we ran for each time point:
```sql
WITH per_bucket AS
(
    SELECT
        event_time_microseconds AS bucket_time,
        sum(size) AS bucket_sum
    FROM system.trace_log
    WHERE trace_type = 'JemallocSample'
      AND query_id = '8678d8fe-62c5-48b8-b0cd-26851c62dd75'
    GROUP BY bucket_time
)
SELECT
    bucket_time,
    sum(bucket_sum) OVER (
        ORDER BY bucket_time ASC
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS cumulative_size,
    formatReadableSize(cumulative_size) AS cumulative_size_readable
FROM per_bucket
ORDER BY bucket_time
```
We can also find the time where the memory usage was the highest:

```sql
SELECT
    argMax(bucket_time, cumulative_size),
    max(cumulative_size)
FROM
(
    WITH per_bucket AS
    (
        SELECT
            event_time_microseconds AS bucket_time,
            sum(size) AS bucket_sum
        FROM system.trace_log
        WHERE trace_type = 'JemallocSample'
          AND query_id = '8678d8fe-62c5-48b8-b0cd-26851c62dd75'
        GROUP BY bucket_time
    )
    SELECT
        bucket_time,
        sum(bucket_sum) OVER (
            ORDER BY bucket_time ASC
            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
        ) AS cumulative_size,
        formatReadableSize(cumulative_size) AS cumulative_size_readable
    FROM per_bucket
    ORDER BY bucket_time
)
```
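The two queries above are just a running sum over signed allocation sizes followed by an argmax. The same computation can be sketched outside the database (the function name `peak_memory` and the `(time, size_delta)` sample shape are assumptions for illustration):

```python
def peak_memory(samples):
    """samples: iterable of (time, size_delta) pairs, where positive
    deltas are allocations and negative deltas are deallocations.
    Returns (peak_time, peak_cumulative_size)."""
    running, peak, peak_time = 0, 0, None
    for t, delta in sorted(samples):
        running += delta
        if running > peak:
            peak, peak_time = running, t
    return peak_time, peak
```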
We can use that result to see where the most active allocations at that time point came from:

```sql
SELECT
    concat(
        '\n',
        arrayStringConcat(
            arrayMap(
                (x, y) -> concat(x, ': ', y),
                arrayMap(x -> addressToLine(x), allocation_trace),
                arrayMap(x -> demangle(addressToSymbol(x)), allocation_trace)
            ),
            '\n'
        )
    ) AS symbolized_trace,
    sum(s) AS per_trace_sum
FROM
(
    SELECT
        ptr,
        sum(size) AS s,
        argMax(trace, event_time_microseconds) AS allocation_trace
    FROM system.trace_log
    WHERE trace_type = 'JemallocSample'
      AND query_id = '8678d8fe-62c5-48b8-b0cd-26851c62dd75'
      AND event_time_microseconds <= '2025-09-04 11:56:21.737139'
    GROUP BY ptr
    HAVING s > 0
)
GROUP BY ALL
ORDER BY per_trace_sum ASC
```
Flushing heap profiles {#flushing-heap-profiles}

By default, the heap profile file will be generated in `/tmp/jemalloc_clickhouse._pid_._seqnum_.heap` where `_pid_` is the PID of ClickHouse and `_seqnum_` is the global sequence number for the current heap profile.
For Keeper, the default file is `/tmp/jemalloc_keeper._pid_._seqnum_.heap`, and follows the same rules.
You can tell jemalloc to flush the current profile by running:

```sql
SYSTEM JEMALLOC FLUSH PROFILE
```

It will return the location of the flushed profile. In Keeper, the profile can be flushed using the `jmfp` 4LW command:

```sh
echo jmfp | nc localhost 9181
```

A different location can be defined by appending the `MALLOC_CONF` environment variable with the `prof_prefix` option.
For example, if you want to generate profiles in the `/data` folder where the filename prefix will be `my_current_profile`, you can run ClickHouse/Keeper with the following environment variable:

```sh
MALLOC_CONF=prof_prefix:/data/my_current_profile
```

The PID and sequence number will be appended to the prefix to form the generated filename.
Analyzing heap profiles {#analyzing-heap-profiles}

After heap profiles have been generated, they need to be analyzed.
For that, jemalloc's tool called `jeprof` can be used. It can be installed in multiple ways:
- Using the system's package manager
- Cloning the jemalloc repo and running `autogen.sh` from the root folder. This will provide you with the `jeprof` script inside the `bin` folder

:::note
`jeprof` uses `addr2line` to generate stacktraces which can be really slow.
If that's the case, it is recommended to install an alternative implementation of the tool.

```bash
git clone https://github.com/gimli-rs/addr2line.git --depth=1 --branch=0.23.0
cd addr2line
cargo build --features bin --release
cp ./target/release/addr2line path/to/current/addr2line
```

Alternatively, `llvm-addr2line` works equally well.
:::
There are many different formats to generate from the heap profile using `jeprof`.
It is recommended to run `jeprof --help` for information on the usage and the various options the tool provides.

In general, the `jeprof` command is used as:

```sh
jeprof path/to/binary path/to/heap/profile --output_format [ > output_file]
```

If you want to compare which allocations happened between two profiles you can set the `base` argument:

```sh
jeprof path/to/binary --base path/to/first/heap/profile path/to/second/heap/profile --output_format [ > output_file]
```
Examples {#examples}

If you want to generate a text file with each procedure written per line:

```sh
jeprof path/to/binary path/to/heap/profile --text > result.txt
```

If you want to generate a PDF file with a call-graph:

```sh
jeprof path/to/binary path/to/heap/profile --pdf > result.pdf
```
Generating a flame graph {#generating-flame-graph}

`jeprof` allows you to generate collapsed stacks for building flame graphs.
You need to use the `--collapsed` argument:

```sh
jeprof path/to/binary path/to/heap/profile --collapsed > result.collapsed
```

After that, you can use many different tools to visualize the collapsed stacks.
The most popular is FlameGraph which contains a script called `flamegraph.pl`:

```sh
cat result.collapsed | /path/to/FlameGraph/flamegraph.pl --color=mem --title="Allocation Flame Graph" --width 2400 > result.svg
```
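For orientation, the collapsed format that `flamegraph.pl` consumes is simply one line per unique stack: the frames joined root-first with semicolons, followed by a weight (here, sampled bytes). A sketch of folding raw samples into that format, with hypothetical names:

```python
from collections import Counter

def collapse_stacks(samples):
    """samples: iterable of (stack, size) where stack is a
    root-to-leaf list of frame names. Returns collapsed-format lines
    suitable for flamegraph.pl, with weights summed per unique stack."""
    weights = Counter()
    for stack, size in samples:
        weights[";".join(stack)] += size
    return [f"{frames} {weight}" for frames, weight in sorted(weights.items())]
```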
Another interesting tool is speedscope that allows you to analyze collected stacks in a more interactive way.

Additional options for the profiler {#additional-options-for-profiler}

jemalloc has many different options available which are related to the profiler. They can be controlled by modifying the `MALLOC_CONF` environment variable.
For example, the interval between allocation samples can be controlled with `lg_prof_sample`.
If you want to dump the heap profile every N bytes you can enable it using `lg_prof_interval`.

It is recommended to check jemalloc's reference page for a complete list of options.
Other resources {#other-resources}

ClickHouse/Keeper expose jemalloc-related metrics in many different ways.

:::warning Warning
It's important to be aware that none of these metrics are synchronized with each other and values may drift.
:::

System table `asynchronous_metrics` {#system-table-asynchronous_metrics}

```sql
SELECT *
FROM system.asynchronous_metrics
WHERE metric LIKE '%jemalloc%'
FORMAT Vertical
```

Reference

System table `jemalloc_bins` {#system-table-jemalloc_bins}

Contains information about memory allocations done via the jemalloc allocator in different size classes (bins) aggregated from all arenas.

Reference

Prometheus {#prometheus}

All jemalloc-related metrics from `asynchronous_metrics` are also exposed using the Prometheus endpoint in both ClickHouse and Keeper.

Reference

`jmst` 4LW command in Keeper {#jmst-4lw-command-in-keeper}

Keeper supports the `jmst` 4LW command which returns basic allocator statistics:

```sh
echo jmst | nc localhost 9181
```
---
description: 'Guide to testing and benchmarking hardware performance with ClickHouse'
sidebar_label: 'Testing Hardware'
sidebar_position: 54
slug: /operations/performance-test
title: 'How to Test Your Hardware with ClickHouse'
doc_type: 'guide'
---

import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';

You can run a basic ClickHouse performance test on any server without installation of ClickHouse packages.

Automated run {#automated-run}

You can run the benchmark with a single script.

Download the script:

```bash
wget https://raw.githubusercontent.com/ClickHouse/ClickBench/main/hardware/hardware.sh
```

Run the script:

```bash
chmod a+x ./hardware.sh
./hardware.sh
```

Copy the output and send it to feedback@clickhouse.com

All the results are published here: https://clickhouse.com/benchmark/hardware/

{"source_file": "performance-test.md"}
---
description: 'Documentation for Workload Scheduling'
sidebar_label: 'Workload scheduling'
sidebar_position: 69
slug: /operations/workload-scheduling
title: 'Workload scheduling'
doc_type: 'reference'
---

When ClickHouse executes multiple queries simultaneously, they may be using shared resources (e.g. disks and CPU cores). Scheduling constraints and policies can be applied to regulate how resources are utilized and shared between different workloads. For all resources a common scheduling hierarchy can be configured. The hierarchy root represents shared resources, while leaves are specific workloads, holding requests that exceed resource capacity.

:::note
Currently remote disk IO and CPU can be scheduled using the described method. For flexible memory limits see Memory overcommit.
:::
Disk configuration {#disk_config}

To enable IO workload scheduling for a specific disk, you have to create read and write resources for READ and WRITE access:

```sql
CREATE RESOURCE resource_name (WRITE DISK disk_name, READ DISK disk_name)
-- or
CREATE RESOURCE read_resource_name (READ DISK disk_name)
CREATE RESOURCE write_resource_name (WRITE DISK disk_name)
```

A resource can be used for any number of disks for READ or WRITE or both for READ and WRITE. There is a syntax allowing you to use a resource for all the disks:
```sql
CREATE RESOURCE all_io (READ ANY DISK, WRITE ANY DISK);
```

An alternative way to express which disks are used by a resource is the server's `storage_configuration`:

:::warning
Workload scheduling using clickhouse configuration is deprecated. SQL syntax should be used instead.
:::

To enable IO scheduling for a specific disk, you have to specify `read_resource` and/or `write_resource` in the storage configuration. It tells ClickHouse which resource should be used for every read and write request on the given disk. Read and write resource can refer to the same resource name, which is useful for local SSDs or HDDs. Multiple different disks also can refer to the same resource, which is useful for remote disks: if you want to be able to allow fair division of network bandwidth between e.g. "production" and "development" workloads.
Example:

```xml
<clickhouse>
    <storage_configuration>
        ...
        <disks>
            <s3>
                <type>s3</type>
                <endpoint>https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/root-path/</endpoint>
                <access_key_id>your_access_key_id</access_key_id>
                <secret_access_key>your_secret_access_key</secret_access_key>
                <read_resource>network_read</read_resource>
                <write_resource>network_write</write_resource>
            </s3>
        </disks>
        <policies>
            <s3_main>
                <volumes>
                    <main>
                        <disk>s3</disk>
                    </main>
                </volumes>
            </s3_main>
        </policies>
    </storage_configuration>
</clickhouse>
```

{"source_file": "workload-scheduling.md"}
Note that server configuration options have priority over the SQL way to define resources.

Workload markup {#workload_markup}

Queries can be marked with the setting `workload` to distinguish different workloads. If `workload` is not set, then the value "default" is used. Note that you are able to specify another value using settings profiles. Setting constraints can be used to make `workload` constant if you want all queries from the user to be marked with a fixed value of the `workload` setting.

It is possible to assign a `workload` setting for background activities. Merges and mutations are using the `merge_workload` and `mutation_workload` server settings correspondingly. These values can also be overridden for specific tables using the `merge_workload` and `mutation_workload` merge tree settings.

Let's consider an example of a system with two different workloads: "production" and "development".

```sql
SELECT count() FROM my_table WHERE value = 42 SETTINGS workload = 'production'
SELECT count() FROM my_table WHERE value = 13 SETTINGS workload = 'development'
```
Resource scheduling hierarchy {#hierarchy}

From the standpoint of the scheduling subsystem a resource represents a hierarchy of scheduling nodes.

```mermaid
graph TD
    subgraph network_read
    nr_root(("/"))
    -->|100 concurrent requests| nr_fair("fair")
    -->|75% bandwidth| nr_prod["prod"]
    nr_fair
    -->|25% bandwidth| nr_dev["dev"]
    end
    subgraph network_write
    nw_root(("/"))
    -->|100 concurrent requests| nw_fair("fair")
    -->|75% bandwidth| nw_prod["prod"]
    nw_fair
    -->|25% bandwidth| nw_dev["dev"]
    end
```

:::warning
Workload scheduling using clickhouse configuration is deprecated. SQL syntax should be used instead. SQL syntax creates all necessary scheduling nodes automatically and the following scheduling node description should be considered as lower level implementation details, accessible through the `system.scheduler` table.
:::
Possible node types:
* `inflight_limit` (constraint) - blocks if either the number of concurrent in-flight requests exceeds `max_requests`, or their total cost exceeds `max_cost`; must have a single child.
* `bandwidth_limit` (constraint) - blocks if the current bandwidth exceeds `max_speed` (0 means unlimited) or burst exceeds `max_burst` (by default equals `max_speed`); must have a single child.
* `fair` (policy) - selects the next request to serve from one of its children nodes according to max-min fairness; children nodes can specify `weight` (default is 1).
* `priority` (policy) - selects the next request to serve from one of its children nodes according to static priorities (lower value means higher priority); children nodes can specify `priority` (default is 0).
* `fifo` (queue) - leaf of the hierarchy capable of holding requests that exceed resource capacity.
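To make the `priority` and `fair` policies concrete, here is a toy selection step over leaf queues — not ClickHouse's scheduler, just a sketch of the two rules above under assumed simplifications (one flat level; queues represented as dicts; `served` tracks how much each queue has already been given):

```python
def pick_next(queues):
    """Among non-empty queues, restrict to the best (lowest) priority,
    then pick the queue with the smallest served/weight ratio — a
    simple approximation of weighted max-min fairness."""
    ready = [q for q in queues if q["pending"] > 0]
    if not ready:
        return None
    best_prio = min(q["priority"] for q in ready)
    candidates = [q for q in ready if q["priority"] == best_prio]
    return min(candidates, key=lambda q: q["served"] / q["weight"])
```

With weights 3 and 1, the weighted queue keeps being selected until it has been served roughly three times as much as its sibling.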
To be able to use the full capacity of the underlying resource, you should use `inflight_limit`. Note that a low number of `max_requests` or `max_cost` could lead to incomplete resource utilization, while too high numbers could lead to empty queues inside the scheduler, which in turn will result in policies being ignored (unfairness or ignoring of priorities) in the subtree. On the other hand, if you want to protect resources from too high utilization, you should use `bandwidth_limit`. It throttles when the amount of resource consumed in `duration` seconds exceeds `max_burst + max_speed * duration` bytes. Two `bandwidth_limit` nodes on the same resource could be used to limit peak bandwidth during short intervals and average bandwidth for longer ones.
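The throttling condition above translates directly into code. This is a sketch of the stated formula only (the function name is hypothetical, and `max_burst` defaults to `max_speed` as the node description says):

```python
def must_throttle(consumed_bytes, duration_s, max_speed, max_burst=None):
    """bandwidth_limit rule: throttle when the bytes consumed within
    duration_s exceed max_burst + max_speed * duration_s."""
    if max_burst is None:
        max_burst = max_speed  # default per the node description
    return consumed_bytes > max_burst + max_speed * duration_s
```

For example, with `max_speed = 1 MB/s` and the default burst, consuming exactly 2 MB in one second is still within budget, while one extra byte triggers throttling.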
The following example shows how to define IO scheduling hierarchies shown in the picture:
```xml
<clickhouse>
    <resources>
        <network_read>
            <node path="/">
                <type>inflight_limit</type>
                <max_requests>100</max_requests>
            </node>
            <node path="/fair">
                <type>fair</type>
            </node>
            <node path="/fair/prod">
                <type>fifo</type>
                <weight>3</weight>
            </node>
            <node path="/fair/dev">
                <type>fifo</type>
            </node>
        </network_read>
        <network_write>
            <node path="/">
                <type>inflight_limit</type>
                <max_requests>100</max_requests>
            </node>
            <node path="/fair">
                <type>fair</type>
            </node>
            <node path="/fair/prod">
                <type>fifo</type>
                <weight>3</weight>
            </node>
            <node path="/fair/dev">
                <type>fifo</type>
            </node>
        </network_write>
    </resources>
</clickhouse>
```
Workload classifiers {#workload_classifiers}

:::warning
Workload scheduling using clickhouse configuration is deprecated. SQL syntax should be used instead. Classifiers are created automatically when using SQL syntax.
:::

Workload classifiers are used to define the mapping from the `workload` specified by a query into leaf-queues that should be used for specific resources. At the moment, workload classification is simple: only static mapping is available.

Example:

```xml
<clickhouse>
    <workload_classifiers>
        <production>
            <network_read>/fair/prod</network_read>
            <network_write>/fair/prod</network_write>
        </production>
        <development>
            <network_read>/fair/dev</network_read>
            <network_write>/fair/dev</network_write>
        </development>
        <default>
            <network_read>/fair/dev</network_read>
            <network_write>/fair/dev</network_write>
        </default>
    </workload_classifiers>
</clickhouse>
```

Workload hierarchy {#workloads}
ClickHouse provides convenient SQL syntax to define the scheduling hierarchy. All resources that were created with `CREATE RESOURCE` share the same structure of the hierarchy, but could differ in some aspects. Every workload created with `CREATE WORKLOAD` maintains a few automatically created scheduling nodes for every resource. A child workload can be created inside another parent workload. Here is an example that defines exactly the same hierarchy as the XML configuration above:

```sql
CREATE RESOURCE network_write (WRITE DISK s3)
CREATE RESOURCE network_read (READ DISK s3)
CREATE WORKLOAD all SETTINGS max_io_requests = 100
CREATE WORKLOAD development IN all
CREATE WORKLOAD production IN all SETTINGS weight = 3
```

The name of a leaf workload without children could be used in query settings `SETTINGS workload = 'name'`.
To customize workload the following settings could be used:
* `priority` - sibling workloads are served according to static priority values (lower value means higher priority).
* `weight` - sibling workloads having the same static priority share resources according to weights.
* `max_io_requests` - the limit on the number of concurrent IO requests in this workload.
* `max_bytes_inflight` - the limit on the total inflight bytes for concurrent requests in this workload.
* `max_bytes_per_second` - the limit on the byte read or write rate of this workload.
* `max_burst_bytes` - the maximum number of bytes that could be processed by the workload without being throttled (for every resource independently).
* `max_concurrent_threads` - the limit on the number of threads for queries in this workload.
* `max_concurrent_threads_ratio_to_cores` - the same as `max_concurrent_threads`, but normalized to the number of available CPU cores.
* `max_cpus` - the limit on the number of CPU cores to serve queries in this workload.
* `max_cpu_share` - the same as `max_cpus`, but normalized to the number of available CPU cores.
* `max_burst_cpu_seconds` - the maximum number of CPU seconds that could be consumed by the workload without being throttled due to `max_cpus`.
All limits specified through workload settings are independent for every resource. For example a workload with `max_bytes_per_second = 10485760` will have a 10 MB/s bandwidth limit for every read and write resource independently. If a common limit for reading and writing is required, consider using the same resource for READ and WRITE access.

There is no way to specify different hierarchies of workloads for different resources. But there is a way to specify a different workload setting value for a specific resource:

```sql
CREATE OR REPLACE WORKLOAD all SETTINGS max_io_requests = 100, max_bytes_per_second = 1000000 FOR network_read, max_bytes_per_second = 2000000 FOR network_write
```

Also note that a workload or resource could not be dropped if it is referenced from another workload. To update a definition of a workload use the `CREATE OR REPLACE WORKLOAD` query.
:::note
Workload settings are translated into a proper set of scheduling nodes. For lower-level details, see the description of the scheduling node
types and options
.
:::
CPU scheduling {#cpu_scheduling}
To enable CPU scheduling for workloads create CPU resource and set a limit for the number of concurrent threads:
```sql
CREATE RESOURCE cpu (MASTER THREAD, WORKER THREAD)
CREATE WORKLOAD all SETTINGS max_concurrent_threads = 100
```
When the ClickHouse server executes many concurrent queries with
multiple threads
and all CPU slots are in use, the overload state is reached. In the overload state, every released CPU slot is rescheduled to the proper workload according to scheduling policies. For queries sharing the same workload, slots are allocated using round-robin. For queries in separate workloads, slots are allocated according to weights, priorities, and limits specified for workloads.
CPU time is consumed by threads when they are not blocked and work on CPU-intensive tasks. For scheduling purposes, two kinds of threads are distinguished:
* Master thread – the first thread that starts working on a query or background activity like a merge or a mutation.
* Worker thread – the additional threads that the master can spawn to work on CPU-intensive tasks.
It may be desirable to use separate resources for master and worker threads to achieve better responsiveness. A high number of worker threads can easily monopolize the CPU resource when high
max_threads
query setting values are used. Incoming queries would then block and wait for a CPU slot for their master thread before starting execution. To avoid this, the following configuration could be used:
```sql
CREATE RESOURCE worker_cpu (WORKER THREAD)
CREATE RESOURCE master_cpu (MASTER THREAD)
CREATE WORKLOAD all SETTINGS max_concurrent_threads = 100 FOR worker_cpu, max_concurrent_threads = 1000 FOR master_cpu
```
It will create separate limits on master and worker threads. Even if all 100 worker CPU slots are busy, new queries will not be blocked until there are available master CPU slots. They will start execution with one thread. Later, if worker CPU slots become available, such queries can scale up and spawn their worker threads. On the other hand, this approach does not bind the total number of slots to the number of CPU processors, and running too many concurrent threads will affect performance.
Limiting the concurrency of master threads will not limit the number of concurrent queries. CPU slots could be released in the middle of the query execution and reacquired by other threads. For example, 4 concurrent queries with a 2 concurrent master thread limit could all be executed in parallel. In this case, every query will receive 50% of a CPU processor. A separate logic should be used to limit the number of concurrent queries and it is not currently supported for workloads.
Separate thread concurrency limits could be used for workloads:
```sql
CREATE RESOURCE cpu (MASTER THREAD, WORKER THREAD)
CREATE WORKLOAD all
CREATE WORKLOAD admin IN all SETTINGS max_concurrent_threads = 10
CREATE WORKLOAD production IN all SETTINGS max_concurrent_threads = 100
CREATE WORKLOAD analytics IN production SETTINGS max_concurrent_threads = 60, weight = 9
CREATE WORKLOAD ingestion IN production
```
This configuration example provides independent CPU slot pools for admin and production. The production pool is shared between analytics and ingestion. Furthermore, if the production pool is overloaded, 9 of 10 released slots will be rescheduled to analytical queries if necessary. The ingestion queries would only receive 1 of 10 slots during overload periods. This might improve the latency of user-facing queries. Analytics has its own limit of 60 concurrent threads, always leaving at least 40 threads to support ingestion. When there is no overload, ingestion could use all 100 threads.
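The 9-of-10 slot distribution described above can be sketched with a smooth weighted round-robin. This is an illustrative model only, not ClickHouse's actual scheduler implementation, and the helper name is hypothetical:

```python
# Illustrative sketch: hand out released CPU slots to sibling workloads
# proportionally to their weights, using smooth weighted round-robin.
def weighted_slot_order(weights: dict[str, int], n_slots: int) -> list[str]:
    """Return the order in which n_slots released slots are handed out."""
    current = {name: 0 for name in weights}
    total = sum(weights.values())
    order = []
    for _ in range(n_slots):
        for name, weight in weights.items():
            current[name] += weight
        # pick the workload with the highest accumulated credit
        winner = max(current, key=current.get)
        current[winner] -= total
        order.append(winner)
    return order

order = weighted_slot_order({"analytics": 9, "ingestion": 1}, 10)
print(order.count("analytics"), order.count("ingestion"))  # 9 1
```

With weights 9 and 1, analytics receives 9 of every 10 released slots, matching the behavior described for the overloaded production pool.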
To exclude a query from CPU scheduling, set the query setting
use_concurrency_control
to 0.
CPU scheduling is not supported for merges and mutations yet.
To provide fair allocations between workloads, it is necessary to perform preemption and down-scaling during query execution. Preemption is enabled with the
cpu_slot_preemption
server setting. If it is enabled, every thread renews its CPU slot periodically (according to the
cpu_slot_quantum_ns
server setting). Such a renewal can block execution if the CPU is overloaded. When execution is blocked for a prolonged time (see the
cpu_slot_preemption_timeout_ms
server setting), the query scales down and the number of concurrently running threads decreases dynamically. Note that CPU time fairness is guaranteed between workloads, but between queries inside the same workload it might be violated in some corner cases.
:::warning
Slot scheduling provides a way to control
query concurrency
but does not guarantee fair CPU time allocation unless server setting
cpu_slot_preemption
is set to
true
. Otherwise, fairness is provided based on the number of CPU slot allocations among competing workloads. This does not imply an equal amount of CPU seconds, because without preemption a CPU slot may be held indefinitely: a thread acquires the slot at the beginning and releases it when the work is done.
:::
:::note
Declaring a CPU resource disables the effect of the
concurrent_threads_soft_limit_num
and
concurrent_threads_soft_limit_ratio_to_cores
settings. Instead, workload setting
max_concurrent_threads
is used to limit the number of CPUs allocated for a specific workload. To achieve the previous behavior, create only a WORKER THREAD resource, set
max_concurrent_threads
for the workload
all
to the same value as
concurrent_threads_soft_limit_num
and use
workload = "all"
query setting. This configuration corresponds to the
concurrent_threads_scheduler
setting set to the "fair_round_robin" value.
:::
Threads vs. CPUs {#threads_vs_cpus}
There are two ways to control the CPU consumption of a workload:
* Thread number limit:
max_concurrent_threads
and
max_concurrent_threads_ratio_to_cores
* CPU throttling:
max_cpus
,
max_cpu_share
and
max_burst_cpu_seconds
The first allows one to dynamically control how many threads are spawned for a query, depending on the current server load. It effectively lowers what the
max_threads
query setting dictates. The second throttles CPU consumption of the workload using the token bucket algorithm. It does not affect the thread number directly, but throttles the total CPU consumption of all threads in the workload.
Token bucket throttling with
max_cpus
and
max_burst_cpu_seconds
means the following. During any interval of
delta
seconds, the total CPU consumption by all queries in the workload is not allowed to be greater than
max_cpus * delta + max_burst_cpu_seconds
CPU seconds. It limits average consumption by
max_cpus
in the long term, but this limit might be exceeded in the short term. For example, given
max_burst_cpu_seconds = 60
and
max_cpus=0.001
, one is allowed to run either 1 thread for 60 seconds, 2 threads for 30 seconds, or 60 threads for 1 second without being throttled. The default value for
max_burst_cpu_seconds
is 1 second. Lower values may lead to under-utilization of allowed
max_cpus
cores given many concurrent threads.
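The token bucket rule above can be sketched in a few lines. This is an illustrative model of the stated semantics (refill at `max_cpus` tokens per second, capped at `max_burst_cpu_seconds`), not ClickHouse's implementation:

```python
class CpuTokenBucket:
    """Refills at max_cpus tokens/sec, capped at max_burst_cpu_seconds tokens."""
    def __init__(self, max_cpus: float, max_burst_cpu_seconds: float):
        self.rate = max_cpus
        self.burst = max_burst_cpu_seconds
        self.tokens = max_burst_cpu_seconds  # start with a full burst allowance
        self.last = 0.0

    def try_consume(self, now: float, cpu_seconds: float) -> bool:
        # refill proportionally to elapsed time, never above the burst cap
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cpu_seconds:
            self.tokens -= cpu_seconds
            return True
        return False  # the workload is throttled until the bucket refills

bucket = CpuTokenBucket(max_cpus=0.001, max_burst_cpu_seconds=60)
print(bucket.try_consume(0.0, 60))  # True: the 60 CPU-second burst fits
print(bucket.try_consume(1.0, 1))   # False: only ~0.001 tokens refilled since
```

This reproduces the example from the text: with `max_cpus = 0.001` and `max_burst_cpu_seconds = 60`, one full 60 CPU-second burst is allowed, after which consumption is throttled to the tiny refill rate.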
:::warning
CPU throttling settings are active only if the
cpu_slot_preemption
server setting is enabled; they are ignored otherwise.
:::
While holding a CPU slot, a thread can be in one of three main states:
*
Running:
Effectively consuming CPU resource. Time spent in this state is accounted by the CPU throttling.
*
Ready:
Waiting for a CPU to become available. Not accounted by CPU throttling.
*
Blocked:
Doing IO operations or other blocking syscalls (e.g. waiting on a mutex). Not accounted by CPU throttling.
Let's consider an example of configuration that combines both CPU throttling and thread number limits:
```sql
CREATE RESOURCE cpu (MASTER THREAD, WORKER THREAD)
CREATE WORKLOAD all SETTINGS max_concurrent_threads_ratio_to_cores = 2
CREATE WORKLOAD admin IN all SETTINGS max_concurrent_threads = 2, priority = -1
CREATE WORKLOAD production IN all SETTINGS weight = 4
CREATE WORKLOAD analytics IN production SETTINGS max_cpu_share = 0.7, weight = 3
CREATE WORKLOAD ingestion IN production
CREATE WORKLOAD development IN all SETTINGS max_cpu_share = 0.3
```
Here we limit the total number of threads for all queries to be x2 of the available CPUs. Admin workload is limited to exactly two threads at most, regardless of the number of available CPUs. Admin has priority -1 (less than default 0) and it gets any CPU slot first if required. When the admin does not run queries, CPU resources are divided among production and development workloads. Guaranteed shares of CPU time are based on weights (4 to 1): At least 80% goes to production (if required), and at least 20% goes to development (if required). While weights form guarantees, CPU throttling forms limits: production is not limited and can consume 100%, while development has a limit of 30%, which is applied even if there are no queries from other workloads. Production workload is not a leaf, so its resources are split among analytics and ingestion according to weights (3 to 1). It means that analytics has a guarantee of at least 0.8 * 0.75 = 60%, and based on
max_cpu_share
, it has a limit of 70% of total CPU resources. While ingestion is left with a guarantee of at least 0.8 * 0.25 = 20%, it has no upper limit.
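The arithmetic behind these guarantees can be checked with a short sketch. The helper is illustrative (it only reproduces the weight math described above, not the scheduler itself):

```python
def guaranteed_shares(parent_share: float, weights: dict[str, float]) -> dict[str, float]:
    """Split a parent's guaranteed share among children proportionally to weights."""
    total = sum(weights.values())
    return {name: parent_share * w / total for name, w in weights.items()}

# production vs. development under `all` (weights 4 : 1)
top = guaranteed_shares(1.0, {"production": 4, "development": 1})
# analytics vs. ingestion under production (weights 3 : 1)
prod = guaranteed_shares(top["production"], {"analytics": 3, "ingestion": 1})

print({k: round(v, 2) for k, v in top.items()})   # {'production': 0.8, 'development': 0.2}
print({k: round(v, 2) for k, v in prod.items()})  # {'analytics': 0.6, 'ingestion': 0.2}
```

This matches the figures in the text: analytics is guaranteed 0.8 * 0.75 = 60% of total CPU, ingestion 0.8 * 0.25 = 20%, while the `max_cpu_share` limits (70% for analytics, 30% for development) cap consumption independently of these guarantees.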
:::note
If you want to maximize CPU utilization on your ClickHouse server, avoid using
max_cpus
and
max_cpu_share
for the root workload
all
. Instead, set a higher value for
max_concurrent_threads
. For example, on a system with 8 CPUs, set
max_concurrent_threads = 16
. This allows 8 threads to run CPU tasks while 8 other threads can handle I/O operations. Additional threads will create CPU pressure, ensuring scheduling rules are enforced. In contrast, setting
max_cpus = 8
will never create CPU pressure because the server cannot exceed the 8 available CPUs.
:::
Query slot scheduling {#query_scheduling}
To enable query slot scheduling for workloads, create a QUERY resource and set a limit for the number of concurrent queries or queries per second:
```sql
CREATE RESOURCE query (QUERY)
CREATE WORKLOAD all SETTINGS max_concurrent_queries = 100, max_queries_per_second = 10, max_burst_queries = 20
```
Workload setting
max_concurrent_queries
limits the number of concurrent queries that can run simultaneously for a given workload. This is an analog of the query
max_concurrent_queries_for_all_users
and server
max_concurrent_queries
settings. Async insert queries and some specific queries like KILL are not counted towards the limit.
Workload settings
max_queries_per_second
and
max_burst_queries
limit the number of queries for the workload with a token bucket throttler. It guarantees that during any time interval
T
no more than
max_queries_per_second * T + max_burst_queries
new queries will start execution.
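The guarantee can be spelled out numerically. This is a sketch of the stated formula only, not the server's code:

```python
def max_started_queries(T: float, max_queries_per_second: float, max_burst_queries: int) -> float:
    """Upper bound on queries that may start in any window of T seconds."""
    return max_queries_per_second * T + max_burst_queries

# With the example above (max_queries_per_second = 10, max_burst_queries = 20):
print(max_started_queries(1, 10, 20))   # 30
print(max_started_queries(60, 10, 20))  # 620
```

So a one-second burst may admit up to 30 queries, while the long-run rate converges to 10 queries per second.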
Workload setting
max_waiting_queries
limits the number of waiting queries for the workload. When the limit is reached, the server returns the error
SERVER_OVERLOADED
.
:::note
Blocked queries will wait indefinitely and not appear in
SHOW PROCESSLIST
until all constraints are satisfied.
:::
Workloads and resources storage {#workload_entity_storage}
Definitions of all workloads and resources in the form of
CREATE WORKLOAD
and
CREATE RESOURCE
queries are stored persistently either on disk at
workload_path
or in ZooKeeper at
workload_zookeeper_path
. ZooKeeper storage is recommended to achieve consistency between nodes. Alternatively, the
ON CLUSTER
clause could be used along with disk storage.
Configuration-based workloads and resources {#config_based_workloads}
In addition to SQL-based definitions, workloads and resources can be predefined in the server configuration file. This is useful in cloud environments where some limitations are dictated by infrastructure, while other limits could be changed by customers. Configuration-based entities have priority over SQL-defined ones and cannot be modified or deleted using SQL commands.
Configuration format {#config_based_workloads_format}
```xml
<clickhouse>
    <resources_and_workloads>
        RESOURCE s3disk_read (READ DISK s3);
        RESOURCE s3disk_write (WRITE DISK s3);
        WORKLOAD all SETTINGS max_io_requests = 500 FOR s3disk_read, max_io_requests = 1000 FOR s3disk_write, max_bytes_per_second = 1342177280 FOR s3disk_read, max_bytes_per_second = 3355443200 FOR s3disk_write;
        WORKLOAD production IN all SETTINGS weight = 3;
    </resources_and_workloads>
</clickhouse>
```
The configuration uses the same SQL syntax as
CREATE WORKLOAD
and
CREATE RESOURCE
statements. All queries must be valid.
Usage recommendations {#config_based_workloads_usage_recommendations}
For cloud environments, a typical setup might include:
1. Define root workload and network IO resources in configuration to set infrastructure limits
2. Set
throw_on_unknown_workload
to enforce these limits
3. Create a
CREATE WORKLOAD default IN all
to automatically apply limits to all queries (since the default value for the
workload
query setting is 'default')
4. Allow users to create additional workloads within the configured hierarchy
This ensures that all background activities and queries respect the infrastructure limitations while still allowing flexibility for user-specific scheduling policies.
Another use case is different configuration for different nodes in a heterogeneous cluster.
Strict resource access {#strict_resource_access}
To enforce resource scheduling policies for all queries, there is a server setting
throw_on_unknown_workload
. If it is set to
true
then every query is required to use a valid
workload
query setting; otherwise, a
RESOURCE_ACCESS_DENIED
exception is thrown. If it is set to
false
then such a query does not use resource scheduler, i.e. it will get unlimited access to any
RESOURCE
. Query setting 'use_concurrency_control = 0' allows a query to bypass the CPU scheduler and get unlimited access to CPU. To enforce CPU scheduling, create a settings constraint to keep 'use_concurrency_control' a read-only constant value.
:::note
Do not set
throw_on_unknown_workload
to
true
unless
CREATE WORKLOAD default
is executed. It could lead to server startup issues if a query without an explicit
workload
is executed during startup.
:::
See also {#see-also}
system.scheduler
system.workloads
system.resources
merge_workload
merge tree setting
merge_workload
global server setting
mutation_workload
merge tree setting
mutation_workload
global server setting
workload_path
global server setting
workload_zookeeper_path
global server setting
cpu_slot_preemption
global server setting
cpu_slot_quantum_ns
global server setting
cpu_slot_preemption_timeout_ms
global server setting
description: 'Guide to configuring and managing resource usage quotas in ClickHouse'
sidebar_label: 'Quotas'
sidebar_position: 51
slug: /operations/quotas
title: 'Quotas'
doc_type: 'guide'
:::note Quotas in ClickHouse Cloud
Quotas are supported in ClickHouse Cloud but must be created using the
DDL syntax
. The XML configuration approach documented below is
not supported
.
:::
Quotas allow you to limit resource usage over a period of time or track the use of resources.
Quotas are set up in the user config, which is usually 'users.xml'.
The system also has a feature for limiting the complexity of a single query. See the section
Restrictions on query complexity
.
In contrast to query complexity restrictions, quotas:
Place restrictions on a set of queries that can be run over a period of time, instead of limiting a single query.
Account for resources spent on all remote servers for distributed query processing.
Let's look at the section of the 'users.xml' file that defines quotas.
```xml
<default>
    <interval>
        <duration>3600</duration>
        <!-- Unlimited. Just collect data for the specified time interval. -->
        <queries>0</queries>
        <query_selects>0</query_selects>
        <query_inserts>0</query_inserts>
        <errors>0</errors>
        <result_rows>0</result_rows>
        <read_rows>0</read_rows>
        <execution_time>0</execution_time>
    </interval>
</default>
```
By default, the quota tracks resource consumption for each hour, without limiting usage.
The resource consumption calculated for each interval is output to the server log after each request.
```xml
<statbox>
    <interval>
        <duration>3600</duration>
        <queries>1000</queries>
        <query_selects>100</query_selects>
        <query_inserts>100</query_inserts>
        <written_bytes>5000000</written_bytes>
        <errors>100</errors>
        <result_rows>1000000000</result_rows>
        <read_rows>100000000000</read_rows>
        <execution_time>900</execution_time>
        <failed_sequential_authentications>5</failed_sequential_authentications>
    </interval>
    <interval>
        <duration>86400</duration>
        <queries>10000</queries>
        <query_selects>10000</query_selects>
        <query_inserts>10000</query_inserts>
        <errors>1000</errors>
        <result_rows>5000000000</result_rows>
        <result_bytes>160000000000</result_bytes>
        <read_rows>500000000000</read_rows>
        <read_bytes>16000000000000</read_bytes>
        <execution_time>7200</execution_time>
    </interval>
</statbox>
```
For the 'statbox' quota, restrictions are set for every hour and for every 24 hours (86,400 seconds). The time interval is counted, starting from an implementation-defined fixed moment in time. In other words, the 24-hour interval does not necessarily begin at midnight.
When the interval ends, all collected values are cleared. For the next hour, the quota calculation starts over.
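The fixed-moment alignment can be illustrated with a short sketch. Aligning intervals to the Unix epoch is an assumption made here for illustration; the docs only say the moment is implementation-defined:

```python
def interval_bounds(now: int, duration: int) -> tuple[int, int]:
    """Interval boundaries when intervals are aligned to the Unix epoch (assumed)."""
    start = now - now % duration
    return start, start + duration

# An hourly (3600 s) interval containing Unix time 1705314600 (xx:30:00):
print(interval_bounds(1705314600, 3600))  # (1705312800, 1705316400)
```

All requests landing between the two boundaries are counted against the same interval; when `now` crosses the upper boundary, counters start over.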
Here are the amounts that can be restricted:
queries
– The total number of requests.
query_selects
– The total number of select requests.
query_inserts
– The total number of insert requests.
errors
– The number of queries that threw an exception.
result_rows
– The total number of rows given as a result.
result_bytes
– The total size of rows given as a result.
read_rows
– The total number of source rows read from tables for running the query on all remote servers.
read_bytes
– The total size read from tables for running the query on all remote servers.
written_bytes
– The total size of a writing operation.
execution_time
– The total query execution time, in seconds (wall time).
failed_sequential_authentications
– The total number of sequential authentication errors.
If the limit is exceeded for at least one time interval, an exception is thrown with a text about which restriction was exceeded, for which interval, and when the new interval begins (when queries can be sent again).
Quotas can use the "quota key" feature to report on resources for multiple keys independently. Here is an example of this:
```xml
<!-- keyed β The quota_key "key" is passed in the query parameter,
and the quota is tracked separately for each key value.
For example, you can pass a username as the key,
so the quota will be counted separately for each username.
Using keys makes sense only if quota_key is transmitted by the program, not by a user.
You can also write <keyed_by_ip />, so the IP address is used as the quota key.
(But keep in mind that users can change the IPv6 address fairly easily.)
-->
<keyed />
```
The quota is assigned to users in the 'users' section of the config. See the section "Access rights".
For distributed query processing, the accumulated amounts are stored on the requestor server. So if the user goes to another server, the quota there will "start over".
When the server is restarted, quotas are reset.
Related Content {#related-content}
Blog:
Building single page applications with ClickHouse
description: 'caching mechanism that allows for caching of
data in in-process memory rather than relying on the OS page cache.'
sidebar_label: 'Userspace page cache'
sidebar_position: 65
slug: /operations/userspace-page-cache
title: 'Userspace page cache'
doc_type: 'reference'
Userspace page cache
Overview {#overview}
The userspace page cache is a new caching mechanism that allows for caching of
data in in-process memory rather than relying on the OS page cache.
ClickHouse already offers the
Filesystem cache
as a way of caching on top of remote object storage such as Amazon S3, Google
Cloud Storage (GCS) or Azure Blob Storage. The userspace page cache is designed
to speed up access to remote data when the normal OS caching isn't doing a good
enough job.
It differs from the filesystem cache in the following ways:
| Filesystem Cache | Userspace page cache |
|---------------------------------------------------------|---------------------------------------|
| Writes data to the local filesystem | Present only in memory |
| Takes up disk space (also configurable on tmpfs) | Independent of filesystem |
| Survives server restarts | Does not survive server restarts |
| Does not show up in the server's memory usage | Shows up in the server's memory usage |
| Suitable for both on-disk and in-memory (OS page cache) | Good for disk-less servers |
Configuration settings and usage {#configuration-settings-and-usage}
Usage {#usage}
To enable the userspace page cache, first configure it on the server:
```bash
cat config.d/page_cache.yaml
page_cache_max_size: 100G
```
:::note
The userspace page cache will use up to the specified amount of memory, but
this memory amount is not reserved. The memory will be evicted when it is needed
for other server needs.
:::
Next, enable its usage on the query-level:
```sql
SET use_page_cache_for_disks_without_file_cache=1;
```
Settings {#settings}
| Setting | Description | Default |
|---------|-------------|---------|
| use_page_cache_for_disks_without_file_cache | Use userspace page cache for remote disks that don't have filesystem cache enabled. | 0 |
| use_page_cache_with_distributed_cache | Use userspace page cache when distributed cache is used. | 0 |
| read_from_page_cache_if_exists_otherwise_bypass_cache | Use userspace page cache in passive mode, similar to read_from_filesystem_cache_if_exists_otherwise_bypass_cache. | 0 |
| page_cache_inject_eviction | Userspace page cache will sometimes invalidate some pages at random. Intended for testing. | 0 |
| page_cache_block_size | Size of file chunks to store in the userspace page cache, in bytes. All reads that go through the cache will be rounded up to a multiple of this size. | 1048576 |
| page_cache_history_window_ms | Delay before freed memory can be used by userspace page cache. | 1000 |
| page_cache_policy | Userspace page cache policy name. | SLRU |
| page_cache_size_ratio | The size of the protected queue in the userspace page cache relative to the cache's total size. | 0.5 |
| page_cache_min_size | Minimum size of the userspace page cache. | 104857600 |
| page_cache_max_size | Maximum size of the userspace page cache. Set to 0 to disable the cache. If greater than page_cache_min_size, the cache size will be continuously adjusted within this range, to use most of the available memory while keeping the total memory usage below the limit (max_server_memory_usage[_to_ram_ratio]). | 0 |
| page_cache_free_memory_ratio | Fraction of the memory limit to keep free from the userspace page cache. Analogous to the Linux min_free_kbytes setting. | 0.15 |
| page_cache_lookahead_blocks | On userspace page cache miss, read up to this many consecutive blocks at once from the underlying storage, if they're also not in the cache. Each block is page_cache_block_size bytes. | 16 |
| page_cache_shards | Stripe userspace page cache over this many shards to reduce mutex contention. Experimental, not likely to improve performance. | 4 |
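The rounding behavior of page_cache_block_size and the lookahead bound can be sketched with simple arithmetic (illustrative only, using the default values from the table):

```python
PAGE_CACHE_BLOCK_SIZE = 1048576      # default, bytes (1 MiB)
PAGE_CACHE_LOOKAHEAD_BLOCKS = 16     # default

def blocks_for_read(nbytes: int, block_size: int = PAGE_CACHE_BLOCK_SIZE) -> int:
    """Reads through the cache are rounded up to a whole number of blocks."""
    return -(-nbytes // block_size)  # ceiling division

print(blocks_for_read(1))                # 1  (a 1-byte read still fetches one block)
print(blocks_for_read(3 * 1048576 + 1))  # 4
# Upper bound on bytes fetched from storage on a single miss with lookahead:
print(PAGE_CACHE_LOOKAHEAD_BLOCKS * PAGE_CACHE_BLOCK_SIZE)  # 16777216
```

So with the defaults, a cache miss may trigger up to a 16 MiB sequential read from the underlying storage.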
Related content {#related-content}
Filesystem cache
ClickHouse v25.3 Release Webinar
description: 'Documentation for cluster discovery in ClickHouse'
sidebar_label: 'Cluster discovery'
slug: /operations/cluster-discovery
title: 'Cluster discovery'
doc_type: 'guide'
Cluster discovery
Overview {#overview}
ClickHouse's Cluster Discovery feature simplifies cluster configuration by allowing nodes to automatically discover and register themselves without the need for explicit definition in the configuration files. This is especially beneficial in cases where the manual definition of each node becomes cumbersome.
:::note
Cluster Discovery is an experimental feature and can be changed or removed in future versions.
To enable it, include the
allow_experimental_cluster_discovery
setting in your configuration file:
```xml
<clickhouse>
    <!-- ... -->
    <allow_experimental_cluster_discovery>1</allow_experimental_cluster_discovery>
    <!-- ... -->
</clickhouse>
```
:::
Remote servers configuration {#remote-servers-configuration}
Traditional manual configuration {#traditional-manual-configuration}
Traditionally, in ClickHouse, each shard and replica in the cluster needed to be manually specified in the configuration:
```xml
<remote_servers>
    <cluster_name>
        <shard>
            <replica>
                <host>node1</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>node2</host>
                <port>9000</port>
            </replica>
        </shard>
        <shard>
            <replica>
                <host>node3</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>node4</host>
                <port>9000</port>
            </replica>
        </shard>
    </cluster_name>
</remote_servers>
```
Using cluster discovery {#using-cluster-discovery}
With Cluster Discovery, rather than defining each node explicitly, you simply specify a path in ZooKeeper. All nodes that register under this path in ZooKeeper will be automatically discovered and added to the cluster.
```xml
<remote_servers>
    <cluster_name>
        <discovery>
            <path>/clickhouse/discovery/cluster_name</path>
            <!-- # Optional configuration parameters: -->
            <!-- ## Authentication credentials to access all other nodes in cluster: -->
            <!-- <user>user1</user> -->
            <!-- <password>pass123</password> -->
            <!-- ### Alternatively to password, interserver secret may be used: -->
            <!-- <secret>secret123</secret> -->
            <!-- ## Shard for current node (see below): -->
            <!-- <shard>1</shard> -->
            <!-- ## Observer mode (see below): -->
            <!-- <observer/> -->
        </discovery>
    </cluster_name>
</remote_servers>
```
If you want to specify a shard number for a particular node, you can include the
<shard>
tag within the
<discovery>
section:
for
node1
and
node2
:
```xml
<discovery>
    <path>/clickhouse/discovery/cluster_name</path>
    <shard>1</shard>
</discovery>
```
for
node3
and
node4
:
```xml
<discovery>
    <path>/clickhouse/discovery/cluster_name</path>
    <shard>2</shard>
</discovery>
```
Observer mode {#observer-mode}
Nodes configured in observer mode will not register themselves as replicas.
They will solely observe and discover other active replicas in the cluster without actively participating.
To enable observer mode, include the
<observer/>
tag within the
<discovery>
section:
```xml
<discovery>
    <path>/clickhouse/discovery/cluster_name</path>
    <observer/>
</discovery>
```
Discovery of clusters {#discovery-of-clusters}
Sometimes you may need to add and remove not only hosts in clusters, but clusters themselves. You can use the
<multicluster_root_path>
node with the root path for several clusters:
```xml
<remote_servers>
    <some_unused_name>
        <discovery>
            <multicluster_root_path>/clickhouse/discovery</multicluster_root_path>
            <observer/>
        </discovery>
    </some_unused_name>
</remote_servers>
```
In this case, when some other host registers itself with the path
/clickhouse/discovery/some_new_cluster
, a cluster with name
some_new_cluster
will be added.
You can use both features simultaneously: the host can register itself in cluster `my_cluster` and discover any other clusters:

```xml
<remote_servers>
    <my_cluster>
        <discovery>
            <path>/clickhouse/discovery/my_cluster</path>
        </discovery>
    </my_cluster>
    <some_unused_name>
        <discovery>
            <multicluster_root_path>/clickhouse/discovery</multicluster_root_path>
            <observer/>
        </discovery>
    </some_unused_name>
</remote_servers>
```
Limitations:
- You can't use both `<path>` and `<multicluster_root_path>` in the same `remote_servers` subtree.
- `<multicluster_root_path>` can only be used together with `<observer/>`.
- The last part of the path from Keeper is used as the cluster name, while during registration the name is taken from the XML tag.
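The last-path-segment rule can be illustrated with a one-line helper. This is a hypothetical function, not part of ClickHouse, shown only to make the naming rule concrete:

```python
from pathlib import PurePosixPath

def discovered_cluster_name(zk_path: str) -> str:
    """When a host registers under a multicluster root, the last
    segment of its Keeper path becomes the discovered cluster name."""
    return PurePosixPath(zk_path).name

# A host registering at /clickhouse/discovery/some_new_cluster is
# discovered as cluster "some_new_cluster".
print(discovered_cluster_name("/clickhouse/discovery/some_new_cluster"))
```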
Use cases and limitations {#use-cases-and-limitations}
As nodes are added or removed from the specified ZooKeeper path, they are automatically discovered or removed from the cluster without the need for configuration changes or server restarts.
However, changes affect only cluster configuration, not the data or existing databases and tables.
Consider the following example with a cluster of 3 nodes:

```xml
<remote_servers>
    <default>
        <discovery>
            <path>/clickhouse/discovery/default_cluster</path>
        </discovery>
    </default>
</remote_servers>
```
```sql
SELECT * EXCEPT (default_database, errors_count, slowdowns_count, estimated_recovery_time, database_shard_name, database_replica_name)
FROM system.clusters WHERE cluster = 'default';
```

```response
┌─cluster─┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name────┬─host_address─┬─port─┬─is_local─┬─user─┬─is_active─┐
│ default │         1 │            1 │           1 │ 92d3c04025e8 │ 172.26.0.5   │ 9000 │        0 │      │ ᴺᵁᴸᴸ      │
│ default │         1 │            1 │           2 │ a6a68731c21b │ 172.26.0.4   │ 9000 │        1 │      │ ᴺᵁᴸᴸ      │
│ default │         1 │            1 │           3 │ 8e62b9cb17a1 │ 172.26.0.2   │ 9000 │        0 │      │ ᴺᵁᴸᴸ      │
└─────────┴───────────┴──────────────┴─────────────┴──────────────┴──────────────┴──────┴──────────┴──────┴───────────┘
```
```sql
CREATE TABLE event_table ON CLUSTER default (event_time DateTime, value String)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/event_table', '{replica}')
ORDER BY event_time PARTITION BY toYYYYMM(event_time);
INSERT INTO event_table ...
```
Then, we add a new node to the cluster, starting a new node with the same entry in the `remote_servers` section in a configuration file:

```response
┌─cluster─┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name────┬─host_address─┬─port─┬─is_local─┬─user─┬─is_active─┐
│ default │         1 │            1 │           1 │ 92d3c04025e8 │ 172.26.0.5   │ 9000 │        0 │      │ ᴺᵁᴸᴸ      │
│ default │         1 │            1 │           2 │ a6a68731c21b │ 172.26.0.4   │ 9000 │        1 │      │ ᴺᵁᴸᴸ      │
│ default │         1 │            1 │           3 │ 8e62b9cb17a1 │ 172.26.0.2   │ 9000 │        0 │      │ ᴺᵁᴸᴸ      │
│ default │         1 │            1 │           4 │ b0df3669b81f │ 172.26.0.6   │ 9000 │        0 │      │ ᴺᵁᴸᴸ      │
└─────────┴───────────┴──────────────┴─────────────┴──────────────┴──────────────┴──────┴──────────┴──────┴───────────┘
```
The fourth node is participating in the cluster, but table `event_table` still exists only on the first three nodes:

```sql
SELECT hostname(), database, table FROM clusterAllReplicas(default, system.tables) WHERE table = 'event_table' FORMAT PrettyCompactMonoBlock
```

```response
┌─hostname()───┬─database─┬─table───────┐
│ a6a68731c21b │ default  │ event_table │
│ 92d3c04025e8 │ default  │ event_table │
│ 8e62b9cb17a1 │ default  │ event_table │
└──────────────┴──────────┴─────────────┘
```
If you need to have tables replicated on all the nodes, you may use the `Replicated` database engine as an alternative to cluster discovery.
---
description: 'Guide to configuring and using SQL startup scripts in ClickHouse for automatic schema creation and migrations'
sidebar_label: 'Startup scripts'
slug: /operations/startup-scripts
title: 'Startup scripts'
doc_type: 'guide'
---
Startup scripts
ClickHouse can run arbitrary SQL queries from the server configuration during startup. This can be useful for migrations or automatic schema creation.

```xml
<clickhouse>
    <startup_scripts>
        <throw_on_error>false</throw_on_error>
        <scripts>
            <query>CREATE ROLE OR REPLACE test_role</query>
        </scripts>
        <scripts>
            <query>CREATE TABLE TestTable (id UInt64) ENGINE=TinyLog</query>
            <condition>SELECT 1;</condition>
        </scripts>
        <scripts>
            <query>CREATE DICTIONARY test_dict (...) SOURCE(CLICKHOUSE(...))</query>
            <user>default</user>
        </scripts>
    </startup_scripts>
</clickhouse>
```
ClickHouse executes all queries from the `startup_scripts` sequentially in the specified order. If any of the queries fail, the execution of the following queries won't be interrupted. However, if `throw_on_error` is set to true, the server will not start if an error occurs during script execution.

You can specify a conditional query in the config. In that case, the corresponding query executes only when the condition query returns the value `1` or `true`.
:::note
If the condition query returns any other value than `1` or `true`, the result will be interpreted as `false`, and the corresponding query won't be executed.
:::
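The sequencing and gating rules above can be sketched as follows. This is a hypothetical simulation, not ClickHouse's actual implementation; `execute` stands in for running a query against the server:

```python
def run_startup_scripts(scripts, execute, throw_on_error=False):
    """Simulates the startup_scripts rules: scripts run sequentially;
    a failing query does not stop the rest unless throw_on_error is set;
    a script with a condition runs only when the condition yields 1 or true."""
    for script in scripts:
        condition = script.get("condition")
        if condition is not None and execute(condition) not in (1, True):
            continue  # any other result is treated as false
        try:
            execute(script["query"])
        except Exception:
            if throw_on_error:
                raise  # the server refuses to start

# Example with a fake executor: the conditional script is skipped
# because its condition evaluates to 0.
log = []
def fake_execute(q):
    log.append(q)
    return 0 if q == "SELECT 0" else 1

run_startup_scripts(
    [{"query": "CREATE ROLE OR REPLACE test_role"},
     {"query": "CREATE TABLE t (id UInt64) ENGINE=TinyLog", "condition": "SELECT 0"}],
    fake_execute,
)
print(log)  # ['CREATE ROLE OR REPLACE test_role', 'SELECT 0']
```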
---
description: 'Documentation for Update'
sidebar_title: 'Self-managed Upgrade'
slug: /operations/update
title: 'Self-managed Upgrade'
doc_type: 'guide'
---
ClickHouse upgrade overview {#clickhouse-upgrade-overview}
This document contains:
- general guidelines
- a recommended plan
- specifics for upgrading the binaries on your systems
General guidelines {#general-guidelines}
These notes should help you with planning, and to understand why we make the recommendations that we do later in the document.
Upgrade ClickHouse server separately from ClickHouse Keeper or ZooKeeper {#upgrade-clickhouse-server-separately-from-clickhouse-keeper-or-zookeeper}
Unless there is a security fix needed for ClickHouse Keeper or Apache ZooKeeper, it is not necessary to upgrade Keeper when you upgrade ClickHouse server. Keeper stability is required during the upgrade process, so complete the ClickHouse server upgrades before considering an upgrade of Keeper.
Minor version upgrades should be adopted often {#minor-version-upgrades-should-be-adopted-often}
It is highly recommended to always upgrade to the newest minor version as soon as it is released. Minor releases do not have breaking changes but do have important bug fixes (and may have security fixes).
Test experimental features on a separate ClickHouse server running the target version {#test-experimental-features-on-a-separate-clickhouse-server-running-the-target-version}
The compatibility of experimental features can be broken at any moment in any way. If you are using experimental features, then check the changelogs and consider setting up a separate ClickHouse server with the target version installed and test your use of the experimental features there.
Downgrades {#downgrades}
If you upgrade and then realize that the new version is not compatible with some feature that you depend on you may be able to downgrade to a recent (less than one year old) version if you have not started to use any of the new features. Once the new features are used the downgrade will not work.
Multiple ClickHouse server versions in a cluster {#multiple-clickhouse-server-versions-in-a-cluster}
We make an effort to maintain a one-year compatibility window (which includes 2 LTS versions). This means that any two versions should be able to work together in a cluster if the difference between them is less than one year (or if there are less than two LTS versions between them). However, it is recommended to upgrade all members of a cluster to the same version as quickly as possible, as some minor issues are possible (like slowdown of distributed queries, retriable errors in some background operations in ReplicatedMergeTree, etc).
We never recommend running different versions in the same cluster when the release dates are more than one year apart. While we do not expect that you will have data loss, the cluster may become unusable. The issues that you should expect if you have more than one year difference in versions include:
- the cluster may not work
- some (or even all) queries may fail with arbitrary errors
- arbitrary errors/warnings may appear in the logs
- it may be impossible to downgrade
Incremental upgrades {#incremental-upgrades}
If the difference between the current version and the target version is more than one year, then it is recommended to either:
- Upgrade with downtime (stop all servers, upgrade all servers, run all servers).
- Or to upgrade through an intermediate version (a version less than one year more recent than the current version).
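The one-year rule above can be made concrete with a small helper. This is a hypothetical illustration (the function and the release dates below are made up, not a ClickHouse tool):

```python
from datetime import date

def upgrade_path(current_release: date, target_release: date, max_gap_days: int = 365) -> str:
    """Applies the guideline above: if the two release dates are more
    than about one year apart, a direct rolling upgrade is not
    recommended; go through an intermediate version or take downtime."""
    gap_days = (target_release - current_release).days
    if gap_days <= max_gap_days:
        return "direct rolling upgrade"
    return "use an intermediate version or upgrade with downtime"

# Illustrative dates only (not actual release dates):
print(upgrade_path(date(2022, 3, 17), date(2022, 12, 15)))  # within one year
print(upgrade_path(date(2021, 3, 12), date(2023, 3, 30)))   # more than a year apart
```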
Recommended plan {#recommended-plan}
These are the recommended steps for a zero-downtime ClickHouse upgrade:
1. Make sure that your configuration changes are not in the default `/etc/clickhouse-server/config.xml` file and that they are instead in `/etc/clickhouse-server/config.d/`, as `/etc/clickhouse-server/config.xml` could be overwritten during an upgrade.
2. Read through the changelogs for breaking changes (going back from the target release to the release you are currently on).
3. Make any updates identified in the breaking changes that can be made before upgrading, and a list of the changes that will need to be made after the upgrade.
4. Identify one or more replicas for each shard to keep up while the rest of the replicas for each shard are upgraded.
5. On the replicas that will be upgraded, one at a time:
   - shut down ClickHouse server
   - upgrade the server to the target version
   - bring ClickHouse server up
   - wait for the Keeper messages to indicate that the system is stable
   - continue to the next replica
6. Check for errors in the Keeper log and the ClickHouse log
7. Upgrade the replicas identified in step 4 to the new version
8. Refer to the list of changes made in steps 1 through 3 and make the changes that need to be made after the upgrade.
:::note
This error message is expected when there are multiple versions of ClickHouse running in a replicated environment. You will stop seeing these when all replicas are upgraded to the same version.

```text
MergeFromLogEntryTask: Code: 40. DB::Exception: Checksums of parts don't match:
hash of uncompressed files doesn't match. (CHECKSUM_DOESNT_MATCH) Data after merge is not
byte-identical to data on another replicas.
```
:::
ClickHouse server binary upgrade process {#clickhouse-server-binary-upgrade-process}
If ClickHouse was installed from `deb` packages, execute the following commands on the server:

```bash
$ sudo apt-get update
$ sudo apt-get install clickhouse-client clickhouse-server
$ sudo service clickhouse-server restart
```

If you installed ClickHouse using something other than the recommended `deb` packages, use the appropriate update method.
:::note
You can update multiple servers at once as long as there is no moment when all replicas of one shard are offline.
:::
To upgrade an older version of ClickHouse to a specific version:

As an example, `xx.yy.a.b` is the current stable version. The latest stable version can be found here.

```bash
$ sudo apt-get update
$ sudo apt-get install clickhouse-server=xx.yy.a.b clickhouse-client=xx.yy.a.b clickhouse-common-static=xx.yy.a.b
$ sudo service clickhouse-server restart
```
---
description: 'This page explains how ClickHouse server can be configured with configuration files in XML or YAML syntax.'
sidebar_label: 'Configuration Files'
sidebar_position: 50
slug: /operations/configuration-files
title: 'Configuration Files'
doc_type: 'guide'
---
:::note
XML-based settings profiles and configuration files are not supported for ClickHouse Cloud. Therefore, in ClickHouse Cloud, you won't find a config.xml file. Instead, you should use SQL commands to manage settings through settings profiles.
For further details, see
"Configuring Settings"
:::
The ClickHouse server can be configured with configuration files in XML or YAML syntax.
In most installation types, the ClickHouse server runs with `/etc/clickhouse-server/config.xml` as the default configuration file, but it is also possible to specify the location of the configuration file manually at server startup using command line option `--config-file` or `-C`.
Additional configuration files may be placed into directory `config.d/` relative to the main configuration file, for example into directory `/etc/clickhouse-server/config.d/`.
Files in this directory and the main configuration are merged in a preprocessing step before the configuration is applied in ClickHouse server.
Configuration files are merged in alphabetical order.
To simplify updates and improve modularization, it is a best practice to keep the default `config.xml` file unmodified and place additional customization into `config.d/`.

The ClickHouse Keeper configuration lives in `/etc/clickhouse-keeper/keeper_config.xml`.
Similarly, additional configuration files for Keeper need to be placed in `/etc/clickhouse-keeper/keeper_config.d/`.

It is possible to mix XML and YAML configuration files, for example you could have a main configuration file `config.xml` and additional configuration files `config.d/network.xml`, `config.d/timezone.yaml` and `config.d/keeper.yaml`.
Mixing XML and YAML within a single configuration file is not supported.
XML configuration files should use `<clickhouse>...</clickhouse>` as the top-level tag.
In YAML configuration files, `clickhouse:` is optional; if absent, the parser inserts it automatically.
Merging configuration {#merging}
Two configuration files (usually the main configuration file and another configuration file from `config.d/`) are merged as follows:

- If a node (i.e. a path leading to an element) appears in both files and does not have attributes `replace` or `remove`, it is included in the merged configuration file and children from both nodes are included and merged recursively.
- If one of the two nodes contains the `replace` attribute, it is included in the merged configuration file but only children from the node with attribute `replace` are included.
- If one of the two nodes contains the `remove` attribute, the node is not included in the merged configuration file (if it exists already, it is deleted).

For example, given two configuration files:
```xml title="config.xml"
<clickhouse>
    <config_a>
        <setting_1>1</setting_1>
    </config_a>
    <config_b>
        <setting_2>2</setting_2>
    </config_b>
    <config_c>
        <setting_3>3</setting_3>
    </config_c>
</clickhouse>
```

and

```xml title="config.d/other_config.xml"
<clickhouse>
    <config_a>
        <setting_4>4</setting_4>
    </config_a>
    <config_b replace="replace">
        <setting_5>5</setting_5>
    </config_b>
    <config_c remove="remove">
        <setting_6>6</setting_6>
    </config_c>
</clickhouse>
```

The resulting merged configuration file will be:

```xml
<clickhouse>
    <config_a>
        <setting_1>1</setting_1>
        <setting_4>4</setting_4>
    </config_a>
    <config_b>
        <setting_5>5</setting_5>
    </config_b>
</clickhouse>
```
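The merge rules can be sketched with `xml.etree.ElementTree`. This is an illustrative reimplementation of the three rules above (matching children by tag name only), not ClickHouse's actual merge code:

```python
import xml.etree.ElementTree as ET

def merge(base: ET.Element, overlay: ET.Element) -> None:
    """Merge `overlay` into `base`: `remove` deletes a node, `replace`
    substitutes it wholesale, otherwise matching children are merged
    recursively (children from both sides are kept)."""
    for child in overlay:
        match = base.find(child.tag)
        if child.get("remove") is not None:
            if match is not None:
                base.remove(match)
        elif child.get("replace") is not None or match is None:
            if match is not None:
                base.remove(match)
            # copy the node without the replace/remove control attributes
            clean = ET.Element(child.tag, {k: v for k, v in child.attrib.items()
                                           if k not in ("replace", "remove")})
            clean.text = child.text
            clean.extend(child)
            base.append(clean)
        else:
            merge(match, child)

config = ET.fromstring(
    "<clickhouse><config_a><setting_1>1</setting_1></config_a>"
    "<config_b><setting_2>2</setting_2></config_b>"
    "<config_c><setting_3>3</setting_3></config_c></clickhouse>")
other = ET.fromstring(
    "<clickhouse><config_a><setting_4>4</setting_4></config_a>"
    '<config_b replace="replace"><setting_5>5</setting_5></config_b>'
    '<config_c remove="remove"/></clickhouse>')
merge(config, other)
print(ET.tostring(config, encoding="unicode"))
```

Running it reproduces the merged example above: `config_a` keeps both settings, `config_b` keeps only `setting_5`, and `config_c` is gone.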
Substitution by environment variables and ZooKeeper nodes {#from_env_zk}
To specify that a value of an element should be replaced by the value of an environment variable, you can use the attribute `from_env`.
For example, with environment variable `$MAX_QUERY_SIZE = 150000`:

```xml
<clickhouse>
    <profiles>
        <default>
            <max_query_size from_env="MAX_QUERY_SIZE"/>
        </default>
    </profiles>
</clickhouse>
```

The resulting configuration will be:
```xml
<clickhouse>
    <profiles>
        <default>
            <max_query_size>150000</max_query_size>
        </default>
    </profiles>
</clickhouse>
```
The same is possible using `from_zk` (ZooKeeper node):

```xml
<clickhouse>
    <postgresql_port from_zk="/zk_configs/postgresql_port"/>
</clickhouse>
```

```shell
clickhouse-keeper-client
/ :) touch /zk_configs
/ :) create /zk_configs/postgresql_port "9005"
/ :) get /zk_configs/postgresql_port
9005
```

Resulting in the following configuration:

```xml
<clickhouse>
    <postgresql_port>9005</postgresql_port>
</clickhouse>
```
Default values {#default-values}
An element with the `from_env` or `from_zk` attributes may additionally have the attribute `replace="1"` (the latter must appear before `from_env`/`from_zk`).
In this case, the element may define a default value.
The element takes on the value of the environment variable or ZooKeeper node if set, otherwise it takes on the default value.

The previous example is repeated, but assuming `MAX_QUERY_SIZE` is not set:

```xml
<clickhouse>
    <profiles>
        <default>
            <max_query_size replace="1" from_env="MAX_QUERY_SIZE">150000</max_query_size>
        </default>
    </profiles>
</clickhouse>
```

Resulting in configuration:

```xml
<clickhouse>
    <profiles>
        <default>
            <max_query_size>150000</max_query_size>
        </default>
    </profiles>
</clickhouse>
```
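The env-or-default precedence can be sketched in a few lines. This is a simplified illustration (ignoring `from_zk` and the attribute-ordering rule), not the server's actual preprocessing:

```python
import os
import xml.etree.ElementTree as ET

def substitute_from_env(root: ET.Element) -> None:
    """Illustrates the from_env/default rule above: an element takes the
    environment variable's value if it is set, otherwise it keeps its
    existing text as the default."""
    for node in root.iter():
        env_var = node.attrib.pop("from_env", None)
        if env_var is not None:
            node.attrib.pop("replace", None)
            node.text = os.environ.get(env_var, node.text)

cfg = ET.fromstring(
    '<clickhouse><max_query_size replace="1" from_env="MAX_QUERY_SIZE">150000'
    "</max_query_size></clickhouse>")
os.environ.pop("MAX_QUERY_SIZE", None)  # variable unset: the default wins
substitute_from_env(cfg)
print(cfg.find("max_query_size").text)  # 150000
```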
Substitution with file content {#substitution-with-file-content}
It is also possible to replace parts of the configuration by file contents. This can be done in two ways:
Substituting values
: If an element has the attribute `incl`, its value will be replaced by the content of the referenced file. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the `include_from` element in the server config. The substitution values are specified in `/clickhouse/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify attribute `optional="true"` (for example, settings for macros).

Substituting elements
: If you want to replace the entire element with a substitution, use `include` as the element name. The element name `include` can be combined with the attribute `from_zk = "/path/to/node"`. In this case, the element value is replaced by the contents of the ZooKeeper node at `/path/to/node`. This also works if you store an entire XML subtree as a ZooKeeper node; it will be fully inserted into the source element.

An example of this is shown below:
```xml
<clickhouse>
    <!-- Appends XML subtree found at `/profiles-in-zookeeper` ZK path to `<profiles>` element. -->
    <profiles from_zk="/profiles-in-zookeeper" />

    <users>
        <!-- Replaces `include` element with the subtree found at `/users-in-zookeeper` ZK path. -->
        <include from_zk="/users-in-zookeeper" />
        <include from_zk="/other-users-in-zookeeper" />
    </users>
</clickhouse>
```
If you want to merge the substituting content with the existing configuration instead of appending, you can use the attribute `merge="true"`. For example: `<include from_zk="/some_path" merge="true">`. In this case, the existing configuration will be merged with the content from the substitution and the existing configuration settings will be replaced with values from the substitution.
Encrypting and hiding configuration {#encryption}
You can use symmetric encryption to encrypt a configuration element, for example, a plaintext password or private key.
To do so, first configure the encryption codec, then add the attribute `encrypted_by` with the name of the encryption codec as the value to the element to encrypt.

Unlike attributes `from_zk`, `from_env` and `incl`, or element `include`, no substitution (i.e. decryption of the encrypted value) is performed in the preprocessed file.
Decryption happens only at runtime in the server process.

For example:
```xml
<encryption_codecs>
<aes_128_gcm_siv>
<key_hex>00112233445566778899aabbccddeeff</key_hex>
</aes_128_gcm_siv>
</encryption_codecs>
<interserver_http_credentials>
<user>admin</user>
<password encrypted_by="AES_128_GCM_SIV">961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85</password>
</interserver_http_credentials>
``` | {"source_file": "configuration-files.md"} | [
The attributes `from_env` and `from_zk` can also be applied to `encryption_codecs`:
```xml
<encryption_codecs>
<aes_128_gcm_siv>
<key_hex from_env="CLICKHOUSE_KEY_HEX"/>
</aes_128_gcm_siv>
</encryption_codecs>
<interserver_http_credentials>
<user>admin</user>
<password encrypted_by="AES_128_GCM_SIV">961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85</password>
</interserver_http_credentials>
```
```xml
<encryption_codecs>
<aes_128_gcm_siv>
<key_hex from_zk="/clickhouse/aes128_key_hex"/>
</aes_128_gcm_siv>
</encryption_codecs>
<interserver_http_credentials>
<user>admin</user>
<password encrypted_by="AES_128_GCM_SIV">961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85</password>
</interserver_http_credentials>
```
Encryption keys and encrypted values can be defined in either config file.
An example `config.xml` is given as:
```xml
<encryption_codecs>
<aes_128_gcm_siv>
<key_hex from_zk="/clickhouse/aes128_key_hex"/>
</aes_128_gcm_siv>
</encryption_codecs>
```
An example `users.xml` is given as:
```xml
<users>
<test_user>
<password encrypted_by="AES_128_GCM_SIV">96280000000D000000000030D4632962295D46C6FA4ABF007CCEC9C1D0E19DA5AF719C1D9A46C446</password>
<profile>default</profile>
</test_user>
</users>
```
To encrypt a value, you can use the (example) program `encrypt_decrypt`:

```bash
./encrypt_decrypt /etc/clickhouse-server/config.xml -e AES_128_GCM_SIV abcd
```

```text
961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85
```
Even with encrypted configuration elements, encrypted elements still appear in the preprocessed configuration file.
If this is a problem for your ClickHouse deployment, there are two alternatives: either set file permissions of the preprocessed file to 600 or use the attribute `hide_in_preprocessed`.
For example:
```xml
<interserver_http_credentials hide_in_preprocessed="true">
<user>admin</user>
<password>secret</password>
</interserver_http_credentials>
```
User settings {#user-settings}
The `config.xml` file can specify a separate config with user settings, profiles, and quotas. The relative path to this config is set in the `users_config` element. By default, it is `users.xml`. If `users_config` is omitted, the user settings, profiles, and quotas are specified directly in `config.xml`.

User configuration can be split into separate files similar to `config.xml` and `config.d/`.
The directory name is defined as the `users_config` setting without the `.xml` postfix, concatenated with `.d`.
The directory `users.d` is used by default, as `users_config` defaults to `users.xml`.

Note that configuration files are first merged taking into account settings, and includes are processed after that.

XML example {#example}
For example, you can have a separate config file for each user like this:

```bash
$ cat /etc/clickhouse-server/users.d/alice.xml
```
```xml
<clickhouse>
    <users>
        <alice>
            <profile>analytics</profile>
            <networks>
                <ip>::/0</ip>
            </networks>
            <password_sha256_hex>...</password_sha256_hex>
            <quota>analytics</quota>
        </alice>
    </users>
</clickhouse>
```
YAML examples {#example-1}
Here you can see the default config written in YAML: config.yaml.example.

There are some differences between YAML and XML formats in terms of ClickHouse configurations.
Tips for writing configuration in YAML format are presented below.

An XML tag with a text value is represented by a YAML key-value pair:

```yaml
key: value
```

Corresponding XML:

```xml
<key>value</key>
```

A nested XML node is represented by a YAML map:

```yaml
map_key:
  key1: val1
  key2: val2
  key3: val3
```

Corresponding XML:

```xml
<map_key>
    <key1>val1</key1>
    <key2>val2</key2>
    <key3>val3</key3>
</map_key>
```
To create the same XML tag multiple times, use a YAML sequence:

```yaml
seq_key:
  - val1
  - val2
  - key1: val3
  - map:
      key2: val4
      key3: val5
```

Corresponding XML:

```xml
<seq_key>val1</seq_key>
<seq_key>val2</seq_key>
<seq_key>
    <key1>val3</key1>
</seq_key>
<seq_key>
    <map>
        <key2>val4</key2>
        <key3>val5</key3>
    </map>
</seq_key>
```
To provide an XML attribute, you can use an attribute key with a `@` prefix. Note that `@` is reserved by the YAML standard, so it must be wrapped in double quotes:

```yaml
map:
  "@attr1": value1
  "@attr2": value2
  key: 123
```

Corresponding XML:

```xml
<map attr1="value1" attr2="value2">
    <key>123</key>
</map>
```
It is also possible to use attributes in a YAML sequence:

```yaml
seq:
  - "@attr1": value1
  - "@attr2": value2
  - 123
  - abc
```

Corresponding XML:

```xml
<seq attr1="value1" attr2="value2">123</seq>
<seq attr1="value1" attr2="value2">abc</seq>
```

The aforementioned syntax does not allow expressing XML text nodes with XML attributes as YAML. This special case can be achieved using a `#text` attribute key:

```yaml
map_key:
  "@attr1": value1
  "#text": value2
```

Corresponding XML:

```xml
<map_key attr1="value1">value2</map_key>
```
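The `@`-attribute and `#text` conventions can be sketched as a small converter. This is an illustrative rendition using plain Python dicts in place of parsed YAML, not ClickHouse's actual parser:

```python
def to_xml(tag, node):
    """Render a YAML-style value (a dict possibly containing "@attr"
    and "#text" keys) as an XML element, following the rules above."""
    if not isinstance(node, dict):
        return f"<{tag}>{node}</{tag}>"
    attrs = "".join(f' {k[1:]}="{v}"' for k, v in node.items() if k.startswith("@"))
    text = node.get("#text", "")
    children = "".join(to_xml(k, v) for k, v in node.items()
                       if not k.startswith("@") and k != "#text")
    return f"<{tag}{attrs}>{text}{children}</{tag}>"

print(to_xml("map_key", {"@attr1": "value1", "#text": "value2"}))
# <map_key attr1="value1">value2</map_key>
print(to_xml("map", {"@attr1": "value1", "@attr2": "value2", "key": 123}))
# <map attr1="value1" attr2="value2"><key>123</key></map>
```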
Implementation details {#implementation-details}
For each config file, the server also generates `file-preprocessed.xml` files when starting. These files contain all the completed substitutions and overrides, and they are intended for informational use. If ZooKeeper substitutions were used in the config files but ZooKeeper is not available on the server start, the server loads the configuration from the preprocessed file.

The server tracks changes in config files, as well as files and ZooKeeper nodes that were used when performing substitutions and overrides, and reloads the settings for users and clusters on the fly. This means that you can modify the cluster, users, and their settings without restarting the server.
---
description: 'Guide to backing up and restoring ClickHouse databases and tables'
sidebar_label: 'Backup and restore'
sidebar_position: 10
slug: /operations/backup
title: 'Backup and restore'
doc_type: 'guide'
---
Backup and restore
- Backup to a local disk
- Configuring backup/restore to use an S3 endpoint
- Backup/restore using an S3 disk
- Alternatives
Command summary {#command-summary}
```bash
BACKUP|RESTORE
  TABLE [db.]table_name [AS [db.]table_name_in_backup]
    [PARTITION[S] partition_expr [, ...]] |
  DICTIONARY [db.]dictionary_name [AS [db.]name_in_backup] |
  DATABASE database_name [AS database_name_in_backup]
    [EXCEPT TABLES ...] |
  TEMPORARY TABLE table_name [AS table_name_in_backup] |
  VIEW view_name [AS view_name_in_backup] |
  ALL [EXCEPT {TABLES|DATABASES}...] } [, ...]
  [ON CLUSTER 'cluster_name']
  TO|FROM File('<path>/<filename>') | Disk('<disk_name>', '<path>/') | S3('<S3 endpoint>/<path>', '<Access key ID>', '<Secret access key>')
  [SETTINGS base_backup = File('<path>/<filename>') | Disk(...) | S3('<S3 endpoint>/<path>', '<Access key ID>', '<Secret access key>')]
  [SYNC|ASYNC]
```
:::note ALL
Prior to version 23.4 of ClickHouse, `ALL` was only applicable to the `RESTORE` command.
:::
Background {#background}
While replication provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases, mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes: for example, by default you can't just drop tables with a MergeTree-like engine containing more than 50 Gb of data. However, these safeguards do not cover all possible cases and can be circumvented.

In order to effectively mitigate possible human errors, you should carefully prepare a strategy for backing up and restoring your data in advance.
Each company has different resources available and business requirements, so there's no universal solution for ClickHouse backups and restores that will fit every situation. What works for one gigabyte of data likely won't work for tens of petabytes. There are a variety of possible approaches with their own pros and cons, which will be discussed below. It is a good idea to use several approaches instead of just one in order to compensate for their various shortcomings.
:::note
Keep in mind that if you backed something up and never tried to restore it, chances are that restore will not work properly when you actually need it (or at least it will take longer than business can tolerate). So whatever backup approach you choose, make sure to automate the restore process as well, and practice it on a spare ClickHouse cluster regularly.
:::
## Backup to a local disk {#backup-to-a-local-disk}
### Configure a backup destination {#configure-a-backup-destination}
In the examples below you will see the backup destination specified like `Disk('backups', '1.zip')`. To prepare the destination, add a file to `/etc/clickhouse-server/config.d/backup_disk.xml` specifying the backup destination. For example, this file defines a disk named `backups` and then adds that disk to the **backups > allowed_disk** list:
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <backups>
                <type>local</type>
                <path>/backups/</path>
            </backups>
        </disks>
    </storage_configuration>
    <backups>
        <allowed_disk>backups</allowed_disk>
        <allowed_path>/backups/</allowed_path>
    </backups>
</clickhouse>
```
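If you manage server configuration programmatically, a fragment like the one above can be generated with Python's standard library. This is a minimal sketch, not part of ClickHouse itself; the disk name and path mirror the example and are freely choosable.

```python
import xml.etree.ElementTree as ET

def backup_disk_config(disk_name, path):
    """Build the config.d XML fragment that declares a local backup disk
    and allow-lists it for BACKUP/RESTORE (illustrative helper)."""
    root = ET.Element("clickhouse")
    disks = ET.SubElement(ET.SubElement(root, "storage_configuration"), "disks")
    disk = ET.SubElement(disks, disk_name)
    ET.SubElement(disk, "type").text = "local"
    ET.SubElement(disk, "path").text = path
    backups = ET.SubElement(root, "backups")
    ET.SubElement(backups, "allowed_disk").text = disk_name
    ET.SubElement(backups, "allowed_path").text = path
    ET.indent(root)  # pretty-print; available since Python 3.9
    return ET.tostring(root, encoding="unicode")

print(backup_disk_config("backups", "/backups/"))
```

Write the returned string to `/etc/clickhouse-server/config.d/backup_disk.xml` and restart or reload the server for it to take effect.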
### Parameters {#parameters}
Backups can be either full or incremental, and can include tables (including materialized views, projections, and dictionaries), and databases. Backups can be synchronous (default) or asynchronous. They can be compressed. Backups can be password protected.
The BACKUP and RESTORE statements take a list of DATABASE and TABLE names, a destination (or source), options and settings:
- The destination for the backup, or the source for the restore. This is based on the disk defined earlier. For example `Disk('backups', 'filename.zip')`
- `ASYNC`: backup or restore asynchronously
- `PARTITIONS`: a list of partitions to restore
- `SETTINGS`:
    - `id`: the identifier of a backup or restore operation. If it's unset or empty then a randomly generated UUID will be used. If it's explicitly set to a nonempty string then it should be different each time. This `id` is used to find rows in the `system.backups` table related to a specific backup or restore operation.
    - `compression_method` and `compression_level`
    - `password` for the file on disk
    - `base_backup`: the destination of the previous backup of this source. For example, `Disk('backups', '1.zip')`
    - `use_same_s3_credentials_for_base_backup`: whether a base backup to S3 should inherit credentials from the query. Only works with `S3`.
    - `use_same_password_for_base_backup`: whether the base backup archive should inherit the password from the query.
    - `structure_only`: if enabled, only the CREATE statements are backed up or restored, without the data of tables
    - `storage_policy`: storage policy for the tables being restored. See Using Multiple Block Devices for Data Storage. This setting is only applicable to the `RESTORE` command. The specified storage policy applies only to tables with an engine from the `MergeTree` family.
    - `s3_storage_class`: the storage class used for S3 backups. For example, `STANDARD`
    - `azure_attempt_to_create_container`: when using Azure Blob Storage, whether the specified container will try to be created if it doesn't exist. Default: true.
    - core settings can be used here too
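Because the destination and the `SETTINGS` clause are plain SQL text, backup statements are easy to assemble in automation scripts. The sketch below is an illustrative helper for composing such a statement, not a ClickHouse client API; it assumes a `Disk(...)` destination like the ones above.

```python
def build_backup_sql(table, disk, filename, **settings):
    """Assemble a BACKUP statement targeting a Disk destination.

    Keyword arguments are rendered into the SETTINGS clause: strings are
    single-quoted, everything else is emitted as-is (illustrative only).
    """
    sql = f"BACKUP TABLE {table} TO Disk('{disk}', '{filename}')"
    if settings:
        rendered = ", ".join(
            f"{key} = {value!r}" if isinstance(value, str) else f"{key} = {value}"
            for key, value in settings.items()
        )
        sql += f" SETTINGS {rendered}"
    return sql

print(build_backup_sql("test.table", "backups", "filename.zip",
                       compression_method="lzma", compression_level=3))
```

A real script should additionally validate identifiers rather than interpolating untrusted input into SQL.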
### Usage examples {#usage-examples}
Backup and then restore a table:
```sql
BACKUP TABLE test.table TO Disk('backups', '1.zip')
```
Corresponding restore:
```sql
RESTORE TABLE test.table FROM Disk('backups', '1.zip')
```
:::note
The above RESTORE would fail if the table `test.table` contains data. You would have to drop the table in order to test the RESTORE, or use the setting `allow_non_empty_tables=true`:
```sql
RESTORE TABLE test.table FROM Disk('backups', '1.zip')
SETTINGS allow_non_empty_tables=true
```
:::
Tables can be restored, or backed up, with new names:
```sql
RESTORE TABLE test.table AS test.table2 FROM Disk('backups', '1.zip')
```

```sql
BACKUP TABLE test.table3 AS test.table4 TO Disk('backups', '2.zip')
```
### Incremental backups {#incremental-backups}
Incremental backups can be taken by specifying the `base_backup`.
:::note
Incremental backups depend on the base backup. The base backup must be kept available in order to be able to restore from an incremental backup.
:::
Incrementally store new data. The setting `base_backup` causes data since a previous backup to `Disk('backups', 'd.zip')` to be stored to `Disk('backups', 'incremental-a.zip')`:
```sql
BACKUP TABLE test.table TO Disk('backups', 'incremental-a.zip')
SETTINGS base_backup = Disk('backups', 'd.zip')
```
Restore all data from the incremental backup and the base_backup into a new table `test.table2`:
```sql
RESTORE TABLE test.table AS test.table2
FROM Disk('backups', 'incremental-a.zip');
```
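Since every incremental backup depends on its base, pruning old archives safely means first resolving the full chain of `base_backup` references. The sketch below illustrates that bookkeeping with a plain dictionary mapping each backup name to its base (`None` for a full backup); the registry is hypothetical, not something ClickHouse maintains for you.

```python
def backup_chain(registry, name):
    """Return the named backup plus every base it depends on, newest first.

    `registry` maps backup name -> base backup name (None for a full backup).
    Every name in the returned list must remain available to restore `name`.
    """
    chain = []
    current = name
    while current is not None:
        if current in chain:
            raise ValueError("cycle in base_backup references")
        chain.append(current)
        current = registry[current]  # KeyError here means a missing base
    return chain

registry = {"d.zip": None, "incremental-a.zip": "d.zip"}
print(backup_chain(registry, "incremental-a.zip"))
```

Only archives that appear in no chain of any backup you still need are safe to delete.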
### Assign a password to the backup {#assign-a-password-to-the-backup}
Backups written to disk can have a password applied to the file:
```sql
BACKUP TABLE test.table
TO Disk('backups', 'password-protected.zip')
SETTINGS password='qwerty'
```
Restore:
```sql
RESTORE TABLE test.table
FROM Disk('backups', 'password-protected.zip')
SETTINGS password='qwerty'
```
### Compression settings {#compression-settings}
If you would like to specify the compression method or level:
```sql
BACKUP TABLE test.table
TO Disk('backups', 'filename.zip')
SETTINGS compression_method='lzma', compression_level=3
```
### Restore specific partitions {#restore-specific-partitions}
If specific partitions associated with a table need to be restored, these can be specified. To restore partitions 2 and 3 from a backup:
```sql
RESTORE TABLE test.table PARTITIONS '2', '3'
FROM Disk('backups', 'filename.zip')
```
### Backups as tar archives {#backups-as-tar-archives}
Backups can also be stored as tar archives. The functionality is the same as for zip, except that a password is not supported.
Write a backup as a tar:
```sql
BACKUP TABLE test.table TO Disk('backups', '1.tar')
```
Corresponding restore:
```sql
RESTORE TABLE test.table FROM Disk('backups', '1.tar')
```
To change the compression method, the correct file suffix should be appended to the backup name. For example, to compress the tar archive using gzip:
```sql
BACKUP TABLE test.table TO Disk('backups', '1.tar.gz')
```
The supported compression file suffixes are `.tar.gz`, `.tgz`, `.tar.bz2`, `.tar.lzma`, `.tar.zst`, `.tzst` and `.tar.xz`.
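Since the suffix alone selects the compression method, a script that names backup files can make the mapping explicit. This is an illustrative lookup table derived from the list above, not ClickHouse code; note that multi-part suffixes must be checked before shorter ones.

```python
# Suffix -> compression method implied for tar-archive backups,
# per the supported suffixes listed above (illustrative helper).
SUFFIX_TO_COMPRESSION = {
    ".tar.gz": "gzip",
    ".tgz": "gzip",
    ".tar.bz2": "bzip2",
    ".tar.lzma": "lzma",
    ".tar.zst": "zstd",
    ".tzst": "zstd",
    ".tar.xz": "xz",
    ".tar": "none",
}

def tar_compression(filename):
    """Return the compression method a tar backup name implies."""
    # Longest suffix first, so ".tar.gz" wins over the bare ".tar".
    for suffix, method in sorted(SUFFIX_TO_COMPRESSION.items(),
                                 key=lambda kv: len(kv[0]), reverse=True):
        if filename.endswith(suffix):
            return method
    raise ValueError(f"unrecognized tar backup name: {filename}")

print(tar_compression("1.tar.gz"))
```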
### Check the status of backups {#check-the-status-of-backups}
The backup command returns an `id` and `status`, and that `id` can be used to get the status of the backup. This is very useful to check the progress of long ASYNC backups. The example below shows a failure that happened when trying to overwrite an existing backup file:
```sql
BACKUP TABLE helloworld.my_first_table TO Disk('backups', '1.zip') ASYNC
```
```response
┌─id───────────────────────────────────┬─status──────────┐
│ 7678b0b3-f519-4e6e-811f-5a0781a4eb52 │ CREATING_BACKUP │
└──────────────────────────────────────┴─────────────────┘
1 row in set. Elapsed: 0.001 sec.
```
```sql
SELECT
    *
FROM system.backups
WHERE id='7678b0b3-f519-4e6e-811f-5a0781a4eb52'
FORMAT Vertical
```
```response
Row 1:
ββββββ
id: 7678b0b3-f519-4e6e-811f-5a0781a4eb52
name: Disk('backups', '1.zip')
# highlight-next-line
status: BACKUP_FAILED
num_files: 0
uncompressed_size: 0
compressed_size: 0
# highlight-next-line
error: Code: 598. DB::Exception: Backup Disk('backups', '1.zip') already exists. (BACKUP_ALREADY_EXISTS) (version 22.8.2.11 (official build))
start_time: 2022-08-30 09:21:46
end_time: 2022-08-30 09:21:46
1 row in set. Elapsed: 0.002 sec.
```
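For long-running ASYNC operations, automation typically polls `system.backups` with the returned `id` until a terminal status appears. The sketch below shows the loop shape; `fetch_status` is a hypothetical callable standing in for whatever client driver you use to run `SELECT status FROM system.backups WHERE id = '...'`, and the terminal-status set is an assumption to adapt to your ClickHouse version.

```python
import time

# Assumed terminal statuses; verify against your server version.
TERMINAL_STATUSES = {"BACKUP_CREATED", "BACKUP_FAILED",
                     "RESTORED", "RESTORE_FAILED"}

def wait_for_backup(fetch_status, backup_id, poll_seconds=1.0, timeout=600):
    """Poll until the backup/restore reaches a terminal status.

    `fetch_status(backup_id)` must return the current status string
    (hypothetical hook; wire it to a real ClickHouse client).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(backup_id)
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"backup {backup_id} still running after {timeout}s")
```

A caller would then branch on the result, e.g. raise an alert when the returned status is `BACKUP_FAILED` and read the `error` column for details.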
Along with the `system.backups` table, all backup and restore operations are also tracked in the system log table `backup_log`:
```sql
SELECT *
FROM system.backup_log
WHERE id = '7678b0b3-f519-4e6e-811f-5a0781a4eb52'
ORDER BY event_time_microseconds ASC
FORMAT Vertical
```
```response
Row 1:
ββββββ
event_date: 2023-08-18
event_time_microseconds: 2023-08-18 11:13:43.097414
id: 7678b0b3-f519-4e6e-811f-5a0781a4eb52
name: Disk('backups', '1.zip')
status: CREATING_BACKUP
error:
start_time: 2023-08-18 11:13:43
end_time: 1970-01-01 03:00:00
num_files: 0
total_size: 0
num_entries: 0
uncompressed_size: 0
compressed_size: 0
files_read: 0
bytes_read: 0
Row 2:
ββββββ
event_date: 2023-08-18
event_time_microseconds: 2023-08-18 11:13:43.174782
id: 7678b0b3-f519-4e6e-811f-5a0781a4eb52
name: Disk('backups', '1.zip')
status: BACKUP_FAILED
# highlight-next-line
error: Code: 598. DB::Exception: Backup Disk('backups', '1.zip') already exists. (BACKUP_ALREADY_EXISTS) (version 23.8.1.1)
start_time: 2023-08-18 11:13:43
end_time: 2023-08-18 11:13:43
num_files: 0
total_size: 0
num_entries: 0
uncompressed_size: 0
compressed_size: 0
files_read: 0
bytes_read: 0
2 rows in set. Elapsed: 0.075 sec.
```
## Configuring BACKUP/RESTORE to use an S3 Endpoint {#configuring-backuprestore-to-use-an-s3-endpoint}
To write backups to an S3 bucket you need three pieces of information:
- S3 endpoint, for example `https://mars-doc-test.s3.amazonaws.com/backup-S3/`
- Access key ID, for example `ABC123`
- Secret access key, for example `Abc+123`
:::note
Creating an S3 bucket is covered in Use S3 Object Storage as a ClickHouse disk; just come back to this doc after saving the policy. There is no need to configure ClickHouse to use the S3 bucket.
:::
The destination for a backup will be specified like this:
```sql
S3('<S3 endpoint>/<directory>', '<Access key ID>', '<Secret access key>')
```
```sql
CREATE TABLE data
(
    `key` Int,
    `value` String,
    `array` Array(String)
)
ENGINE = MergeTree
ORDER BY tuple()
```

```sql
INSERT INTO data SELECT *
FROM generateRandom('key Int, value String, array Array(String)')
LIMIT 1000
```
### Create a base (initial) backup {#create-a-base-initial-backup}
Incremental backups require a *base* backup to start from; this example will be used later as the base backup. The first parameter of the S3 destination is the S3 endpoint, followed by the directory within the bucket to use for this backup. In this example the directory is named `my_backup`.
```sql
BACKUP TABLE data TO S3('https://mars-doc-test.s3.amazonaws.com/backup-S3/my_backup', 'ABC123', 'Abc+123')
```
```response
┌─id───────────────────────────────────┬─status─────────┐
│ de442b75-a66c-4a3c-a193-f76f278c70f3 │ BACKUP_CREATED │
└──────────────────────────────────────┴────────────────┘
```
### Add more data {#add-more-data}
Incremental backups are populated with the difference between the base backup and the current content of the table being backed up. Add more data before taking the incremental backup:
```sql
INSERT INTO data SELECT *
FROM generateRandom('key Int, value String, array Array(String)')
LIMIT 100
```
### Take an incremental backup {#take-an-incremental-backup}
This backup command is similar to the base backup, but adds `SETTINGS base_backup` and the location of the base backup. Note that the destination for the incremental backup is not the same directory as the base; it is the same endpoint with a different target directory within the bucket. The base backup is in `my_backup`, and the incremental will be written to `my_incremental`:
```sql
BACKUP TABLE data TO S3('https://mars-doc-test.s3.amazonaws.com/backup-S3/my_incremental', 'ABC123', 'Abc+123') SETTINGS base_backup = S3('https://mars-doc-test.s3.amazonaws.com/backup-S3/my_backup', 'ABC123', 'Abc+123')
```
```response
┌─id───────────────────────────────────┬─status─────────┐
│ f6cd3900-850f-41c9-94f1-0c4df33ea528 │ BACKUP_CREATED │
└──────────────────────────────────────┴────────────────┘
```
### Restore from the incremental backup {#restore-from-the-incremental-backup}