For the `volume_priority` parameter:
- If all volumes have this parameter, they are prioritized in the specified order.
- If only *some* volumes have it, the volumes that do not have it have the lowest priority. Volumes that do have it are prioritized according to the tag value; the priority of the rest is determined by their order of description in the configuration file relative to each other.
- If *no* volume is given this parameter, their order is determined by the order of description in the configuration file.
- The priorities of volumes may not be identical.
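A minimal sketch of how this might look in a storage policy, assuming two hypothetical volumes named `hot` and `cold` backed by disks `fast_ssd` and `big_hdd`:

```xml
<volumes>
    <hot>
        <disk>fast_ssd</disk>
        <volume_priority>1</volume_priority>
    </hot>
    <cold>
        <disk>big_hdd</disk>
        <volume_priority>2</volume_priority>
    </cold>
</volumes>
```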
## storage_connections_soft_limit {#storage_connections_soft_limit}

Connections above this limit have a significantly shorter time to live. The limit applies to the storage connections.

## storage_connections_store_limit {#storage_connections_store_limit}

Connections above this limit are reset after use. Set to 0 to turn the connection cache off. The limit applies to the storage connections.

## storage_connections_warn_limit {#storage_connections_warn_limit}

Warning messages are written to the logs if the number of in-use connections is higher than this limit. The limit applies to the storage connections.
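Taken together, the three limits might be configured like this (illustrative values, not documented defaults):

```xml
<storage_connections_soft_limit>100</storage_connections_soft_limit>
<storage_connections_warn_limit>1000</storage_connections_warn_limit>
<storage_connections_store_limit>5000</storage_connections_store_limit>
```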
## storage_metadata_write_full_object_key {#storage_metadata_write_full_object_key}

Write disk metadata files with the `VERSION_FULL_OBJECT_KEY` format. This is enabled by default. The setting is deprecated.

## storage_shared_set_join_use_inner_uuid {#storage_shared_set_join_use_inner_uuid}

If enabled, an inner UUID is generated during the creation of `SharedSet` and `SharedJoin`. ClickHouse Cloud only.

## table_engines_require_grant {#table_engines_require_grant}

If set to true, users require a grant to create a table with a specific engine, e.g. `GRANT TABLE ENGINE ON TinyLog TO user`.

:::note
By default, for backward compatibility, creating a table with a specific table engine ignores the grant; however, you can change this behaviour by setting this to true.
:::
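A sketch of enabling this in the server configuration:

```xml
<table_engines_require_grant>true</table_engines_require_grant>
```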
## tables_loader_background_pool_size {#tables_loader_background_pool_size}

Sets the number of threads performing asynchronous load jobs in the background pool. The background pool is used for loading tables asynchronously after server start, in case there are no queries waiting for the table. It could be beneficial to keep a low number of threads in the background pool if there are a lot of tables: it will reserve CPU resources for concurrent query execution.

:::note
A value of `0` means all available CPUs will be used.
:::

## tables_loader_foreground_pool_size {#tables_loader_foreground_pool_size}

Sets the number of threads performing load jobs in the foreground pool. The foreground pool is used for loading tables synchronously before the server starts listening on a port, and for loading tables that are waited for. The foreground pool has a higher priority than the background pool: no job starts in the background pool while there are jobs running in the foreground pool.

:::note
A value of `0` means all available CPUs will be used.
:::
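A sketch of how the two loader pools might be sized explicitly (illustrative values; `0` keeps the all-CPUs default):

```xml
<tables_loader_background_pool_size>16</tables_loader_background_pool_size>
<tables_loader_foreground_pool_size>0</tables_loader_foreground_pool_size>
```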
## tcp_close_connection_after_queries_num {#tcp_close_connection_after_queries_num}

Maximum number of queries allowed per TCP connection before the connection is closed. Set to 0 for unlimited queries.

## tcp_close_connection_after_queries_seconds {#tcp_close_connection_after_queries_seconds}

Maximum lifetime of a TCP connection in seconds before it is closed. Set to 0 for unlimited connection lifetime.
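Both limits might be set together, for example to recycle connections behind a load balancer (illustrative values):

```xml
<tcp_close_connection_after_queries_num>1000</tcp_close_connection_after_queries_num>
<tcp_close_connection_after_queries_seconds>3600</tcp_close_connection_after_queries_seconds>
```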
## tcp_port {#tcp_port}

Port for communicating with clients over the TCP protocol.

**Example**

```xml
<tcp_port>9000</tcp_port>
```

## tcp_port_secure {#tcp_port_secure}

TCP port for secure communication with clients. Use it with OpenSSL settings.

**Default value**

```xml
<tcp_port_secure>9440</tcp_port_secure>
```

## tcp_ssh_port {#tcp_ssh_port}

Port for the SSH server, which allows the user to connect and execute queries interactively using the embedded client over a PTY.

**Example**

```xml
<tcp_ssh_port>9022</tcp_ssh_port>
```
## temporary_data_in_cache {#temporary_data_in_cache}

With this option, temporary data will be stored in the cache for the particular disk. In this section, you should specify the disk name with the type `cache`. In that case, the cache and temporary data will share the same space, and the disk cache can be evicted to create temporary data.

:::note
Only one option can be used to configure temporary data storage: `tmp_path`, `tmp_policy`, `temporary_data_in_cache`.
:::

**Example**

Both the cache for `local_disk` and temporary data will be stored in `/tiny_local_cache` on the filesystem, managed by `tiny_local_cache`.

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <local_disk>
                <type>local</type>
                <path>/local_disk/</path>
            </local_disk>
            <tiny_local_cache>
                <type>cache</type>
                <disk>local_disk</disk>
                <path>/tiny_local_cache/</path>
                <max_size>10M</max_size>
                <max_file_segment_size>1M</max_file_segment_size>
                <cache_on_write_operations>1</cache_on_write_operations>
            </tiny_local_cache>
        </disks>
    </storage_configuration>
    <temporary_data_in_cache>tiny_local_cache</temporary_data_in_cache>
</clickhouse>
```
## temporary_data_in_distributed_cache {#temporary_data_in_distributed_cache}

Store temporary data in the distributed cache.

## text_index_dictionary_block_cache_max_entries {#text_index_dictionary_block_cache_max_entries}

Size of the cache for text index dictionary blocks, in entries. Zero means disabled.

## text_index_dictionary_block_cache_policy {#text_index_dictionary_block_cache_policy}

Text index dictionary block cache policy name.

## text_index_dictionary_block_cache_size {#text_index_dictionary_block_cache_size}

Size of the cache for text index dictionary blocks. Zero means disabled.

:::note
This setting can be modified at runtime and will take effect immediately.
:::

## text_index_dictionary_block_cache_size_ratio {#text_index_dictionary_block_cache_size_ratio}

The size of the protected queue (in case of the SLRU policy) in the text index dictionary block cache relative to the cache's total size.

## text_index_header_cache_max_entries {#text_index_header_cache_max_entries}

Size of the cache for text index headers, in entries. Zero means disabled.

## text_index_header_cache_policy {#text_index_header_cache_policy}

Text index header cache policy name.
## text_index_header_cache_size {#text_index_header_cache_size}

Size of the cache for text index headers. Zero means disabled.

:::note
This setting can be modified at runtime and will take effect immediately.
:::

## text_index_header_cache_size_ratio {#text_index_header_cache_size_ratio}

The size of the protected queue (in case of the SLRU policy) in the text index header cache relative to the cache's total size.

## text_index_postings_cache_max_entries {#text_index_postings_cache_max_entries}

Size of the cache for text index posting lists, in entries. Zero means disabled.

## text_index_postings_cache_policy {#text_index_postings_cache_policy}

Text index posting list cache policy name.

## text_index_postings_cache_size {#text_index_postings_cache_size}

Size of the cache for text index posting lists. Zero means disabled.

:::note
This setting can be modified at runtime and will take effect immediately.
:::

## text_index_postings_cache_size_ratio {#text_index_postings_cache_size_ratio}

The size of the protected queue (in case of the SLRU policy) in the text index posting list cache relative to the cache's total size.
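As a sketch, the postings cache might be sized explicitly like this (illustrative values, not documented defaults):

```xml
<text_index_postings_cache_size>5368709120</text_index_postings_cache_size>
<text_index_postings_cache_size_ratio>0.5</text_index_postings_cache_size_ratio>
```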
## text_log {#text_log}

Settings for the `text_log` system table for logging text messages.

Additionally:

| Setting | Description | Default Value |
|---------|-------------|---------------|
| `level` | Maximum Message Level (by default `Trace`) which will be stored in a table. | `Trace` |

**Example**

```xml
<clickhouse>
    <text_log>
        <level>notice</level>
        <database>system</database>
        <table>text_log</table>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        <max_size_rows>1048576</max_size_rows>
        <reserved_size_rows>8192</reserved_size_rows>
        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
        <flush_on_crash>false</flush_on_crash>
        <!-- <partition_by>event_date</partition_by> -->
        <engine>Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day</engine>
    </text_log>
</clickhouse>
```
## thread_pool_queue_size {#thread_pool_queue_size}
The maximum number of jobs that can be scheduled on the Global Thread pool. Increasing the queue size leads to larger memory usage. It is recommended to keep this value equal to `max_thread_pool_size`.

:::note
A value of `0` means unlimited.
:::

**Example**

```xml
<thread_pool_queue_size>12000</thread_pool_queue_size>
```

## threadpool_local_fs_reader_pool_size {#threadpool_local_fs_reader_pool_size}

The number of threads in the thread pool for reading from the local filesystem when `local_filesystem_read_method = 'pread_threadpool'`.

## threadpool_local_fs_reader_queue_size {#threadpool_local_fs_reader_queue_size}

The maximum number of jobs that can be scheduled on the thread pool for reading from the local filesystem.

## threadpool_remote_fs_reader_pool_size {#threadpool_remote_fs_reader_pool_size}

The number of threads in the thread pool used for reading from the remote filesystem when `remote_filesystem_read_method = 'threadpool'`.

## threadpool_remote_fs_reader_queue_size {#threadpool_remote_fs_reader_queue_size}

The maximum number of jobs that can be scheduled on the thread pool for reading from the remote filesystem.

## threadpool_writer_pool_size {#threadpool_writer_pool_size}

Size of the background pool for write requests to object storages.

## threadpool_writer_queue_size {#threadpool_writer_queue_size}

The maximum number of tasks that can be pushed into the background pool for write requests to object storages.
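A sketch of tuning the remote reader pool for a heavily object-storage-backed workload (illustrative values):

```xml
<threadpool_remote_fs_reader_pool_size>250</threadpool_remote_fs_reader_pool_size>
<threadpool_remote_fs_reader_queue_size>1000000</threadpool_remote_fs_reader_queue_size>
```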
## throw_on_unknown_workload {#throw_on_unknown_workload}

Defines the behaviour on access to an unknown WORKLOAD with the query setting 'workload'.

- If `true`, a RESOURCE_ACCESS_DENIED exception is thrown from a query that is trying to access an unknown workload. This is useful to enforce resource scheduling for all queries after the WORKLOAD hierarchy is established and contains WORKLOAD `default`.
- If `false` (default), unlimited access without resource scheduling is provided to a query with a 'workload' setting pointing to an unknown WORKLOAD. This is important while setting up the WORKLOAD hierarchy, before WORKLOAD `default` is added.

**Example**

```xml
<throw_on_unknown_workload>true</throw_on_unknown_workload>
```

**See Also**
- Workload Scheduling

## timezone {#timezone}

The server's time zone.

Specified as an IANA identifier for the UTC timezone or geographic location (for example, Africa/Abidjan).

The time zone is necessary for conversions between String and DateTime formats when DateTime fields are output to text format (printed on the screen or in a file), and when getting DateTime from a string. Besides, the time zone is used in functions that work with the time and date if they didn't receive the time zone in the input parameters.

**Example**

```xml
<timezone>Asia/Istanbul</timezone>
```

**See also**
- `session_timezone`

## tmp_path {#tmp_path}

Path on the local filesystem to store temporary data for processing large queries.
:::note
- Only one option can be used to configure temporary data storage: `tmp_path`, `tmp_policy`, `temporary_data_in_cache`.
- The trailing slash is mandatory.
:::

**Example**

```xml
<tmp_path>/var/lib/clickhouse/tmp/</tmp_path>
```

## tmp_policy {#tmp_policy}

Policy for storage with temporary data. All files with the `tmp` prefix will be removed at start.

:::note
Recommendations for using object storage as `tmp_policy`:
- Use a separate `bucket:path` on each server
- Use `metadata_type=plain`
- You may also want to set a TTL for this bucket
:::

:::note
- Only one option can be used to configure temporary data storage: `tmp_path`, `tmp_policy`, `temporary_data_in_cache`.
- `move_factor`, `keep_free_space_bytes`, and `max_data_part_size_bytes` are ignored.
- The policy should have exactly one volume. For more information see the MergeTree Table Engine documentation.
:::

**Example**

When `/disk1` is full, temporary data will be stored on `/disk2`.

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <disk1>
                <path>/disk1/</path>
            </disk1>
            <disk2>
                <path>/disk2/</path>
            </disk2>
        </disks>
        <policies>
            <tmp_two_disks>
                <volumes>
                    <main>
                        <disk>disk1</disk>
                        <disk>disk2</disk>
                    </main>
                </volumes>
            </tmp_two_disks>
        </policies>
    </storage_configuration>
    <tmp_policy>tmp_two_disks</tmp_policy>
</clickhouse>
```
## top_level_domains_list {#top_level_domains_list}

Defines a list of custom top level domains to add, where each entry has the format `<name>/path/to/file</name>`.

For example:

```xml
<top_level_domains_lists>
    <public_suffix_list>/path/to/public_suffix_list.dat</public_suffix_list>
</top_level_domains_lists>
```

See also:
- function `cutToFirstSignificantSubdomainCustom` and variations thereof, which accepts a custom TLD list name, returning the part of the domain that includes top-level subdomains up to the first significant subdomain.

## total_memory_profiler_sample_max_allocation_size {#total_memory_profiler_sample_max_allocation_size}

Collect random allocations of size less than or equal to the specified value, with probability equal to `total_memory_profiler_sample_probability`. 0 means disabled. You may want to set `max_untracked_memory` to 0 to make this threshold work as expected.

## total_memory_profiler_sample_min_allocation_size {#total_memory_profiler_sample_min_allocation_size}

Collect random allocations of size greater than or equal to the specified value, with probability equal to `total_memory_profiler_sample_probability`. 0 means disabled. You may want to set `max_untracked_memory` to 0 to make this threshold work as expected.

## total_memory_profiler_step {#total_memory_profiler_step}

Whenever server memory usage grows by the next step in number of bytes, the memory profiler will collect the allocating stack trace. Zero means the memory profiler is disabled. Values lower than a few megabytes will slow down the server.
## total_memory_tracker_sample_probability {#total_memory_tracker_sample_probability}
Allows collecting random allocations and de-allocations and writing them to the `system.trace_log` system table with `trace_type` equal to `MemorySample`, with the specified probability. The probability applies to every allocation or deallocation, regardless of the size of the allocation. Note that sampling happens only when the amount of untracked memory exceeds the untracked memory limit (default value is `4` MiB). It can be lowered if `total_memory_profiler_step` is lowered. You can set `total_memory_profiler_step` equal to `1` for extra fine-grained sampling.

Possible values:
- Positive double.
- `0` — Writing of random allocations and de-allocations in the `system.trace_log` system table is disabled.
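A sketch of enabling a 1% sample with a fine profiler step (illustrative values):

```xml
<total_memory_tracker_sample_probability>0.01</total_memory_tracker_sample_probability>
<total_memory_profiler_step>1048576</total_memory_profiler_step>
```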
## trace_log {#trace_log}

Settings for the `trace_log` system table operation.

The default server configuration file `config.xml` contains the following settings section:

```xml
<trace_log>
    <database>system</database>
    <table>trace_log</table>
    <partition_by>toYYYYMM(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    <max_size_rows>1048576</max_size_rows>
    <reserved_size_rows>8192</reserved_size_rows>
    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
    <flush_on_crash>false</flush_on_crash>
    <symbolize>false</symbolize>
</trace_log>
```
## uncompressed_cache_policy {#uncompressed_cache_policy}

Uncompressed cache policy name.

## uncompressed_cache_size {#uncompressed_cache_size}

Maximum size (in bytes) for uncompressed data used by table engines from the MergeTree family.

There is one shared cache for the server. Memory is allocated on demand. The cache is used if the option `use_uncompressed_cache` is enabled.

The uncompressed cache is advantageous for very short queries in individual cases.

:::note
A value of `0` means disabled.

This setting can be modified at runtime and will take effect immediately.
:::

## uncompressed_cache_size_ratio {#uncompressed_cache_size_ratio}

The size of the protected queue (in case of the SLRU policy) in the uncompressed cache relative to the cache's total size.
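The three settings might be combined like this (illustrative values, assuming the SLRU policy):

```xml
<uncompressed_cache_policy>SLRU</uncompressed_cache_policy>
<uncompressed_cache_size>8589934592</uncompressed_cache_size>
<uncompressed_cache_size_ratio>0.5</uncompressed_cache_size_ratio>
```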
## url_scheme_mappers {#url_scheme_mappers}

Configuration for translating shortened or symbolic URL prefixes into full URLs.

Example:

```xml
<url_scheme_mappers>
    <s3>
        <to>https://{bucket}.s3.amazonaws.com</to>
    </s3>
    <gs>
        <to>https://storage.googleapis.com/{bucket}</to>
    </gs>
    <oss>
        <to>https://{bucket}.oss.aliyuncs.com</to>
    </oss>
</url_scheme_mappers>
```

## use_minimalistic_part_header_in_zookeeper {#use_minimalistic_part_header_in_zookeeper}

Storage method for data part headers in ZooKeeper. This setting only applies to the `MergeTree` family. It can be specified:

**Globally in the `merge_tree` section of the `config.xml` file**
ClickHouse uses the setting for all the tables on the server. You can change the setting at any time. Existing tables change their behaviour when the setting changes.

**For each table**

When creating a table, specify the corresponding engine setting. The behaviour of an existing table with this setting does not change, even if the global setting changes.

Possible values:
- `0` — Functionality is turned off.
- `1` — Functionality is turned on.

If `use_minimalistic_part_header_in_zookeeper = 1`, then replicated tables store the headers of the data parts compactly using a single `znode`. If the table contains many columns, this storage method significantly reduces the volume of the data stored in ZooKeeper.

:::note
After applying `use_minimalistic_part_header_in_zookeeper = 1`, you can't downgrade the ClickHouse server to a version that does not support this setting. Be careful when upgrading ClickHouse on servers in a cluster. Don't upgrade all the servers at once. It is safer to test new versions of ClickHouse in a test environment, or on just a few servers of a cluster.

Data part headers already stored with this setting can't be restored to their previous (non-compact) representation.
:::
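A sketch of the global form described above:

```xml
<merge_tree>
    <use_minimalistic_part_header_in_zookeeper>1</use_minimalistic_part_header_in_zookeeper>
</merge_tree>
```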
## user_defined_executable_functions_config {#user_defined_executable_functions_config}

The path to the config file for executable user defined functions.

Path:
- Specify the absolute path or the path relative to the server config file.
- The path can contain wildcards * and ?.

See also:
- "Executable User Defined Functions".

**Example**

```xml
<user_defined_executable_functions_config>*_function.xml</user_defined_executable_functions_config>
```

## user_defined_path {#user_defined_path}

The directory with user defined files. Used for SQL User Defined Functions.

**Example**

```xml
<user_defined_path>/var/lib/clickhouse/user_defined/</user_defined_path>
```

## user_directories {#user_directories}

Section of the configuration file that contains settings:
- Path to the configuration file with predefined users.
- Path to the folder where users created by SQL commands are stored.
- ZooKeeper node path where users created by SQL commands are stored and replicated.

If this section is specified, the path from `users_config` and `access_control_path` won't be used.

The `user_directories` section can contain any number of items; the order of the items means their precedence (the higher the item, the higher the precedence).

**Examples**

```xml
<user_directories>
    <users_xml>
        <path>/etc/clickhouse-server/users.xml</path>
    </users_xml>
    <local_directory>
        <path>/var/lib/clickhouse/access/</path>
    </local_directory>
</user_directories>
```
Users, roles, row policies, quotas, and profiles can be also stored in ZooKeeper:
```xml
<user_directories>
    <users_xml>
        <path>/etc/clickhouse-server/users.xml</path>
    </users_xml>
    <replicated>
        <zookeeper_path>/clickhouse/access/</zookeeper_path>
    </replicated>
</user_directories>
```
You can also define sections `memory` (meaning storing information only in memory, without writing to disk) and `ldap` (meaning storing information on an LDAP server).

To add an LDAP server as a remote user directory of users that are not defined locally, define a single `ldap` section with the following settings:

| Setting | Description |
|---------|-------------|
| `server` | One of the LDAP server names defined in the `ldap_servers` config section. This parameter is mandatory and cannot be empty. |
| `roles` | Section with a list of locally defined roles that will be assigned to each user retrieved from the LDAP server. If no roles are specified, the user will not be able to perform any actions after authentication. If any of the listed roles is not defined locally at the time of authentication, the authentication attempt will fail as if the provided password was incorrect. |

**Example**

```xml
<ldap>
    <server>my_ldap_server</server>
    <roles>
        <my_local_role1 />
        <my_local_role2 />
    </roles>
</ldap>
```
## user_files_path {#user_files_path}

The directory with user files. Used in the table functions `file()` and `fileCluster()`.

**Example**

```xml
<user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
```

## user_scripts_path {#user_scripts_path}

The directory with user script files. Used for Executable User Defined Functions.

**Example**

```xml
<user_scripts_path>/var/lib/clickhouse/user_scripts/</user_scripts_path>
```
## users_config {#users_config}

Path to the file that contains:
- User configurations.
- Access rights.
- Settings profiles.
- Quota settings.

**Example**
```xml
<users_config>users.xml</users_config>
```
## validate_tcp_client_information {#validate_tcp_client_information}

Determines whether validation of client information is enabled when a query packet is received.

By default, it is `false`:

```xml
<validate_tcp_client_information>false</validate_tcp_client_information>
```

## vector_similarity_index_cache_max_entries {#vector_similarity_index_cache_max_entries}

Size of the cache for vector similarity indexes, in entries. Zero means disabled.

## vector_similarity_index_cache_policy {#vector_similarity_index_cache_policy}

Vector similarity index cache policy name.

## vector_similarity_index_cache_size {#vector_similarity_index_cache_size}

Size of the cache for vector similarity indexes. Zero means disabled.

:::note
This setting can be modified at runtime and will take effect immediately.
:::

## vector_similarity_index_cache_size_ratio {#vector_similarity_index_cache_size_ratio}

The size of the protected queue (in case of the SLRU policy) in the vector similarity index cache relative to the cache's total size.
## wait_dictionaries_load_at_startup {#wait_dictionaries_load_at_startup}

This setting allows specifying the behavior when `dictionaries_lazy_load` is `false`. (If `dictionaries_lazy_load` is `true`, this setting doesn't affect anything.)

If `wait_dictionaries_load_at_startup` is `false`, then the server will start loading all the dictionaries at startup and will receive connections in parallel with that loading. When a dictionary is used in a query for the first time, the query will wait until the dictionary is loaded if it's not loaded yet. Setting `wait_dictionaries_load_at_startup` to `false` can make ClickHouse start faster; however, some queries can be executed slower (because they will have to wait for some dictionaries to be loaded).

If `wait_dictionaries_load_at_startup` is `true`, then the server will wait at startup until all the dictionaries finish their loading (successfully or not) before receiving any connections.

**Example**

```xml
<wait_dictionaries_load_at_startup>true</wait_dictionaries_load_at_startup>
```

## workload_path {#workload_path}

The directory used as storage for all `CREATE WORKLOAD` and `CREATE RESOURCE` queries. By default, the `/workload/` folder under the server working directory is used.

**Example**

```xml
<workload_path>/var/lib/clickhouse/workload/</workload_path>
```

**See Also**
- Workload Hierarchy
- `workload_zookeeper_path`

## workload_zookeeper_path {#workload_zookeeper_path}

The path to a ZooKeeper node, which is used as storage for all `CREATE WORKLOAD` and `CREATE RESOURCE` queries. For consistency, all SQL definitions are stored as the value of this single znode. By default, ZooKeeper is not used and definitions are stored on disk.

**Example**
```xml
<workload_zookeeper_path>/clickhouse/workload/definitions.sql</workload_zookeeper_path>
```

**See Also**
- Workload Hierarchy
- `workload_path`

## zookeeper {#zookeeper}

Contains settings that allow ClickHouse to interact with a ZooKeeper cluster. ClickHouse uses ZooKeeper for storing metadata of replicas when using replicated tables. If replicated tables are not used, this section of parameters can be omitted.

The following settings can be configured by sub-tags:
| Setting | Description |
|---------|-------------|
| `node` | ZooKeeper endpoint. You can set multiple endpoints, e.g. `<node index="1"><host>example_host</host><port>2181</port></node>`. The `index` attribute specifies the node order when trying to connect to the ZooKeeper cluster. |
| `session_timeout_ms` | Maximum timeout for the client session in milliseconds. |
| `operation_timeout_ms` | Maximum timeout for one operation in milliseconds. |
| `root` (optional) | The znode that is used as the root for znodes used by the ClickHouse server. |
| `fallback_session_lifetime.min` (optional) | Minimum limit for the lifetime of a ZooKeeper session to the fallback node when the primary is unavailable (load-balancing). Set in seconds. Default: 3 hours. |
| `fallback_session_lifetime.max` (optional) | Maximum limit for the lifetime of a ZooKeeper session to the fallback node when the primary is unavailable (load-balancing). Set in seconds. Default: 6 hours. |
| `identity` (optional) | User and password required by ZooKeeper to access requested znodes. |
| `use_compression` (optional) | Enables compression in the Keeper protocol if set to true. |
92d483e7-2e4f-407b-be76-3570576c651c | There is also the
zookeeper_load_balancing
setting (optional) which lets you select the algorithm for ZooKeeper node selection:
| Algorithm Name | Description |
|----------------------------------|--------------------------------------------------------------------------------------------------------------------------------|
|
random
| randomly selects one of ZooKeeper nodes. |
|
in_order
| selects the first ZooKeeper node, if it's not available then the second, and so on. |
|
nearest_hostname
| selects a ZooKeeper node with a hostname that is most similar to the server's hostname, hostname is compared with name prefix. |
|
hostname_levenshtein_distance
| just like nearest_hostname, but it compares hostname in a levenshtein distance manner. |
|
first_or_random
| selects the first ZooKeeper node, if it's not available then randomly selects one of remaining ZooKeeper nodes. |
|
round_robin
| selects the first ZooKeeper node, if reconnection happens selects the next. |
Example configuration
xml
<zookeeper>
<node>
<host>example1</host>
<port>2181</port>
</node>
<node>
<host>example2</host>
<port>2181</port>
</node>
<session_timeout_ms>30000</session_timeout_ms>
<operation_timeout_ms>10000</operation_timeout_ms>
<!-- Optional. Chroot suffix. Should exist. -->
<root>/path/to/zookeeper/node</root>
<!-- Optional. Zookeeper digest ACL string. -->
<identity>user:password</identity>
<!--<zookeeper_load_balancing>random / in_order / nearest_hostname / hostname_levenshtein_distance / first_or_random / round_robin</zookeeper_load_balancing>-->
<zookeeper_load_balancing>random</zookeeper_load_balancing>
</zookeeper>
See Also
Replication
ZooKeeper Programmer's Guide
Optional secured communication between ClickHouse and Zookeeper
zookeeper_log {#zookeeper_log}
Settings for the
zookeeper_log
system table.
The following settings can be configured by sub-tags:
Example
xml
<clickhouse>
<zookeeper_log>
<database>system</database>
<table>zookeeper_log</table>
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
<ttl>event_date + INTERVAL 1 WEEK DELETE</ttl>
</zookeeper_log>
</clickhouse> | {"source_file": "settings.md"} | [
---
description: 'Documentation for the sampling query profiler tool in ClickHouse'
sidebar_label: 'Query Profiling'
sidebar_position: 54
slug: /operations/optimizing-performance/sampling-query-profiler
title: 'Sampling query profiler'
doc_type: 'reference'
---

import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';

# Sampling query profiler
ClickHouse runs a sampling profiler that allows analyzing query execution. Using the profiler you can find the source code routines used most frequently during query execution. You can trace CPU time and wall-clock time spent, including idle time.

The query profiler is automatically enabled in ClickHouse Cloud, and you can run a sample query as follows:

:::note
If you are running the following query in ClickHouse Cloud, make sure to change `FROM system.trace_log` to `FROM clusterAllReplicas(default, system.trace_log)` to select from all nodes of the cluster.
:::

```sql
SELECT
    count(),
    arrayStringConcat(arrayMap(x -> concat(demangle(addressToSymbol(x)), '\n ', addressToLine(x)), trace), '\n') AS sym
FROM system.trace_log
WHERE query_id = 'ebca3574-ad0a-400a-9cbc-dca382f5998c' AND trace_type = 'CPU' AND event_date = today()
GROUP BY trace
ORDER BY count() DESC
LIMIT 10
SETTINGS allow_introspection_functions = 1
```
In self-managed deployments, to use the query profiler:

- Setup the `trace_log` section of the server configuration. This section configures the `trace_log` system table containing the results of the profiler functioning. It is configured by default. Remember that data in this table is valid only for a running server. After a server restart, ClickHouse does not clean up the table, and all the stored virtual memory addresses may become invalid.

- Setup the `query_profiler_cpu_time_period_ns` or `query_profiler_real_time_period_ns` settings. Both settings can be used simultaneously.

These settings allow you to configure the profiler timers. As these are session settings, you can get different sampling frequencies for the whole server, for individual users or user profiles, for your interactive session, and for each individual query.

The default sampling frequency is one sample per second, and both CPU and real timers are enabled. This frequency allows collecting enough information about a ClickHouse cluster. At the same time, working with this frequency, the profiler does not affect the ClickHouse server's performance. If you need to profile each individual query, try to use a higher sampling frequency.

To analyze the `trace_log` system table:

- Install the `clickhouse-common-static-dbg` package. See Install from DEB Packages.
- Allow introspection functions via the `allow_introspection_functions` setting. For security reasons, introspection functions are disabled by default.
- Use the `addressToLine`, `addressToLineWithInlines`, `addressToSymbol` and `demangle` introspection functions to get function names and their positions in ClickHouse code. To get a profile for some query, you need to aggregate data from the `trace_log` table. You can aggregate data by individual functions or by whole stack traces.

If you need to visualize `trace_log` info, try flamegraph and speedscope.
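A hedged sketch of the flamegraph route, assuming Brendan Gregg's `flamegraph.pl` script is installed separately (the query shape may need adjusting for your data):

```bash
# Export CPU stacks from trace_log in the folded "stack count" format
# expected by flamegraph.pl, then render an SVG. clickhouse-client prints
# TabSeparated by default in batch mode.
clickhouse-client --query "
    SELECT
        arrayStringConcat(arrayReverse(arrayMap(x -> demangle(addressToSymbol(x)), trace)), ';') AS stack,
        count() AS samples
    FROM system.trace_log
    WHERE trace_type = 'CPU' AND event_date = today()
    GROUP BY trace
    SETTINGS allow_introspection_functions = 1
" | flamegraph.pl > flame.svg
```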
## Example {#example}

In this example we:

- Filter `trace_log` data by a query identifier and the current date.
- Aggregate by stack trace.
- Using introspection functions, get a report of:
  - Names of symbols and corresponding source code functions.
  - Source code locations of these functions.

```sql
SELECT
    count(),
    arrayStringConcat(arrayMap(x -> concat(demangle(addressToSymbol(x)), '\n ', addressToLine(x)), trace), '\n') AS sym
FROM system.trace_log
WHERE (query_id = 'ebca3574-ad0a-400a-9cbc-dca382f5998c') AND (event_date = today())
GROUP BY trace
ORDER BY count() DESC
LIMIT 10
```
---
description: 'Documentation for profile guided optimization'
sidebar_label: 'Profile guided optimization (PGO)'
sidebar_position: 54
slug: /operations/optimizing-performance/profile-guided-optimization
title: 'Profile guided optimization'
doc_type: 'guide'
---

import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';

# Profile guided optimization
Profile-Guided Optimization (PGO) is a compiler optimization technique where a program is optimized based on the runtime profile.

According to the tests, PGO helps achieve better performance for ClickHouse: we see improvements of up to 15% in QPS on the ClickBench test suite. More detailed results are available here. The performance benefits depend on your typical workload; you can get better or worse results.

You can read more about PGO in ClickHouse in the corresponding GitHub issue.

## How to build ClickHouse with PGO? {#how-to-build-clickhouse-with-pgo}

There are two major kinds of PGO: Instrumentation and Sampling (also known as AutoFDO). This guide describes Instrumentation PGO with ClickHouse.

1. Build ClickHouse in Instrumented mode. In Clang it can be done by passing the `-fprofile-generate` option to `CXXFLAGS`.
2. Run instrumented ClickHouse on a sample workload. Here you need to use your usual workload. One of the approaches could be using ClickBench as a sample workload. ClickHouse in instrumentation mode can work slowly, so be ready for that and do not run instrumented ClickHouse in performance-critical environments.
3. Recompile ClickHouse once again with the `-fprofile-use` compiler flag and the profiles collected in the previous step.

A more detailed guide on how to apply PGO is in the Clang documentation.
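A hedged sketch of the three steps above, assuming a Clang + CMake + ninja build (directory names and the exact CMake invocation are hypothetical and may differ for your checkout):

```bash
# Step 1: instrumented build.
mkdir build-pgo && cd build-pgo
cmake -DCMAKE_C_FLAGS="-fprofile-generate" -DCMAKE_CXX_FLAGS="-fprofile-generate" ..
ninja clickhouse

# Step 2: run your usual workload against the instrumented binary,
# then merge the raw profiles it wrote:
llvm-profdata merge -output=clickhouse.profdata *.profraw

# Step 3: rebuild using the collected profile.
cmake -DCMAKE_C_FLAGS="-fprofile-use=clickhouse.profdata" \
      -DCMAKE_CXX_FLAGS="-fprofile-use=clickhouse.profdata" ..
ninja clickhouse
```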
If you are going to collect a sample workload directly from a production environment, we recommend trying to use Sampling PGO. | {"source_file": "profile-guided-optimization.md"} | [
---
description: 'Documentation for the ClickHouse Keeper client utility'
sidebar_label: 'clickhouse-keeper-client'
slug: /operations/utilities/clickhouse-keeper-client
title: 'clickhouse-keeper-client utility'
doc_type: 'reference'
---

# clickhouse-keeper-client utility

A client application to interact with clickhouse-keeper by its native protocol.

## Keys {#clickhouse-keeper-client}
- `-q QUERY`, `--query=QUERY` — Query to execute. If this parameter is not passed, `clickhouse-keeper-client` will start in interactive mode.
- `-h HOST`, `--host=HOST` — Server host. Default value: `localhost`.
- `-p N`, `--port=N` — Server port. Default value: 9181.
- `-c FILE_PATH`, `--config-file=FILE_PATH` — Set path of config file to get the connection string. Default value: `config.xml`.
- `--connection-timeout=TIMEOUT` — Set connection timeout in seconds. Default value: 10s.
- `--session-timeout=TIMEOUT` — Set session timeout in seconds. Default value: 10s.
- `--operation-timeout=TIMEOUT` — Set operation timeout in seconds. Default value: 10s.
- `--history-file=FILE_PATH` — Set path of history file. Default value: `~/.keeper-client-history`.
- `--log-level=LEVEL` — Set log level. Default value: `information`.
- `--no-confirmation` — If set, will not require a confirmation on several commands. Default value: `false` for interactive mode and `true` for query mode.
- `--help` — Shows the help message.
## Example {#clickhouse-keeper-client-example}
```bash
./clickhouse-keeper-client -h localhost -p 9181 --connection-timeout 30 --session-timeout 30 --operation-timeout 30
Connected to ZooKeeper at [::1]:9181 with session_id 137
/ :) ls
keeper foo bar
/ :) cd 'keeper'
/keeper :) ls
api_version
/keeper :) cd 'api_version'
/keeper/api_version :) ls
/keeper/api_version :) cd 'xyz'
Path /keeper/api_version/xyz does not exist
/keeper/api_version :) cd ../../
/ :) ls
keeper foo bar
/ :) get 'keeper/api_version'
2
```
## Commands {#clickhouse-keeper-client-commands}

- `ls '[path]'` -- Lists the nodes for the given path (default: cwd)
- `cd '[path]'` -- Changes the working path (default `.`)
- `cp '<src>' '<dest>'` -- Copies 'src' node to 'dest' path
- `cpr '<src>' '<dest>'` -- Copies 'src' node subtree to 'dest' path
- `mv '<src>' '<dest>'` -- Moves 'src' node to the 'dest' path
- `mvr '<src>' '<dest>'` -- Moves 'src' node subtree to 'dest' path
- `exists '<path>'` -- Returns `1` if the node exists, `0` otherwise
- `set '<path>' <value> [version]` -- Updates the node's value. Only updates if the version matches (default: -1)
- `create '<path>' <value> [mode]` -- Creates a new node with the set value
- `touch '<path>'` -- Creates a new node with an empty string as the value. Doesn't throw an exception if the node already exists
- `get '<path>'` -- Returns the node's value
- `rm '<path>' [version]` -- Removes the node only if the version matches (default: -1)
- `rmr '<path>' [limit]` -- Recursively deletes the path if the subtree size is smaller than the limit. Confirmation required (default limit = 100)
- `flwc <command>` -- Executes a four-letter-word command
- `help` -- Prints this message
- `get_direct_children_number '[path]'` -- Gets the number of direct children nodes under a specific path
- `get_all_children_number '[path]'` -- Gets the number of all children nodes under a specific path
- `get_stat '[path]'` -- Returns the node's stat (default `.`)
- `find_super_nodes <threshold> '[path]'` -- Finds nodes with a number of children larger than some threshold for the given path (default `.`)
- `delete_stale_backups` -- Deletes ClickHouse nodes used for backups that are now inactive
- `find_big_family [path] [n]` -- Returns the top n nodes with the biggest family in the subtree (default path = `.` and n = 10)
- `sync '<path>'` -- Synchronizes the node between processes and the leader
- `reconfig <add|remove|set> "<arg>" [version]` -- Reconfigures a Keeper cluster. See /docs/en/guides/sre/keeper/clickhouse-keeper#reconfiguration
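Any of these commands can also be run non-interactively with the `--query` key described above, for example:

```bash
./clickhouse-keeper-client -h localhost -p 9181 -q "ls '/'"
```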
---
description: 'Documentation for Clickhouse Obfuscator'
slug: /operations/utilities/clickhouse-obfuscator
title: 'clickhouse-obfuscator'
doc_type: 'reference'
---

# clickhouse-obfuscator

A simple tool for table data obfuscation.

It reads an input table and produces an output table that retains some properties of the input but contains different data. It allows publishing almost-real production data for usage in benchmarks.

It is designed to retain the following properties of data:
- cardinalities of values (number of distinct values) for every column and every tuple of columns;
- conditional cardinalities: the number of distinct values of one column under the condition on the value of another column;
- probability distributions of the absolute value of integers; the sign of signed integers; exponent and sign for floats;
- probability distributions of the length of strings;
- probability of zero values of numbers; empty strings and arrays, `NULL`s;
- data compression ratio when compressed with LZ77 and entropy family of codecs;
- continuity (magnitude of difference) of time values across the table; continuity of floating-point values;
- date component of `DateTime` values;
- UTF-8 validity of string values;
- string values looking natural.
Most of the properties above are viable for performance testing: reading data, filtering, aggregation, and sorting will work at almost the same speed as on original data due to the preserved cardinalities, magnitudes, compression ratios, etc.

It works in a deterministic fashion: you define a seed value and the transformation is determined by the input data and by the seed. Some transformations are one-to-one and could be reversed, so you need to have a large seed and keep it secret.

It uses some cryptographic primitives to transform data, but from the cryptographic point of view it does not do it properly; that is why you should not consider the result as secure unless you have another reason. The result may retain some data you don't want to publish.

It always leaves the numbers 0, 1, -1, dates, lengths of arrays, and null flags exactly as in the source data. For example, if you have a column `IsMobile` in your table with values 0 and 1, it will have the same values in the transformed data, so the user will be able to count the exact ratio of mobile traffic.

Let's give another example. When you have some private data in your table, like user emails, and you don't want to publish any single email address: if your table is large enough and contains multiple different emails and no email has a much higher frequency than all others, it will anonymize all data. But if you have a small number of different values in a column, it can reproduce some of them. In that case, you should study the working algorithm of this tool and fine-tune its command line parameters.

This tool works fine only with at least a moderate amount of data (at least thousands of rows).
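A hedged usage sketch; the file name and column structure are hypothetical, and `--seed`, `--input-format`, `--output-format`, and `--structure` are assumed keys:

```bash
# Obfuscate a TSV dump deterministically; keep the seed secret.
clickhouse-obfuscator \
    --seed "$(head -c16 /dev/urandom | base64)" \
    --input-format TSV --output-format TSV \
    --structure 'CounterID UInt32, URLDomain String, SearchPhrase String' \
    < source.tsv > obfuscated.tsv
```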
---
description: 'Guide to using the format utility for working with ClickHouse data formats'
slug: /operations/utilities/clickhouse-format
title: 'clickhouse-format'
doc_type: 'reference'
---

# clickhouse-format utility

Allows formatting input queries.

Keys:

- `--help` or `-h` — Produce help message.
- `--query` — Format queries of any length and complexity.
- `--hilite` or `--highlight` — Add syntax highlighting with ANSI terminal escape sequences.
- `--oneline` — Format in a single line.
- `--max_line_length` — Format queries with a length less than the specified one in a single line.
- `--comments` — Keep comments in the output.
- `--quiet` or `-q` — Just check syntax; no output on success.
- `--multiquery` or `-n` — Allow multiple queries in the same file.
- `--obfuscate` — Obfuscate instead of formatting.
- `--seed <string>` — Seed arbitrary string that determines the result of obfuscation.
- `--backslash` — Add a backslash at the end of each line of the formatted query. Can be useful when you copy a query from the web or somewhere else with multiple lines and want to execute it in the command line.
- `--semicolons_inline` — In multiquery mode, write semicolons on the last line of a query instead of on a new line.
## Examples {#examples}

Formatting a query:

```bash
$ clickhouse-format --query "select number from numbers(10) where number%2 order by number desc;"
```

Result:

```sql
SELECT number
FROM numbers(10)
WHERE number % 2
ORDER BY number DESC
```

Highlighting and single line:

```bash
$ clickhouse-format --oneline --hilite <<< "SELECT sum(number) FROM numbers(5);"
```

Result:

```sql
SELECT sum(number) FROM numbers(5)
```

Multiqueries:

```bash
$ clickhouse-format -n <<< "SELECT min(number) FROM numbers(5); SELECT max(number) FROM numbers(5);"
```

Result:
```sql
SELECT min(number)
FROM numbers(5)
;
SELECT max(number)
FROM numbers(5)
;
```
Obfuscating:

```bash
$ clickhouse-format --seed Hello --obfuscate <<< "SELECT cost_first_screen BETWEEN a AND b, CASE WHEN x >= 123 THEN y ELSE NULL END;"
```

Result:

```sql
SELECT treasury_mammoth_hazelnut BETWEEN nutmeg AND span, CASE WHEN chive >= 116 THEN switching ELSE ANYTHING END;
```

Same query with another seed string:

```bash
$ clickhouse-format --seed World --obfuscate <<< "SELECT cost_first_screen BETWEEN a AND b, CASE WHEN x >= 123 THEN y ELSE NULL END;"
```

Result:

```sql
SELECT horse_tape_summer BETWEEN folklore AND moccasins, CASE WHEN intestine >= 116 THEN nonconformist ELSE FORESTRY END;
```

Adding a backslash:

```bash
$ clickhouse-format --backslash <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELECT 1 UNION DISTINCT SELECT 3);"
```

Result:

```sql
SELECT * \
FROM  \
( \
    SELECT 1 AS x \
    UNION ALL \
    SELECT 1 \
    UNION DISTINCT \
    SELECT 3 \
)
```
---
description: 'Documentation for Clickhouse-disks'
sidebar_label: 'clickhouse-disks'
sidebar_position: 59
slug: /operations/utilities/clickhouse-disks
title: 'Clickhouse-disks'
doc_type: 'reference'
---

# Clickhouse-disks

A utility providing filesystem-like operations for ClickHouse disks. It can work in both interactive and non-interactive modes.
## Program-wide options {#program-wide-options}

- `--config-file, -C` -- path to the ClickHouse config; defaults to `/etc/clickhouse-server/config.xml`.
- `--save-logs` -- Log the progress of invoked commands to `/var/log/clickhouse-server/clickhouse-disks.log`.
- `--log-level` -- What type of events to log; defaults to `none`.
- `--disk` -- what disk to use for the `mkdir, move, read, write, remove` commands. Defaults to `default`.
- `--query, -q` -- a single query that can be executed without launching interactive mode.
- `--help, -h` -- print all the options and commands with a description.
## Lazy initialization {#lazy-initialization}

All disks which are available in the config are initialized lazily. This means that the corresponding object for a disk is initialized only when that disk is used in some command. This is done to make the utility more robust and to avoid touching disks which are described in the config but not used by the user and could fail during initialization. However, there should be one disk which is initialized at clickhouse-disks launch. This disk is specified with the command-line parameter `--disk` (default value is `default`).

## Default Disks {#default-disks}

After launching, there are two disks that are not specified in the configuration but are available for initialization.

- `local` Disk: This disk is designed to mimic the local file system from which the `clickhouse-disks` utility was launched. Its initial path is the directory from which `clickhouse-disks` was started, and it is mounted at the root directory of the file system.
- `default` Disk: This disk is mounted to the local file system in the directory specified by the `clickhouse/path` parameter in the configuration (the default value is `/var/lib/clickhouse`). Its initial path is set to `/`.

## Clickhouse-disks state {#clickhouse-disks-state}

For each disk that was added, the utility stores the current directory (as in a usual filesystem). The user can change the current directory and switch between disks. The state is reflected in the prompt "`disk_name`:`path_name`".

## Commands {#commands}

In this documentation file, all mandatory positional arguments are referred to as `<parameter>`, and named arguments are referred to as `[--parameter value]`. All positional parameters can be passed as a named parameter with a corresponding name.

- `cd (change-dir, change_dir) [--disk disk] <path>` -- Change directory to path `path` on disk `disk` (default value is the current disk). No disk switching happens.
- `copy (cp) [--disk-from disk_1] [--disk-to disk_2] <path-from> <path-to>` -- Recursively copy data from `path-from` on disk `disk_1` (default value is the current disk (parameter `disk` in non-interactive mode)) to `path-to` on disk `disk_2` (default value is the current disk (parameter `disk` in non-interactive mode)).
- `current_disk_with_path (current, current_disk, current_path)` -- Print the current state in the format: `Disk: "current_disk" Path: "current path on current disk"`.
- `help [<command>]` -- Print a help message about command `command`. If `command` is not specified, print information about all commands.
- `move (mv) <path-from> <path-to>` -- Move a file or directory from `path-from` to `path-to` within the current disk.
- `remove (rm, delete) <path>` -- Remove `path` recursively on the current disk.
- `link (ln) <path-from> <path-to>` -- Create a hardlink from `path-from` to `path-to` on the current disk.
- `list (ls) [--recursive] <path>` -- List files at `path` on the current disk. Non-recursive by default.
- `list-disks (list_disks, ls-disks, ls_disks)` -- List disk names.
- `mkdir [--recursive] <path>` -- Create a directory on the current disk. Non-recursive by default.
- `read (r) <path-from> [--path-to path]` -- Read a file from `path-from` to `path` (`stdout` if not supplied).
- `switch-disk [--path path] <disk>` -- Switch to disk `disk` on path `path` (if `path` is not specified, the default value is the previous path on disk `disk`).
- `write (w) [--path-from path] <path-to>` -- Write a file from `path` (`stdin` if `path` is not supplied; input must finish with Ctrl+D) to `path-to`.
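For example, a command might be run non-interactively with the `--query` key described above (a hedged sketch):

```bash
clickhouse-disks --disk default --query "list-disks"
```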
---
description: 'Documentation for clickhouse_backupview'
slug: /operations/utilities/backupview
title: 'clickhouse_backupview'
doc_type: 'reference'
---

# clickhouse_backupview {#clickhouse_backupview}

Python module to help analyze backups made by the `BACKUP` command. The main motivation was to allow getting some information from a backup without actually restoring it.

This module provides functions to:
- enumerate files contained in a backup
- read files from a backup
- get useful information in readable form about databases, tables, and parts contained in a backup
- check the integrity of a backup
Example: {#example}
```python
from clickhouse_backupview import open_backup, S3, FileInfo

# Open a backup. We could also use a local path:
# backup = open_backup("/backups/my_backup_1/")
backup = open_backup(S3("uri", "access_key_id", "secret_access_key"))

# Get a list of databases inside the backup.
print(backup.get_databases())

# Get a list of tables inside the backup,
# and for each table its create query and a list of parts and partitions.
for db in backup.get_databases():
    for tbl in backup.get_tables(database=db):
        print(backup.get_create_query(database=db, table=tbl))
        print(backup.get_partitions(database=db, table=tbl))
        print(backup.get_parts(database=db, table=tbl))

# Extract everything from the backup.
backup.extract_all(table="mydb.mytable", out='/tmp/my_backup_1/all/')

# Extract the data of a specific table.
backup.extract_table_data(table="mydb.mytable", out='/tmp/my_backup_1/mytable/')

# Extract a single partition.
backup.extract_table_data(table="mydb.mytable", partition="202201", out='/tmp/my_backup_1/202201/')

# Extract a single part.
backup.extract_table_data(table="mydb.mytable", part="202201_100_200_3", out='/tmp/my_backup_1/202201_100_200_3/')
```
For more examples see the test. | {"source_file": "backupview.md"} |
86667976-afda-4728-ba09-fba16b0fe4f3 | description: 'Documentation for Clickhouse Compressor'
slug: /operations/utilities/clickhouse-compressor
title: 'clickhouse-compressor'
doc_type: 'reference'
Simple program for data compression and decompression.
Examples {#examples}
Compress data with LZ4:
```bash
$ ./clickhouse-compressor < input_file > output_file
```
Decompress data from LZ4 format:
```bash
$ ./clickhouse-compressor --decompress < input_file > output_file
```
Compress data with ZSTD at level 5:
```bash
$ ./clickhouse-compressor --codec 'ZSTD(5)' < input_file > output_file
```
Compress data with Delta of four bytes and ZSTD level 10:
```bash
$ ./clickhouse-compressor --codec 'Delta(4)' --codec 'ZSTD(10)' < input_file > output_file
```
| {"source_file": "clickhouse-compressor.md"} |
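A quick round-trip check is possible with the flags shown above; this is a sketch where `data.bin` is a placeholder input file, and the plain `--decompress` call assumes the compressed stream records the codec, as the LZ4 example above suggests.
```bash
# Compress with ZSTD level 5, decompress, and verify the round trip.
./clickhouse-compressor --codec 'ZSTD(5)' < data.bin > data.zst
./clickhouse-compressor --decompress < data.zst > data.out
cmp data.bin data.out && echo "round-trip OK"
```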
3014f82b-8755-4df5-813c-1904d745fabc | description: 'Page listing various useful ClickHouse tools and utilities.'
keywords: ['tools', 'utilities']
sidebar_label: 'List of tools and utilities'
sidebar_position: 56
slug: /operations/utilities/
title: 'List of tools and utilities'
doc_type: 'landing-page'
| Tool/Utility | Description |
|------|-------------|
| clickhouse-local | Allows running SQL queries on data without starting the ClickHouse server, similar to how awk does this. |
| clickhouse-benchmark | Loads the server with custom queries and settings. |
| clickhouse-format | Enables formatting input queries. |
| ClickHouse obfuscator | Obfuscates data. |
| ClickHouse compressor | Compresses and decompresses data. |
| clickhouse-disks | Provides filesystem-like operations on files among different ClickHouse disks. |
| clickhouse-odbc-bridge | A proxy server for the ODBC driver. |
| clickhouse_backupview | A Python module to analyze ClickHouse backups. |

| {"source_file": "index.md"} |
0fe3c32a-935f-4837-b41e-e8e44b1f526a | description: 'Documentation for clickhouse-benchmark '
sidebar_label: 'clickhouse-benchmark'
sidebar_position: 61
slug: /operations/utilities/clickhouse-benchmark
title: 'clickhouse-benchmark'
doc_type: 'reference'
clickhouse-benchmark
Connects to a ClickHouse server and repeatedly sends the specified queries.
Syntax:
```bash
$ clickhouse-benchmark --query ["single query"] [keys]
```
or
```bash
$ echo "single query" | clickhouse-benchmark [keys]
```
or
```bash
$ clickhouse-benchmark [keys] <<< "single query"
```
If you want to send a set of queries, create a text file and place each query on its own line in this file. For example:
```sql
SELECT * FROM system.numbers LIMIT 10000000;
SELECT 1;
```
Then pass this file to the standard input of `clickhouse-benchmark`:
```bash
clickhouse-benchmark [keys] < queries_file;
```
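Putting these pieces together, a sketch of a complete run; the file name and the flag values are arbitrary choices, and both flags are documented below.
```bash
# Benchmark two queries from a file with 4 concurrent clients for 60 seconds.
cat > queries.sql <<'EOF'
SELECT * FROM system.numbers LIMIT 10000000;
SELECT 1;
EOF
clickhouse-benchmark --concurrency 4 --timelimit 60 < queries.sql
```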
Command-line options {#clickhouse-benchmark-command-line-options}
- `--query=QUERY` - Query to execute. If this parameter is not passed, `clickhouse-benchmark` will read queries from standard input.
- `--query_id=ID` - Query ID.
- `--query_id_prefix=ID_PREFIX` - Query ID prefix.
- `-c N`, `--concurrency=N` - Number of queries that `clickhouse-benchmark` sends simultaneously. Default value: 1.
- `-C N`, `--max_concurrency=N` - Gradually increases the number of parallel queries up to the specified value, producing one report for every concurrency level.
- `--precise` - Enables precise per-interval reporting with weighted metrics.
- `-d N`, `--delay=N` - Interval in seconds between intermediate reports (set to 0 to disable reports). Default value: 1.
- `-h HOST`, `--host=HOST` - Server host. Default value: `localhost`. For the comparison mode you can use multiple `-h` keys.
- `-i N`, `--iterations=N` - Total number of queries. Default value: 0 (repeat forever).
- `-r`, `--randomize` - Random order of query execution if there is more than one input query.
- `-s`, `--secure` - Uses a TLS connection.
- `-t N`, `--timelimit=N` - Time limit in seconds. `clickhouse-benchmark` stops sending queries when the specified time limit is reached. Default value: 0 (time limit disabled).
- `--port=N` - Server port. Default value: 9000. For the comparison mode you can use multiple `--port` keys.
- `--confidence=N` - Level of confidence for the T-test. Possible values: 0 (80%), 1 (90%), 2 (95%), 3 (98%), 4 (99%), 5 (99.5%). Default value: 5. In the comparison mode `clickhouse-benchmark` performs the Independent two-sample Student's t-test to determine whether the two distributions differ at the selected level of confidence.
- `--cumulative` - Prints cumulative data instead of data per interval.
- `--database=DATABASE_NAME` - ClickHouse database name. Default value: `default`.
- `--user=USERNAME` - ClickHouse user name. Default value: `default`.
- `--password=PSWD` - ClickHouse user password. Default value: empty string.
- `--stacktrace` - Stack traces output. When the key is set, `clickhouse-benchmark` outputs stack traces of exceptions. | {"source_file": "clickhouse-benchmark.md"} |
671e8312-f874-4970-a4ae-966d536a16af |
- `--stage=WORD` - Query processing stage on the server. ClickHouse stops query processing and returns an answer to `clickhouse-benchmark` at the specified stage. Possible values: `complete`, `fetch_columns`, `with_mergeable_state`. Default value: `complete`.
- `--roundrobin` - Instead of comparing queries for different `--host`/`--port` pairs, pick one random `--host`/`--port` pair for every query and send the query to it.
- `--reconnect=N` - Controls reconnection behaviour. Possible values: 0 (never reconnect), 1 (reconnect for every query), or N (reconnect after every N queries). Default value: 0.
- `--max-consecutive-errors=N` - Number of allowed consecutive errors. Default value: 0.
- `--ignore-error`, `--continue_on_errors` - Continue testing even if queries fail.
- `--client-side-time` - Display the time including network communication instead of server-side time; note that for server versions before 22.8 the client-side time is always displayed.
- `--proto-caps` - Enable/disable chunking in data transfer. Choices (can be comma-separated): `chunked_optional`, `notchunked`, `notchunked_optional`, `send_chunked`, `send_chunked_optional`, `send_notchunked`, `send_notchunked_optional`, `recv_chunked`, `recv_chunked_optional`, `recv_notchunked`, `recv_notchunked_optional`. Default value: `notchunked`.
- `--help` - Shows the help message.
- `--verbose` - Increases help message verbosity.
If you want to apply some settings for queries, pass them as a key `--<session setting name>=SETTING_VALUE`. For example, `--max_memory_usage=1048576`.
Environment variable options {#clickhouse-benchmark-environment-variable-options}
The user name, password, and host can be set via the environment variables `CLICKHOUSE_USER`, `CLICKHOUSE_PASSWORD`, and `CLICKHOUSE_HOST`.
The command-line arguments `--user`, `--password`, or `--host` take precedence over the environment variables.
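A short sketch of the precedence rule; the host names and the password are placeholders:
```bash
# Credentials come from the environment; the explicit --host flag
# below overrides CLICKHOUSE_HOST.
export CLICKHOUSE_USER=default
export CLICKHOUSE_PASSWORD=secret
export CLICKHOUSE_HOST=clickhouse.internal
clickhouse-benchmark --host 127.0.0.1 --query "SELECT 1" -i 100
```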
Output {#clickhouse-benchmark-output}
By default, `clickhouse-benchmark` reports for each `--delay` interval.
Example of the report:
```text
Queries executed: 10.
localhost:9000, queries 10, QPS: 6.772, RPS: 67904487.440, MiB/s: 518.070, result RPS: 67721584.984, result MiB/s: 516.675.
0.000% 0.145 sec.
10.000% 0.146 sec.
20.000% 0.146 sec.
30.000% 0.146 sec.
40.000% 0.147 sec.
50.000% 0.148 sec.
60.000% 0.148 sec.
70.000% 0.148 sec.
80.000% 0.149 sec.
90.000% 0.150 sec.
95.000% 0.150 sec.
99.000% 0.150 sec.
99.900% 0.150 sec.
99.990% 0.150 sec.
```
In the report you can find:
- Number of queries in the `Queries executed:` field.
- Status string containing (in order):
  - Endpoint of the ClickHouse server.
  - Number of processed queries.
  - QPS: How many queries the server performed per second during the period specified in the `--delay` argument. | {"source_file": "clickhouse-benchmark.md"} |
6105ae14-0db3-4a99-8d7e-a1c70361b06a |
  - RPS: How many rows the server read per second during the period specified in the `--delay` argument.
  - MiB/s: How many mebibytes the server read per second during the period specified in the `--delay` argument.
  - result RPS: How many rows the server placed into the result of a query per second during the period specified in the `--delay` argument.
  - result MiB/s: How many mebibytes the server placed into the result of a query per second during the period specified in the `--delay` argument.
- Percentiles of query execution time.
Comparison Mode {#clickhouse-benchmark-comparison-mode}
`clickhouse-benchmark` can compare performance for two running ClickHouse servers.
To use the comparison mode, specify the endpoints of both servers by two pairs of `--host`, `--port` keys. Keys are matched together by position in the argument list: the first `--host` is matched with the first `--port`, and so on. `clickhouse-benchmark` establishes connections to both servers, then sends queries. Each query is addressed to a randomly selected server. The results are shown in a table.
Example {#clickhouse-benchmark-example}
```bash
$ echo "SELECT * FROM system.numbers LIMIT 10000000 OFFSET 10000000" | clickhouse-benchmark --host=localhost --port=9001 --host=localhost --port=9000 -i 10
```
```text
Loaded 1 queries.
Queries executed: 5.
localhost:9001, queries 2, QPS: 3.764, RPS: 75446929.370, MiB/s: 575.614, result RPS: 37639659.982, result MiB/s: 287.168.
localhost:9000, queries 3, QPS: 3.815, RPS: 76466659.385, MiB/s: 583.394, result RPS: 38148392.297, result MiB/s: 291.049.
0.000% 0.258 sec. 0.250 sec.
10.000% 0.258 sec. 0.250 sec.
20.000% 0.258 sec. 0.250 sec.
30.000% 0.258 sec. 0.267 sec.
40.000% 0.258 sec. 0.267 sec.
50.000% 0.273 sec. 0.267 sec.
60.000% 0.273 sec. 0.267 sec.
70.000% 0.273 sec. 0.267 sec.
80.000% 0.273 sec. 0.269 sec.
90.000% 0.273 sec. 0.269 sec.
95.000% 0.273 sec. 0.269 sec.
99.000% 0.273 sec. 0.269 sec.
99.900% 0.273 sec. 0.269 sec.
99.990% 0.273 sec. 0.269 sec.
No difference proven at 99.5% confidence
```
| {"source_file": "clickhouse-benchmark.md"} |
4a8af07d-60b4-4964-be6e-465a75bfc42a | description: 'Guide to using clickhouse-local for processing data without a server'
sidebar_label: 'clickhouse-local'
sidebar_position: 60
slug: /operations/utilities/clickhouse-local
title: 'clickhouse-local'
doc_type: 'reference'
clickhouse-local
When to use clickhouse-local vs. ClickHouse {#when-to-use-clickhouse-local-vs-clickhouse}
`clickhouse-local` is an easy-to-use version of ClickHouse that is ideal for developers who need to perform fast processing on local and remote files using SQL without having to install a full database server. With `clickhouse-local`, developers can use SQL commands (in the ClickHouse SQL dialect) directly from the command line, providing a simple and efficient way to access ClickHouse features without the need for a full ClickHouse installation. One of the main benefits of `clickhouse-local` is that it is already included when installing `clickhouse-client`. This means that developers can get started with `clickhouse-local` quickly, without the need for a complex installation process.
While `clickhouse-local` is a great tool for development and testing purposes, and for processing files, it is not suitable for serving end users or applications. In these scenarios, it is recommended to use the open-source ClickHouse. ClickHouse is a powerful OLAP database that is designed to handle large-scale analytical workloads. It provides fast and efficient processing of complex queries on large datasets, making it ideal for use in production environments where high performance is critical. Additionally, ClickHouse offers a wide range of features such as replication, sharding, and high availability, which are essential for scaling up to handle large datasets and serving applications. If you need to handle larger datasets or serve end users or applications, we recommend using open-source ClickHouse instead of `clickhouse-local`.
Please read the docs below that show example use cases for `clickhouse-local`, such as querying a local file or reading a Parquet file in S3.
Download clickhouse-local {#download-clickhouse-local}
`clickhouse-local` is executed using the same `clickhouse` binary that runs the ClickHouse server and `clickhouse-client`. The easiest way to download the latest version is with the following command:
```bash
curl https://clickhouse.com/ | sh
```
:::note
The binary you just downloaded can run all sorts of ClickHouse tools and utilities. If you want to run ClickHouse as a database server, check out the Quick Start.
:::
Query data in a file using SQL {#query_data_in_file}
A common use of `clickhouse-local` is to run ad-hoc queries on files, where you don't have to insert the data into a table. `clickhouse-local` can stream the data from a file into a temporary table and execute your SQL.
If the file is sitting on the same machine as `clickhouse-local`, you can simply specify the file to load. The following `reviews.tsv` file contains a sampling of Amazon product reviews: | {"source_file": "clickhouse-local.md"} |
58ff9d8a-14fa-4a4d-af8a-9bdd3683eece |
```bash
./clickhouse local -q "SELECT * FROM 'reviews.tsv'"
```
This command is a shortcut of:
```bash
./clickhouse local -q "SELECT * FROM file('reviews.tsv')"
```
ClickHouse knows the file uses a tab-separated format from the filename extension. If you need to explicitly specify the format, simply add one of the many ClickHouse input formats:
```bash
./clickhouse local -q "SELECT * FROM file('reviews.tsv', 'TabSeparated')"
```
The `file` table function creates a table, and you can use `DESCRIBE` to see the inferred schema:
```bash
./clickhouse local -q "DESCRIBE file('reviews.tsv')"
```
:::tip
You are allowed to use globs in the file name (see glob substitutions).
Examples:
```bash
./clickhouse local -q "SELECT * FROM 'reviews*.jsonl'"
./clickhouse local -q "SELECT * FROM 'review_?.csv'"
./clickhouse local -q "SELECT * FROM 'review_{1..3}.csv'"
```
:::
```response
marketplace Nullable(String)
customer_id Nullable(Int64)
review_id Nullable(String)
product_id Nullable(String)
product_parent Nullable(Int64)
product_title Nullable(String)
product_category Nullable(String)
star_rating Nullable(Int64)
helpful_votes Nullable(Int64)
total_votes Nullable(Int64)
vine Nullable(String)
verified_purchase Nullable(String)
review_headline Nullable(String)
review_body Nullable(String)
review_date Nullable(Date)
```
Let's find a product with the highest rating:
```bash
./clickhouse local -q "SELECT
    argMax(product_title, star_rating),
    max(star_rating)
FROM file('reviews.tsv')"
```
```response
Monopoly Junior Board Game 5
```
Query data in a Parquet file in AWS S3 {#query-data-in-a-parquet-file-in-aws-s3}
If you have a file in S3, use `clickhouse-local` and the `s3` table function to query the file in place (without inserting the data into a ClickHouse table). We have a file named `house_0.parquet` in a public bucket that contains home prices of property sold in the United Kingdom. Let's see how many rows it has:
```bash
./clickhouse local -q "
SELECT count()
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/house_parquet/house_0.parquet')"
```
The file has 2.7M rows:
```response
2772030
```
It's always useful to see the schema that ClickHouse infers from the file:
```bash
./clickhouse local -q "DESCRIBE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/house_parquet/house_0.parquet')"
```
```response
price Nullable(Int64)
date Nullable(UInt16)
postcode1 Nullable(String)
postcode2 Nullable(String)
type Nullable(String)
is_new Nullable(UInt8)
duration Nullable(String)
addr1 Nullable(String)
addr2 Nullable(String)
street Nullable(String)
locality Nullable(String)
town Nullable(String)
district Nullable(String)
county Nullable(String)
```
| {"source_file": "clickhouse-local.md"} |
0e43c9bc-90b9-419b-927b-22d4719bc070 | Let's see what the most expensive neighborhoods are:
```bash
./clickhouse local -q "
SELECT
    town,
    district,
    count() AS c,
    round(avg(price)) AS price,
    bar(price, 0, 5000000, 100)
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/house_parquet/house_0.parquet')
GROUP BY
    town,
    district
HAVING c >= 100
ORDER BY price DESC
LIMIT 10"
```
```response
LONDON CITY OF LONDON 886 2271305 █████████████████████████████████████████████
LEATHERHEAD ELMBRIDGE 206 1176680 ████████████████████████
LONDON CITY OF WESTMINSTER 12577 1108221 ██████████████████████
LONDON KENSINGTON AND CHELSEA 8728 1094496 ██████████████████████
HYTHE FOLKESTONE AND HYTHE 130 1023980 ████████████████████
CHALFONT ST GILES CHILTERN 113 835754 █████████████████
AMERSHAM BUCKINGHAMSHIRE 113 799596 ████████████████
VIRGINIA WATER RUNNYMEDE 356 789301 ████████████████
BARNET ENFIELD 282 740514 ███████████████
NORTHWOOD THREE RIVERS 184 731609 ███████████████
```
:::tip
When you are ready to insert your files into ClickHouse, start up a ClickHouse server and insert the results of your `file` and `s3` table functions into a MergeTree table. View the Quick Start for more details.
:::
Format Conversions {#format-conversions}
You can use `clickhouse-local` for converting data between different formats. Example:
```bash
$ clickhouse-local --input-format JSONLines --output-format CSV --query "SELECT * FROM table" < data.json > data.csv
```
Formats are auto-detected from file extensions:
```bash
$ clickhouse-local --query "SELECT * FROM table" < data.json > data.csv
```
As a shortcut, you can write it using the `--copy` argument:
```bash
$ clickhouse-local --copy < data.json > data.csv
```
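Following the same pattern, a sketch of a CSV-to-Parquet conversion with both formats inferred from the file extensions (the file names are placeholders):
```bash
# Convert CSV to Parquet using the --copy shortcut shown above.
clickhouse-local --copy < data.csv > data.parquet
```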
Usage {#usage}
By default `clickhouse-local` has access to the data of a ClickHouse server on the same host, and it does not depend on the server's configuration. It also supports loading server configuration using the `--config-file` argument. For temporary data, a unique temporary data directory is created by default.
Basic usage (Linux):
```bash
$ clickhouse-local --structure "table_structure" --input-format "format_of_incoming_data" --query "query"
```
Basic usage (Mac):
```bash
$ ./clickhouse local --structure "table_structure" --input-format "format_of_incoming_data" --query "query"
```
:::note
`clickhouse-local` is also supported on Windows through WSL2.
:::
Arguments:
- `-S`, `--structure` - table structure for input data.
- `--input-format` - input format, `TSV` by default.
- `-F`, `--file` - path to data, `stdin` by default.
- `-q`, `--query` - queries to execute with `;` as delimiter. `--query` can be specified multiple times, e.g. `--query "SELECT 1" --query "SELECT 2"`. Cannot be used simultaneously with `--queries-file`. | {"source_file": "clickhouse-local.md"} |
29ad56b6-a568-4af8-92cf-70a33a793c3f | - `--queries-file` - file path with queries to execute. `--queries-file` can be specified multiple times, e.g. `--queries-file queries1.sql --queries-file queries2.sql`. Cannot be used simultaneously with `--query`.
- `--multiquery`, `-n` - If specified, multiple queries separated by semicolons can be listed after the `--query` option. For convenience, it is also possible to omit `--query` and pass the queries directly after `--multiquery`.
- `-N`, `--table` - table name where to put output data, `table` by default.
- `-f`, `--format`, `--output-format` - output format, `TSV` by default.
- `-d`, `--database` - default database, `_local` by default.
- `--stacktrace` - whether to dump debug output in case of exception.
- `--echo` - print query before execution.
- `--verbose` - more details on query execution.
- `--logger.console` - log to console.
- `--logger.log` - log file name.
- `--logger.level` - log level.
- `--ignore-error` - do not stop processing if a query failed.
- `-c`, `--config-file` - path to a configuration file in the same format as for the ClickHouse server; by default the configuration is empty.
- `--no-system-tables` - do not attach system tables.
- `--help` - arguments references for `clickhouse-local`.
- `-V`, `--version` - print version information and exit.
Also, there are arguments for each ClickHouse configuration variable, which are more commonly used instead of `--config-file`.
Examples {#examples}
```bash
$ echo -e "1,2\n3,4" | clickhouse-local --structure "a Int64, b Int64" \
    --input-format "CSV" --query "SELECT * FROM table"
Read 2 rows, 32.00 B in 0.000 sec., 5182 rows/sec., 80.97 KiB/sec.
1 2
3 4
```
The previous example is the same as:
```bash
$ echo -e "1,2\n3,4" | clickhouse-local -n --query "
    CREATE TABLE table (a Int64, b Int64) ENGINE = File(CSV, stdin);
    SELECT a, b FROM table;
    DROP TABLE table;"
Read 2 rows, 32.00 B in 0.000 sec., 4987 rows/sec., 77.93 KiB/sec.
1 2
3 4
```
You don't have to use `stdin` or the `--file` argument, and can open any number of files using the `file` table function:
```bash
$ echo 1 | tee 1.tsv
1
$ echo 2 | tee 2.tsv
2
$ clickhouse-local --query "
select * from file('1.tsv', TSV, 'a int') t1
cross join file('2.tsv', TSV, 'b int') t2"
1 2
```
Now let's output memory usage for each Unix user:
Query:
```bash
$ ps aux | tail -n +2 | awk '{ printf("%s\t%s\n", $1, $4) }' \
    | clickhouse-local --structure "user String, mem Float64" \
    --query "SELECT user, round(sum(mem), 2) as memTotal
             FROM table GROUP BY user ORDER BY memTotal DESC FORMAT Pretty"
```
Result:
```text
Read 186 rows, 4.15 KiB in 0.035 sec., 5302 rows/sec., 118.34 KiB/sec.
┏━━━━━━━━━┳━━━━━━━━━━┓
┃ user    ┃ memTotal ┃
┡━━━━━━━━━╇━━━━━━━━━━┩
│ bayonet │    113.5 │
├─────────┼──────────┤
│ root    │      8.8 │
├─────────┼──────────┤
...
```
| {"source_file": "clickhouse-local.md"} |
1ec6bc8c-a2d9-438c-b4ee-d8c72a803354 | Related Content {#related-content-1}
- Extracting, converting, and querying data in local files using clickhouse-local
- Getting Data Into ClickHouse - Part 1
- Exploring massive, real-world data sets: 100+ Years of Weather Records in ClickHouse
- Blog: Extracting, Converting, and Querying Data in Local Files using clickhouse-local | {"source_file": "clickhouse-local.md"} |
f93544c2-7428-43e1-94e6-907775a415ff | description: 'Documentation for Odbc Bridge'
slug: /operations/utilities/odbc-bridge
title: 'clickhouse-odbc-bridge'
doc_type: 'reference'
A simple HTTP server that works like a proxy for the ODBC driver. The main motivation was that possible segfaults or other faults in ODBC implementations can crash the whole clickhouse-server process.
This tool works via HTTP, not via pipes, shared memory, or TCP because:
- It's simpler to implement
- It's simpler to debug
- jdbc-bridge can be implemented in the same way
Usage {#usage}
`clickhouse-server` uses this tool inside the `odbc` table function and StorageODBC. However, it can also be used as a standalone tool from the command line with the following parameters in the POST request URL:
- `connection_string` -- ODBC connection string.
- `sample_block` -- columns description in ClickHouse NamesAndTypesList format: name in backticks, type as a string. Name and type are space separated, rows are separated with a newline.
- `max_block_size` -- optional parameter, sets the maximum size of a single block.
The query is sent in the POST body. The response is returned in RowBinary format.
Example: {#example}
```bash
$ clickhouse-odbc-bridge --http-port 9018 --daemon
$ curl -d "query=SELECT PageID, ImpID, AdType FROM Keys ORDER BY PageID, ImpID" --data-urlencode "connection_string=DSN=ClickHouse;DATABASE=stat" --data-urlencode "sample_block=columns format version: 1
3 columns:
`PageID` String
`ImpID` String
`AdType` String
" "http://localhost:9018/" > result.txt
$ cat result.txt
12246623837185725195925621517
```
| {"source_file": "odbc-bridge.md"} |
94377757-8d8d-4dea-b410-20742fb1e420 | description: 'Documentation for Http'
slug: /operations/external-authenticators/http
title: 'HTTP'
doc_type: 'reference'
import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';
An HTTP server can be used to authenticate ClickHouse users. HTTP authentication can only be used as an external authenticator for existing users, which are defined in `users.xml` or in local access control paths. Currently, the Basic authentication scheme using the GET method is supported.
HTTP authentication server definition {#http-auth-server-definition}
To define an HTTP authentication server, you must add an `http_authentication_servers` section to `config.xml`.
Example:
```xml
<!-- ... -->
<http_authentication_servers>
    <basic_auth_server>
        <uri>http://localhost:8000/auth</uri>
        <connection_timeout_ms>1000</connection_timeout_ms>
        <receive_timeout_ms>1000</receive_timeout_ms>
        <send_timeout_ms>1000</send_timeout_ms>
        <max_tries>3</max_tries>
        <retry_initial_backoff_ms>50</retry_initial_backoff_ms>
        <retry_max_backoff_ms>1000</retry_max_backoff_ms>
        <forward_headers>
            <name>Custom-Auth-Header-1</name>
            <name>Custom-Auth-Header-2</name>
        </forward_headers>
    </basic_auth_server>
</http_authentication_servers>
```
Note that you can define multiple HTTP servers inside the `http_authentication_servers` section using distinct names.
Parameters:
- `uri` - URI for making the authentication request.
Timeouts in milliseconds on the socket used for communicating with the server:
- `connection_timeout_ms` - Default: 1000 ms.
- `receive_timeout_ms` - Default: 1000 ms.
- `send_timeout_ms` - Default: 1000 ms.
Retry parameters:
- `max_tries` - The maximum number of attempts to make an authentication request. Default: 3.
- `retry_initial_backoff_ms` - The initial backoff interval on retry. Default: 50 ms.
- `retry_max_backoff_ms` - The maximum backoff interval. Default: 1000 ms.
Forward headers:
This section defines which headers will be forwarded from the client request headers to the external HTTP authenticator. Note that headers are matched against the configured ones case-insensitively, but forwarded as-is, i.e. unmodified.
Enabling HTTP authentication in users.xml {#enabling-http-auth-in-users-xml}
In order to enable HTTP authentication for the user, specify the `http_authentication` section instead of `password` or similar sections in the user definition.
Parameters:
- `server` - Name of the HTTP authentication server configured in the main `config.xml` file as described previously.
- `scheme` - HTTP authentication scheme. Only `Basic` is supported now. Default: Basic.
Example (goes into `users.xml`):
```xml
<clickhouse>
    <!-- ... -->
    <my_user>
        <!-- ... -->
        <http_authentication>
            <server>basic_server</server>
            <scheme>basic</scheme>
        </http_authentication>
    </my_user>
</clickhouse>
```
:::note
Note that HTTP authentication cannot be used alongside any other authentication mechanism. The presence of any other sections like `password` alongside `http_authentication` will force ClickHouse to shut down.
:::
| {"source_file": "http.md"} |
76981cb2-aa23-4f75-8746-5d1dca07543c | Enabling HTTP authentication using SQL {#enabling-http-auth-using-sql}
When SQL-driven Access Control and Account Management is enabled in ClickHouse, users identified by HTTP authentication can also be created using SQL statements.
```sql
CREATE USER my_user IDENTIFIED WITH HTTP SERVER 'basic_server' SCHEME 'Basic'
```
...or, `Basic` is the default without an explicit scheme definition:
```sql
CREATE USER my_user IDENTIFIED WITH HTTP SERVER 'basic_server'
```
Passing session settings {#passing-session-settings}
If a response body from the HTTP authentication server has JSON format and contains a `settings` sub-object, ClickHouse will try to parse its key-value pairs as string values and set them as session settings for the authenticated user's current session. If parsing fails, the response body from the server is ignored. | {"source_file": "http.md"} |
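To illustrate the contract, here is a sketch of the exchange the authenticator performs; the endpoint, the credentials, and the settings values are assumptions:
```bash
# ClickHouse issues a GET with Basic auth; a successful response means
# the user is authenticated.
curl -u my_user:secret http://localhost:8000/auth
# An optional JSON body can carry session settings, for example:
# {"settings": {"max_memory_usage": "10000000000", "readonly": "1"}}
```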
3c4acec6-175b-49fb-a91f-3eeecd32575b | description: 'Existing and properly configured ClickHouse users can be authenticated via Kerberos authentication protocol.'
slug: /operations/external-authenticators/kerberos
title: 'Kerberos'
doc_type: 'reference'
import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';
Kerberos
Existing and properly configured ClickHouse users can be authenticated via Kerberos authentication protocol.
Currently, Kerberos can only be used as an external authenticator for existing users, which are defined in `users.xml` or in local access control paths. Those users may only use HTTP requests and must be able to authenticate using the GSS-SPNEGO mechanism.
For this approach, Kerberos must be configured in the system and must be enabled in the ClickHouse config.
Enabling Kerberos in ClickHouse {#enabling-kerberos-in-clickhouse}
To enable Kerberos, one should include a `kerberos` section in `config.xml`. This section may contain additional parameters.
Parameters {#parameters}
- `principal` - Canonical service principal name that will be acquired and used when accepting security contexts. This parameter is optional; if omitted, the default principal will be used.
- `realm` - A realm that will be used to restrict authentication to only those requests whose initiator's realm matches it. This parameter is optional; if omitted, no additional filtering by realm will be applied.
- `keytab` - Path to the service keytab file. This parameter is optional; if omitted, the path to the service keytab file must be set in the `KRB5_KTNAME` environment variable.
Example (goes into `config.xml`):
```xml
<clickhouse>
    <!-- ... -->
    <kerberos />
</clickhouse>
```
With principal specification:
```xml
<clickhouse>
    <!-- ... -->
    <kerberos>
        <principal>HTTP/clickhouse.example.com@EXAMPLE.COM</principal>
    </kerberos>
</clickhouse>
```
With filtering by realm:
```xml
<clickhouse>
    <!-- ... -->
    <kerberos>
        <realm>EXAMPLE.COM</realm>
    </kerberos>
</clickhouse>
```
:::note
You can define only one `kerberos` section. The presence of multiple `kerberos` sections will force ClickHouse to disable Kerberos authentication.
:::
:::note
The `principal` and `realm` sections cannot be specified at the same time. The presence of both will force ClickHouse to disable Kerberos authentication.
:::
Kerberos as an external authenticator for existing users {#kerberos-as-an-external-authenticator-for-existing-users}
Kerberos can be used as a method for verifying the identity of locally defined users (users defined in `users.xml` or in local access control paths). Currently, only requests over the HTTP interface can be kerberized (via the GSS-SPNEGO mechanism).
The Kerberos principal name format usually follows this pattern:
primary/instance@REALM | {"source_file": "kerberos.md"} |
9f0d0b76-3e83-4522-8119-aaf7b71d9cdd |
The `/instance` part may occur zero or more times. The `primary` part of the canonical principal name of the initiator is expected to match the kerberized user name for authentication to succeed.
Enabling Kerberos in users.xml {#enabling-kerberos-in-users-xml}
In order to enable Kerberos authentication for the user, specify the `kerberos` section instead of `password` or similar sections in the user definition.
Parameters:
- `realm` - A realm that will be used to restrict authentication to only those requests whose initiator's realm matches it. This parameter is optional; if omitted, no additional filtering by realm will be applied.
Example (goes into `users.xml`):
```xml
<clickhouse>
    <!-- ... -->
    <users>
        <!-- ... -->
        <my_user>
            <!-- ... -->
            <kerberos>
                <realm>EXAMPLE.COM</realm>
            </kerberos>
        </my_user>
    </users>
</clickhouse>
```
:::note
Note that Kerberos authentication cannot be used alongside any other authentication mechanism. The presence of any other sections like `password` alongside `kerberos` will force ClickHouse to shut down.
:::
:::info Reminder
Note that once user `my_user` uses `kerberos`, Kerberos must be enabled in the main `config.xml` file as described previously.
:::
Enabling Kerberos using SQL {#enabling-kerberos-using-sql}
When SQL-driven Access Control and Account Management is enabled in ClickHouse, users identified by Kerberos can also be created using SQL statements.
```sql
CREATE USER my_user IDENTIFIED WITH kerberos REALM 'EXAMPLE.COM'
```
...or, without filtering by realm:
```sql
CREATE USER my_user IDENTIFIED WITH kerberos
```
| {"source_file": "kerberos.md"} |
34bf8e5c-b70a-49e0-9333-b8edbec56c31 | description: 'Documentation for Ssl X509'
slug: /operations/external-authenticators/ssl-x509
title: 'SSL X.509 certificate authentication'
doc_type: 'reference'
import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';
The SSL 'strict' option enables mandatory certificate validation for incoming connections. In this case, only connections with trusted certificates can be established. Connections with untrusted certificates will be rejected. Thus, certificate validation allows an incoming connection to be uniquely authenticated.
The Common Name or subjectAltName extension field of the certificate is used to identify the connected user. The subjectAltName extension supports the usage of one wildcard '*' in the server configuration. This allows associating multiple certificates with the same user. Additionally, reissuing and revoking certificates does not affect the ClickHouse configuration.
To enable SSL certificate authentication, a list of Common Names or Subject Alt Names for each ClickHouse user must be specified in the settings file `users.xml`:
Example:
```xml
<clickhouse>
    <!-- ... -->
    <users>
        <user_name_1>
            <ssl_certificates>
                <common_name>host.domain.com:example_user</common_name>
                <common_name>host.domain.com:example_user_dev</common_name>
                <!-- More names -->
            </ssl_certificates>
            <!-- Other settings -->
        </user_name_1>
        <user_name_2>
            <ssl_certificates>
                <subject_alt_name>DNS:host.domain.com</subject_alt_name>
                <!-- More names -->
            </ssl_certificates>
            <!-- Other settings -->
        </user_name_2>
        <user_name_3>
            <ssl_certificates>
                <!-- Wildcard support -->
                <subject_alt_name>URI:spiffe://foo.com/*/bar</subject_alt_name>
            </ssl_certificates>
        </user_name_3>
    </users>
</clickhouse>
```
For the SSL chain of trust to work correctly, it is also important to make sure that the `caConfig` parameter is configured properly. | {"source_file": "ssl-x509.md"} |
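When debugging which `<common_name>` or `<subject_alt_name>` entry a client certificate would match, it can help to inspect the certificate itself; a sketch assuming OpenSSL 1.1.1 or newer and a placeholder file name:
```bash
# Print the subject (Common Name) and the subjectAltName extension
# of a client certificate.
openssl x509 -in client.crt -noout -subject -ext subjectAltName
```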
8ce80af9-ab06-4bdb-8406-9b58ba8e06a0 | description: 'Guide to configuring LDAP authentication for ClickHouse'
slug: /operations/external-authenticators/ldap
title: 'LDAP'
doc_type: 'reference'
import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';
An LDAP server can be used to authenticate ClickHouse users. There are two different approaches for doing this:
- Use LDAP as an external authenticator for existing users, which are defined in `users.xml` or in local access control paths.
- Use LDAP as an external user directory and allow locally undefined users to be authenticated if they exist on the LDAP server.
For both of these approaches, an internally named LDAP server must be defined in the ClickHouse config so that other parts of the config can refer to it.
LDAP server definition {#ldap-server-definition}
To define an LDAP server, you must add an `ldap_servers` section to `config.xml`.
Example:
```xml
<!-- ... -->
<ldap_servers>
    <!-- Typical LDAP server. -->
    <my_ldap_server>
        <host>localhost</host>
        <port>636</port>
        <bind_dn>uid={user_name},ou=users,dc=example,dc=com</bind_dn>
        <verification_cooldown>300</verification_cooldown>
        <enable_tls>yes</enable_tls>
        <tls_minimum_protocol_version>tls1.2</tls_minimum_protocol_version>
        <tls_require_cert>demand</tls_require_cert>
        <tls_cert_file>/path/to/tls_cert_file</tls_cert_file>
        <tls_key_file>/path/to/tls_key_file</tls_key_file>
        <tls_ca_cert_file>/path/to/tls_ca_cert_file</tls_ca_cert_file>
        <tls_ca_cert_dir>/path/to/tls_ca_cert_dir</tls_ca_cert_dir>
        <tls_cipher_suite>ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384</tls_cipher_suite>
    </my_ldap_server>
    <!-- Typical Active Directory with configured user DN detection for further role mapping. -->
    <my_ad_server>
        <host>localhost</host>
        <port>389</port>
        <bind_dn>EXAMPLE\{user_name}</bind_dn>
        <user_dn_detection>
            <base_dn>CN=Users,DC=example,DC=com</base_dn>
            <search_filter>(&amp;(objectClass=user)(sAMAccountName={user_name}))</search_filter>
        </user_dn_detection>
        <enable_tls>no</enable_tls>
    </my_ad_server>
</ldap_servers>
```
Note that you can define multiple LDAP servers inside the `ldap_servers` section using distinct names.
Parameters:
- `host` - LDAP server hostname or IP. This parameter is mandatory and cannot be empty.
- `port` - LDAP server port. Default is `636` if `enable_tls` is set to `true`, `389` otherwise.
- `bind_dn` - Template used to construct the DN to bind to. The resulting DN will be constructed by replacing all `{user_name}` substrings of the template with the actual user name during each authentication attempt.
- `user_dn_detection` - Section with LDAP search parameters for detecting the actual user DN of the bound user. This is mainly used in search filters for further role mapping when the server is Active Directory. The resulting user DN will be used when replacing `{user_dn}` substrings wherever they are allowed. By default, the user DN is set equal to the bind DN, but once the search is performed, it will be updated to the actual detected user DN value.
  - `base_dn` - Template used to construct the base DN for the LDAP search. The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}` substrings of the template with the actual user name and bind DN during the LDAP search. | {"source_file": "ldap.md"} |
84cf864f-165e-43a7-b8b4-c9f4a8b02be1 |
  - `scope` - Scope of the LDAP search. Accepted values are: `base`, `one_level`, `children`, `subtree` (the default).
  - `search_filter` - Template used to construct the search filter for the LDAP search. The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{base_dn}` substrings of the template with the actual user name, bind DN, and base DN during the LDAP search. Note that the special characters must be escaped properly in XML.
- `verification_cooldown` - A period of time, in seconds, after a successful bind attempt, during which the user will be assumed to be successfully authenticated for all consecutive requests without contacting the LDAP server. Specify `0` (the default) to disable caching and force contacting the LDAP server for each authentication request.
- `enable_tls` - A flag to trigger the use of a secure connection to the LDAP server. Specify `no` for the plain text `ldap://` protocol (not recommended). Specify `yes` for LDAP over SSL/TLS `ldaps://` protocol (recommended, the default). Specify `starttls` for the legacy StartTLS protocol (plain text `ldap://` protocol, upgraded to TLS).
- `tls_minimum_protocol_version` - The minimum protocol version of SSL/TLS. Accepted values are: `ssl2`, `ssl3`, `tls1.0`, `tls1.1`, `tls1.2` (the default).
- `tls_require_cert` - SSL/TLS peer certificate verification behavior. Accepted values are: `never`, `allow`, `try`, `demand` (the default).
- `tls_cert_file` - Path to the certificate file.
- `tls_key_file` - Path to the certificate key file.
- `tls_ca_cert_file` - Path to the CA certificate file.
- `tls_ca_cert_dir` - Path to the directory containing CA certificates.
- `tls_cipher_suite` - Allowed cipher suite (in OpenSSL notation).
LDAP external authenticator {#ldap-external-authenticator}
A remote LDAP server can be used as a method for verifying passwords for locally defined users (users defined in `users.xml` or in local access control paths). To achieve this, specify a previously defined LDAP server name instead of `password` or similar sections in the user definition.
At each login attempt, ClickHouse tries to "bind" to the specified DN defined by the `bind_dn` parameter in the LDAP server definition using the provided credentials, and if successful, the user is considered authenticated. This is often called a "simple bind" method.
Example:
```xml
<clickhouse>
    <!-- ... -->
    <users>
        <!-- ... -->
        <my_user>
            <!-- ... -->
            <ldap>
                <server>my_ldap_server</server>
            </ldap>
        </my_user>
    </users>
</clickhouse>
```
Note that user `my_user` refers to `my_ldap_server`. This LDAP server must be configured in the main `config.xml` file as described previously. | {"source_file": "ldap.md"} |
c1f44cd8-820d-4be6-bc12-2ba04c03aecd |
When SQL-driven Access Control and Account Management is enabled, users that are authenticated by LDAP servers can also be created using the `CREATE USER` statement.
Query:
```sql
CREATE USER my_user IDENTIFIED WITH ldap SERVER 'my_ldap_server';
```
LDAP external user directory {#ldap-external-user-directory}
In addition to the locally defined users, a remote LDAP server can be used as a source of user definitions. To achieve this, specify a previously defined LDAP server name (see LDAP Server Definition) in the `ldap` section inside the `users_directories` section of the `config.xml` file.
At each login attempt, ClickHouse tries to find the user definition locally and authenticate it as usual. If the user is not defined, ClickHouse will assume the definition exists in the external LDAP directory and will try to "bind" to the specified DN at the LDAP server using the provided credentials. If successful, the user will be considered existing and authenticated. The user will be assigned roles from the list specified in the `roles` section. Additionally, an LDAP "search" can be performed and the results can be transformed and treated as role names and then assigned to the user if the `role_mapping` section is also configured. All this implies that SQL-driven Access Control and Account Management is enabled and roles are created using the `CREATE ROLE` statement.
Example (goes into `config.xml`):
```xml
<!-- ... -->
<user_directories>
    <!-- Typical LDAP server. -->
    <ldap>
        <server>my_ldap_server</server>
        <role_mapping>
            <base_dn>ou=groups,dc=example,dc=com</base_dn>
            <scope>subtree</scope>
            <search_filter>(&amp;(objectClass=groupOfNames)(member={bind_dn}))</search_filter>
            <attribute>cn</attribute>
            <prefix>clickhouse_</prefix>
        </role_mapping>
    </ldap>
    <!-- Typical Active Directory with role mapping that relies on the detected user DN. -->
    <ldap>
        <server>my_ad_server</server>
        <role_mapping>
            <base_dn>CN=Users,DC=example,DC=com</base_dn>
            <attribute>CN</attribute>
            <scope>subtree</scope>
            <search_filter>(&amp;(objectClass=group)(member={user_dn}))</search_filter>
            <prefix>clickhouse_</prefix>
        </role_mapping>
    </ldap>
</user_directories>
```
Note that `my_ldap_server` referred to in the `ldap` section inside the `user_directories` section must be a previously defined LDAP server that is configured in `config.xml` (see LDAP Server Definition).
Parameters:
- `server` - One of the LDAP server names defined in the `ldap_servers` config section above. This parameter is mandatory and cannot be empty.
- `roles` - Section with a list of locally defined roles that will be assigned to each user retrieved from the LDAP server. If no roles are specified here or assigned during role mapping (below), the user will not be able to perform any actions after authentication.
- `role_mapping` - Section with LDAP search parameters and mapping rules. | {"source_file": "ldap.md"} |
86966b4c-b60c-4a7b-9c73-ca823c187dcb |
  When a user authenticates, while still bound to LDAP, an LDAP search is performed using `search_filter` and the name of the logged-in user. For each entry found during that search, the value of the specified attribute is extracted. For each attribute value that has the specified prefix, the prefix is removed, and the rest of the value becomes the name of a local role defined in ClickHouse, which is expected to be created beforehand by the `CREATE ROLE` statement. There can be multiple `role_mapping` sections defined inside the same `ldap` section. All of them will be applied.
  - `base_dn` - Template used to construct the base DN for the LDAP search. The resulting DN will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{user_dn}` substrings of the template with the actual user name, bind DN, and user DN during each LDAP search.
  - `scope` - Scope of the LDAP search. Accepted values are: `base`, `one_level`, `children`, `subtree` (the default).
  - `search_filter` - Template used to construct the search filter for the LDAP search. The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, `{user_dn}`, and `{base_dn}` substrings of the template with the actual user name, bind DN, user DN, and base DN during each LDAP search. Note that the special characters must be escaped properly in XML.
  - `attribute` - Attribute name whose values will be returned by the LDAP search. `cn`, by default.
  - `prefix` - Prefix that will be expected to be in front of each string in the original list of strings returned by the LDAP search. The prefix will be removed from the original strings, and the resulting strings will be treated as local role names. Empty, by default. | {"source_file": "ldap.md"} |
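For example (a sketch; the group and role names are assumptions): if the LDAP search returns `clickhouse_admin` and `clickhouse_ro` and the prefix is `clickhouse_`, the local roles `admin` and `ro` are assigned, and they must exist beforehand:
```bash
# Create the local roles that the role mapping will assign.
clickhouse-client --query "CREATE ROLE admin"
clickhouse-client --query "CREATE ROLE ro"
```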
446f13a6-cb1d-4c42-91db-6623f2af675c | description: 'Overview of external authentication methods supported by ClickHouse'
pagination_next: operations/external-authenticators/kerberos
sidebar_label: 'External User Authenticators and Directories'
sidebar_position: 48
slug: /operations/external-authenticators/
title: 'External User Authenticators and Directories'
doc_type: 'reference'
import SelfManaged from '@site/docs/_snippets/_self_managed_only_no_roadmap.md';
ClickHouse supports authenticating and managing users using external services.
The following external authenticators and directories are supported:
- LDAP Authenticator and Directory
- Kerberos Authenticator
- SSL X.509 authentication
- HTTP Authenticator | {"source_file": "index.md"} |
6e1fff8a-aec7-4e07-ac4f-5bc2a719d5ea | description: 'System table containing logging entries with information about BACKUP and RESTORE operations.'
keywords: ['system table', 'backup_log']
slug: /operations/system-tables/backup_log
title: 'system.backup_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.backup_log
Contains logging entries with information about BACKUP and RESTORE operations.
Columns:
- `hostname` (LowCardinality(String)) - Hostname of the server executing the query.
- `event_date` (Date) - Date of the entry.
- `event_time` (DateTime) - The date and time of the entry.
- `event_time_microseconds` (DateTime64) - Time of the entry with microseconds precision.
- `id` (String) - Identifier of the backup or restore operation.
- `name` (String) - Name of the backup storage (the contents of the FROM or TO clause).
- `status` (Enum8) - Operation status. Possible values: 'CREATING_BACKUP', 'BACKUP_CREATED', 'BACKUP_FAILED', 'RESTORING', 'RESTORED', 'RESTORE_FAILED'.
- `error` (String) - Error message of the failed operation (empty string for successful operations).
- `start_time` (DateTime) - Start time of the operation.
- `end_time` (DateTime) - End time of the operation.
- `num_files` (UInt64) - Number of files stored in the backup.
- `total_size` (UInt64) - Total size of files stored in the backup.
- `num_entries` (UInt64) - Number of entries in the backup, i.e. the number of files inside the folder if the backup is stored as a folder, or the number of files inside the archive if the backup is stored as an archive. It is not the same as `num_files` if it's an incremental backup or if it contains empty files or duplicates. The following is always true: `num_entries <= num_files`.
- `uncompressed_size` (UInt64) - Uncompressed size of the backup.
- `compressed_size` (UInt64) - Compressed size of the backup. If the backup is not stored as an archive, it equals `uncompressed_size`.
- `files_read` (UInt64) - Number of files read during the restore operation.
- `bytes_read` (UInt64) - Total size of files read during the restore operation.
Example:
```sql
BACKUP TABLE test_db.my_table TO Disk('backups_disk', '1.zip')
```
```response
┌─id───────────────────────────────────┬─status─────────┐
│ e5b74ecb-f6f1-426a-80be-872f90043885 │ BACKUP_CREATED │
└──────────────────────────────────────┴────────────────┘
```
```sql
SELECT * FROM system.backup_log WHERE id = 'e5b74ecb-f6f1-426a-80be-872f90043885' ORDER BY event_date, event_time_microseconds \G
```
```response
Row 1:
──────
hostname: clickhouse.eu-central1.internal
event_date: 2023-08-19
event_time_microseconds: 2023-08-19 11:05:21.998566
id: e5b74ecb-f6f1-426a-80be-872f90043885
name: Disk('backups_disk', '1.zip')
status: CREATING_BACKUP
error:
```
| {"source_file": "backup_log.md"} |
99bada4c-3a6b-4e89-bcf8-b7a40d15fd3c |
```response
start_time: 2023-08-19 11:05:21
end_time: 1970-01-01 03:00:00
num_files: 0
total_size: 0
num_entries: 0
uncompressed_size: 0
compressed_size: 0
files_read: 0
bytes_read: 0

Row 2:
──────
hostname: clickhouse.eu-central1.internal
event_date: 2023-08-19
event_time: 2023-08-19 11:08:56
event_time_microseconds: 2023-08-19 11:08:56.916192
id: e5b74ecb-f6f1-426a-80be-872f90043885
name: Disk('backups_disk', '1.zip')
status: BACKUP_CREATED
error:
start_time: 2023-08-19 11:05:21
end_time: 2023-08-19 11:08:56
num_files: 57
total_size: 4290364870
num_entries: 46
uncompressed_size: 4290362365
compressed_size: 3525068304
files_read: 0
bytes_read: 0
```
```sql
RESTORE TABLE test_db.my_table FROM Disk('backups_disk', '1.zip')
```
```response
┌─id───────────────────────────────────┬─status───┐
│ cdf1f731-52ef-42da-bc65-2e1bfcd4ce90 │ RESTORED │
└──────────────────────────────────────┴──────────┘
```
```sql
SELECT * FROM system.backup_log WHERE id = 'cdf1f731-52ef-42da-bc65-2e1bfcd4ce90' ORDER BY event_date, event_time_microseconds \G
```
```response
Row 1:
──────
hostname: clickhouse.eu-central1.internal
event_date: 2023-08-19
event_time_microseconds: 2023-08-19 11:09:19.718077
id: cdf1f731-52ef-42da-bc65-2e1bfcd4ce90
name: Disk('backups_disk', '1.zip')
status: RESTORING
error:
start_time: 2023-08-19 11:09:19
end_time: 1970-01-01 03:00:00
num_files: 0
total_size: 0
num_entries: 0
uncompressed_size: 0
compressed_size: 0
files_read: 0
bytes_read: 0

Row 2:
──────
hostname: clickhouse.eu-central1.internal
event_date: 2023-08-19
event_time_microseconds: 2023-08-19 11:09:29.334234
id: cdf1f731-52ef-42da-bc65-2e1bfcd4ce90
name: Disk('backups_disk', '1.zip')
status: RESTORED
error:
start_time: 2023-08-19 11:09:19
end_time: 2023-08-19 11:09:29
num_files: 57
total_size: 4290364870
num_entries: 46
uncompressed_size: 4290362365
compressed_size: 4290362365
files_read: 57
bytes_read: 4290364870
```
This is essentially the same information that is written in the system table
system.backups
:
sql
SELECT * FROM system.backups ORDER BY start_time
response
┌─id───────────────────────────────────┬─name──────────────────────────┬─status─────────┬─error─┬──────────start_time─┬────────────end_time─┬─num_files─┬─total_size─┬─num_entries─┬─uncompressed_size─┬─compressed_size─┬─files_read─┬─bytes_read─┐
│ e5b74ecb-f6f1-426a-80be-872f90043885 │ Disk('backups_disk', '1.zip') │ BACKUP_CREATED │       │ 2023-08-19 11:05:21 │ 2023-08-19 11:08:56 │        57 │ 4290364870 │          46 │        4290362365 │      3525068304 │          0 │          0 │
│ cdf1f731-52ef-42da-bc65-2e1bfcd4ce90 │ Disk('backups_disk', '1.zip') │ RESTORED       │       │ 2023-08-19 11:09:19 │ 2023-08-19 11:09:29 │        57 │ 4290364870 │          46 │        4290362365 │      4290362365 │         57 │ 4290364870 │
└───────────────────────────────────────┴───────────────────────────────┴────────────────┴───────┴─────────────────────┴─────────────────────┴───────────┴────────────┴─────────────┴───────────────────┴─────────────────┴────────────┴────────────┘
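From the columns above one can, for instance, derive the compression ratio of finished backups. A minimal sketch, assuming only the documented columns:

```sql
SELECT
    name,
    uncompressed_size / compressed_size AS compression_ratio
FROM system.backups
WHERE status = 'BACKUP_CREATED'
ORDER BY start_time DESC
```

For the backup shown above this gives 4290362365 / 3525068304 ≈ 1.22.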
See Also
Backup and Restore
description: 'System table containing information about local files that are in the
queue to be sent to the shards.'
keywords: ['system table', 'distribution_queue']
slug: /operations/system-tables/distribution_queue
title: 'system.distribution_queue'
doc_type: 'reference'
Contains information about local files that are in the queue to be sent to the shards. These local files contain new parts that are created by inserting new data into the Distributed table in asynchronous mode.
Columns:
database
(
String
) β Name of the database.
table
(
String
) β Name of the table.
data_path
(
String
) β Path to the folder with local files.
is_blocked
(
UInt8
) β Flag indicates whether sending local files to the server is blocked.
error_count
(
UInt64
) β Number of errors.
data_files
(
UInt64
) β Number of local files in a folder.
data_compressed_bytes
(
UInt64
) β Size of compressed data in local files, in bytes.
broken_data_files
(
UInt64
) β Number of files that has been marked as broken (due to an error).
broken_data_compressed_bytes
(
UInt64
) β Size of compressed data in broken files, in bytes.
last_exception
(
String
) β Text message about the last error that occurred (if any).
last_exception_time
(
DateTime
) β Time when last exception occurred.
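A minimal monitoring sketch, using only the columns above, that surfaces queues which are blocked or have accumulated errors:

```sql
SELECT database, table, is_blocked, error_count, last_exception
FROM system.distribution_queue
WHERE is_blocked OR error_count > 0
```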
Example
sql
SELECT * FROM system.distribution_queue LIMIT 1 FORMAT Vertical;
text
Row 1:
ββββββ
database: default
table: dist
data_path: ./store/268/268bc070-3aad-4b1a-9cf2-4987580161af/default@127%2E0%2E0%2E2:9000/
is_blocked: 1
error_count: 0
data_files: 1
data_compressed_bytes: 499
last_exception:
See Also
Distributed table engine
description: 'System table containing information about dictionaries'
keywords: ['system table', 'dictionaries']
slug: /operations/system-tables/dictionaries
title: 'system.dictionaries'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains information about
dictionaries
.
Columns:
database
(
String
) β Name of the database containing the dictionary created by DDL query. Empty string for other dictionaries.
name
(
String
) β Dictionary name.
uuid
(
UUID
) β Dictionary UUID.
status
(
Enum8('NOT_LOADED' = 0, 'LOADED' = 1, 'FAILED' = 2, 'LOADING' = 3, 'FAILED_AND_RELOADING' = 4, 'LOADED_AND_RELOADING' = 5, 'NOT_EXIST' = 6)
) — Dictionary status. Possible values:
NOT_LOADED — Dictionary was not loaded because it was not used.
LOADED — Dictionary loaded successfully.
FAILED — Unable to load the dictionary as a result of an error.
LOADING — Dictionary is loading now.
LOADED_AND_RELOADING — Dictionary is loaded successfully and is being reloaded right now (frequent reasons: SYSTEM RELOAD DICTIONARY query, timeout, dictionary config has changed).
FAILED_AND_RELOADING — Could not load the dictionary as a result of an error and is loading now.
origin
(
String
) β Path to the configuration file that describes the dictionary.
type
(
String
) β Type of a dictionary allocation. Storing Dictionaries in Memory.
key.names
(
Array(String)
) β Array of key names provided by the dictionary.
key.types
(
Array(String)
) β Corresponding array of key types provided by the dictionary.
attribute.names
(
Array(String)
) β Array of attribute names provided by the dictionary.
attribute.types
(
Array(String)
) β Corresponding array of attribute types provided by the dictionary.
bytes_allocated
(
UInt64
) β Amount of RAM allocated for the dictionary.
hierarchical_index_bytes_allocated
(
UInt64
) β Amount of RAM allocated for hierarchical index.
query_count
(
UInt64
) β Number of queries since the dictionary was loaded or since the last successful reboot.
hit_rate
(
Float64
) β For cache dictionaries, the percentage of uses for which the value was in the cache.
found_rate
(
Float64
) β The percentage of uses for which the value was found.
element_count
(
UInt64
) β Number of items stored in the dictionary.
load_factor
(
Float64
) β Percentage filled in the dictionary (for a hashed dictionary, the percentage filled in the hash table).
source
(
String
) β Text describing the data source for the dictionary.
lifetime_min
(
UInt64
) β Minimum lifetime of the dictionary in memory, after which ClickHouse tries to reload the dictionary (if invalidate_query is set, then only if it has changed). Set in seconds.
lifetime_max
(
UInt64
) — Maximum lifetime of the dictionary in memory, after which ClickHouse tries to reload the dictionary (if invalidate_query is set, then only if it has changed). Set in seconds.
loading_start_time
(
DateTime
) β Start time for loading the dictionary.
last_successful_update_time
(
DateTime
) β End time for loading or updating the dictionary. Helps to monitor some troubles with dictionary sources and investigate the causes.
error_count
(
UInt64
) β Number of errors since last successful loading. Helps to monitor some troubles with dictionary sources and investigate the causes.
loading_duration
(
Float32
) β Duration of a dictionary loading.
last_exception
(
String
) β Text of the error that occurs when creating or reloading the dictionary if the dictionary couldn't be created.
comment
(
String
) β Text of the comment to dictionary.
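The status and error columns above lend themselves to a simple health check. A minimal sketch that surfaces dictionaries in an error state:

```sql
SELECT name, status, last_exception, last_successful_update_time
FROM system.dictionaries
WHERE status IN ('FAILED', 'FAILED_AND_RELOADING')
```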
Example
Configure the dictionary:
sql
CREATE DICTIONARY dictionary_with_comment
(
id UInt64,
value String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'source_table'))
LAYOUT(FLAT())
LIFETIME(MIN 0 MAX 1000)
COMMENT 'The temporary dictionary';
Make sure that the dictionary is loaded.
sql
SELECT * FROM system.dictionaries LIMIT 1 FORMAT Vertical;
text
Row 1:
ββββββ
database: default
name: dictionary_with_comment
uuid: 4654d460-0d03-433a-8654-d4600d03d33a
status: NOT_LOADED
origin: 4654d460-0d03-433a-8654-d4600d03d33a
type:
key.names: ['id']
key.types: ['UInt64']
attribute.names: ['value']
attribute.types: ['String']
bytes_allocated: 0
query_count: 0
hit_rate: 0
found_rate: 0
element_count: 0
load_factor: 0
source:
lifetime_min: 0
lifetime_max: 0
loading_start_time: 1970-01-01 00:00:00
last_successful_update_time: 1970-01-01 00:00:00
loading_duration: 0
last_exception:
comment: The temporary dictionary
description: 'System table containing metadata of each table that the server knows
about.'
keywords: ['system table', 'tables']
slug: /operations/system-tables/tables
title: 'system.tables'
doc_type: 'reference'
system.tables
Contains metadata of each table that the server knows about.
Detached
tables are not shown in
system.tables
.
Temporary tables
are visible in the
system.tables
only in those sessions where they have been created. They are shown with an empty
database
field and with the
is_temporary
flag switched on.
Columns:
database
(
String
) β The name of the database the table is in.
name
(
String
) β Table name.
uuid
(
UUID
) β Table uuid (Atomic database).
engine
(
String
) β Table engine name (without parameters).
is_temporary
(
UInt8
) - Flag that indicates whether the table is temporary.
data_paths
(
Array
(
String
)) - Paths to the table data in the file systems.
metadata_path
(
String
) - Path to the table metadata in the file system.
metadata_modification_time
(
DateTime
) - Time of latest modification of the table metadata.
metadata_version
(
Int32
) - Metadata version for ReplicatedMergeTree table, 0 for non ReplicatedMergeTree table.
dependencies_database
(
Array
(
String
)) - Database dependencies.
dependencies_table
(
Array
(
String
)) - Table dependencies (materialized views based on the current table).
create_table_query
(
String
) - The query that was used to create the table.
engine_full
(
String
) - Parameters of the table engine.
as_select
(
String
) -
SELECT
query for view.
parameterized_view_parameters
(
Array
of
Tuple
) β Parameters of parameterized view.
partition_key
(
String
) - The partition key expression specified in the table.
sorting_key
(
String
) - The sorting key expression specified in the table.
primary_key
(
String
) - The primary key expression specified in the table.
sampling_key
(
String
) - The sampling key expression specified in the table.
storage_policy
(
String
) - The storage policy:
MergeTree
Distributed
total_rows
(
Nullable
(
UInt64
)) - Total number of rows, if it is possible to quickly determine exact number of rows in the table, otherwise
NULL
(including underlying
Buffer
table).
total_bytes
(
Nullable
(
UInt64
)) - Total number of bytes (including indices and projections), if it is possible to quickly determine exact number of bytes for the table on storage, otherwise
NULL
(does not include any underlying storage).
If the table stores data on disk, returns used space on disk (i.e. compressed).
If the table stores data in memory, returns the approximate number of used bytes in memory.
total_bytes_uncompressed
(
Nullable
(
UInt64
)) - Total number of uncompressed bytes (including indices and projections), if it's possible to quickly determine the exact number of bytes from the part checksums for the table on storage, otherwise
NULL
(does not take underlying storage (if any) into account).
lifetime_rows
(
Nullable
(
UInt64
)) - Total number of rows INSERTed since server start (only for
Buffer
tables).
lifetime_bytes
(
Nullable
(
UInt64
)) - Total number of bytes INSERTed since server start (only for
Buffer
tables).
comment
(
String
) - The comment for the table.
has_own_data
(
UInt8
) β Flag that indicates whether the table itself stores some data on disk or only accesses some other source.
loading_dependencies_database
(
Array
(
String
)) - Database loading dependencies (list of objects which should be loaded before the current object).
loading_dependencies_table
(
Array
(
String
)) - Table loading dependencies (list of objects which should be loaded before the current object).
loading_dependent_database
(
Array
(
String
)) - Dependent loading database.
loading_dependent_table
(
Array
(
String
)) - Dependent loading table.
The
system.tables
table is used in
SHOW TABLES
query implementation.
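As an illustration of how these columns are typically combined (a sketch; excluding the system database is an arbitrary choice), the following lists the largest user tables by on-disk size:

```sql
SELECT database, name, formatReadableSize(total_bytes) AS size
FROM system.tables
WHERE total_bytes IS NOT NULL AND database != 'system'
ORDER BY total_bytes DESC
LIMIT 10
```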
Example
sql
SELECT * FROM system.tables LIMIT 2 FORMAT Vertical;
```text
Row 1:
ββββββ
database: base
name: t1
uuid: 81b1c20a-b7c6-4116-a2ce-7583fb6b6736
engine: MergeTree
is_temporary: 0
data_paths: ['/var/lib/clickhouse/store/81b/81b1c20a-b7c6-4116-a2ce-7583fb6b6736/']
metadata_path: /var/lib/clickhouse/store/461/461cf698-fd0b-406d-8c01-5d8fd5748a91/t1.sql
metadata_modification_time: 2021-01-25 19:14:32
dependencies_database: []
dependencies_table: []
create_table_query: CREATE TABLE base.t1 (`n` UInt64) ENGINE = MergeTree ORDER BY n SETTINGS index_granularity = 8192
engine_full: MergeTree ORDER BY n SETTINGS index_granularity = 8192
as_select: SELECT database AS table_catalog
partition_key:
sorting_key: n
primary_key: n
sampling_key:
storage_policy: default
total_rows: 1
total_bytes: 99
lifetime_rows: ᴺᵁᴸᴸ
lifetime_bytes: ᴺᵁᴸᴸ
comment:
has_own_data: 0
loading_dependencies_database: []
loading_dependencies_table: []
loading_dependent_database: []
loading_dependent_table: []
Row 2:
ββββββ
database: default
name: 53r93yleapyears
uuid: 00000000-0000-0000-0000-000000000000
engine: MergeTree
is_temporary: 0
data_paths: ['/var/lib/clickhouse/data/default/53r93yleapyears/']
metadata_path: /var/lib/clickhouse/metadata/default/53r93yleapyears.sql
metadata_modification_time: 2020-09-23 09:05:36
dependencies_database: []
dependencies_table: []
create_table_query: CREATE TABLE default.`53r93yleapyears` (`id` Int8, `febdays` Int8) ENGINE = MergeTree ORDER BY id SETTINGS index_granularity = 8192
engine_full: MergeTree ORDER BY id SETTINGS index_granularity = 8192
as_select: SELECT name AS catalog_name
partition_key:
sorting_key: id
primary_key: id
sampling_key:
storage_policy: default
total_rows: 2
total_bytes: 155
lifetime_rows: ᴺᵁᴸᴸ
lifetime_bytes: ᴺᵁᴸᴸ
comment:
has_own_data: 0
loading_dependencies_database: []
loading_dependencies_table: []
loading_dependent_database: []
loading_dependent_table: []
```
description: 'System table containing information about resources residing on the
local server with one row for every resource.'
keywords: ['system table', 'resources']
slug: /operations/system-tables/resources
title: 'system.resources'
doc_type: 'reference'
system.resources
Contains information about
resources
residing on the local server. The table contains a row for every resource.
Example:
sql
SELECT *
FROM system.resources
FORMAT Vertical
```text
Row 1:
ββββββ
name: io_read
read_disks: ['s3']
write_disks: []
create_query: CREATE RESOURCE io_read (READ DISK s3)
Row 2:
ββββββ
name: io_write
read_disks: []
write_disks: ['s3']
create_query: CREATE RESOURCE io_write (WRITE DISK s3)
```
Columns:
name
(
String
) β The name of the resource.
read_disks
(
Array(String)
) — The list of disk names that use this resource for read operations.
write_disks
(
Array(String)
) — The list of disk names that use this resource for write operations.
unit
(
String
) β Resource unit used for cost measurements.
create_query
(
String
) — CREATE query of the resource.
description: 'System table containing profiling information on the processors level
(which can be found in
EXPLAIN PIPELINE
)'
keywords: ['system table', 'processors_profile_log', 'EXPLAIN PIPELINE']
slug: /operations/system-tables/processors_profile_log
title: 'system.processors_profile_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.processors_profile_log
This table contains profiling on processors level (that you can find in
EXPLAIN PIPELINE
).
Columns:
hostname
(
LowCardinality(String)
) β Hostname of the server executing the query.
event_date
(
Date
) β The date when the event happened.
event_time
(
DateTime
) β The date and time when the event happened.
event_time_microseconds
(
DateTime64
) β The date and time with microseconds precision when the event happened.
id
(
UInt64
) β ID of processor
parent_ids
(
Array(UInt64)
) β Parent processors IDs
plan_step
(
UInt64
) β ID of the query plan step which created this processor. The value is zero if the processor was not added from any step.
plan_group
(
UInt64
) — Group of the processor if it was created by a query plan step. A group is a logical partitioning of processors added from the same query plan step. It is used only for beautifying the output of EXPLAIN PIPELINE.
initial_query_id
(
String
) β ID of the initial query (for distributed query execution).
query_id
(
String
) β ID of the query
name
(
LowCardinality(String)
) β Name of the processor.
elapsed_us
(
UInt64
) β Number of microseconds this processor was executed.
input_wait_elapsed_us
(
UInt64
) β Number of microseconds this processor was waiting for data (from other processor).
output_wait_elapsed_us
(
UInt64
) β Number of microseconds this processor was waiting because output port was full.
input_rows
(
UInt64
) β The number of rows consumed by processor.
input_bytes
(
UInt64
) β The number of bytes consumed by processor.
output_rows
(
UInt64
) β The number of rows generated by processor.
output_bytes
(
UInt64
) β The number of bytes generated by processor.
Example
Query:
```sql
EXPLAIN PIPELINE
SELECT sleep(1)
┌─explain─────────────────────────┐
│ (Expression)                    │
│ ExpressionTransform             │
│   (SettingQuotaAndLimits)       │
│     (ReadFromStorage)           │
│     SourceFromSingleChunk 0 → 1 │
└─────────────────────────────────┘
SELECT sleep(1)
SETTINGS log_processors_profiles = 1
Query id: feb5ed16-1c24-4227-aa54-78c02b3b27d4
┌─sleep(1)─┐
│        0 │
└──────────┘
1 rows in set. Elapsed: 1.018 sec.
SELECT
name,
elapsed_us,
input_wait_elapsed_us,
output_wait_elapsed_us
FROM system.processors_profile_log
WHERE query_id = 'feb5ed16-1c24-4227-aa54-78c02b3b27d4'
ORDER BY name ASC
```
Result:
text
┌─name────────────────────┬─elapsed_us─┬─input_wait_elapsed_us─┬─output_wait_elapsed_us─┐
│ ExpressionTransform     │    1000497 │                  2823 │                    197 │
│ LazyOutputFormat        │         36 │               1002188 │                      0 │
│ LimitsCheckingTransform │         10 │               1002994 │                    106 │
│ NullSource              │          5 │               1002074 │                      0 │
│ NullSource              │          1 │               1002084 │                      0 │
│ SourceFromSingleChunk   │         45 │                  4736 │                1000819 │
└─────────────────────────┴────────────┴───────────────────────┴────────────────────────┘
Here you can see:
ExpressionTransform
was executing the
sleep(1)
function, so its work takes about 1e6 microseconds, and therefore
elapsed_us
> 1e6.
SourceFromSingleChunk
needs to wait, because
ExpressionTransform
does not accept any data while
sleep(1)
is executing, so it stays in the
PortFull
state for about 1e6 microseconds, and therefore
output_wait_elapsed_us
> 1e6.
LimitsCheckingTransform
/
NullSource
/
LazyOutputFormat
need to wait until
ExpressionTransform
finishes executing
sleep(1)
before they can process the result, and therefore
input_wait_elapsed_us
> 1e6.
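To apply the same analysis to other queries, a sketch like the following aggregates processor time per processor name (substitute your own query ID):

```sql
SELECT
    name,
    sum(elapsed_us) AS total_elapsed_us,
    sum(input_wait_elapsed_us) AS total_input_wait_us
FROM system.processors_profile_log
WHERE query_id = 'feb5ed16-1c24-4227-aa54-78c02b3b27d4' -- your query ID here
GROUP BY name
ORDER BY total_elapsed_us DESC
```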
See Also
EXPLAIN PIPELINE
description: 'System table containing the role grants for users and roles.'
keywords: ['system table', 'role_grants']
slug: /operations/system-tables/role_grants
title: 'system.role_grants'
doc_type: 'reference'
system.role_grants
Contains the role grants for users and roles. To add entries to this table, use
GRANT role TO user
.
Columns:
user_name
(
Nullable
(
String
)) β User name.
role_name
(
Nullable
(
String
)) β Role name.
granted_role_name
(
String
) β Name of role granted to the
role_name
role. To grant one role to another one use
GRANT role1 TO role2
.
granted_role_is_default
(
UInt8
) β Flag that shows whether
granted_role
is a default role. Possible values:
1 β
granted_role
is a default role.
0 β
granted_role
is not a default role.
with_admin_option
(
UInt8
) β Flag that shows whether
granted_role
is a role with
ADMIN OPTION
privilege. Possible values:
1 β The role has
ADMIN OPTION
privilege.
0 β The role without
ADMIN OPTION
privilege.
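A minimal usage sketch, listing everything granted to a single user (the user name test_user is hypothetical):

```sql
SELECT granted_role_name, granted_role_is_default, with_admin_option
FROM system.role_grants
WHERE user_name = 'test_user'
```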
description: 'System table containing information about parts of MergeTree'
keywords: ['system table', 'parts']
slug: /operations/system-tables/parts
title: 'system.parts'
doc_type: 'reference'
system.parts
Contains information about parts of
MergeTree
tables.
Each row describes one data part.
Columns:
partition
(
String
) β The partition name. To learn what a partition is, see the description of the
ALTER
query.
Formats:
YYYYMM
for automatic partitioning by month.
any_string
when partitioning manually.
name
(
String
) β Name of the data part. The part naming structure can be used to determine many aspects of the data, ingest, and merge patterns. The part naming format is the following:
text
<partition_id>_<minimum_block_number>_<maximum_block_number>_<level>_<data_version>
Definitions:
partition_id
- identifies the partition key
minimum_block_number
- identifies the minimum block number in the part. ClickHouse always merges continuous blocks
maximum_block_number
- identifies the maximum block number in the part
level
- incremented by one with each additional merge on the part. A level of 0 indicates this is a new part that has not been merged. It is important to remember that all parts in ClickHouse are always immutable
data_version
- optional value, incremented when a part is mutated (again, mutated data is always only written to a new part, since parts are immutable)
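As an illustration with a hypothetical part name: in 202301_1_96_4, 202301 is the partition_id (a month), 1 and 96 are the minimum and maximum block numbers the part covers, and 4 is the level, i.e. the part is the result of several successive merges. A mutated part would carry an additional data_version suffix, e.g. 202301_1_96_4_102.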
uuid
(
UUID
) - The UUID of data part.
part_type
(
String
) β The data part storing format.
Possible Values:
Wide
β Each column is stored in a separate file in a filesystem.
Compact
β All columns are stored in one file in a filesystem.
Data storing format is controlled by the
min_bytes_for_wide_part
and
min_rows_for_wide_part
settings of the
MergeTree
table.
active
(
UInt8
) β Flag that indicates whether the data part is active. If a data part is active, it's used in a table. Otherwise, it's deleted. Inactive data parts remain after merging.
marks
(
UInt64
) β The number of marks. To get the approximate number of rows in a data part, multiply
marks
by the index granularity (usually 8192) (this hint does not work for adaptive granularity).
rows
(
UInt64
) β The number of rows.
bytes_on_disk
(
UInt64
) β Total size of all the data part files in bytes.
data_compressed_bytes
(
UInt64
) β Total size of compressed data in the data part. All the auxiliary files (for example, files with marks) are not included.
data_uncompressed_bytes
(
UInt64
) β Total size of uncompressed data in the data part. All the auxiliary files (for example, files with marks) are not included.
primary_key_size
(
UInt64
) β The amount of memory (in bytes) used by primary key values in the primary.idx/cidx file on disk.
marks_bytes
(
UInt64
) — The size of the file with marks.
secondary_indices_compressed_bytes
(
UInt64
) β Total size of compressed data for secondary indices in the data part. All the auxiliary files (for example, files with marks) are not included.
secondary_indices_uncompressed_bytes
(
UInt64
) β Total size of uncompressed data for secondary indices in the data part. All the auxiliary files (for example, files with marks) are not included.
secondary_indices_marks_bytes
(
UInt64
) β The size of the file with marks for secondary indices.
modification_time
(
DateTime
) β The time the directory with the data part was modified. This usually corresponds to the time of data part creation.
remove_time
(
DateTime
) β The time when the data part became inactive.
refcount
(
UInt32
) β The number of places where the data part is used. A value greater than 2 indicates that the data part is used in queries or merges.
min_date
(
Date
) β The minimum value of the date key in the data part.
max_date
(
Date
) β The maximum value of the date key in the data part.
min_time
(
DateTime
) β The minimum value of the date and time key in the data part.
max_time
(
DateTime
) β The maximum value of the date and time key in the data part.
partition_id
(
String
) β ID of the partition.
min_block_number
(
UInt64
) β The minimum data block number that makes up the current part after merging.
max_block_number
(
UInt64
) β The maximum data block number that makes up the current part after merging.
level
(
UInt32
) β Depth of the merge tree. Zero means that the current part was created by insert rather than by merging other parts.
data_version
(
UInt64
) β Number that is used to determine which mutations should be applied to the data part (mutations with a version higher than
data_version
).
primary_key_bytes_in_memory
(
UInt64
) β The amount of memory (in bytes) used by primary key values (will be
0
in case of
primary_key_lazy_load=1
and
use_primary_key_cache=1
).
primary_key_bytes_in_memory_allocated
(
UInt64
) β The amount of memory (in bytes) reserved for primary key values (will be
0
in case of
primary_key_lazy_load=1
and
use_primary_key_cache=1
).
is_frozen
(
UInt8
) β Flag that shows that a partition data backup exists. 1, the backup exists. 0, the backup does not exist. For more details, see
FREEZE PARTITION
database
(
String
) β Name of the database.
table
(
String
) β Name of the table.
engine
(
String
) β Name of the table engine without parameters.
path
(
String
) β Absolute path to the folder with data part files.
disk_name
(
String
) β Name of a disk that stores the data part.
hash_of_all_files
(
String
) β
sipHash128
of compressed files.
hash_of_uncompressed_files
(
String
) β
sipHash128
of uncompressed files (files with marks, index file etc.).
uncompressed_hash_of_compressed_files
(
String
) β
sipHash128
of data in the compressed files as if they were uncompressed.
delete_ttl_info_min
(
DateTime
) β The minimum value of the date and time key for
TTL DELETE rule
.
delete_ttl_info_max
(
DateTime
) β The maximum value of the date and time key for
TTL DELETE rule
.
move_ttl_info.expression
(
Array
(
String
)) β Array of expressions. Each expression defines a
TTL MOVE rule
.
:::note
The
move_ttl_info.expression
array is kept mostly for backward compatibility, now the simplest way to check
TTL MOVE
rule is to use the
move_ttl_info.min
and
move_ttl_info.max
fields.
:::
move_ttl_info.min
(
Array
(
DateTime
)) β Array of date and time values. Each element describes the minimum key value for a
TTL MOVE rule
.
move_ttl_info.max
(
Array
(
DateTime
)) β Array of date and time values. Each element describes the maximum key value for a
TTL MOVE rule
.
bytes
(
UInt64
) β Alias for
bytes_on_disk
.
marks_size
(
UInt64
) β Alias for
marks_bytes
.
Example
sql
SELECT * FROM system.parts LIMIT 1 FORMAT Vertical;
text
Row 1:
ββββββ
partition: tuple()
name: all_1_4_1_6
part_type: Wide
active: 1
marks: 2
rows: 6
bytes_on_disk: 310
data_compressed_bytes: 157
data_uncompressed_bytes: 91
secondary_indices_compressed_bytes: 58
secondary_indices_uncompressed_bytes: 6
secondary_indices_marks_bytes: 48
marks_bytes: 144
modification_time: 2020-06-18 13:01:49
remove_time: 1970-01-01 00:00:00
refcount: 1
min_date: 1970-01-01
max_date: 1970-01-01
min_time: 1970-01-01 00:00:00
max_time: 1970-01-01 00:00:00
partition_id: all
min_block_number: 1
max_block_number: 4
level: 1
data_version: 6
primary_key_bytes_in_memory: 8
primary_key_bytes_in_memory_allocated: 64
is_frozen: 0
database: default
table: months
engine: MergeTree
disk_name: default
path: /var/lib/clickhouse/data/default/months/all_1_4_1_6/
hash_of_all_files: 2d0657a16d9430824d35e327fcbd87bf
hash_of_uncompressed_files: 84950cc30ba867c77a408ae21332ba29
uncompressed_hash_of_compressed_files: 1ad78f1c6843bbfb99a2c931abe7df7d
delete_ttl_info_min: 1970-01-01 00:00:00
delete_ttl_info_max: 1970-01-01 00:00:00
move_ttl_info.expression: []
move_ttl_info.min: []
move_ttl_info.max: []
See Also
MergeTree family
TTL for Columns and Tables
description: 'System table containing information about the dependent views executed
when running a query, for example, the view type or the execution time.'
keywords: ['system table', 'query_views_log']
slug: /operations/system-tables/query_views_log
title: 'system.query_views_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.query_views_log
Contains information about the dependent views executed when running a query, for example, the view type or the execution time.
To start logging:
Configure parameters in the
query_views_log
section.
Set
log_query_views
to 1.
The flushing period of data is set in
flush_interval_milliseconds
parameter of the
query_views_log
server settings section. To force flushing, use the
SYSTEM FLUSH LOGS
query.
ClickHouse does not delete data from the table automatically. See
Introduction
for more details.
You can use the
log_queries_probability
setting to reduce the number of queries registered in the
query_views_log
table.
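A minimal sketch of enabling this per session and flushing the log before inspecting it:

```sql
SET log_query_views = 1;
-- ... run INSERTs that push data through materialized views ...
SYSTEM FLUSH LOGS;
SELECT view_name, view_duration_ms
FROM system.query_views_log
ORDER BY event_time DESC
LIMIT 10;
```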
Columns:
hostname
(
LowCardinality(String)
) β Hostname of the server executing the query.
event_date
(
Date
) β The date when the last event of the view happened.
event_time
(
DateTime
) β The date and time when the view finished execution.
event_time_microseconds
(
DateTime
) β The date and time when the view finished execution with microseconds precision.
view_duration_ms
(
UInt64
) β Duration of view execution (sum of its stages) in milliseconds.
initial_query_id
(
String
) β ID of the initial query (for distributed query execution).
view_name
(
String
) β Name of the view.
view_uuid
(
UUID
) β UUID of the view.
view_type
(
Enum8
) β Type of the view. Values:
'Default' = 1
β
Default views
. Should not appear in this log.
'Materialized' = 2
β
Materialized views
.
'Live' = 3
β
Live views
.
view_query
(
String
) β The query executed by the view.
view_target
(
String
) β The name of the view target table.
read_rows
(
UInt64
) β Number of read rows.
read_bytes
(
UInt64
) β Number of read bytes.
written_rows
(
UInt64
) β Number of written rows.
written_bytes
(
UInt64
) β Number of written bytes.
peak_memory_usage
(
Int64
) β The maximum difference between the amount of allocated and freed memory in context of this view.
ProfileEvents
(
Map(String, UInt64)
) β ProfileEvents that measure different metrics. The description of them could be found in the table
system.events
.
status
(
Enum8
) β Status of the view. Values:
'QueryStart' = 1
β Successful start the view execution. Should not appear.
'QueryFinish' = 2
β Successful end of the view execution.
'ExceptionBeforeStart' = 3
β Exception before the start of the view execution.
'ExceptionWhileProcessing' = 4
β Exception during the view execution.
exception_code
(
Int32
) β Code of an exception.
exception
(
String
) — Exception message.
stack_trace
(
String
) β
Stack trace
. An empty string, if the query was completed successfully.
Example
Query:
sql
SELECT * FROM system.query_views_log LIMIT 1 \G;
Result:
text
Row 1:
ββββββ
hostname: clickhouse.eu-central1.internal
event_date: 2021-06-22
event_time: 2021-06-22 13:23:07
event_time_microseconds: 2021-06-22 13:23:07.738221
view_duration_ms: 0
initial_query_id: c3a1ac02-9cad-479b-af54-9e9c0a7afd70
view_name: default.matview_inner
view_uuid: 00000000-0000-0000-0000-000000000000
view_type: Materialized
view_query: SELECT * FROM default.table_b
view_target: default.`.inner.matview_inner`
read_rows: 4
read_bytes: 64
written_rows: 2
written_bytes: 32
peak_memory_usage: 4196188
ProfileEvents: {'FileOpen':2,'WriteBufferFromFileDescriptorWrite':2,'WriteBufferFromFileDescriptorWriteBytes':187,'IOBufferAllocs':3,'IOBufferAllocBytes':3145773,'FunctionExecute':3,'DiskWriteElapsedMicroseconds':13,'InsertedRows':2,'InsertedBytes':16,'SelectedRows':4,'SelectedBytes':48,'ContextLock':16,'RWLockAcquiredReadLocks':1,'RealTimeMicroseconds':698,'SoftPageFaults':4,'OSReadChars':463}
status: QueryFinish
exception_code: 0
exception:
stack_trace:
See Also
description: 'System table containing logging entries with information about various
blob storage operations such as uploads and deletes.'
keywords: ['system table', 'blob_storage_log']
slug: /operations/system-tables/blob_storage_log
title: 'system.blob_storage_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains logging entries with information about various blob storage operations such as uploads and deletes.
Columns:
hostname
(
LowCardinality(String)
) β Hostname of the server executing the query.
event_date
(
Date
) β Date of the event.
event_time
(
DateTime
) β Time of the event.
event_time_microseconds
(
DateTime64
) β Time of the event with microseconds precision.
event_type
(
Enum8
) β Type of the event. Possible values:
'Upload'
'Delete'
'MultiPartUploadCreate'
'MultiPartUploadWrite'
'MultiPartUploadComplete'
'MultiPartUploadAbort'
query_id
(
String
) β Identifier of the query associated with the event, if any.
thread_id
(
UInt64
) β Identifier of the thread performing the operation.
thread_name
(
String
) β Name of the thread performing the operation.
disk_name
(
LowCardinality(String)
) β Name of the associated disk.
bucket
(
String
) β Name of the bucket.
remote_path
(
String
) β Path to the remote resource.
local_path
(
String
) β Path to the metadata file on the local system, which references the remote resource.
data_size
(
UInt32
) β Size of the data involved in the upload event.
error
(
String
) β Error message associated with the event, if any.
Example
Suppose a blob storage operation uploads a file, and an event is logged:
sql
SELECT * FROM system.blob_storage_log WHERE query_id = '7afe0450-504d-4e4b-9a80-cd9826047972' ORDER BY event_date, event_time_microseconds \G
text
Row 1:
ββββββ
hostname: clickhouse.eu-central1.internal
event_date: 2023-10-31
event_time: 2023-10-31 16:03:40
event_time_microseconds: 2023-10-31 16:03:40.481437
event_type: Upload
query_id: 7afe0450-504d-4e4b-9a80-cd9826047972
thread_id: 2381740
disk_name: disk_s3
bucket: bucket1
remote_path: rrr/kxo/tbnqtrghgtnxkzgtcrlutwuslgawe
local_path: store/654/6549e8b3-d753-4447-8047-d462df6e6dbe/tmp_insert_all_1_1_0/checksums.txt
data_size: 259
error:
In this example, the upload operation was associated with the
INSERT
query with ID
7afe0450-504d-4e4b-9a80-cd9826047972
. The local metadata file
store/654/6549e8b3-d753-4447-8047-d462df6e6dbe/tmp_insert_all_1_1_0/checksums.txt
refers to remote path
rrr/kxo/tbnqtrghgtnxkzgtcrlutwuslgawe
in bucket
bucket1
on disk
disk_s3
, with a size of 259 bytes.
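For monitoring, a sketch along these lines (using only the columns above) summarizes recent operations and failures per event type:

```sql
SELECT
    event_type,
    count() AS operations,
    countIf(error != '') AS failed
FROM system.blob_storage_log
WHERE event_date >= today() - 1
GROUP BY event_type
ORDER BY operations DESC
```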
See Also
External Disks for Storing Data
description: 'System table containing information about storage policies and volumes
which are defined in server configuration.'
keywords: ['system table', 'storage_policies']
slug: /operations/system-tables/storage_policies
title: 'system.storage_policies'
doc_type: 'reference'
system.storage_policies
Contains information about storage policies and volumes which are defined in
server configuration
.
Columns:
policy_name
(
String
) β Name of the storage policy.
volume_name
(
String
) β Volume name defined in the storage policy.
volume_priority
(
UInt64
) β Volume order number in the configuration, the data fills the volumes according this priority, i.e. data during inserts and merges is written to volumes with a lower priority (taking into account other rules: TTL,
max_data_part_size
,
move_factor
).
disks
(
Array(String)
) β Disk names, defined in the storage policy.
volume_type
(
Enum8
) β Type of volume. Can have one of the following values:
JBOD
SINGLE_DISK
UNKNOWN
max_data_part_size
(
UInt64
) β Maximum size of a data part that can be stored on volume disks (0 β no limit).
move_factor
(
Float64
) — Ratio of free disk space. When the ratio exceeds the value of the configuration parameter, ClickHouse starts to move data to the next volume in order.
prefer_not_to_merge
(
UInt8
) β Value of the
prefer_not_to_merge
setting. It should always be false; enabling it indicates a misconfiguration.
perform_ttl_move_on_insert
(
UInt8
) β Value of the
perform_ttl_move_on_insert
setting. Disables TTL moves on data part INSERT. By default, if we insert a data part that has already expired according to the TTL move rule, it immediately goes to the volume/disk declared in the move rule. This can significantly slow down inserts if the destination volume/disk is slow (e.g. S3).
load_balancing
(
Enum8
) β Policy for disk balancing. Can have one of the following values:
ROUND_ROBIN
LEAST_USED
If the storage policy contains more than one volume, then information for each volume is stored in an individual row of the table.
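A minimal sketch, showing the volumes of every policy in priority order:

```sql
SELECT policy_name, volume_name, volume_priority, disks, max_data_part_size, move_factor
FROM system.storage_policies
ORDER BY policy_name, volume_priority
```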
description: 'System table containing information about existing data skipping indices
in all the tables.'
keywords: ['system table', 'data_skipping_indices']
slug: /operations/system-tables/data_skipping_indices
title: 'system.data_skipping_indices'
doc_type: 'reference'
Contains information about existing data skipping indices in all the tables.
Columns:
database
(
String
) β Database name.
table
(
String
) β Table name.
name
(
String
) β Index name.
type
(
String
) β Index type.
type_full
(
String
) β Index type expression from create statement.
expr
(
String
) β Expression for the index calculation.
granularity
(
UInt64
) β The number of granules in the block.
data_compressed_bytes
(
UInt64
) β The size of compressed data, in bytes.
data_uncompressed_bytes
(
UInt64
) β The size of decompressed data, in bytes.
marks_bytes
(
UInt64
) β The size of marks, in bytes.
Example
sql
SELECT * FROM system.data_skipping_indices LIMIT 2 FORMAT Vertical;
```text
Row 1:
ββββββ
database: default
table: user_actions
name: clicks_idx
type: minmax
type_full: minmax
expr: clicks
granularity: 1
data_compressed_bytes: 58
data_uncompressed_bytes: 6
marks_bytes: 48
Row 2:
ββββββ
database: default
table: users
name: contacts_null_idx
type: minmax
type_full: minmax
expr: assumeNotNull(contacts_null)
granularity: 1
data_compressed_bytes: 58
data_uncompressed_bytes: 6
marks_bytes: 48
```
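To gauge the on-disk footprint of skipping indices, a sketch like the following sums their sizes per table:

```sql
SELECT
    database,
    table,
    formatReadableSize(sum(data_compressed_bytes)) AS compressed,
    formatReadableSize(sum(marks_bytes)) AS marks
FROM system.data_skipping_indices
GROUP BY database, table
ORDER BY sum(data_compressed_bytes) DESC
```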
description: 'System table containing information about session settings for current
user.'
keywords: ['system table', 'settings']
slug: /operations/system-tables/settings
title: 'system.settings'
doc_type: 'reference'
system.settings
Contains information about session settings for current user.
Columns:
name
(
String
) β Setting name.
value
(
String
) β Setting value.
changed
(
UInt8
) β Shows whether the setting was explicitly defined in the config or explicitly changed.
description
(
String
) β Short setting description.
min
(
Nullable
(
String
)) β Minimum value of the setting, if any is set via
constraints
. If the setting has no minimum value, contains
NULL
.
max
(
Nullable
(
String
)) β Maximum value of the setting, if any is set via
constraints
. If the setting has no maximum value, contains
NULL
.
readonly
(
UInt8
) β Shows whether the current user can change the setting:
0
β Current user can change the setting.
1
β Current user can't change the setting.
default
(
String
) β Setting default value.
is_obsolete
(
UInt8
) - Shows whether a setting is obsolete.
tier
(
Enum8
) β Support level for this feature. ClickHouse features are organized in tiers, varying depending on the current status of their development and the expectations one might have when using them. Values:
'Production'
β The feature is stable, safe to use and does not have issues interacting with other
production
features.
'Beta'
β The feature is stable and safe. The outcome of using it together with other features is unknown and correctness is not guaranteed. Testing and reports are welcome.
'Experimental'
β The feature is under development. Only intended for developers and ClickHouse enthusiasts. The feature might or might not work and could be removed at any time.
'Obsolete'
β No longer supported. Either it is already removed or it will be removed in future releases.
Example
The following example shows how to get information about settings which name contains
min_i
.
sql
SELECT *
FROM system.settings
WHERE name LIKE '%min_insert_block_size_%'
FORMAT Vertical
```text
Row 1:
ββββββ
name: min_insert_block_size_rows
value: 1048449
changed: 0
description: Sets the minimum number of rows in the block that can be inserted into a table by an INSERT query. Smaller-sized blocks are squashed into bigger ones.
Possible values:
Positive integer.
0 — Squashing disabled.
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
default: 1048449
alias_for:
is_obsolete: 0
tier: Production
Row 2:
ββββββ
name: min_insert_block_size_bytes
value: 268402944
changed: 0
description: Sets the minimum number of bytes in the block which can be inserted into a table by an
INSERT
query. Smaller-sized blocks are squashed into bigger ones.
Possible values:
Positive integer.
0 — Squashing disabled.
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
default: 268402944
alias_for:
is_obsolete: 0
tier: Production
Row 3:
ββββββ
name: min_insert_block_size_rows_for_materialized_views
value: 0
changed: 0
description: Sets the minimum number of rows in the block which can be inserted into a table by an
INSERT
query. Smaller-sized blocks are squashed into bigger ones. This setting is applied only for blocks inserted into
materialized view
. By adjusting this setting, you control blocks squashing while pushing to materialized view and avoid excessive memory usage.
Possible values:
Any positive integer.
0 β Squashing disabled.
See Also
min_insert_block_size_rows
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
default: 0
alias_for:
is_obsolete: 0
tier: Production
Row 4:
ββββββ
name: min_insert_block_size_bytes_for_materialized_views
value: 0
changed: 0
description: Sets the minimum number of bytes in the block which can be inserted into a table by an
INSERT
query. Smaller-sized blocks are squashed into bigger ones. This setting is applied only for blocks inserted into
materialized view
. By adjusting this setting, you control blocks squashing while pushing to materialized view and avoid excessive memory usage.
Possible values:
Any positive integer.
0 β Squashing disabled.
See also
min_insert_block_size_bytes
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
default: 0
alias_for:
is_obsolete: 0
tier: Production
```
Using
WHERE changed
can be useful, for example, when you want to check:
Whether settings in configuration files are loaded correctly and are in use.
Settings that changed in the current session.
sql
SELECT * FROM system.settings WHERE changed AND name='load_balancing'
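The tier column above supports a similar audit. A sketch that lists explicitly changed settings that are not yet production-grade:

```sql
SELECT name, value, tier
FROM system.settings
WHERE changed AND tier != 'Production'
```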
See also
Settings
Permissions for Queries
Constraints on Settings
SHOW SETTINGS
statement
description: 'Overview of what system tables are and why they are useful.'
keywords: ['system tables', 'overview']
sidebar_label: 'Overview'
sidebar_position: 52
slug: /operations/system-tables/overview
title: 'System Tables Overview'
doc_type: 'reference'
System tables overview {#system-tables-introduction}
System tables provide information about:
Server states, processes, and environment.
Server's internal processes.
Options used when the ClickHouse binary was built.
System tables:
Located in the
system
database.
Available only for reading data.
Can't be dropped or altered, but can be detached.
Most of the system tables store their data in RAM. A ClickHouse server creates such system tables at the start.
Unlike other system tables, the system log tables
metric_log
,
query_log
,
query_thread_log
,
trace_log
,
part_log
,
crash_log
,
text_log
and
backup_log
are served by
MergeTree
table engine and store their data in a filesystem by default. If you remove such a table from the filesystem, the ClickHouse server creates an empty one again at the time of the next data write. If the schema of a system table changed in a new release, then ClickHouse renames the current table and creates a new one.
System log tables can be customized by creating a config file with the same name as the table under
/etc/clickhouse-server/config.d/
, or setting corresponding elements in
/etc/clickhouse-server/config.xml
. The elements that can be customized are:
database
: database the system log table belongs to. This option is deprecated now. All system log tables are under database
system
.
table
: table to insert data.
partition_by
: specify
PARTITION BY
expression.
ttl
: specify table
TTL
expression.
flush_interval_milliseconds
: interval of flushing data to disk.
engine
: provide full engine expression (starting with
ENGINE =
) with parameters. This option conflicts with
partition_by
and
ttl
. If set together, the server will raise an exception and exit.
An example:
xml
<clickhouse>
<query_log>
<database>system</database>
<table>query_log</table>
<partition_by>toYYYYMM(event_date)</partition_by>
<ttl>event_date + INTERVAL 30 DAY DELETE</ttl>
<!--
<engine>ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024</engine>
-->
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
<max_size_rows>1048576</max_size_rows>
<reserved_size_rows>8192</reserved_size_rows>
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
<flush_on_crash>false</flush_on_crash>
</query_log>
</clickhouse>
By default, table growth is unlimited. To control the size of a table, you can use
TTL
settings for removing outdated log records. You can also use the partitioning feature of
MergeTree
-engine tables.
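For example, the retention of an existing log table can be adjusted with a regular ALTER statement. A sketch, assuming a 30-day window is desired; adjusting the server configuration as shown above is the more common approach:

```sql
ALTER TABLE system.query_log MODIFY TTL event_date + INTERVAL 30 DAY;
```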
Sources of System Metrics {#system-tables-sources-of-system-metrics}
For collecting system metrics ClickHouse server uses:
CAP_NET_ADMIN
capability.
procfs
(only in Linux).
procfs
If ClickHouse server does not have
CAP_NET_ADMIN
capability, it tries to fall back to
ProcfsMetricsProvider
.
ProcfsMetricsProvider
allows collecting per-query system metrics (for CPU and I/O).
If procfs is supported and enabled on the system, ClickHouse server collects these metrics:
OSCPUVirtualTimeMicroseconds
OSCPUWaitMicroseconds
OSIOWaitMicroseconds
OSReadChars
OSWriteChars
OSReadBytes
OSWriteBytes
:::note
OSIOWaitMicroseconds
is disabled by default in Linux kernels starting from 5.14.x.
You can enable it using
sudo sysctl kernel.task_delayacct=1
or by creating a
.conf
file in
/etc/sysctl.d/
with
kernel.task_delayacct = 1
:::
System tables in ClickHouse Cloud {#system-tables-in-clickhouse-cloud}
In ClickHouse Cloud, system tables provide critical insights into the state and performance of the service, just as they do in self-managed deployments. Some system tables operate at the cluster-wide level, especially those that derive their data from Keeper nodes, which manage distributed metadata. These tables reflect the collective state of the cluster and should be consistent when queried on individual nodes. For example, the
parts
should be consistent irrespective of the node it is queried from:
```sql
SELECT hostname(), count()
FROM system.parts
WHERE `table` = 'pypi'
┌─hostname()────────────────────┬─count()─┐
│ c-ecru-qn-34-server-vccsrty-0 │      26 │
└───────────────────────────────┴─────────┘
1 row in set. Elapsed: 0.005 sec.
SELECT
hostname(),
count()
FROM system.parts
WHERE `table` = 'pypi'
┌─hostname()────────────────────┬─count()─┐
│ c-ecru-qn-34-server-w59bfco-0 │      26 │
└───────────────────────────────┴─────────┘
1 row in set. Elapsed: 0.004 sec.
```
Conversely, other system tables are node-specific, e.g. held in memory or persisting their data using the MergeTree table engine. This is typical for data such as logs and metrics. This persistence ensures that historical data remains available for analysis. However, these node-specific tables are inherently unique to each node.
In general, the following rules can be applied when determining if a system table is node-specific:
System tables with a
_log
suffix.
System tables that expose metrics e.g.
metrics
,
asynchronous_metrics
,
events
.
System tables that expose ongoing processes e.g.
processes
,
merges
.
Additionally, new versions of system tables may be created as a result of upgrades or changes to their schema. These versions are named using a numerical suffix.
For example, consider the
system.query_log
tables, which contain a row for each query executed by the node:
```sql
SHOW TABLES FROM system LIKE 'query_log%'
``` | {"source_file": "overview.md"} |
cc3a3efe-feef-4784-8d0d-10df9052f5a7 | For example, consider the system.query_log tables, which contain a row for each query executed by the node:
```sql
SHOW TABLES FROM system LIKE 'query_log%'
┌─name─────────┐
│ query_log    │
│ query_log_1  │
│ query_log_10 │
│ query_log_2  │
│ query_log_3  │
│ query_log_4  │
│ query_log_5  │
│ query_log_6  │
│ query_log_7  │
│ query_log_8  │
│ query_log_9  │
└──────────────┘

11 rows in set. Elapsed: 0.004 sec.
```
Querying multiple versions {#querying-multiple-versions}
We can query across these tables using the merge function. For example, the query below identifies the latest query issued to the target node in each query_log table:
```sql
SELECT
_table,
max(event_time) AS most_recent
FROM merge('system', '^query_log')
GROUP BY _table
ORDER BY most_recent DESC
┌─_table───────┬─────────most_recent─┐
│ query_log    │ 2025-04-13 10:59:29 │
│ query_log_1  │ 2025-04-09 12:34:46 │
│ query_log_2  │ 2025-04-09 12:33:45 │
│ query_log_3  │ 2025-04-07 17:10:34 │
│ query_log_5  │ 2025-03-24 09:39:39 │
│ query_log_4  │ 2025-03-24 09:38:58 │
│ query_log_6  │ 2025-03-19 16:07:41 │
│ query_log_7  │ 2025-03-18 17:01:07 │
│ query_log_8  │ 2025-03-18 14:36:07 │
│ query_log_10 │ 2025-03-18 14:01:33 │
│ query_log_9  │ 2025-03-18 14:01:32 │
└──────────────┴─────────────────────┘
11 rows in set. Elapsed: 0.373 sec. Processed 6.44 million rows, 25.77 MB (17.29 million rows/s., 69.17 MB/s.)
Peak memory usage: 28.45 MiB.
```
:::note Don't rely on the numerical suffix for ordering
While the numeric suffix on tables can suggest the order of data, it should never be relied upon. For this reason, always use the merge table function combined with a date filter when targeting specific date ranges.
:::
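A minimal sketch of that pattern, filtering the merged tables by a recent date window instead of relying on a suffix:
```sql
SELECT count()
FROM merge('system', '^query_log')
WHERE event_date >= today() - 7
```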
Importantly, these tables are still local to each node.
Querying across nodes {#querying-across-nodes}
To comprehensively view the entire cluster, users can leverage the clusterAllReplicas function in combination with the merge function. The clusterAllReplicas function allows querying system tables across all replicas within the "default" cluster, consolidating node-specific data into a unified result. When combined with the merge function this can be used to target all system data for a specific table in a cluster.
This approach is particularly valuable for monitoring and debugging cluster-wide operations, ensuring users can effectively analyze the health and performance of their ClickHouse Cloud deployment.
:::note
ClickHouse Cloud provides clusters of multiple replicas for redundancy and failover. This enables its features, such as dynamic autoscaling and zero-downtime upgrades. At a certain moment in time, new nodes could be in the process of being added to the cluster or removed from the cluster. To skip these nodes, add SETTINGS skip_unavailable_shards = 1 to queries using clusterAllReplicas as shown below.
:::
For example, consider the difference when querying the query_log table - often essential to analysis. | {"source_file": "overview.md"} |
925211d6-e468-4850-9185-4c8304699d2b | For example, consider the difference when querying the query_log table - often essential to analysis.
```sql
SELECT
hostname() AS host,
count()
FROM system.query_log
WHERE (event_time >= '2025-04-01 00:00:00') AND (event_time <= '2025-04-12 00:00:00')
GROUP BY host
┌─host──────────────────────────┬─count()─┐
│ c-ecru-qn-34-server-s5bnysl-0 │  650543 │
└───────────────────────────────┴─────────┘
1 row in set. Elapsed: 0.010 sec. Processed 17.87 thousand rows, 71.51 KB (1.75 million rows/s., 7.01 MB/s.)
SELECT
hostname() AS host,
count()
FROM clusterAllReplicas('default', system.query_log)
WHERE (event_time >= '2025-04-01 00:00:00') AND (event_time <= '2025-04-12 00:00:00')
GROUP BY host SETTINGS skip_unavailable_shards = 1
┌─host──────────────────────────┬─count()─┐
│ c-ecru-qn-34-server-s5bnysl-0 │  650543 │
│ c-ecru-qn-34-server-6em4y4t-0 │  656029 │
│ c-ecru-qn-34-server-iejrkg0-0 │  641155 │
└───────────────────────────────┴─────────┘
3 rows in set. Elapsed: 0.026 sec. Processed 1.97 million rows, 7.88 MB (75.51 million rows/s., 302.05 MB/s.)
```
Querying across nodes and versions {#querying-across-nodes-and-versions}
Due to system table versioning this still does not represent the full data in the cluster. When combining the above with the merge function we get an accurate result for our date range:
```sql
SELECT
hostname() AS host,
count()
FROM clusterAllReplicas('default', merge('system', '^query_log'))
WHERE (event_time >= '2025-04-01 00:00:00') AND (event_time <= '2025-04-12 00:00:00')
GROUP BY host SETTINGS skip_unavailable_shards = 1
┌─host──────────────────────────┬─count()─┐
│ c-ecru-qn-34-server-s5bnysl-0 │ 3008000 │
│ c-ecru-qn-34-server-6em4y4t-0 │ 3659443 │
│ c-ecru-qn-34-server-iejrkg0-0 │ 1078287 │
└───────────────────────────────┴─────────┘
3 rows in set. Elapsed: 0.462 sec. Processed 7.94 million rows, 31.75 MB (17.17 million rows/s., 68.67 MB/s.)
```
Related content {#related-content}
- Blog: System Tables and a window into the internals of ClickHouse
- Blog: Essential monitoring queries - part 1 - INSERT queries
- Blog: Essential monitoring queries - part 2 - SELECT queries | {"source_file": "overview.md"} |
49717557-c899-4402-bdc5-4aca79ccc884 | description: 'System table containing descriptions of table engines supported by the
server and the features they support.'
keywords: ['system table', 'table_engines']
slug: /operations/system-tables/table_engines
title: 'system.table_engines'
doc_type: 'reference'
system.table_engines
Contains descriptions of table engines supported by the server and their feature support information.
This table contains the following columns (the column type is shown in brackets):
- name (String) — The name of table engine.
- supports_settings (UInt8) — Flag that indicates if table engine supports SETTINGS clause.
- supports_skipping_indices (UInt8) — Flag that indicates if table engine supports skipping indices.
- supports_projections (UInt8) — Flag that indicates if table engine supports projections.
- supports_sort_order (UInt8) — Flag that indicates if table engine supports clauses PARTITION_BY, PRIMARY_KEY, ORDER_BY and SAMPLE_BY.
- supports_ttl (UInt8) — Flag that indicates if table engine supports TTL.
- supports_replication (UInt8) — Flag that indicates if table engine supports data replication.
- supports_deduplication (UInt8) — Flag that indicates if table engine supports data deduplication.
- supports_parallel_insert (UInt8) — Flag that indicates if table engine supports parallel insert (see max_insert_threads setting).
Example:
sql
SELECT *
FROM system.table_engines
WHERE name IN ('Kafka', 'MergeTree', 'ReplicatedCollapsingMergeTree')
text
┌─name──────────────────────────┬─supports_settings─┬─supports_skipping_indices─┬─supports_sort_order─┬─supports_ttl─┬─supports_replication─┬─supports_deduplication─┬─supports_parallel_insert─┐
│ MergeTree                     │                 1 │                         1 │                   1 │            1 │                    0 │                      0 │                        1 │
│ Kafka                         │                 1 │                         0 │                   0 │            0 │                    0 │                      0 │                        0 │
│ ReplicatedCollapsingMergeTree │                 1 │                         1 │                   1 │            1 │                    1 │                      1 │                        1 │
└───────────────────────────────┴───────────────────┴───────────────────────────┴─────────────────────┴──────────────┴──────────────────────┴────────────────────────┴──────────────────────────┘
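The flags can also be used directly as filters. For example, a small sketch listing every engine that advertises both replication and deduplication support:
```sql
SELECT name
FROM system.table_engines
WHERE supports_replication AND supports_deduplication
```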
See also
- MergeTree family query clauses
- Kafka settings
- Join settings | {"source_file": "table_engines.md"} |
1f273ec5-4d1b-4d3d-8ec6-950f7bd19972 | description: 'System table used for implementing the
SHOW PROCESSLIST
query.'
keywords: ['system table', 'processes']
slug: /operations/system-tables/processes
title: 'system.processes'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.processes
This system table is used for implementing the SHOW PROCESSLIST query.
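The two forms are roughly interchangeable:
```sql
SHOW PROCESSLIST;
-- roughly equivalent to:
SELECT * FROM system.processes;
```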
Columns:
- is_initial_query (UInt8) — Whether this query comes directly from a user or was issued by the ClickHouse server in the scope of distributed query execution.
- user (String) — The user who made the query. Keep in mind that for distributed processing, queries are sent to remote servers under the default user. The field contains the username for a specific query, not for a query that this query initiated.
- query_id (String) — Query ID, if defined.
- address (IPv6) — The IP address the query was made from. The same for distributed processing. To track where a distributed query was originally made from, look at system.processes on the query requestor server.
- port (UInt16) — The client port the query was made from.
- initial_user (String) — Name of the user who ran the initial query (for distributed query execution).
- initial_query_id (String) — ID of the initial query (for distributed query execution).
- initial_address (IPv6) — IP address that the parent query was launched from.
- initial_port (UInt16) — The client port that was used to make the parent query.
- interface (UInt8) — The interface which was used to send the query. TCP = 1, HTTP = 2, GRPC = 3, MYSQL = 4, POSTGRESQL = 5, LOCAL = 6, TCP_INTERSERVER = 7.
- os_user (String) — Operating system username of the user who runs clickhouse-client.
- client_hostname (String) — Hostname of the client machine where the clickhouse-client or another TCP client is run.
- client_name (String) — The clickhouse-client or another TCP client name.
- client_revision (UInt64) — Revision of the clickhouse-client or another TCP client.
- client_version_major (UInt64) — Major version of the clickhouse-client or another TCP client.
- client_version_minor (UInt64) — Minor version of the clickhouse-client or another TCP client.
- client_version_patch (UInt64) — Patch component of the clickhouse-client or another TCP client version.
- http_method (UInt8) — HTTP method that initiated the query. Possible values: 0 — The query was launched from the TCP interface. 1 — GET method was used. 2 — POST method was used.
- http_user_agent (String) — HTTP header UserAgent passed in the HTTP query.
- http_referer (String) — HTTP header Referer passed in the HTTP query (contains an absolute or partial address of the page making the query).
- forwarded_for (String) — HTTP header X-Forwarded-For passed in the HTTP query.
- quota_key (String) — The quota key specified in the quotas setting (see keyed). | {"source_file": "processes.md"} |
d2f8fa28-4440-45fd-97f9-4dd334dc6f2b | forwarded_for (String) — HTTP header X-Forwarded-For passed in the HTTP query.
- quota_key (String) — The quota key specified in the quotas setting (see keyed).
- distributed_depth (UInt64) — The number of times the query was retransmitted between server nodes internally.
- elapsed (Float64) — The time in seconds since request execution started.
- is_cancelled (UInt8) — Whether the query was cancelled.
- is_all_data_sent (UInt8) — Whether all data was sent to the client (in other words, the query had been finished on the server).
- read_rows (UInt64) — The number of rows read from the table. For distributed processing, on the requestor server, this is the total for all remote servers.
- read_bytes (UInt64) — The number of uncompressed bytes read from the table. For distributed processing, on the requestor server, this is the total for all remote servers.
- total_rows_approx (UInt64) — The approximation of the total number of rows that should be read. For distributed processing, on the requestor server, this is the total for all remote servers. It can be updated during request processing, when new sources to process become known.
- written_rows (UInt64) — The amount of rows written to the storage.
- written_bytes (UInt64) — The amount of bytes written to the storage.
- memory_usage (Int64) — Amount of RAM the query uses. It might not include some types of dedicated memory.
- peak_memory_usage (Int64) — The current peak of memory usage.
- query (String) — The query text. For INSERT, it does not include the data to insert.
- normalized_query_hash (UInt64) — A numeric hash value that is identical for queries that differ only in the values of literals.
- query_kind (String) — The type of the query - SELECT, INSERT, etc.
- thread_ids (Array(UInt64)) — The list of identifiers of all threads which participated in this query.
- ProfileEvents (Map(String, UInt64)) — ProfileEvents calculated for this query.
- Settings (Map(String, String)) — The list of modified user-level settings.
- current_database (String) — The name of the current database.
sql
SELECT * FROM system.processes LIMIT 10 FORMAT Vertical; | {"source_file": "processes.md"} |
1fbb36e4-cf94-476d-98d9-26256385239b | current_database (String) — The name of the current database.
sql
SELECT * FROM system.processes LIMIT 10 FORMAT Vertical;
```response
Row 1:
──────
is_initial_query: 1
user: default
query_id: 35a360fa-3743-441d-8e1f-228c938268da
address: ::ffff:172.23.0.1
port: 47588
initial_user: default
initial_query_id: 35a360fa-3743-441d-8e1f-228c938268da
initial_address: ::ffff:172.23.0.1
initial_port: 47588
interface: 1
os_user: bharatnc
client_hostname: tower
client_name: ClickHouse
client_revision: 54437
client_version_major: 20
client_version_minor: 7
client_version_patch: 2
http_method: 0
http_user_agent:
quota_key:
elapsed: 0.000582537
is_cancelled: 0
is_all_data_sent: 0
read_rows: 0
read_bytes: 0
total_rows_approx: 0
written_rows: 0
written_bytes: 0
memory_usage: 0
peak_memory_usage: 0
query: SELECT * from system.processes LIMIT 10 FORMAT Vertical;
thread_ids: [67]
ProfileEvents: {'Query':1,'SelectQuery':1,'ReadCompressedBytes':36,'CompressedReadBufferBlocks':1,'CompressedReadBufferBytes':10,'IOBufferAllocs':1,'IOBufferAllocBytes':89,'ContextLock':15,'RWLockAcquiredReadLocks':1}
Settings: {'background_pool_size':'32','load_balancing':'random','allow_suspicious_low_cardinality_types':'1','distributed_aggregation_memory_efficient':'1','skip_unavailable_shards':'1','log_queries':'1','max_bytes_before_external_group_by':'20000000000','max_bytes_before_external_sort':'20000000000','allow_introspection_functions':'1'}
1 rows in set. Elapsed: 0.002 sec.
``` | {"source_file": "processes.md"} |
471b2687-6a4c-4a4c-bf2b-807ccfd8de78 | description: 'System table containing information about columns in all tables'
keywords: ['system table', 'columns']
slug: /operations/system-tables/columns
title: 'system.columns'
doc_type: 'reference'
Contains information about columns in all tables.
You can use this table to get information similar to the DESCRIBE TABLE query, but for multiple tables at once.
Columns from temporary tables are visible in system.columns only in the sessions where they were created. They are shown with an empty database field.
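For example, a small sketch of a DESCRIBE-like query over several tables at once (the table names t1 and t2 are placeholders):
```sql
SELECT table, name, type, position
FROM system.columns
WHERE database = currentDatabase() AND table IN ('t1', 't2')
ORDER BY table, position
```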
The system.columns table contains the following columns (the column type is shown in brackets):
- database (String) — Database name.
- table (String) — Table name.
- name (String) — Column name.
- type (String) — Column type.
- position (UInt64) — Ordinal position of a column in a table starting with 1.
- default_kind (String) — Expression type (DEFAULT, MATERIALIZED, ALIAS) for the default value, or an empty string if it is not defined.
- default_expression (String) — Expression for the default value, or an empty string if it is not defined.
- data_compressed_bytes (UInt64) — The size of compressed data, in bytes.
- data_uncompressed_bytes (UInt64) — The size of decompressed data, in bytes.
- marks_bytes (UInt64) — The size of marks, in bytes.
- comment (String) — Comment on the column, or an empty string if it is not defined.
- is_in_partition_key (UInt8) — Flag that indicates whether the column is in the partition expression.
- is_in_sorting_key (UInt8) — Flag that indicates whether the column is in the sorting key expression.
- is_in_primary_key (UInt8) — Flag that indicates whether the column is in the primary key expression.
- is_in_sampling_key (UInt8) — Flag that indicates whether the column is in the sampling key expression.
- compression_codec (String) — Compression codec name.
- character_octet_length (Nullable(UInt64)) — Maximum length in bytes for binary data, character data, or text data and images. In ClickHouse it makes sense only for the FixedString data type. Otherwise, the NULL value is returned.
- numeric_precision (Nullable(UInt64)) — Accuracy of approximate numeric data, exact numeric data, integer data, or monetary data. In ClickHouse it is the bit width for integer types and the decimal precision for Decimal types. Otherwise, the NULL value is returned.
- numeric_precision_radix (Nullable(UInt64)) — The base of the number system in which the precision of approximate numeric data, exact numeric data, integer data or monetary data is expressed. In ClickHouse it's 2 for integer types and 10 for Decimal types. Otherwise, the NULL value is returned.
- numeric_scale (Nullable(UInt64)) — The scale of approximate numeric data, exact numeric data, integer data, or monetary data. In ClickHouse it makes sense only for Decimal types. Otherwise, the NULL value is returned. | {"source_file": "columns.md"} |
7b6f0de6-f81e-49c1-b91d-83f92cef1c2e | datetime_precision (Nullable(UInt64)) — Decimal precision of the DateTime64 data type. For other data types, the NULL value is returned.
- serialization_hint (Nullable(String)) — A hint for the column to choose serialization on inserts according to statistics.
- statistics (String) — The types of statistics created in this column.
Example
sql
SELECT * FROM system.columns LIMIT 2 FORMAT Vertical;
```text
Row 1:
──────
database: INFORMATION_SCHEMA
table: COLUMNS
name: table_catalog
type: String
position: 1
default_kind:
default_expression:
data_compressed_bytes: 0
data_uncompressed_bytes: 0
marks_bytes: 0
comment:
is_in_partition_key: 0
is_in_sorting_key: 0
is_in_primary_key: 0
is_in_sampling_key: 0
compression_codec:
character_octet_length: ᴺᵁᴸᴸ
numeric_precision: ᴺᵁᴸᴸ
numeric_precision_radix: ᴺᵁᴸᴸ
numeric_scale: ᴺᵁᴸᴸ
datetime_precision: ᴺᵁᴸᴸ

Row 2:
──────
database: INFORMATION_SCHEMA
table: COLUMNS
name: table_schema
type: String
position: 2
default_kind:
default_expression:
data_compressed_bytes: 0
data_uncompressed_bytes: 0
marks_bytes: 0
comment:
is_in_partition_key: 0
is_in_sorting_key: 0
is_in_primary_key: 0
is_in_sampling_key: 0
compression_codec:
character_octet_length: ᴺᵁᴸᴸ
numeric_precision: ᴺᵁᴸᴸ
numeric_precision_radix: ᴺᵁᴸᴸ
numeric_scale: ᴺᵁᴸᴸ
datetime_precision: ᴺᵁᴸᴸ
``` | {"source_file": "columns.md"} |
5afd5b78-931e-49a9-bd0a-332111b6d410 | description: 'System table containing information about quota usage by the current user
such as how much of the quota is used and how much is left.'
keywords: ['system table', 'quota_usage']
slug: /operations/system-tables/quota_usage
title: 'system.quota_usage'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Quota usage by the current user: how much is used and how much is left.
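A quick sketch to see where the current user stands against their limits:
```sql
SELECT quota_name, queries, max_queries, read_rows, max_read_rows
FROM system.quota_usage
```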
Columns:
- quota_name (String) — Quota name.
- quota_key (String) — Key value.
- start_time (Nullable(DateTime)) — Start time for calculating resource consumption.
- end_time (Nullable(DateTime)) — End time for calculating resource consumption.
- duration (Nullable(UInt32)) — Length of the time interval for calculating resource consumption, in seconds.
- queries (Nullable(UInt64)) — The current number of executed queries.
- max_queries (Nullable(UInt64)) — The maximum number of queries of all types allowed to be executed.
- query_selects (Nullable(UInt64)) — The current number of executed SELECT queries.
- max_query_selects (Nullable(UInt64)) — The maximum number of SELECT queries allowed to be executed.
- query_inserts (Nullable(UInt64)) — The current number of executed INSERT queries.
- max_query_inserts (Nullable(UInt64)) — The maximum number of INSERT queries allowed to be executed.
- errors (Nullable(UInt64)) — The current number of queries that resulted in an error.
- max_errors (Nullable(UInt64)) — The maximum number of queries resulting in an error allowed within the specified period of time.
- result_rows (Nullable(UInt64)) — The current total number of rows in the result set of all queries within the current period of time.
- max_result_rows (Nullable(UInt64)) — The maximum total number of rows in the result set of all queries allowed within the specified period of time.
- result_bytes (Nullable(UInt64)) — The current total number of bytes in the result set of all queries within the current period of time.
- max_result_bytes (Nullable(UInt64)) — The maximum total number of bytes in the result set of all queries allowed within the specified period of time.
- read_rows (Nullable(UInt64)) — The current total number of rows read during execution of all queries within the current period of time.
- max_read_rows (Nullable(UInt64)) — The maximum number of rows to read during execution of all queries allowed within the specified period of time.
- read_bytes (Nullable(UInt64)) — The current total number of bytes read during execution of all queries within the current period of time.
- max_read_bytes (Nullable(UInt64)) — The maximum number of bytes to read during execution of all queries allowed within the specified period of time.
- execution_time (Nullable(Float64)) — The current total amount of time (in nanoseconds) spent to execute queries within the current period of time. | {"source_file": "quota_usage.md"} |
3cd7c81c-a4a5-49dd-ba38-9563dfc426c6 | execution_time (Nullable(Float64)) — The current total amount of time (in nanoseconds) spent to execute queries within the current period of time.
- max_execution_time (Nullable(Float64)) — The maximum amount of time (in nanoseconds) allowed for all queries to execute within the specified period of time.
- written_bytes (Nullable(UInt64)) — The current total number of bytes written during execution of all queries within the current period of time.
- max_written_bytes (Nullable(UInt64)) — The maximum number of bytes to be written during execution of all queries allowed within the specified period of time.
- failed_sequential_authentications (Nullable(UInt64)) — The current number of consecutive authentication failures within the current period of time.
- max_failed_sequential_authentications (Nullable(UInt64)) — The maximum number of consecutive authentication failures allowed within the specified period of time.
See Also {#see-also}
SHOW QUOTA | {"source_file": "quota_usage.md"} |
174a40b6-0473-4b51-aebe-405d07664b7e | description: 'System table containing information about disks defined in the server
configuration'
keywords: ['system table', 'disks']
slug: /operations/system-tables/disks
title: 'system.disks'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains information about disks defined in the server configuration.
Columns:
- name (String) — Name of a disk in the server configuration.
- path (String) — Path to the mount point in the file system.
- free_space (UInt64) — Free space on disk in bytes.
- total_space (UInt64) — Disk volume in bytes.
- unreserved_space (UInt64) — Free space which is not taken by reservations (free_space minus the size of reservations taken by merges, inserts, and other disk write operations currently running).
- keep_free_space (UInt64) — Amount of disk space that should stay free on disk in bytes. Defined in the keep_free_space_bytes parameter of disk configuration.
- type (String) — The disk type which tells where this disk stores the data - RAM, local drive or remote storage.
- object_storage_type (String) — Type of object storage if disk type is object_storage.
- metadata_type (String) — Type of metadata storage if disk type is object_storage.
- is_encrypted (UInt8) — Flag which shows whether this disk encrypts the underlying data.
- is_read_only (UInt8) — Flag which indicates that you can only perform read operations with this disk.
- is_write_once (UInt8) — Flag which indicates if the disk is write-once, meaning that it supports BACKUP to this disk but does not support INSERT into a MergeTree table on this disk.
- is_remote (UInt8) — Flag which indicates that operations with this disk involve network interaction.
- is_broken (UInt8) — Flag which indicates if the disk is broken. Broken disks will have 0 space and cannot be used.
- cache_path (String) — The path to the cache directory on a local drive in case the disk supports caching.
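For instance, a minimal sketch that computes how full each disk is from the columns above:
```sql
SELECT name, round(100 * (1 - free_space / total_space), 2) AS used_pct
FROM system.disks
```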
Example
sql
SELECT * FROM system.disks;
```response
┌─name────┬─path─────────────────┬───free_space─┬──total_space─┬─keep_free_space─┐
│ default │ /var/lib/clickhouse/ │ 276392587264 │ 490652508160 │               0 │
└─────────┴──────────────────────┴──────────────┴──────────────┴─────────────────┘
1 rows in set. Elapsed: 0.001 sec.
``` | {"source_file": "disks.md"} |
8702d14a-ee04-40ed-94ba-9978abcdf08b | description: 'System table containing information about parameters
graphite_rollup
which are used in tables with
GraphiteMergeTree
type engines.'
keywords: ['system table', 'graphite_retentions']
slug: /operations/system-tables/graphite_retentions
title: 'system.graphite_retentions'
doc_type: 'reference'
Contains information about parameters graphite_rollup which are used in tables with GraphiteMergeTree engines.
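A quick sketch to inspect the configured rollup rules:
```sql
SELECT config_name, rule_type, regexp, function, age, precision, priority
FROM system.graphite_retentions
LIMIT 5
```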
Columns:
- config_name (String) — graphite_rollup parameter name.
- rule_type (String) — The rule type. Possible values: RuleTypeAll = 0 - default, with regex, compatible with old scheme; RuleTypePlain = 1 - plain metrics, with regex, compatible with old scheme; RuleTypeTagged = 2 - tagged metrics, with regex, compatible with old scheme; RuleTypeTagList = 3 - tagged metrics, with regex (converted to RuleTypeTagged from string like 'retention=10min ; env=(staging|prod)')
- regexp (String) — A pattern for the metric name.
- function (String) — The name of the aggregating function.
- age (UInt64) — The minimum age of the data in seconds.
- precision (UInt64) — How precisely to define the age of the data in seconds.
- priority (UInt16) — Pattern priority.
- is_default (UInt8) — Whether the pattern is the default.
- Tables.database (Array(String)) — Array of names of database tables that use the config_name parameter.
- Tables.table (Array(String)) — Array of table names that use the config_name parameter. | {"source_file": "graphite_retentions.md"} |
239e0426-1bba-4d37-9667-38868cd744aa | description: 'System table containing information about quota usage by all users.'
keywords: ['system table', 'quotas_usage', 'quota']
slug: /operations/system-tables/quotas_usage
title: 'system.quotas_usage'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.quotas_usage
Quota usage by all users.
Columns:
- quota_name (String) — Quota name.
- quota_key (String) — Key value.
- is_current (UInt8) — Quota usage for current user.
- start_time (Nullable(DateTime)) — Start time for calculating resource consumption.
- end_time (Nullable(DateTime)) — End time for calculating resource consumption.
- duration (Nullable(UInt32)) — Length of the time interval for calculating resource consumption, in seconds.
- queries (Nullable(UInt64)) — The current number of executed queries.
- max_queries (Nullable(UInt64)) — The maximum number of queries of all types allowed to be executed.
- query_selects (Nullable(UInt64)) — The current number of executed SELECT queries.
- max_query_selects (Nullable(UInt64)) — The maximum number of SELECT queries allowed to be executed.
- query_inserts (Nullable(UInt64)) — The current number of executed INSERT queries.
- max_query_inserts (Nullable(UInt64)) — The maximum number of INSERT queries allowed to be executed.
- errors (Nullable(UInt64)) — The current number of queries that resulted in an error.
- max_errors (Nullable(UInt64)) — The maximum number of queries resulting in an error allowed within the specified period of time.
- result_rows (Nullable(UInt64)) — The current total number of rows in the result set of all queries within the current period of time.
- max_result_rows (Nullable(UInt64)) — The maximum total number of rows in the result set of all queries allowed within the specified period of time.
- result_bytes (Nullable(UInt64)) — The current total number of bytes in the result set of all queries within the current period of time.
- max_result_bytes (Nullable(UInt64)) — The maximum total number of bytes in the result set of all queries allowed within the specified period of time.
- read_rows (Nullable(UInt64)) — The current total number of rows read during execution of all queries within the current period of time.
- max_read_rows (Nullable(UInt64)) — The maximum number of rows to read during execution of all queries allowed within the specified period of time.
- read_bytes (Nullable(UInt64)) — The current total number of bytes read during execution of all queries within the current period of time.
- max_read_bytes (Nullable(UInt64)) — The maximum number of bytes to read during execution of all queries allowed within the specified period of time.
- execution_time (Nullable(Float64)) — The current total amount of time (in nanoseconds) spent to execute queries within the current period of time. | {"source_file": "quotas_usage.md"} |
0b11fdc4-b2db-4c66-9523-87a4e8739dbb | execution_time (Nullable(Float64)) — The current total amount of time (in nanoseconds) spent to execute queries within the current period of time.
- max_execution_time (Nullable(Float64)) — The maximum amount of time (in nanoseconds) allowed for all queries to execute within the specified period of time.
- written_bytes (Nullable(UInt64)) — The current total number of bytes written during execution of all queries within the current period of time.
- max_written_bytes (Nullable(UInt64)) — The maximum number of bytes to be written during execution of all queries allowed within the specified period of time.
- failed_sequential_authentications (Nullable(UInt64)) — The current number of consecutive authentication failures within the current period of time.
- max_failed_sequential_authentications (Nullable(UInt64)) — The maximum number of consecutive authentication failures allowed within the specified period of time.
See Also {#see-also}
SHOW QUOTA | {"source_file": "quotas_usage.md"} |
61c3b374-6a40-4412-aec5-321f5d716eef | description: 'System table containing information about columns in projection parts for tables of the MergeTree family'
keywords: ['system table', 'projection_parts_columns']
slug: /operations/system-tables/projection_parts_columns
title: 'system.projection_parts_columns'
doc_type: 'reference'
system.projection_parts_columns
This table contains information about columns in projection parts for tables of the MergeTree family.
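A small sketch to eyeball active projection parts and their sizes, using the columns documented below:
```sql
SELECT name, parent_name, rows, data_compressed_bytes
FROM system.projection_parts_columns
WHERE active
LIMIT 5
```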
Columns {#columns}
- partition (String) — The partition name.
- name (String) — Name of the data part.
- part_type (String) — The data part storing format.
- parent_name (String) — The name of the source (parent) data part.
- parent_uuid (UUID) — The UUID of the source (parent) data part.
- parent_part_type (String) — The source (parent) data part storing format.
- active (UInt8) — Flag that indicates whether the data part is active.
- marks (UInt64) — The number of marks.
- rows (UInt64) — The number of rows.
- bytes_on_disk (UInt64) — Total size of all the data part files in bytes.
- data_compressed_bytes (UInt64) — Total size of compressed data in the data part. All the auxiliary files (for example, files with marks) are not included.
- data_uncompressed_bytes (UInt64) — Total size of uncompressed data in the data part. All the auxiliary files (for example, files with marks) are not included.
- marks_bytes (UInt64) — The size of the file with marks.
- parent_marks (UInt64) — The number of marks in the source (parent) part.
- parent_rows (UInt64) — The number of rows in the source (parent) part.
- parent_bytes_on_disk (UInt64) — Total size of all the source (parent) data part files in bytes.
- parent_data_compressed_bytes (UInt64) — Total size of compressed data in the source (parent) data part.
- parent_data_uncompressed_bytes (UInt64) — Total size of uncompressed data in the source (parent) data part.
- parent_marks_bytes (UInt64) — The size of the file with marks in the source (parent) data part.
- modification_time (DateTime) — The time the directory with the data part was modified. This usually corresponds to the time of data part creation.
- remove_time (DateTime) — The time when the data part became inactive.
- refcount (UInt32) — The number of places where the data part is used. A value greater than 2 indicates that the data part is used in queries or merges.
- min_date (Date) — The minimum value for the Date column if that is included in the partition key.
- max_date (Date) — The maximum value for the Date column if that is included in the partition key.
- min_time (DateTime) — The minimum value for the DateTime column if that is included in the partition key.
- max_time (DateTime) — The maximum value for the DateTime column if that is included in the partition key.
- partition_id (String) — ID of the partition.
- min_block_number (Int64) — The minimum number of data parts that make up the current part after merging. | {"source_file": "projection_parts_columns.md"} |
5d779a70-ff53-4c90-8c5d-0f41ac59232d | partition_id (String) — ID of the partition.
- min_block_number (Int64) — The minimum number of data parts that make up the current part after merging.
- max_block_number (Int64) — The maximum number of data parts that make up the current part after merging.
- level (UInt32) — Depth of the merge tree. Zero means that the current part was created by insert rather than by merging other parts.
- data_version (UInt64) — Number that is used to determine which mutations should be applied to the data part (mutations with a version higher than data_version).
- primary_key_bytes_in_memory (UInt64) — The amount of memory (in bytes) used by primary key values.
- primary_key_bytes_in_memory_allocated (UInt64) — The amount of memory (in bytes) reserved for primary key values.
- database (String) — Name of the database.
- table (String) — Name of the table.
- engine (String) — Name of the table engine without parameters.
- disk_name (String) — Name of a disk that stores the data part.
- path (String) — Absolute path to the folder with data part files.
- column (String) — Name of the column.
- type (String) — Column type.
- column_position (UInt64) — Ordinal position of a column in a table starting with 1.
- default_kind (String) — Expression type (DEFAULT, MATERIALIZED, ALIAS) for the default value, or an empty string if it is not defined.
- default_expression (String) — Expression for the default value, or an empty string if it is not defined.
- column_bytes_on_disk (UInt64) — Total size of the column in bytes.
- column_data_compressed_bytes (UInt64) — Total size of compressed data in the column, in bytes.
- column_data_uncompressed_bytes (UInt64) — Total size of the decompressed data in the column, in bytes.
- column_marks_bytes (UInt64) — The size of the column with marks, in bytes.
- column_modification_time (Nullable(DateTime)) — The last time the column was modified. | {"source_file": "projection_parts_columns.md"} |
8b84f888-d071-4560-ac55-67800cd30314 | description: 'System table containing information about async inserts. Each entry
represents an insert query buffered into an async insert query.'
keywords: ['system table', 'asynchronous_insert_log']
slug: /operations/system-tables/asynchronous_insert_log
title: 'system.asynchronous_insert_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.asynchronous_insert_log
Contains information about async inserts. Each entry represents an insert query buffered into an async insert query.
To start logging configure parameters in the asynchronous_insert_log section.
The flushing period of data is set in the flush_interval_milliseconds parameter of the asynchronous_insert_log server settings section. To force flushing, use the SYSTEM FLUSH LOGS query.
ClickHouse does not delete data from the table automatically. See Introduction for more details.
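A minimal sketch that forces a flush and then summarizes today's entries by status:
```sql
SYSTEM FLUSH LOGS;

SELECT status, count()
FROM system.asynchronous_insert_log
WHERE event_date = today()
GROUP BY status
```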
Columns:
- hostname (LowCardinality(String)) — Hostname of the server executing the query.
- event_date (Date) — The date when the async insert happened.
- event_time (DateTime) — The date and time when the async insert finished execution.
- event_time_microseconds (DateTime64) — The date and time when the async insert finished execution with microseconds precision.
- query (String) — Query string.
- database (String) — The name of the database the table is in.
- table (String) — Table name.
- format (String) — Format name.
- query_id (String) — ID of the initial query.
- bytes (UInt64) — Number of inserted bytes.
- exception (String) — Exception message.
- status (Enum8) — Status of the insert. Values: 'Ok' = 1 — Successful insert. 'ParsingError' = 2 — Exception when parsing the data. 'FlushError' = 3 — Exception when flushing the data.
- flush_time (DateTime) — The date and time when the flush happened.
- flush_time_microseconds (DateTime64) — The date and time when the flush happened with microseconds precision.
- flush_query_id (String) — ID of the flush query.
Example
Query:
sql
SELECT * FROM system.asynchronous_insert_log LIMIT 1 \G;
Result:
text
hostname: clickhouse.eu-central1.internal
event_date: 2023-06-08
event_time: 2023-06-08 10:08:53
event_time_microseconds: 2023-06-08 10:08:53.199516
query: INSERT INTO public.data_guess (user_id, datasource_id, timestamp, path, type, num, str) FORMAT CSV
database: public
table: data_guess
format: CSV
query_id: b46cd4c4-0269-4d0b-99f5-d27668c6102e
bytes: 133223
exception:
status: Ok
flush_time: 2023-06-08 10:08:55
flush_time_microseconds: 2023-06-08 10:08:55.139676
flush_query_id: cd2c1e43-83f5-49dc-92e4-2fbc7f8d3716
See Also | {"source_file": "asynchronous_insert_log.md"} |
38765812-b316-47e0-9c65-9f008804746f | See Also
- system.query_log — Description of the query_log system table which contains common information about queries execution.
- system.asynchronous_inserts — This table contains information about pending asynchronous inserts in queue. | {"source_file": "asynchronous_insert_log.md"} |
74f62428-bab6-4c2d-ab90-0cc82c668f49 | description: 'System table containing information about trace spans for executed queries.'
keywords: ['system table', 'opentelemetry_span_log']
slug: /operations/system-tables/opentelemetry_span_log
title: 'system.opentelemetry_span_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.opentelemetry_span_log
Contains information about trace spans for executed queries.
Columns:
- trace_id (UUID) — ID of the trace for executed query.
- span_id (UInt64) — ID of the trace span.
- parent_span_id (UInt64) — ID of the parent trace span.
- operation_name (String) — The name of the operation.
- kind (Enum8) — The SpanKind of the span. INTERNAL — Indicates that the span represents an internal operation within an application. SERVER — Indicates that the span covers server-side handling of a synchronous RPC or other remote request. CLIENT — Indicates that the span describes a request to some remote service. PRODUCER — Indicates that the span describes the initiators of an asynchronous request. This parent span will often end before the corresponding child CONSUMER span, possibly even before the child span starts. CONSUMER — Indicates that the span describes a child of an asynchronous PRODUCER request.
- start_time_us (UInt64) — The start time of the trace span (in microseconds).
- finish_time_us (UInt64) — The finish time of the trace span (in microseconds).
- finish_date (Date) — The finish date of the trace span.
- attribute.names (Array(String)) — Attribute names depending on the trace span. They are filled in according to the recommendations in the OpenTelemetry standard.
- attribute.values (Array(String)) — Attribute values depending on the trace span. They are filled in according to the recommendations in the OpenTelemetry standard.
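For example, a sketch that surfaces the slowest spans by deriving a duration from the two timestamp columns:
```sql
SELECT
    operation_name,
    (finish_time_us - start_time_us) / 1000 AS duration_ms
FROM system.opentelemetry_span_log
ORDER BY duration_ms DESC
LIMIT 5
```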
Example
Query:
sql
SELECT * FROM system.opentelemetry_span_log LIMIT 1 FORMAT Vertical;
Result:
text
Row 1:
──────
trace_id: cdab0847-0d62-61d5-4d38-dd65b19a1914
span_id: 701487461015578150
parent_span_id: 2991972114672045096
operation_name: DB::Block DB::InterpreterSelectQuery::getSampleBlockImpl()
kind: INTERNAL
start_time_us: 1612374594529090
finish_time_us: 1612374594529108
finish_date: 2021-02-03
attribute.names: []
attribute.values: []
See Also
OpenTelemetry | {"source_file": "opentelemetry_span_log.md"} |
1cce0aa3-4ad9-4759-b152-0dceb86e2f43 | description: 'System table containing information about the settings of S3Queue tables.
Available from server version
24.10
.'
keywords: ['system table', 's3_queue_settings']
slug: /operations/system-tables/s3_queue_settings
title: 'system.s3_queue_settings'
doc_type: 'reference'
system.s3_queue_settings
Contains information about the settings of S3Queue tables. Available from server version 24.10.
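A quick sketch to inspect the effective settings of S3Queue tables in the current database:
```sql
SELECT table, name, value, changed
FROM system.s3_queue_settings
WHERE database = currentDatabase()
```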
Columns:
- database (String) — Database of the table with S3Queue Engine.
- table (String) — Name of the table with S3Queue Engine.
- name (String) — Setting name.
- value (String) — Setting value.
- type (String) — Setting type (implementation specific string value).
- changed (UInt8) — 1 if the setting was explicitly defined in the config or explicitly changed.
- description (String) — Setting description.
- alterable (UInt8) — Shows whether the current user can change the setting via ALTER TABLE MODIFY SETTING: 0 — Current user can change the setting, 1 — Current user can't change the setting. | {"source_file": "s3_queue_settings.md"} |
acb491d4-14b4-4aa4-8769-a64f8a990270 | description: 'System table which shows the content of the query condition cache.'
keywords: ['system table', 'query_condition_cache']
slug: /operations/system-tables/query_condition_cache
title: 'system.query_condition_cache'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.query_condition_cache
Shows the content of the query condition cache.
Columns:
- table_uuid (UUID) — The table UUID.
- part_name (String) — The part name.
- condition (String) — The hashed filter condition. Only set if setting query_condition_cache_store_conditions_as_plaintext = true.
- condition_hash (UInt64) — The hash of the filter condition.
- entry_size (UInt64) — The size of the entry in bytes.
- matching_marks (String) — Matching marks.
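The cache only has content once queries have run with it enabled. A minimal sketch, assuming the use_query_condition_cache setting of recent releases (verify the name for your version; the table and filter are hypothetical):
```sql
-- Hypothetical table and filter; runs with the query condition cache enabled:
SELECT count()
FROM my_table
WHERE b = 10000
SETTINGS use_query_condition_cache = 1;
```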
Example
sql
SELECT * FROM system.query_condition_cache FORMAT Vertical;
```text
Row 1:
──────
table_uuid: 28270a24-ea27-49f6-99cd-97b9bee976ac
part_name: all_1_1_0
condition: or(equals(b, 10000_UInt16), equals(c, 10000_UInt16))
condition_hash: 5456494897146899690 -- 5.46 quintillion
entry_size: 40
matching_marks: 111111110000000000000000000000000000000000000000000000000111111110000000000000000

1 row in set. Elapsed: 0.004 sec.
``` | {"source_file": "query_condition_cache.md"} |
0.005379849579185247,
0.041011303663253784,
-0.058863408863544464,
-0.041742563247680664,
-0.0021226394455879927,
-0.10438096523284912,
0.06461838632822037,
0.0018560478929430246,
0.006148197688162327,
0.05677010864019394,
-0.021186277270317078,
-0.01501036249101162,
0.1051713153719902,
-0... |
14d29301-8188-4520-94c3-ecccee96356c | description: 'System table containing information about metadata files read from Delta Lake tables. Each entry
represents a root metadata JSON file.'
keywords: ['system table', 'delta_lake_metadata_log']
slug: /operations/system-tables/delta_lake_metadata_log
title: 'system.delta_lake_metadata_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.delta_lake_metadata_log
The
system.delta_lake_metadata_log
table records metadata access and parsing events for Delta Lake tables read by ClickHouse. It provides detailed information about each metadata file, which is useful for debugging, auditing, and understanding Delta table structure evolution.
Purpose {#purpose}
This table logs every metadata file read from Delta Lake tables. It helps users trace how ClickHouse interprets Delta table metadata and diagnose issues related to schema evolution, snapshot resolution, or query planning.
:::note
This table is primarily intended for debugging purposes.
:::
Columns {#columns}
| Name       | Type     | Description                                       |
|------------|----------|---------------------------------------------------|
| event_date | Date     | Date of the log file.                             |
| event_time | DateTime | Timestamp of the event.                           |
| query_id   | String   | Query ID that triggered the metadata read.        |
| table_path | String   | Path to the Delta Lake table.                     |
| file_path  | String   | Path to the root metadata JSON file.              |
| content    | String   | Content in JSON format (raw metadata from .json). |
Controlling log verbosity {#controlling-log-verbosity}
You can control which metadata events are logged using the delta_lake_log_metadata setting.
To log all metadata used in the current query:
```sql
SELECT * FROM my_delta_table SETTINGS delta_lake_log_metadata = 1;
SYSTEM FLUSH LOGS delta_lake_metadata_log;
SELECT *
FROM system.delta_lake_metadata_log
WHERE query_id = '{previous_query_id}';
``` | {"source_file": "delta_metadata_log.md"} |
4b7d7dfc-05db-409e-9699-0d4c7f289bb0 | description: 'System table useful for C++ experts and ClickHouse engineers containing
information for introspection of the
clickhouse
binary.'
keywords: ['system table', 'symbols']
slug: /operations/system-tables/symbols
title: 'system.symbols'
doc_type: 'reference'
Contains information for introspection of the clickhouse binary. It requires the introspection privilege to access.
This table is only useful for C++ experts and ClickHouse engineers.
Columns:
- symbol (String) — Symbol name in the binary. It is mangled. You can apply demangle(symbol) to obtain a readable name.
- address_begin (UInt64) — Start address of the symbol in the binary.
- address_end (UInt64) — End address of the symbol in the binary.
- name (String) — Alias for event.
Example
sql
SELECT address_begin, address_end - address_begin AS size, demangle(symbol) FROM system.symbols ORDER BY size DESC LIMIT 10
text
┌─address_begin─┬─────size─┬─demangle(symbol)───────────────────────────────────────────────────────────────────┐
│      25000976 │ 29466000 │ icudt70_dat                                                                        │
│     400605288 │  2097272 │ arena_emap_global                                                                  │
│      18760592 │  1048576 │ CLD2::kQuadChrome1015_2                                                            │
│       9807152 │   884808 │ TopLevelDomainLookupHash::isValid(char const*, unsigned long)::wordlist            │
│      57442432 │   850608 │ llvm::X86Insts                                                                     │
│      55682944 │   681360 │ (anonymous namespace)::X86DAGToDAGISel::SelectCode(llvm::SDNode*)::MatcherTable    │
│      55130368 │   502840 │ (anonymous namespace)::X86InstructionSelector::getMatchTable() const::MatchTable0  │
│     402930616 │   404032 │ qpl::ml::dispatcher::hw_dispatcher::get_instance()::instance                       │
│     274131872 │   356795 │ DB::SettingsTraits::Accessor::instance()::$_0::operator()() const                  │
│      58293040 │   249424 │ llvm::X86InstrNameData                                                             │
└───────────────┴──────────┴────────────────────────────────────────────────────────────────────────────────────┘ | {"source_file": "symbols.md"} |
ed6695d7-09e0-4133-9409-a6714d97abbe | description: 'System table containing information about distributed ddl queries (queries
using the ON CLUSTER clause) that were executed on a cluster.'
keywords: ['system table', 'distributed_ddl_queue']
slug: /operations/system-tables/distributed_ddl_queue
title: 'system.distributed_ddl_queue'
doc_type: 'reference'
Contains information about distributed ddl queries (ON CLUSTER clause) that were executed on a cluster.
Columns:
- entry (String) — Query id.
- entry_version (Nullable(UInt8)) — Version of the entry.
- initiator_host (Nullable(String)) — Host that initiated the DDL operation.
- initiator_port (Nullable(UInt16)) — Port used by the initiator.
- cluster (String) — Cluster name, empty if not determined.
- query (String) — Query executed.
- settings (Map(String, String)) — Settings used in the DDL operation.
- query_create_time (DateTime) — Query created time.
- host (Nullable(String)) — Hostname.
- port (Nullable(UInt16)) — Host Port.
- status (Nullable(Enum8('Inactive' = 0, 'Active' = 1, 'Finished' = 2, 'Removing' = 3, 'Unknown' = 4))) — Status of the query.
- exception_code (Nullable(UInt16)) — Exception code.
- exception_text (Nullable(String)) — Exception message.
- query_finish_time (Nullable(DateTime)) — Query finish time.
- query_duration_ms (Nullable(UInt64)) — Duration of query execution (in milliseconds).
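For monitoring, a small sketch that lists entries which have not yet finished:
```sql
SELECT entry, host, status, query
FROM system.distributed_ddl_queue
WHERE status != 'Finished'
ORDER BY query_create_time DESC
```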
Example
```sql
SELECT *
FROM system.distributed_ddl_queue
WHERE cluster = 'test_cluster'
LIMIT 2
FORMAT Vertical
Query id: f544e72a-6641-43f1-836b-24baa1c9632a
Row 1:
──────
entry: query-0000000000
entry_version: 5
initiator_host: clickhouse01
initiator_port: 9000
cluster: test_cluster
query: CREATE DATABASE test_db UUID '4a82697e-c85e-4e5b-a01e-a36f2a758456' ON CLUSTER test_cluster
settings: {'max_threads':'16','use_uncompressed_cache':'0'}
query_create_time: 2023-09-01 16:15:14
host: clickhouse-01
port: 9000
status: Finished
exception_code: 0
exception_text:
query_finish_time: 2023-09-01 16:15:14
query_duration_ms: 154
Row 2:
──────
entry: query-0000000001
entry_version: 5
initiator_host: clickhouse01
initiator_port: 9000
cluster: test_cluster
query: CREATE DATABASE test_db UUID '4a82697e-c85e-4e5b-a01e-a36f2a758456' ON CLUSTER test_cluster
settings: {'max_threads':'16','use_uncompressed_cache':'0'}
query_create_time: 2023-09-01 16:15:14
host: clickhouse-01
port: 9000
status: Finished
exception_code: 630
exception_text: Code: 630. DB::Exception: Cannot drop or rename test_db, because some tables depend on it:
query_finish_time: 2023-09-01 16:15:14
query_duration_ms: 154
2 rows in set. Elapsed: 0.025 sec.
``` | {"source_file": "distributed_ddl_queue.md"} |
279aa5a0-5f55-432b-ba20-40d5753e8ad9 | description: 'System database providing an almost standardized DBMS-agnostic view on metadata of database objects.'
keywords: ['system database', 'information_schema']
slug: /operations/system-tables/information_schema
title: 'INFORMATION_SCHEMA'
doc_type: 'reference'
`INFORMATION_SCHEMA` (or: `information_schema`) is a system database which provides a (somewhat) standardized, DBMS-agnostic view on metadata of database objects. The views in `INFORMATION_SCHEMA` are generally inferior to normal system tables, but tools can use them to obtain basic information in a cross-DBMS manner. The structure and content of the views in `INFORMATION_SCHEMA` is supposed to evolve in a backwards-compatible way, i.e. only new functionality is added while existing functionality is not changed or removed. In terms of internal implementation, the views in `INFORMATION_SCHEMA` usually map to normal system tables like `system.columns`, `system.databases` and `system.tables`.
```sql
SHOW TABLES FROM INFORMATION_SCHEMA;
-- or:
SHOW TABLES FROM information_schema;
```
```text
┌─name────────────────────┐
│ COLUMNS                 │
│ KEY_COLUMN_USAGE        │
│ REFERENTIAL_CONSTRAINTS │
│ SCHEMATA                │
│ STATISTICS              │
│ TABLES                  │
│ VIEWS                   │
│ columns                 │
│ key_column_usage        │
│ referential_constraints │
│ schemata                │
│ statistics              │
│ tables                  │
│ views                   │
└─────────────────────────┘
```
`INFORMATION_SCHEMA` contains the following views:

- `COLUMNS`
- `KEY_COLUMN_USAGE`
- `REFERENTIAL_CONSTRAINTS`
- `SCHEMATA`
- `STATISTICS`
- `TABLES`
- `VIEWS`
Case-insensitive equivalent views, e.g. `INFORMATION_SCHEMA.columns`, are provided for reasons of compatibility with other databases. The same applies to all the columns in these views: both lowercase (for example, `table_name`) and uppercase (`TABLE_NAME`) variants are provided.
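For example, both spellings below address the same view, and the two column variants can be mixed in one query; a minimal sketch:

```sql
-- The uppercase and lowercase names resolve to the same view.
SELECT count(*) FROM INFORMATION_SCHEMA.TABLES;
SELECT count(*) FROM information_schema.tables;

-- Lowercase and uppercase column variants return the same values.
SELECT table_name, TABLE_NAME FROM information_schema.tables LIMIT 1;
```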
COLUMNS {#columns}

Contains columns read from the `system.columns` system table, plus columns that are not supported in ClickHouse or do not make sense (always `NULL`) but are required by the standard.
Columns:
Example
Query:
```sql
SELECT table_catalog,
       table_schema,
       table_name,
       column_name,
       ordinal_position,
       column_default,
       is_nullable,
       data_type,
       character_maximum_length,
       character_octet_length,
       numeric_precision,
       numeric_precision_radix,
       numeric_scale,
       datetime_precision,
       character_set_catalog,
       character_set_schema,
       character_set_name,
       collation_catalog,
       collation_schema,
       collation_name,
       domain_catalog,
       domain_schema,
       domain_name,
       column_comment,
       column_type
FROM INFORMATION_SCHEMA.COLUMNS
WHERE (table_schema = currentDatabase() OR table_schema = '')
  AND table_name NOT LIKE '%inner%'
LIMIT 1
FORMAT Vertical;
```
Result: | {"source_file": "information_schema.md"} | [
0.028364822268486023, -0.07744956761598587, ... ]
da1a1e9e-b647-45ea-8898-b616c50b9734 |
```text
Row 1:
──────
table_catalog:            default
table_schema:             default
table_name:               describe_example
column_name:              id
ordinal_position:         1
column_default:
is_nullable:              0
data_type:                UInt64
character_maximum_length: ᴺᵁᴸᴸ
character_octet_length:   ᴺᵁᴸᴸ
numeric_precision:        64
numeric_precision_radix:  2
numeric_scale:            0
datetime_precision:       ᴺᵁᴸᴸ
character_set_catalog:    ᴺᵁᴸᴸ
character_set_schema:     ᴺᵁᴸᴸ
character_set_name:       ᴺᵁᴸᴸ
collation_catalog:        ᴺᵁᴸᴸ
collation_schema:         ᴺᵁᴸᴸ
collation_name:           ᴺᵁᴸᴸ
domain_catalog:           ᴺᵁᴸᴸ
domain_schema:            ᴺᵁᴸᴸ
domain_name:              ᴺᵁᴸᴸ
```
SCHEMATA {#schemata}

Contains columns read from the `system.databases` system table, plus columns that are not supported in ClickHouse or do not make sense (always `NULL`) but are required by the standard.
Columns:
- `catalog_name` (String) — The name of the database.
- `schema_name` (String) — The name of the database.
- `schema_owner` (String) — Schema owner name, always `'default'`.
- `default_character_set_catalog` (Nullable(String)) — `NULL`, not supported.
- `default_character_set_schema` (Nullable(String)) — `NULL`, not supported.
- `default_character_set_name` (Nullable(String)) — `NULL`, not supported.
- `sql_path` (Nullable(String)) — `NULL`, not supported.
Example
Query:
```sql
SELECT catalog_name,
       schema_name,
       schema_owner,
       default_character_set_catalog,
       default_character_set_schema,
       default_character_set_name,
       sql_path
FROM information_schema.schemata
WHERE schema_name ILIKE 'information_schema'
LIMIT 1
FORMAT Vertical;
```
Result:
```text
Row 1:
──────
catalog_name:                  INFORMATION_SCHEMA
schema_name:                   INFORMATION_SCHEMA
schema_owner:                  default
default_character_set_catalog: ᴺᵁᴸᴸ
default_character_set_schema:  ᴺᵁᴸᴸ
default_character_set_name:    ᴺᵁᴸᴸ
sql_path:                      ᴺᵁᴸᴸ
```
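A typical cross-DBMS use of this view is simply enumerating databases; a minimal sketch:

```sql
-- Lists all databases; the same information lives in system.databases.
SELECT schema_name FROM information_schema.schemata;
```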
TABLES {#tables}

Contains columns read from the `system.tables` system table.
Columns:

- `table_catalog` (String) — The name of the database in which the table is located.
- `table_schema` (String) — The name of the database in which the table is located.
- `table_name` (String) — Table name.
- `table_type` (String) — Table type. Possible values:
  - `BASE TABLE`
  - `VIEW`
  - `FOREIGN TABLE`
  - `LOCAL TEMPORARY`
  - `SYSTEM VIEW`
- `table_rows` (Nullable(UInt64)) — The total number of rows. NULL if it could not be determined.
- `data_length` (Nullable(UInt64)) — The size of the data on-disk. NULL if it could not be determined.
- `index_length` (Nullable(UInt64)) — The total size of the primary key, secondary indexes, and all marks.
- `table_collation` (Nullable(String)) — The table default collation. Always `utf8mb4_0900_ai_ci`.
- `table_comment` (Nullable(String)) — The comment used when creating the table. | {"source_file": "information_schema.md"} | [
0.08175261318683624, -0.0500146821141243, ... ]
51c15e77-2888-486e-b1b4-77a44ffb2855 |
Example
Query:
```sql
SELECT table_catalog,
       table_schema,
       table_name,
       table_type,
       table_collation,
       table_comment
FROM INFORMATION_SCHEMA.TABLES
WHERE (table_schema = currentDatabase() OR table_schema = '')
  AND table_name NOT LIKE '%inner%'
LIMIT 1
FORMAT Vertical;
```
Result:
```text
Row 1:
──────
table_catalog:   default
table_schema:    default
table_name:      describe_example
table_type:      BASE TABLE
table_collation: utf8mb4_0900_ai_ci
table_comment:
```
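The row- and size-related columns lend themselves to a rough per-table storage overview; a minimal sketch:

```sql
-- Rough inventory of tables in the current database.
-- table_rows and data_length are NULL where they cannot be determined.
SELECT
    table_name,
    table_type,
    table_rows,
    data_length
FROM information_schema.tables
WHERE table_schema = currentDatabase()
ORDER BY table_name;
```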
VIEWS {#views}

Contains columns read from the `system.tables` system table when the table engine `View` is used.
Columns:

- `table_catalog` (String) — The name of the database in which the table is located.
- `table_schema` (String) — The name of the database in which the table is located.
- `table_name` (String) — Table name.
- `view_definition` (String) — `SELECT` query for view.
- `check_option` (String) — `NONE`, no checking.
- `is_updatable` (Enum8) — `NO`, the view is not updated.
- `is_insertable_into` (Enum8) — Shows whether the created view is materialized. Possible values:
  - `NO` — The created view is not materialized.
  - `YES` — The created view is materialized.
- `is_trigger_updatable` (Enum8) — `NO`, the trigger is not updated.
- `is_trigger_deletable` (Enum8) — `NO`, the trigger is not deleted.
- `is_trigger_insertable_into` (Enum8) — `NO`, no data is inserted into the trigger.
Example
Query:
```sql
CREATE VIEW v (n Nullable(Int32), f Float64) AS SELECT n, f FROM t;
CREATE MATERIALIZED VIEW mv ENGINE = Null AS SELECT * FROM system.one;
SELECT table_catalog,
       table_schema,
       table_name,
       view_definition,
       check_option,
       is_updatable,
       is_insertable_into,
       is_trigger_updatable,
       is_trigger_deletable,
       is_trigger_insertable_into
FROM information_schema.views
WHERE table_schema = currentDatabase()
LIMIT 1
FORMAT Vertical;
```
Result:
```text
Row 1:
──────
table_catalog:              default
table_schema:               default
table_name:                 mv
view_definition:            SELECT * FROM system.one
check_option:               NONE
is_updatable:               NO
is_insertable_into:         YES
is_trigger_updatable:       NO
is_trigger_deletable:       NO
is_trigger_insertable_into: NO
```
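Since `view_definition` stores the defining `SELECT`, the view can be used to dump the definition of every view in a database; a minimal sketch:

```sql
-- Defining query of each view in the current database;
-- is_insertable_into = 'YES' marks materialized views.
SELECT
    table_name,
    is_insertable_into,
    view_definition
FROM information_schema.views
WHERE table_schema = currentDatabase();
```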
KEY_COLUMN_USAGE {#key_column_usage}

Contains columns from the `system.tables` system table which are restricted by constraints.
constraint_catalog
(
String
) β Currently unused. Always
def
.
constraint_schema
(
String
) β The name of the schema (database) to which the constraint belongs.
constraint_name
(
Nullable
(
String
)) β The name of the constraint.
table_catalog
(
String
) β Currently unused. Always
def
. | {"source_file": "information_schema.md"} | [
0.015808667987585068, -0.07692530751228333, ... ]
04e4947c-0aac-4df4-8260-9d34bb8823e3 |
- `table_schema` (String) — The name of the schema (database) to which the table belongs.
- `table_name` (String) — The name of the table that has the constraint.
- `column_name` (Nullable(String)) — The name of the column that has the constraint.
- `ordinal_position` (UInt32) — Currently unused. Always `1`.
- `position_in_unique_constraint` (Nullable(UInt32)) — Currently unused. Always `NULL`.
- `referenced_table_schema` (Nullable(String)) — Currently unused. Always `NULL`.
- `referenced_table_name` (Nullable(String)) — Currently unused. Always `NULL`.
- `referenced_column_name` (Nullable(String)) — Currently unused. Always `NULL`.
Example
```sql
CREATE TABLE test (i UInt32, s String) ENGINE MergeTree ORDER BY i;
SELECT constraint_catalog,
       constraint_schema,
       constraint_name,
       table_catalog,
       table_schema,
       table_name,
       column_name,
       ordinal_position,
       position_in_unique_constraint,
       referenced_table_schema,
       referenced_table_name,
       referenced_column_name
FROM information_schema.key_column_usage
WHERE table_name = 'test'
FORMAT Vertical;
```
Result:
```response
Row 1:
──────
constraint_catalog:            def
constraint_schema:             default
constraint_name:               PRIMARY
table_catalog:                 def
table_schema:                  default
table_name:                    test
column_name:                   i
ordinal_position:              1
position_in_unique_constraint: ᴺᵁᴸᴸ
referenced_table_schema:       ᴺᵁᴸᴸ
referenced_table_name:         ᴺᵁᴸᴸ
referenced_column_name:        ᴺᵁᴸᴸ
```
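Because ClickHouse reports ordering-key columns under the constraint name `PRIMARY` (as in the example above), the view can list the key columns of each table; a minimal sketch:

```sql
-- Primary key columns per table in the current database.
-- ClickHouse exposes them under constraint_name = 'PRIMARY'.
SELECT
    table_name,
    groupArray(column_name) AS key_columns
FROM information_schema.key_column_usage
WHERE table_schema = currentDatabase()
  AND constraint_name = 'PRIMARY'
GROUP BY table_name;
```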
REFERENTIAL_CONSTRAINTS {#referential_constraints}

Contains information about foreign keys. Currently returns an empty result (no rows), which is just enough to provide compatibility with 3rd party tools like Tableau Online.
Columns:
- `constraint_catalog` (String) — Currently unused.
- `constraint_schema` (String) — Currently unused.
- `constraint_name` (Nullable(String)) — Currently unused.
- `unique_constraint_catalog` (String) — Currently unused.
- `unique_constraint_schema` (String) — Currently unused.
- `unique_constraint_name` (Nullable(String)) — Currently unused.
- `match_option` (String) — Currently unused.
- `update_rule` (String) — Currently unused.
- `delete_rule` (String) — Currently unused.
- `table_name` (String) — Currently unused.
- `referenced_table_name` (String) — Currently unused.
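Since the view always returns zero rows, querying it mainly serves as a compatibility probe, e.g. to verify that a connector's foreign-key introspection works; a minimal sketch:

```sql
-- Always returns an empty result in ClickHouse; the view exists so that
-- cross-DBMS tools can introspect foreign keys without errors.
SELECT constraint_name, table_name, referenced_table_name
FROM information_schema.referential_constraints;
```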
STATISTICS {#statistics}

Provides information about table indexes. Currently returns an empty result (no rows), which is just enough to provide compatibility with 3rd party tools like Tableau Online.

Columns:

- `table_catalog` (String) — Currently unused.
- `table_schema` (String) — Currently unused.
- `table_name` (String) — Currently unused.
- `non_unique` (Int32) — Currently unused. | {"source_file": "information_schema.md"} | [
0.03702115640044212, -0.04228284955024719, ... ]
b5fc8485-6f7e-4da1-b809-7ad37236e8fe |
- `index_schema` (String) — Currently unused.
- `index_name` (Nullable(String)) — Currently unused.
- `seq_in_index` (UInt32) — Currently unused.
- `column_name` (Nullable(String)) — Currently unused.
- `collation` (Nullable(String)) — Currently unused.
- `cardinality` (Nullable(Int64)) — Currently unused.
- `sub_part` (Nullable(Int64)) — Currently unused.
- `packed` (Nullable(String)) — Currently unused.
- `nullable` (String) — Currently unused.
- `index_type` (String) — Currently unused.
- `comment` (String) — Currently unused.
- `index_comment` (String) — Currently unused.
- `is_visible` (String) — Currently unused.
- `expression` (Nullable(String)) — Currently unused. | {"source_file": "information_schema.md"} | [
0.033616650849580765, -0.034400034695863724, ... ]
e2d08868-bc66-4865-b013-75d888784aea | description: 'System table containing information about and status of recent asynchronous jobs (e.g. for tables which are loading). The table contains a row for every job.'
keywords: ['system table', 'asynchronous_loader']
slug: /operations/system-tables/asynchronous_loader
title: 'system.asynchronous_loader'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.asynchronous_loader
Contains information and status for recent asynchronous jobs (e.g. for tables loading). The table contains a row for every job. There is a tool for visualizing information from this table: `utils/async_loader_graph`.
Example:
```sql
SELECT *
FROM system.asynchronous_loader
LIMIT 1
FORMAT Vertical
```
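To focus on jobs that have not finished yet, the output can be filtered by status; a minimal sketch (the status and flag columns are described below):

```sql
-- Pending load jobs, with flags showing whether each one is blocked
-- on dependencies, queued as ready, or already running.
SELECT
    job,
    pool,
    status,
    is_blocked,
    is_ready,
    is_executing,
    dependencies_left
FROM system.asynchronous_loader
WHERE status = 'PENDING'
ORDER BY pool, job;
```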
Columns:

- `job` (String) — Job name (may not be unique).
- `job_id` (UInt64) — Unique ID of the job.
- `dependencies` (Array(UInt64)) — List of IDs of jobs that should be done before this job.
- `dependencies_left` (UInt64) — Current number of dependencies left to be done.
- `status` (Enum8('PENDING' = 0, 'OK' = 1, 'FAILED' = 2, 'CANCELED' = 3)) — Current load status of the job:
  - `PENDING`: the load job has not started yet.
  - `OK`: the load job executed and was successful.
  - `FAILED`: the load job executed and failed.
  - `CANCELED`: the load job is not going to be executed due to removal or dependency failure.
- `is_executing` (UInt8) — The job is currently being executed by a worker.
- `is_blocked` (UInt8) — The job waits for its dependencies to be done.
- `is_ready` (UInt8) — The job is ready to be executed and waits for a worker.
- `elapsed` (Float64) — Seconds elapsed since the start of execution. Zero if the job has not started. Total execution time if the job finished.
- `pool_id` (UInt64) — ID of the pool currently assigned to the job.
- `pool` (String) — Name of the `pool_id` pool.
- `priority` (Int64) — Priority of the `pool_id` pool.
- `execution_pool_id` (UInt64) — ID of the pool the job is executed in. Equals the initially assigned pool before execution starts.
- `execution_pool` (String) — Name of the `execution_pool_id` pool.
- `execution_priority` (Int64) — Priority of the `execution_pool_id` pool.
- `ready_seqno` (Nullable(UInt64)) — Not null for ready jobs. A worker pulls the next job to be executed from the ready queue of its pool. If there are multiple ready jobs, the job with the lowest value of `ready_seqno` is picked.
- `waiters` (UInt64) — The number of threads waiting on this job.
- `exception` (Nullable(String)) — Not null for failed and canceled jobs. Holds the error message raised during query execution, or the error that led to the cancellation of this job, along with the dependency failure chain of job names.
- `schedule_time` (DateTime64(6)) — Time when the job was created and scheduled to be executed (usually together with all its dependencies).
- `enqueue_time` (Nullable(DateTime64(6))) — Time when the job became ready and was enqueued into the ready queue of its pool. Null if the job is not ready yet. | {"source_file": "asynchronous_loader.md"} | [
-0.04317479580640793, 0.002643980085849762, ... ]