id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
1afaf798-7351-4f38-9f65-bc488f0368a6 | - `enqueue_time` (`Nullable(DateTime64(6))`) — Time when the job became ready and was enqueued into the ready queue of its pool. Null if the job is not ready yet.
- `start_time` (`Nullable(DateTime64(6))`) — Time when a worker dequeues the job from the ready queue and starts its execution. Null if the job is not started yet.
- `finish_time` (`Nullable(DateTime64(6))`) — Time when job execution finished. Null if the job is not finished yet.
A pending job might be in one of the following states:
- `is_executing` (`UInt8`) — The job is currently being executed by a worker.
- `is_blocked` (`UInt8`) — The job waits for its dependencies to be done.
- `is_ready` (`UInt8`) — The job is ready to be executed and waits for a worker.
- `elapsed` (`Float64`) — Seconds elapsed since the start of execution. Zero if the job is not started; total execution time if the job finished.
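A quick way to see how pending jobs are distributed across these states (a sketch; it assumes the `system.asynchronous_loader` table is available on your server):

```sql
-- Count pending jobs by state to spot whether work is blocked or starved of workers
SELECT
    sum(is_executing) AS executing,
    sum(is_blocked)   AS blocked,
    sum(is_ready)     AS ready
FROM system.asynchronous_loader;
```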
Every job has a pool associated with it and is started in this pool. Each pool has a constant priority and a mutable maximum number of workers. Higher-priority (lower `priority` value) jobs are run first. No lower-priority job is started while there is at least one higher-priority job ready or executing. Job priority can be elevated (but not lowered) by prioritizing it. For example, jobs for loading and starting up a table are prioritized if an incoming query requires this table. It is possible to prioritize a job during its execution, but the job is not moved from its `execution_pool` to the newly assigned `pool`. The job uses `pool` for creating new jobs to avoid priority inversion. Already started jobs are not preempted by higher-priority jobs and always run to completion after they start.
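For instance, the following sketch shows jobs whose priority was elevated after execution started, i.e. whose currently assigned pool differs from the pool they are executing in (it assumes a `job` name column in addition to the columns listed here):

```sql
SELECT job, pool, priority, execution_pool, execution_priority
FROM system.asynchronous_loader
WHERE pool_id != execution_pool_id
ORDER BY priority ASC;
```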
- `pool_id` (`UInt64`) — ID of the pool currently assigned to the job.
- `pool` (`String`) — Name of the `pool_id` pool.
- `priority` (`Int64`) — Priority of the `pool_id` pool.
- `execution_pool_id` (`UInt64`) — ID of the pool the job is executed in. Equals the initially assigned pool before execution starts.
- `execution_pool` (`String`) — Name of the `execution_pool_id` pool.
- `execution_priority` (`Int64`) — Priority of the `execution_pool_id` pool.
- `ready_seqno` (`Nullable(UInt64)`) — Not null for ready jobs. A worker pulls the next job to be executed from the ready queue of its pool. If there are multiple ready jobs, the job with the lowest value of `ready_seqno` is picked.
- `waiters` (`UInt64`) — The number of threads waiting on this job.
- `exception` (`Nullable(String)`) — Not null for failed and canceled jobs. Holds the error message raised during query execution, or the error leading to the canceling of this job, along with the dependency failure chain of job names. | {"source_file": "asynchronous_loader.md"} | [
-0.09162558615207672,
0.008824552409350872,
-0.0678156241774559,
0.0040550935082137585,
0.020398687571287155,
-0.06767697632312775,
0.00411585159599781,
-0.0462116114795208,
-0.01655346341431141,
0.03950510546565056,
0.04055515304207802,
-0.04880814254283905,
-0.023524224758148193,
-0.0174... |
76d27589-59f9-420e-8278-e9e1973e7aa3 | Time instants during job lifetime:
- `schedule_time` (`DateTime64`) — Time when the job was created and scheduled to be executed (usually together with all its dependencies).
- `enqueue_time` (`Nullable(DateTime64)`) — Time when the job became ready and was enqueued into the ready queue of its pool. Null if the job is not ready yet.
- `start_time` (`Nullable(DateTime64)`) — Time when a worker dequeues the job from the ready queue and starts its execution. Null if the job is not started yet.
- `finish_time` (`Nullable(DateTime64)`) — Time when job execution finished. Null if the job is not finished yet. | {"source_file": "asynchronous_loader.md"} | [
-0.08704002946615219,
0.02808348834514618,
-0.09206841140985489,
0.02027767151594162,
0.04437294602394104,
-0.05534139275550842,
-0.011470003984868526,
-0.027634814381599426,
-0.021864917129278183,
0.025106176733970642,
0.04073704034090042,
-0.022424748167395592,
-0.04868317022919655,
-0.0... |
14fd4477-c342-43fe-a786-d277de20d9c3 | description: 'System table containing a list of database engines supported by the server.'
keywords: ['system table', 'database_engines']
slug: /operations/system-tables/database_engines
title: 'system.database_engines'
doc_type: 'reference'
Contains the list of database engines supported by the server.
This table contains the following columns (the column type is shown in brackets):
`name` (`String`) — The name of the database engine.
Example:
```sql
SELECT *
FROM system.database_engines
WHERE name IN ('Atomic', 'Lazy', 'Ordinary')
```

```text
┌─name─────┐
│ Ordinary │
│ Atomic   │
│ Lazy     │
└──────────┘
``` | {"source_file": "database_engines.md"} | [
0.00010564980038907379,
-0.02555144764482975,
-0.07695493847131729,
0.04281222075223923,
-0.0251948069781065,
-0.04526526853442192,
0.057218775153160095,
0.03899485990405083,
-0.09499279409646988,
-0.01883123628795147,
-0.007184572052210569,
-0.0041830874979496,
0.04527275636792183,
-0.097... |
c46832d3-099f-4f29-868d-a42fc1c0674c | description: 'System table containing information about quotas.'
keywords: ['system table', 'quotas', 'quota']
slug: /operations/system-tables/quotas
title: 'system.quotas'
doc_type: 'reference'
system.quotas
Contains information about
quotas
.
Columns:
- `name` (`String`) — Quota name.
- `id` (`UUID`) — Quota ID.
- `storage` (`String`) — Storage of quotas. Possible values: "users.xml" if the quota is configured in the users.xml file, "disk" if the quota is configured by an SQL query.
- `keys` (`Array(Enum8)`) — The key specifies how the quota should be shared. If two connections use the same quota and key, they share the same amounts of resources. Values:
  - `[]` — All users share the same quota.
  - `['user_name']` — Connections with the same user name share the same quota.
  - `['ip_address']` — Connections from the same IP share the same quota.
  - `['client_key']` — Connections with the same key share the same quota. A key must be explicitly provided by a client. When using `clickhouse-client`, pass a key value in the `--quota_key` parameter, or use the `quota_key` parameter in the client configuration file. When using the HTTP interface, use the `X-ClickHouse-Quota` header.
  - `['user_name', 'client_key']` — Connections with the same `client_key` share the same quota. If a key isn't provided by a client, the quota is tracked for `user_name`.
  - `['client_key', 'ip_address']` — Connections with the same `client_key` share the same quota. If a key isn't provided by a client, the quota is tracked for `ip_address`.
- `durations` (`Array(UInt64)`) — Time interval lengths in seconds.
- `apply_to_all` (`UInt8`) — Logical value showing which users the quota is applied to. Values:
  - `0` — The quota applies to the users specified in `apply_to_list`.
  - `1` — The quota applies to all users except those listed in `apply_to_except`.
- `apply_to_list` (`Array(String)`) — List of user names/roles that the quota should be applied to.
- `apply_to_except` (`Array(String)`) — List of user names/roles that the quota should not apply to.
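As an illustration (a sketch; the values returned depend on the quotas configured on your server), the keying and intervals of all quotas can be inspected like this:

```sql
-- Show how each quota is keyed and over which time intervals it is enforced
SELECT name, keys, durations, apply_to_all
FROM system.quotas
ORDER BY name;
```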
See Also {#see-also}
SHOW QUOTAS | {"source_file": "quotas.md"} | [
-0.022692658007144928,
-0.017096608877182007,
-0.12410169094800949,
-0.01780274137854576,
-0.08922266960144043,
-0.05669461935758591,
0.1108204796910286,
0.032379359006881714,
0.023476403206586838,
0.007118111476302147,
0.04100106284022331,
-0.03826097026467323,
0.11210372298955917,
-0.046... |
48c3a4fc-26e5-41f5-80a8-19aa383995f5 | description: 'System table containing information about detached parts of MergeTree tables'
keywords: ['system table', 'detached_parts']
slug: /operations/system-tables/detached_parts
title: 'system.detached_parts'
doc_type: 'reference'
Contains information about detached parts of MergeTree tables. The `reason` column specifies why the part was detached.
For user-detached parts, the reason is empty. Such parts can be attached with the `ALTER TABLE ATTACH PARTITION|PART` command.
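For example, a user-detached part can be inspected and re-attached like this (a sketch with hypothetical database, table, and part names):

```sql
-- Inspect detached parts and why they were detached
SELECT database, table, partition_id, name, reason
FROM system.detached_parts;

-- Re-attach a user-detached part (hypothetical names)
ALTER TABLE my_db.my_table ATTACH PART 'all_1_1_0';
```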
For the description of other columns, see `system.parts`.
If the part name is invalid, values of some columns may be `NULL`. Such parts can be deleted with `ALTER TABLE DROP DETACHED PART`. | {"source_file": "detached_parts.md"} | [
0.008692466653883457,
-0.018379146233201027,
0.027301624417304993,
0.0014808897394686937,
0.09268108755350113,
-0.108686663210392,
0.054314129054546356,
0.0609407015144825,
0.005872557405382395,
-0.02072146348655224,
0.05040498822927475,
-0.02041269652545452,
0.008626606315374374,
-0.03939... |
1d723b99-c1da-469d-9219-e6119ac88e49 | description: 'System table containing information about the parameters of the request to the ZooKeeper server and the response from it.'
keywords: ['system table', 'zookeeper_log']
slug: /operations/system-tables/zookeeper_log
title: 'system.zookeeper_log'
doc_type: 'reference'
system.zookeeper_log
This table contains information about the parameters of the request to the ZooKeeper server and the response from it.
For requests, only columns with request parameters are filled in, and the remaining columns are filled with default values (`0` or `NULL`). When the response arrives, the data from the response is added to the other columns.
Columns with request parameters:
- `hostname` (`LowCardinality(String)`) — Hostname of the server executing the query.
- `type` (`Enum`) — Event type in the ZooKeeper client. Can have one of the following values:
  - `Request` — The request has been sent.
  - `Response` — The response was received.
  - `Finalize` — The connection is lost, no response was received.
- `event_date` (`Date`) — The date when the event happened.
- `event_time` (`DateTime64`) — The date and time when the event happened.
- `address` (`IPv6`) — IP address of the ZooKeeper server that was used to make the request.
- `port` (`UInt16`) — The port of the ZooKeeper server that was used to make the request.
- `session_id` (`Int64`) — The session ID that the ZooKeeper server sets for each connection.
- `xid` (`Int32`) — The ID of the request within the session. This is usually a sequential request number. It is the same for the request row and the paired `response`/`finalize` row.
- `has_watch` (`UInt8`) — Whether a watch has been set by the request.
- `op_num` (`Enum`) — The type of request or response.
- `path` (`String`) — The path to the ZooKeeper node specified in the request, or an empty string if the request does not require specifying a path.
- `data` (`String`) — The data written to the ZooKeeper node (for `SET` and `CREATE` requests — what the request wanted to write; for the response to a `GET` request — what was read), or an empty string.
- `is_ephemeral` (`UInt8`) — Whether the ZooKeeper node is being created as ephemeral.
- `is_sequential` (`UInt8`) — Whether the ZooKeeper node is being created as sequential.
- `version` (`Nullable(Int32)`) — The version of the ZooKeeper node that the request expects when executing. Supported for `CHECK`, `SET`, and `REMOVE` requests (`-1` if the request does not check the version, `NULL` for other requests that do not support version checking).
- `requests_size` (`UInt32`) — The number of requests included in the multi request (a special request that consists of several consecutive ordinary requests and executes them atomically). All requests included in a multi request have the same `xid`.
- `request_idx` (`UInt32`) — The number of the request included in the multi request (`0` for the multi request itself, then in order from `1`).
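The request/response pairing via `xid` can be put to work directly; for example, the following sketch joins each request row to its response to estimate ZooKeeper round-trip latency (multi-request rows, which share an `xid`, may produce extra matches):

```sql
-- Pair request and response rows by (session_id, xid) and compute latency
SELECT
    req.op_num,
    req.path,
    dateDiff('millisecond', req.event_time, resp.event_time) AS latency_ms
FROM system.zookeeper_log AS req
INNER JOIN system.zookeeper_log AS resp
    ON (req.session_id = resp.session_id) AND (req.xid = resp.xid)
WHERE (req.type = 'Request') AND (resp.type = 'Response')
ORDER BY latency_ms DESC
LIMIT 10;
```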
Columns with request response parameters: | {"source_file": "zookeeper_log.md"} | [
0.015793632715940475,
0.045944537967443466,
-0.05159168690443039,
0.03584924712777138,
0.027850616723299026,
-0.12393596023321152,
0.025432011112570763,
0.024709884077310562,
0.01416013203561306,
0.06345882266759872,
-0.010149448178708553,
-0.07788864523172379,
0.0398387610912323,
0.005936... |
8d5c4e1e-ef83-4ebc-9c83-84e8f79cdf0c | - `request_idx` (`UInt32`) — The number of the request included in the multi request (`0` for the multi request itself, then in order from `1`).
Columns with request response parameters:
- `zxid` (`Int64`) — ZooKeeper transaction ID. The serial number issued by the ZooKeeper server in response to a successfully executed request (`0` if the request was not executed, returned an error, or the client does not know whether the request was executed).
- `error` (`Nullable(Enum)`) — Error code. Can have many values; here are just some of them:
  - `ZOK` — The request was executed successfully.
  - `ZCONNECTIONLOSS` — The connection was lost.
  - `ZOPERATIONTIMEOUT` — The request execution timeout has expired.
  - `ZSESSIONEXPIRED` — The session has expired.
  - `NULL` — The request is completed.
- `watch_type` (`Nullable(Enum)`) — The type of the `watch` event (for responses with `op_num` = `Watch`); `NULL` for the remaining responses.
- `watch_state` (`Nullable(Enum)`) — The status of the `watch` event (for responses with `op_num` = `Watch`); `NULL` for the remaining responses.
- `path_created` (`String`) — The path to the created ZooKeeper node (for responses to the `CREATE` request); may differ from `path` if the node is created as sequential.
- `stat_czxid` (`Int64`) — The `zxid` of the change that caused this ZooKeeper node to be created.
- `stat_mzxid` (`Int64`) — The `zxid` of the change that last modified this ZooKeeper node.
- `stat_pzxid` (`Int64`) — The transaction ID of the change that last modified children of this ZooKeeper node.
- `stat_version` (`Int32`) — The number of changes to the data of this ZooKeeper node.
- `stat_cversion` (`Int32`) — The number of changes to the children of this ZooKeeper node.
- `stat_dataLength` (`Int32`) — The length of the data field of this ZooKeeper node.
- `stat_numChildren` (`Int32`) — The number of children of this ZooKeeper node.
- `children` (`Array(String)`) — The list of child ZooKeeper nodes (for responses to the `LIST` request).
Example
Query:
```sql
SELECT * FROM system.zookeeper_log WHERE (session_id = '106662742089334927') AND (xid = '10858') FORMAT Vertical;
```
Result:
```text
Row 1:
──────
hostname: clickhouse.eu-central1.internal
type: Request
event_date: 2021-08-09
event_time: 2021-08-09 21:38:30.291792
address: ::
port: 2181
session_id: 106662742089334927
xid: 10858
has_watch: 1
op_num: List
path: /clickhouse/task_queue/ddl
data:
is_ephemeral: 0
is_sequential: 0
version: ᴺᵁᴸᴸ
requests_size: 0
request_idx: 0
zxid: 0
error: ᴺᵁᴸᴸ
watch_type: ᴺᵁᴸᴸ
watch_state: ᴺᵁᴸᴸ
path_created:
stat_czxid: 0
stat_mzxid: 0
stat_pzxid: 0
stat_version: 0
stat_cversion: 0
stat_dataLength: 0
stat_numChildren: 0
children: [] | {"source_file": "zookeeper_log.md"} | [
-0.03908335790038109,
0.06236523389816284,
-0.06516771018505096,
0.04228386655449867,
0.029577312991023064,
-0.09488677978515625,
0.012972581200301647,
0.026961874216794968,
0.016915110871195793,
0.03314408287405968,
-0.0441342368721962,
-0.035448748618364334,
0.03698018938302994,
0.020145... |
6616b28a-2231-44b5-9d10-34a533b9b54f | Row 2:
──────
type: Response
event_date: 2021-08-09
event_time: 2021-08-09 21:38:30.292086
address: ::
port: 2181
session_id: 106662742089334927
xid: 10858
has_watch: 1
op_num: List
path: /clickhouse/task_queue/ddl
data:
is_ephemeral: 0
is_sequential: 0
version: ᴺᵁᴸᴸ
requests_size: 0
request_idx: 0
zxid: 16926267
error: ZOK
watch_type: ᴺᵁᴸᴸ
watch_state: ᴺᵁᴸᴸ
path_created:
stat_czxid: 16925469
stat_mzxid: 16925469
stat_pzxid: 16926179
stat_version: 0
stat_cversion: 7
stat_dataLength: 0
stat_numChildren: 7
children: ['query-0000000006','query-0000000005','query-0000000004','query-0000000003','query-0000000002','query-0000000001','query-0000000000']
```
See Also
ZooKeeper
ZooKeeper guide | {"source_file": "zookeeper_log.md"} | [
-0.0922476202249527,
0.041610825806856155,
-0.08542279899120331,
0.031046632677316666,
0.0037425816990435123,
-0.11521470546722412,
0.0249163880944252,
-0.02920067124068737,
0.03228551894426346,
0.04039068892598152,
0.000952563772443682,
-0.08552427589893341,
-0.014716299250721931,
-0.0141... |
579e6ddf-87f7-4698-bc27-ead3b3797b5e | description: 'System table containing information about memory allocations done via jemalloc allocator in different size classes (bins) aggregated from all arenas.'
keywords: ['system table', 'jemalloc_bins']
slug: /operations/system-tables/jemalloc_bins
title: 'system.jemalloc_bins'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains information about memory allocations done via jemalloc allocator in different size classes (bins) aggregated from all arenas.
These statistics might not be absolutely accurate because of thread local caching in jemalloc.
Columns:
- `index` (`UInt16`) — Index of the bin ordered by size.
- `large` (`UInt8`) — True for large allocations and False for small.
- `size` (`UInt64`) — Size of allocations in this bin.
- `allocations` (`Int64`) — Number of allocations.
- `deallocations` (`Int64`) — Number of deallocations.
Example
Find the sizes of allocations that contributed the most to the current overall memory usage.
```sql
SELECT
    *,
    allocations - deallocations AS active_allocations,
    size * active_allocations AS allocated_bytes
FROM system.jemalloc_bins
WHERE allocated_bytes > 0
ORDER BY allocated_bytes DESC
LIMIT 10
```
```text
┌─index─┬─large─┬─────size─┬─allocations─┬─deallocations─┬─active_allocations─┬─allocated_bytes─┐
│    82 │     1 │ 50331648 │           1 │             0 │                  1 │        50331648 │
│    10 │     0 │      192 │      512336 │        370710 │             141626 │        27192192 │
│    69 │     1 │  5242880 │           6 │             2 │                  4 │        20971520 │
│     3 │     0 │       48 │    16938224 │      16559484 │             378740 │        18179520 │
│    28 │     0 │     4096 │      122924 │        119142 │               3782 │        15491072 │
│    61 │     1 │  1310720 │       44569 │         44558 │                 11 │        14417920 │
│    39 │     1 │    28672 │        1285 │           913 │                372 │        10665984 │
│     4 │     0 │       64 │     2837225 │       2680568 │             156657 │        10026048 │
│     6 │     0 │       96 │     2617803 │       2531435 │              86368 │         8291328 │
│    36 │     1 │    16384 │       22431 │         21970 │                461 │         7553024 │
└───────┴───────┴──────────┴─────────────┴───────────────┴────────────────────┴─────────────────┘
``` | {"source_file": "jemalloc_bins.md"} | [
0.1011945977807045,
-0.037539560347795486,
-0.07890132069587708,
0.00004030220225104131,
0.0238653477281332,
-0.05803113430738449,
0.024341993033885956,
0.03299205005168915,
-0.005600361619144678,
0.12786100804805756,
-0.04989004507660866,
0.006754277739673853,
0.05178504437208176,
-0.0695... |
d7c2f21d-6e60-44e3-ae5e-489986704bd9 | description: 'System table containing information about cached DNS records.'
keywords: ['system table', 'dns_cache']
slug: /operations/system-tables/dns_cache
title: 'system.dns_cache'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains information about cached DNS records.
Columns:
- `hostname` (`String`) — Hostname.
- `ip_address` (`String`) — IP address.
- `ip_family` (`Enum8('IPv4' = 0, 'IPv6' = 1, 'UNIX_LOCAL' = 2)`) — IP address family.
- `cached_at` (`DateTime`) — Timestamp when the record was cached.
Example
Query:
```sql
SELECT * FROM system.dns_cache;
```
Result:
| hostname | ip_address | ip_family | cached_at |
| :--- | :--- | :--- | :--- |
| localhost | ::1 | IPv6 | 2024-02-11 17:04:40 |
| localhost | 127.0.0.1 | IPv4 | 2024-02-11 17:04:40 |
See also
disable_internal_dns_cache setting
dns_cache_max_entries setting
dns_cache_update_period setting
dns_max_consecutive_failures setting | {"source_file": "dns_cache.md"} | [
0.011496983468532562,
0.02048197202384472,
-0.04134527966380119,
-0.05318758264183998,
-0.02028515562415123,
-0.11372897773981094,
0.024887774139642715,
-0.03084058314561844,
0.03497185930609703,
0.07953125238418579,
-0.024636412039399147,
-0.005053060594946146,
0.0315670482814312,
-0.1315... |
b1b01a17-acf6-4558-9d04-2a724d2a27d4 | description: 'System table containing information about threads that execute queries, for example, thread name, thread start time, duration of query processing.'
keywords: ['system table', 'query_thread_log']
slug: /operations/system-tables/query_thread_log
title: 'system.query_thread_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.query_thread_log
Contains information about threads that execute queries, for example, thread name, thread start time, duration of query processing.
To start logging:
1. Configure parameters in the `query_thread_log` section.
2. Set `log_query_threads` to 1.
The flushing period of data is set in the `flush_interval_milliseconds` parameter of the `query_thread_log` server settings section. To force flushing, use the `SYSTEM FLUSH LOGS` query.
ClickHouse does not delete data from the table automatically. See Introduction for more details.
You can use the `log_queries_probability` setting to reduce the number of queries registered in the `query_thread_log` table.
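For example, logging can be enabled for a session and the table flushed on demand (a sketch; the server-side `query_thread_log` section must also be configured):

```sql
SET log_query_threads = 1;
-- ... run the queries you want to profile ...
SYSTEM FLUSH LOGS;
SELECT thread_name, query_duration_ms, peak_memory_usage
FROM system.query_thread_log
ORDER BY event_time DESC
LIMIT 5;
```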
Columns:
- `hostname` (`LowCardinality(String)`) — Hostname of the server executing the query.
- `event_date` (`Date`) — The date when the thread finished execution of the query.
- `event_time` (`DateTime`) — The date and time when the thread finished execution of the query.
- `event_time_microseconds` (`DateTime`) — The date and time when the thread finished execution of the query, with microseconds precision.
- `query_start_time` (`DateTime`) — Start time of query execution.
- `query_start_time_microseconds` (`DateTime64`) — Start time of query execution with microsecond precision.
- `query_duration_ms` (`UInt64`) — Duration of query execution.
- `read_rows` (`UInt64`) — Number of read rows.
- `read_bytes` (`UInt64`) — Number of read bytes.
- `written_rows` (`UInt64`) — For `INSERT` queries, the number of written rows. For other queries, the column value is 0.
- `written_bytes` (`UInt64`) — For `INSERT` queries, the number of written bytes. For other queries, the column value is 0.
- `memory_usage` (`Int64`) — The difference between the amount of allocated and freed memory in the context of this thread.
- `peak_memory_usage` (`Int64`) — The maximum difference between the amount of allocated and freed memory in the context of this thread.
- `thread_name` (`String`) — Name of the thread.
- `thread_id` (`UInt64`) — OS thread ID.
- `master_thread_id` (`UInt64`) — OS thread ID of the initial thread.
- `query` (`String`) — Query string.
- `is_initial_query` (`UInt8`) — Query type. Possible values:
  - 1 — Query was initiated by the client.
  - 0 — Query was initiated by another query for distributed query execution.
- `user` (`String`) — Name of the user who initiated the current query.
- `query_id` (`String`) — ID of the query.
- `address` (`IPv6`) — IP address that was used to make the query.
- `port` (`UInt16`) — The client port that was used to make the query. | {"source_file": "query_thread_log.md"} | [
0.023624101653695107,
-0.05471379682421684,
-0.05381530523300171,
0.00403248704969883,
-0.041921231895685196,
-0.10431670397520065,
0.05927727743983269,
-0.09201326221227646,
0.035395439714193344,
0.10828611999750137,
0.005503478460013866,
-0.015222699381411076,
0.07066506147384644,
-0.089... |
78a32082-db92-408c-908a-f3d9d33c08d9 | - `query_id` (`String`) — ID of the query.
- `address` (`IPv6`) — IP address that was used to make the query.
- `port` (`UInt16`) — The client port that was used to make the query.
- `initial_user` (`String`) — Name of the user who ran the initial query (for distributed query execution).
- `initial_query_id` (`String`) — ID of the initial query (for distributed query execution).
- `initial_address` (`IPv6`) — IP address that the parent query was launched from.
- `initial_port` (`UInt16`) — The client port that was used to make the parent query.
- `interface` (`UInt8`) — Interface that the query was initiated from. Possible values:
  - 1 — TCP.
  - 2 — HTTP.
- `os_user` (`String`) — Username of the OS user who runs `clickhouse-client`.
- `client_hostname` (`String`) — Hostname of the client machine where the `clickhouse-client` or another TCP client is run.
- `client_name` (`String`) — The `clickhouse-client` or another TCP client name.
- `client_revision` (`UInt32`) — Revision of the `clickhouse-client` or another TCP client.
- `client_version_major` (`UInt32`) — Major version of the `clickhouse-client` or another TCP client.
- `client_version_minor` (`UInt32`) — Minor version of the `clickhouse-client` or another TCP client.
- `client_version_patch` (`UInt32`) — Patch component of the `clickhouse-client` or another TCP client version.
- `http_method` (`UInt8`) — HTTP method that initiated the query. Possible values:
  - 0 — The query was launched from the TCP interface.
  - 1 — The `GET` method was used.
  - 2 — The `POST` method was used.
- `http_user_agent` (`String`) — The `UserAgent` header passed in the HTTP request.
- `quota_key` (`String`) — The "quota key" specified in the quotas setting (see `keyed`).
- `revision` (`UInt32`) — ClickHouse revision.
- `ProfileEvents` (`Map(String, UInt64)`) — ProfileEvents that measure different metrics for this thread. Their descriptions can be found in the table `system.events`.
Example

```sql
SELECT * FROM system.query_thread_log LIMIT 1 \G
``` | {"source_file": "query_thread_log.md"} | [
-0.0601053424179554,
0.04693177714943886,
-0.03594056889414787,
0.01262458972632885,
-0.079056516289711,
-0.09972906857728958,
-0.003921130672097206,
-0.017537029460072517,
-0.04325246438384056,
-0.026366515085101128,
-0.01880168728530407,
0.0157233327627182,
0.05900716781616211,
-0.095065... |
46103ad1-58c1-4fd0-9ec6-2e03171253e1 | Example
```sql
SELECT * FROM system.query_thread_log LIMIT 1 \G
```
Result:
Row 1:
──────
hostname: clickhouse.eu-central1.internal
event_date: 2020-09-11
event_time: 2020-09-11 10:08:17
event_time_microseconds: 2020-09-11 10:08:17.134042
query_start_time: 2020-09-11 10:08:17
query_start_time_microseconds: 2020-09-11 10:08:17.063150
query_duration_ms: 70
read_rows: 0
read_bytes: 0
written_rows: 1
written_bytes: 12
memory_usage: 4300844
peak_memory_usage: 4300844
thread_name: TCPHandler
thread_id: 638133
master_thread_id: 638133
query: INSERT INTO test1 VALUES
is_initial_query: 1
user: default
query_id: 50a320fd-85a8-49b8-8761-98a86bcbacef
address: ::ffff:127.0.0.1
port: 33452
initial_user: default
initial_query_id: 50a320fd-85a8-49b8-8761-98a86bcbacef
initial_address: ::ffff:127.0.0.1
initial_port: 33452
interface: 1
os_user: bharatnc
client_hostname: tower
client_name: ClickHouse
client_revision: 54437
client_version_major: 20
client_version_minor: 7
client_version_patch: 2
http_method: 0
http_user_agent:
quota_key:
revision: 54440
ProfileEvents: {'Query':1,'SelectQuery':1,'ReadCompressedBytes':36,'CompressedReadBufferBlocks':1,'CompressedReadBufferBytes':10,'IOBufferAllocs':1,'IOBufferAllocBytes':89,'ContextLock':15,'RWLockAcquiredReadLocks':1}
See Also
- `system.query_log` — Description of the `query_log` system table, which contains common information about query execution.
- `system.query_views_log` — Contains information about each view executed during a query. | {"source_file": "query_thread_log.md"} | [
0.017156442627310753,
-0.0178409181535244,
-0.06672992557287216,
0.07573823630809784,
-0.05750739574432373,
-0.11177805811166763,
0.04817601665854454,
0.012529972940683365,
0.010871278122067451,
0.054698921740055084,
-0.014540561474859715,
-0.03884830325841904,
0.03169877454638481,
-0.0842... |
650255e3-f9d8-4b9e-80e1-bb1e15dcb4f3 | description: 'System table containing information about merges and part mutations currently in process for tables in the MergeTree family.'
keywords: ['system table', 'merges']
slug: /operations/system-tables/merges
title: 'system.merges'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.merges
Contains information about merges and part mutations currently in process for tables in the MergeTree family.
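A typical monitoring query over this table (a sketch) is:

```sql
-- Show running merges and mutations, longest-running first
SELECT
    database,
    table,
    round(progress * 100, 1) AS progress_pct,
    elapsed,
    is_mutation,
    formatReadableSize(total_size_bytes_compressed) AS size
FROM system.merges
ORDER BY elapsed DESC;
```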
Columns:
- `database` (`String`) — The name of the database the table is in.
- `table` (`String`) — Table name.
- `elapsed` (`Float64`) — The time elapsed (in seconds) since the merge started.
- `progress` (`Float64`) — The percentage of completed work from 0 to 1.
- `num_parts` (`UInt64`) — The number of parts to be merged.
- `source_part_names` (`Array(String)`) — The list of source part names.
- `result_part_name` (`String`) — The name of the part that will be formed as the result of merging.
- `source_part_paths` (`Array(String)`) — The list of paths for each source part.
- `result_part_path` (`String`) — The path of the part that will be formed as the result of merging.
- `partition_id` (`String`) — The identifier of the partition where the merge is happening.
- `partition` (`String`) — The name of the partition.
- `is_mutation` (`UInt8`) — 1 if this process is a part mutation.
- `total_size_bytes_compressed` (`UInt64`) — The total size of the compressed data in the merged chunks.
- `total_size_bytes_uncompressed` (`UInt64`) — The total size of the uncompressed data in the merged chunks.
- `total_size_marks` (`UInt64`) — The total number of marks in the merged parts.
- `bytes_read_uncompressed` (`UInt64`) — Number of bytes read, uncompressed.
- `rows_read` (`UInt64`) — Number of rows read.
- `bytes_written_uncompressed` (`UInt64`) — Number of bytes written, uncompressed.
- `rows_written` (`UInt64`) — Number of rows written.
- `columns_written` (`UInt64`) — Number of columns written (for the Vertical merge algorithm).
- `memory_usage` (`UInt64`) — Memory consumption of the merge process.
- `thread_id` (`UInt64`) — Thread ID of the merge process.
- `merge_type` (`String`) — The type of the current merge. Empty if it is a mutation.
- `merge_algorithm` (`String`) — The algorithm used in the current merge. Empty if it is a mutation. | {"source_file": "merges.md"} | [
-0.005658737849444151,
-0.025553982704877853,
-0.04136516526341438,
-0.046152181923389435,
0.025180185213685036,
-0.14549772441387177,
-0.001704837311990559,
0.05230187252163887,
-0.0038878978230059147,
0.0630352795124054,
0.0457281619310379,
-0.009117670357227325,
0.06622499972581863,
-0.... |
d2a1c727-4d33-40e0-afb4-8ce76dfdfdbe | description: 'System table containing a history of memory and metric values from table system.events for individual queries, periodically flushed to disk.'
keywords: ['system table', 'query_metric_log']
slug: /operations/system-tables/query_metric_log
title: 'system.query_metric_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.query_metric_log
Contains a history of memory and metric values from table
system.events
for individual queries, periodically flushed to disk.
Once a query starts, data is collected at periodic intervals of `query_metric_log_interval` milliseconds (set to 1000 by default). The data is also collected when the query finishes if the query takes longer than `query_metric_log_interval`.
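The collection interval can be tuned per session (a sketch; a finer interval produces more rows per query):

```sql
SET query_metric_log_interval = 500;  -- collect metrics every 500 ms for subsequent queries
```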
Columns:
- `query_id` (`String`) — ID of the query.
- `hostname` (`LowCardinality(String)`) — Hostname of the server executing the query.
- `event_date` (`Date`) — Event date.
- `event_time` (`DateTime`) — Event time.
- `event_time_microseconds` (`DateTime64`) — Event time with microseconds resolution.
Example
```sql
SELECT * FROM system.query_metric_log LIMIT 1 FORMAT Vertical;
```
Result:
Row 1:
──────
query_id: 97c8ba04-b6d4-4bd7-b13e-6201c5c6e49d
hostname: clickhouse.eu-central1.internal
event_date: 2020-09-05
event_time: 2020-09-05 16:22:33
event_time_microseconds: 2020-09-05 16:22:33.196807
memory_usage: 313434219
peak_memory_usage: 598951986
ProfileEvent_Query: 0
ProfileEvent_SelectQuery: 0
ProfileEvent_InsertQuery: 0
ProfileEvent_FailedQuery: 0
ProfileEvent_FailedSelectQuery: 0
...
See also
- query_metric_log setting — Enabling and disabling the setting.
- query_metric_log_interval
- system.asynchronous_metrics — Contains periodically calculated metrics.
- system.events — Contains a number of events that occurred.
- system.metrics — Contains instantly calculated metrics.
- Monitoring — Base concepts of ClickHouse monitoring. | {"source_file": "query_metric_log.md"} | [
0.041350774466991425,
-0.018907684832811356,
-0.08000187575817108,
0.05427236109972,
-0.020210320129990578,
-0.1159559115767479,
0.04495345056056976,
-0.0047317370772361755,
0.09841753542423248,
0.06846432387828827,
0.0037419272121042013,
0.010428084060549736,
0.07995278388261795,
-0.07816... |
8336bad2-efac-461c-bd89-d762a306928a | description: 'System table containing information about settings of AzureQueue tables. Available from server version 24.10.'
keywords: ['system table', 'azure_queue_settings']
slug: /operations/system-tables/azure_queue_settings
title: 'system.azure_queue_settings'
doc_type: 'reference'
Contains information about settings of AzureQueue tables. Available from server version 24.10.
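Settings for a particular table can be inspected like this (a sketch with hypothetical database and table names):

```sql
SELECT name, value, changed, alterable
FROM system.azure_queue_settings
WHERE (database = 'my_db') AND (table = 'my_azure_queue_table');
```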
Columns:
- `database` (`String`) — Database of the table with the AzureQueue engine.
- `table` (`String`) — Name of the table with the AzureQueue engine.
- `name` (`String`) — Setting name.
- `value` (`String`) — Setting value.
- `type` (`String`) — Setting type (implementation-specific string value).
- `changed` (`UInt8`) — 1 if the setting was explicitly defined in the config or explicitly changed.
- `description` (`String`) — Setting description.
- `alterable` (`UInt8`) — Shows whether the current user can change the setting via `ALTER TABLE MODIFY SETTING`: 0 — current user can change the setting, 1 — current user can't change the setting. | {"source_file": "azure_queue_settings.md"} | [
-0.017269661650061607,
-0.03548121079802513,
-0.10778015106916428,
0.040830742567777634,
-0.11073397845029831,
0.022441061213612556,
0.10298239439725876,
-0.06510519236326218,
-0.026202231645584106,
0.08696314692497253,
0.009875930845737457,
-0.05753012374043465,
0.08792880922555923,
-0.04... |
1e3beb15-ed6a-4253-9256-e33f0eb9f34a | description: 'System iceberg snapshot history'
keywords: ['system iceberg_history']
slug: /operations/system-tables/iceberg_history
title: 'system.iceberg_history'
doc_type: 'reference'
system.iceberg_history
This system table contains the snapshot history of Iceberg tables existing in ClickHouse. It is empty if there are no Iceberg tables in ClickHouse.
Columns:
- `database` (String) — Database name.
- `table` (String) — Table name.
- `made_current_at` (Nullable(DateTime64(3))) — Date and time when this snapshot was made the current snapshot.
- `snapshot_id` (UInt64) — Snapshot id which is used to identify a snapshot.
- `parent_id` (UInt64) — Parent id of this snapshot.
- `is_current_ancestor` (UInt8) — Flag that indicates if this snapshot is an ancestor of the current snapshot. | {"source_file": "iceberg_history.md"} | [
-0.04139731824398041,
-0.0223714467138052,
-0.08149215579032898,
0.0011174922110512853,
0.053907766938209534,
-0.05990244448184967,
-0.044944021850824356,
0.01731172204017639,
-0.019577380269765854,
-0.008081036619842052,
0.04747391864657402,
0.006502453237771988,
0.04724749177694321,
-0.0... |
c613762f-63a1-40e8-8fcd-49f0b4b4ce3b | description: 'System table containing information about stack traces for fatal errors.'
keywords: ['system table', 'crash_log']
slug: /operations/system-tables/crash_log
title: 'system.crash_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains information about stack traces for fatal errors. The table does not exist in the database by default; it is created only when fatal errors occur.
Columns:
- `hostname` (LowCardinality(String)) — Hostname of the server executing the query.
- `event_date` (DateTime) — Date of the event.
- `event_time` (DateTime) — Time of the event.
- `timestamp_ns` (UInt64) — Timestamp of the event with nanoseconds.
- `signal` (Int32) — Signal number.
- `thread_id` (UInt64) — Thread ID.
- `query_id` (String) — Query ID.
- `trace` (Array(UInt64)) — Stack trace at the moment of crash. Each element is a virtual memory address inside ClickHouse server process.
- `trace_full` (Array(String)) — Stack trace at the moment of crash. Each element contains a called method inside ClickHouse server process.
- `version` (String) — ClickHouse server version.
- `revision` (UInt32) — ClickHouse server revision.
- `build_id` (String) — BuildID that is generated by compiler.
Example

Query:

```sql
SELECT * FROM system.crash_log ORDER BY event_time DESC LIMIT 1;
```

Result (not full):

```text
Row 1:
──────
hostname: clickhouse.eu-central1.internal
event_date: 2020-10-14
event_time: 2020-10-14 15:47:40
timestamp_ns: 1602679660271312710
signal: 11
thread_id: 23624
query_id: 428aab7c-8f5c-44e9-9607-d16b44467e69
trace: [188531193,...]
trace_full: ['3. DB::(anonymous namespace)::FunctionFormatReadableTimeDelta::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) const @ 0xb3cc1f9 in /home/username/work/ClickHouse/build/programs/clickhouse',...]
version: ClickHouse 20.11.1.1
revision: 54442
build_id:
```
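The raw addresses in `trace` can also be symbolized on a live server with the introspection functions, similar to what `trace_full` shows; a sketch, assuming introspection functions are enabled for the current user:

```sql
SELECT arrayMap(x -> demangle(addressToSymbol(x)), trace) AS symbols
FROM system.crash_log
ORDER BY event_time DESC
LIMIT 1
SETTINGS allow_introspection_functions = 1
```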
See also
- `trace_log` system table | {"source_file": "crash_log.md"} | [
0.06675341725349426,
-0.070741206407547,
-0.04995076358318329,
-0.037167832255363464,
0.02511223964393139,
-0.0959448590874672,
0.016730360686779022,
0.06914819031953812,
0.01783987507224083,
0.11020269989967346,
0.003861964214593172,
-0.02483116276562214,
0.08285840600728989,
-0.101592697... |
80bd41ec-3154-4b1e-beda-6ce800283eb9 | description: 'System table containing information about all successful and failed login and logout events.'
keywords: ['system table', 'session_log']
slug: /operations/system-tables/session_log
title: 'system.session_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.session_log
Contains information about all successful and failed login and logout events.
Columns:
- `hostname` (LowCardinality(String)) — Hostname of the server executing the query.
- `type` (Enum8) — Login/logout result. Possible values:
  - `LoginFailure` — Login error.
  - `LoginSuccess` — Successful login.
  - `Logout` — Logout from the system.
- `auth_id` (UUID) — Authentication ID, a UUID that is automatically generated each time a user logs in.
- `session_id` (String) — Session ID that is passed by the client via the HTTP interface.
- `event_date` (Date) — Login/logout date.
- `event_time` (DateTime) — Login/logout time.
- `event_time_microseconds` (DateTime64) — Login/logout starting time with microseconds precision.
- `user` (String) — User name.
- `auth_type` (Enum8) — The authentication type. Possible values: `NO_PASSWORD`, `PLAINTEXT_PASSWORD`, `SHA256_PASSWORD`, `DOUBLE_SHA1_PASSWORD`, `LDAP`, `KERBEROS`, `SSL_CERTIFICATE`.
- `profiles` (Array(LowCardinality(String))) — The list of profiles set for all roles and/or users.
- `roles` (Array(LowCardinality(String))) — The list of roles to which the profile is applied.
- `settings` (Array(Tuple(LowCardinality(String), String))) — Settings that were changed when the client logged in/out.
- `client_address` (IPv6) — The IP address that was used to log in/out.
- `client_port` (UInt16) — The client port that was used to log in/out.
- `interface` (Enum8) — The interface from which the login was initiated. Possible values: `TCP`, `HTTP`, `gRPC`, `MySQL`, `PostgreSQL`.
- `client_hostname` (String) — The hostname of the client machine where the `clickhouse-client` or another TCP client is run.
- `client_name` (String) — The `clickhouse-client` or another TCP client name.
- `client_revision` (UInt32) — Revision of the `clickhouse-client` or another TCP client.
- `client_version_major` (UInt32) — The major version of the `clickhouse-client` or another TCP client.
- `client_version_minor` (UInt32) — The minor version of the `clickhouse-client` or another TCP client.
- `client_version_patch` (UInt32) — Patch component of the `clickhouse-client` or another TCP client version.
- `failure_reason` (String) — The exception message containing the reason for the login/logout failure.
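For instance, these columns make it straightforward to audit recent authentication failures; a minimal sketch:

```sql
SELECT user, client_address, event_time, failure_reason
FROM system.session_log
WHERE type = 'LoginFailure' AND event_date >= today() - 1
ORDER BY event_time DESC
LIMIT 10
```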
Example

Query:

```sql
SELECT * FROM system.session_log LIMIT 1 FORMAT Vertical;
```

Result: | {"source_file": "session_log.md"} | [
0.0718863233923912,
0.06742938607931137,
-0.06122324615716934,
-0.05367313697934151,
0.02437850460410118,
-0.06703785061836243,
0.08374574035406113,
0.05543516203761101,
0.04649706929922104,
0.05894925817847252,
-0.0006167399697005749,
-0.04391653835773468,
0.13725849986076355,
-0.02641585... |
4cc4cf6a-deb4-4ea9-ace5-1d3908601e47 | failure_reason
(
String
) β The exception message containing the reason for the login/logout failure.
Example

Query:

```sql
SELECT * FROM system.session_log LIMIT 1 FORMAT Vertical;
```

Result:

Row 1:
──────
hostname: clickhouse.eu-central1.internal
type: LoginSuccess
auth_id: 45e6bd83-b4aa-4a23-85e6-bd83b4aa1a23
session_id:
event_date: 2021-10-14
event_time: 2021-10-14 20:33:52
event_time_microseconds: 2021-10-14 20:33:52.104247
user: default
auth_type: PLAINTEXT_PASSWORD
profiles: ['default']
roles: []
settings: [('load_balancing','random'),('max_memory_usage','10000000000')]
client_address: ::ffff:127.0.0.1
client_port: 38490
interface: TCP
client_hostname:
client_name: ClickHouse client
client_revision: 54449
client_version_major: 21
client_version_minor: 10
client_version_patch: 0
failure_reason: | {"source_file": "session_log.md"} | [
0.06653565913438797,
0.07454236596822739,
-0.044709231704473495,
-0.011414327658712864,
0.009972714819014072,
-0.04861077666282654,
0.10318988561630249,
0.03760988637804985,
0.019720399752259254,
0.002647374989464879,
-0.006193255539983511,
-0.04742567613720894,
0.06534545868635178,
0.0488... |
f8a8a391-881b-405b-8e9e-0660cd20bccf | description: 'System table containing information about and status of scheduling nodes residing on the local server.'
keywords: ['system table', 'scheduler']
slug: /operations/system-tables/scheduler
title: 'system.scheduler'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.scheduler
Contains information about and status of scheduling nodes residing on the local server.
This table can be used for monitoring. The table contains a row for every scheduling node.
Example:

```sql
SELECT *
FROM system.scheduler
WHERE resource = 'network_read' AND path = '/prio/fair/prod'
FORMAT Vertical
```

```text
Row 1:
──────
resource: network_read
path: /prio/fair/prod
type: fifo
weight: 5
priority: 0
is_active: 0
active_children: 0
dequeued_requests: 67
canceled_requests: 0
dequeued_cost: 4692272
canceled_cost: 0
busy_periods: 63
vruntime: 938454.1999999989
system_vruntime: ᴺᵁᴸᴸ
queue_length: 0
queue_cost: 0
budget: -60524
is_satisfied: ᴺᵁᴸᴸ
inflight_requests: ᴺᵁᴸᴸ
inflight_cost: ᴺᵁᴸᴸ
max_requests: ᴺᵁᴸᴸ
max_cost: ᴺᵁᴸᴸ
max_speed: ᴺᵁᴸᴸ
max_burst: ᴺᵁᴸᴸ
throttling_us: ᴺᵁᴸᴸ
tokens: ᴺᵁᴸᴸ
```
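A related monitoring query, sketched here under no particular hierarchy, is to look for backed-up fifo queues across all resources:

```sql
SELECT resource, path, queue_length, queue_cost
FROM system.scheduler
WHERE type = 'fifo' AND queue_length > 0
ORDER BY queue_cost DESC
```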
Columns:
- `resource` (String) — Resource name.
- `path` (String) — Path to a scheduling node within this resource scheduling hierarchy.
- `type` (String) — Type of a scheduling node.
- `weight` (Float64) — Weight of a node, used by a parent node of `fair` type.
- `priority` (Int64) — Priority of a node, used by a parent node of 'priority' type (lower value means higher priority).
- `is_active` (UInt8) — Whether this node is currently active - has resource requests to be dequeued and constraints satisfied.
- `active_children` (UInt64) — The number of children in active state.
- `dequeued_requests` (UInt64) — The total number of resource requests dequeued from this node.
- `canceled_requests` (UInt64) — The total number of resource requests canceled from this node.
- `dequeued_cost` (Int64) — The sum of costs (e.g. size in bytes) of all requests dequeued from this node.
- `throughput` (Float64) — Current average throughput (dequeued cost per second).
- `canceled_cost` (Int64) — The sum of costs (e.g. size in bytes) of all requests canceled from this node.
- `busy_periods` (UInt64) — The total number of deactivations of this node.
- `vruntime` (Nullable(Float64)) — For children of `fair` nodes only. Virtual runtime of a node used by the SFQ algorithm to select the next child to process in a max-min fair manner.
- `system_vruntime` (Nullable(Float64)) — For `fair` nodes only. Virtual runtime showing `vruntime` of the last processed resource request. Used during child activation as the new value of `vruntime`.
- `queue_length` (Nullable(UInt64)) — For `fifo` nodes only. Current number of resource requests residing in the queue. | {"source_file": "scheduler.md"} | [
0.022137636318802834,
0.0031802912708371878,
-0.06009885296225548,
0.06463716924190521,
0.05892356112599373,
-0.07713671028614044,
0.0469449907541275,
-0.0498693585395813,
-0.02949473261833191,
0.11604184657335281,
-0.045109823346138,
-0.016591046005487442,
0.045930925756692886,
-0.0661520... |
44546a04-c4e2-42a4-8f05-e8fe982cae70 | - `queue_length` (Nullable(UInt64)) — For `fifo` nodes only. Current number of resource requests residing in the queue.
- `queue_cost` (Nullable(Int64)) — For `fifo` nodes only. Sum of costs (e.g. size in bytes) of all requests residing in the queue.
- `budget` (Nullable(Int64)) — For `fifo` nodes only. The number of available 'cost units' for new resource requests. Can appear in case of discrepancy of estimated and real costs of resource requests (e.g. after a read/write failure).
- `is_satisfied` (Nullable(UInt8)) — For constraint nodes only (e.g. `inflight_limit`). Equals `1` if all the constraints of this node are satisfied.
- `inflight_requests` (Nullable(Int64)) — For `inflight_limit` nodes only. The number of resource requests dequeued from this node that are currently in consumption state.
- `inflight_cost` (Nullable(Int64)) — For `inflight_limit` nodes only. The sum of costs (e.g. bytes) of all resource requests dequeued from this node that are currently in consumption state.
- `max_requests` (Nullable(Int64)) — For `inflight_limit` nodes only. Upper limit for `inflight_requests` leading to constraint violation.
- `max_cost` (Nullable(Int64)) — For `inflight_limit` nodes only. Upper limit for `inflight_cost` leading to constraint violation.
- `max_speed` (Nullable(Float64)) — For `bandwidth_limit` nodes only. Upper limit for bandwidth in tokens per second.
- `max_burst` (Nullable(Float64)) — For `bandwidth_limit` nodes only. Upper limit for tokens available in the token-bucket throttler.
- `throttling_us` (Nullable(Int64)) — For `bandwidth_limit` nodes only. Total number of microseconds this node was in throttling state.
- `tokens` (Nullable(Float64)) — For `bandwidth_limit` nodes only. Number of tokens currently available in the token-bucket throttler. | {"source_file": "scheduler.md"} | [
-0.014175071381032467,
0.0120398486033082,
-0.08307715505361557,
0.05573517456650734,
0.036267805844545364,
-0.06987471133470535,
0.041197873651981354,
0.010443359613418579,
0.015073820017278194,
0.11822876334190369,
-0.015339423902332783,
-0.05064305290579796,
-0.0063627599738538265,
-0.0... |
724ad981-dcb2-48e4-b13b-8126b2d6b973 | description: 'System table containing error codes with the number of times they have been triggered.'
keywords: ['system table', 'errors']
slug: /operations/system-tables/errors
title: 'system.errors'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains error codes with the number of times they have been triggered.
To show all possible error codes, including ones which were not triggered, set the setting `system_events_show_zero_values` to 1.
Columns:
- `name` (String) — Name of the error (`errorCodeToName`).
- `code` (Int32) — Code number of the error.
- `value` (UInt64) — The number of times this error happened.
- `last_error_time` (DateTime) — The time when the last error happened.
- `last_error_message` (String) — Message for the last error.
- `last_error_format_string` (String) — Format string for the last error.
- `last_error_trace` (Array(UInt64)) — A stack trace that represents a list of physical addresses where the called methods are stored.
- `remote` (UInt8) — Remote exception (i.e. received during one of the distributed queries).
- `query_id` (String) — Id of the query that caused the error (if available).
:::note
Counters for some errors may increase during successful query execution. It's not recommended to use this table for server monitoring purposes unless you are sure that the corresponding error cannot be a false positive.
:::
Example
```sql
SELECT name, code, value
FROM system.errors
WHERE value > 0
ORDER BY code ASC
LIMIT 1
┌─name─────────────┬─code─┬─value─┐
│ CANNOT_OPEN_FILE │   76 │     1 │
└──────────────────┴──────┴───────┘
```
```sql
WITH arrayMap(x -> demangle(addressToSymbol(x)), last_error_trace) AS all
SELECT name, arrayStringConcat(all, '\n') AS res
FROM system.errors
LIMIT 1
SETTINGS allow_introspection_functions=1\G
``` | {"source_file": "errors.md"} | [
0.029966654255986214,
-0.038674864917993546,
-0.028496533632278442,
-0.03361174836754799,
0.04061632603406906,
-0.05569317191839218,
0.000022430620447266847,
0.043887101113796234,
0.019950751215219498,
0.09239855408668518,
0.009074261412024498,
-0.04074783995747566,
0.14857006072998047,
-0... |
6f683883-ef89-4cad-802f-308373417577 | description: 'System table containing licenses of third-party libraries that are located in the contrib directory of ClickHouse sources.'
keywords: ['system table', 'licenses']
slug: /operations/system-tables/licenses
title: 'system.licenses'
doc_type: 'reference'
system.licenses
Contains licenses of third-party libraries that are located in the `contrib` directory of ClickHouse sources.
Columns:
- `library_name` (String) — Name of the library.
- `license_type` (String) — License type — e.g. Apache, MIT.
- `license_path` (String) — Path to the file with the license text.
- `license_text` (String) — License text.
Example

```sql
SELECT library_name, license_type, license_path FROM system.licenses LIMIT 15
```
```text
┌─library_name────────┬─license_type─┬─license_path────────────────────────┐
│ aws-c-common        │ Apache       │ /contrib/aws-c-common/LICENSE       │
│ base64              │ BSD 2-clause │ /contrib/aklomp-base64/LICENSE      │
│ brotli              │ MIT          │ /contrib/brotli/LICENSE             │
│ [...]               │ [...]        │ [...]                               │
└─────────────────────┴──────────────┴─────────────────────────────────────┘
``` | {"source_file": "licenses.md"} | [
0.005375465843826532,
-0.021433642134070396,
-0.113616444170475,
-0.07814858853816986,
0.061317212879657745,
-0.032988373190164566,
0.09249652177095413,
-0.04071633145213127,
0.0028272673953324556,
0.03484570235013962,
0.08819454163312912,
-0.02243766561150551,
0.07682120054960251,
-0.1274... |
4d93083a-c0ea-471b-b5bc-3c6df1bb6ec1 | description: 'System table containing information useful for an overview of memory usage and ProfileEvents of users.'
keywords: ['system table', 'user_processes']
slug: /operations/system-tables/user_processes
title: 'system.user_processes'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.user_processes
This system table can be used to get an overview of memory usage and ProfileEvents of users.
Columns:
- `user` (String) — User name.
- `memory_usage` (Int64) — Sum of RAM used by all processes of the user. It might not include some types of dedicated memory. See the max_memory_usage setting.
- `peak_memory_usage` (Int64) — The peak of memory usage of the user. It can be reset when no queries are run for the user.
- `ProfileEvents` (Map(String, UInt64)) — Summary of ProfileEvents that measure different metrics for the user. Their descriptions can be found in the table system.events.
```sql
SELECT * FROM system.user_processes LIMIT 10 FORMAT Vertical;
```
```response
Row 1:
──────
user: default
memory_usage: 9832
peak_memory_usage: 9832
ProfileEvents: {'Query':5,'SelectQuery':5,'QueriesWithSubqueries':38,'SelectQueriesWithSubqueries':38,'QueryTimeMicroseconds':842048,'SelectQueryTimeMicroseconds':842048,'ReadBufferFromFileDescriptorRead':6,'ReadBufferFromFileDescriptorReadBytes':234,'IOBufferAllocs':3,'IOBufferAllocBytes':98493,'ArenaAllocChunks':283,'ArenaAllocBytes':1482752,'FunctionExecute':670,'TableFunctionExecute':16,'DiskReadElapsedMicroseconds':19,'NetworkSendElapsedMicroseconds':684,'NetworkSendBytes':139498,'SelectedRows':6076,'SelectedBytes':685802,'ContextLock':1140,'RWLockAcquiredReadLocks':193,'RWLockReadersWaitMilliseconds':4,'RealTimeMicroseconds':1585163,'UserTimeMicroseconds':889767,'SystemTimeMicroseconds':13630,'SoftPageFaults':1947,'OSCPUWaitMicroseconds':6,'OSCPUVirtualTimeMicroseconds':903251,'OSReadChars':28631,'OSWriteChars':28888,'QueryProfilerRuns':3,'LogTrace':79,'LogDebug':24}
1 row in set. Elapsed: 0.010 sec.
``` | {"source_file": "user_processes.md"} | [
0.10002268850803375,
-0.02056335099041462,
-0.12968957424163818,
-0.0038652417715638876,
-0.008150359615683556,
-0.018805228173732758,
0.12452729791402817,
0.11108049750328064,
-0.02968798205256462,
0.06380810588598251,
-0.007947755977511406,
0.005373690742999315,
0.061588045209646225,
-0.... |
91d1d3e6-a281-4460-b542-40a3a5f648ec | description: 'System table containing information about currently running background fetches.'
keywords: ['system table', 'replicated_fetches']
slug: /operations/system-tables/replicated_fetches
title: 'system.replicated_fetches'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.replicated_fetches
Contains information about currently running background fetches.
Columns:
- `database` (String) — Name of the database.
- `table` (String) — Name of the table.
- `elapsed` (Float64) — The time elapsed (in seconds) since the currently running background fetch started.
- `progress` (Float64) — The percentage of completed work from 0 to 1.
- `result_part_name` (String) — The name of the part that will be formed as the result of the currently running background fetch.
- `result_part_path` (String) — Absolute path to the part that will be formed as the result of the currently running background fetch.
- `partition_id` (String) — ID of the partition.
- `total_size_bytes_compressed` (UInt64) — The total size (in bytes) of the compressed data in the result part.
- `bytes_read_compressed` (UInt64) — The number of compressed bytes read from the result part.
- `source_replica_path` (String) — Absolute path to the source replica.
- `source_replica_hostname` (String) — Hostname of the source replica.
- `source_replica_port` (UInt16) — Port number of the source replica.
- `interserver_scheme` (String) — Name of the interserver scheme.
- `URI` (String) — Uniform resource identifier.
- `to_detached` (UInt8) — The flag indicates whether the currently running background fetch is being performed using the TO DETACHED expression.
- `thread_id` (UInt64) — Thread identifier.
Example

```sql
SELECT * FROM system.replicated_fetches LIMIT 1 FORMAT Vertical;
```

```text
Row 1:
──────
database: default
table: t
elapsed: 7.243039876
progress: 0.41832135995612835
result_part_name: all_0_0_0
result_part_path: /var/lib/clickhouse/store/700/70080a04-b2de-4adf-9fa5-9ea210e81766/all_0_0_0/
partition_id: all
total_size_bytes_compressed: 1052783726
bytes_read_compressed: 440401920
source_replica_path: /clickhouse/test/t/replicas/1
source_replica_hostname: node1
source_replica_port: 9009
interserver_scheme: http
URI: http://node1:9009/?endpoint=DataPartsExchange%3A%2Fclickhouse%2Ftest%2Ft%2Freplicas%2F1&part=all_0_0_0&client_protocol_version=4&compress=false
to_detached: 0
thread_id: 54
```
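To watch overall fetch progress across tables, the byte counters can be summarized; a minimal sketch:

```sql
SELECT
    database,
    table,
    result_part_name,
    round(progress * 100, 1) AS pct,
    formatReadableSize(bytes_read_compressed) AS read,
    formatReadableSize(total_size_bytes_compressed) AS total
FROM system.replicated_fetches
ORDER BY elapsed DESC
```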
See Also
- Managing ReplicatedMergeTree Tables | {"source_file": "replicated_fetches.md"} | [
0.008051182143390179,
-0.004060210660099983,
-0.0642898753285408,
-0.018727881833910942,
0.027016734704375267,
-0.12828803062438965,
-0.01066086906939745,
0.027959054335951805,
0.01883663423359394,
0.0913907065987587,
0.00020819815108552575,
-0.039085667580366135,
0.11171014606952667,
-0.1... |
4617cf6c-bf71-4040-b9c9-0a185eec63ba | description: 'System table containing information about supported data types'
keywords: ['system table', 'data_type_families']
slug: /operations/system-tables/data_type_families
title: 'system.data_type_families'
doc_type: 'reference'
Contains information about supported data types.
Columns:
- `name` (String) — Data type name.
- `case_insensitive` (UInt8) — Property that shows whether you can use a data type name in a query in a case-insensitive manner or not. For example, `Date` and `date` are both valid.
- `alias_to` (String) — Data type name for which `name` is an alias.
Example

```sql
SELECT * FROM system.data_type_families WHERE alias_to = 'String'
```
```text
┌─name───────┬─case_insensitive─┬─alias_to─┐
│ LONGBLOB   │                1 │ String   │
│ LONGTEXT   │                1 │ String   │
│ TINYTEXT   │                1 │ String   │
│ TEXT       │                1 │ String   │
│ VARCHAR    │                1 │ String   │
│ MEDIUMBLOB │                1 │ String   │
│ BLOB       │                1 │ String   │
│ TINYBLOB   │                1 │ String   │
│ CHAR       │                1 │ String   │
│ MEDIUMTEXT │                1 │ String   │
└────────────┴──────────────────┴──────────┘
```
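A quick way to see which base types carry the most aliases is to group by `alias_to`; a minimal sketch:

```sql
SELECT alias_to, count() AS aliases
FROM system.data_type_families
WHERE alias_to != ''
GROUP BY alias_to
ORDER BY aliases DESC
```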
See Also
- Syntax — Information about supported syntax. | {"source_file": "data_type_families.md"} | [
0.005594591144472361,
-0.05536048114299774,
0.020314635708928108,
0.03899798169732094,
-0.020070746541023254,
-0.05569719150662422,
0.07150828093290329,
0.080271877348423,
-0.100405752658844,
-0.008385472930967808,
0.02996794879436493,
-0.033948060125112534,
0.038417767733335495,
-0.008538... |
59c58c78-bfd4-4910-9f81-576102aef8d6 | description: 'System table containing information about projection parts for tables of the MergeTree family.'
keywords: ['system table', 'projection_parts']
slug: /operations/system-tables/projection_parts
title: 'system.projection_parts'
doc_type: 'reference'
system.projection_parts
This table contains information about projection parts for tables of the MergeTree family.
Columns {#columns}
- `partition` (String) — The partition name.
- `name` (String) — Name of the data part.
- `part_type` (String) — The data part storing format. Possible values: Wide (a file per column) and Compact (a single file for all columns).
- `parent_name` (String) — The name of the source (parent) data part.
- `parent_uuid` (UUID) — The UUID of the source (parent) data part.
- `parent_part_type` (String) — The source (parent) data part storing format.
- `active` (UInt8) — Flag that indicates whether the data part is active. If a data part is active, it's used in a table. Otherwise, it's about to be deleted. Inactive data parts appear after merging and mutating operations.
- `marks` (UInt64) — The number of marks. To get the approximate number of rows in a data part, multiply marks by the index granularity (usually 8192) (this hint does not work for adaptive granularity).
- `rows` (UInt64) — The number of rows.
- `bytes_on_disk` (UInt64) — Total size of all the data part files in bytes.
- `data_compressed_bytes` (UInt64) — Total size of compressed data in the data part. All the auxiliary files (for example, files with marks) are not included.
- `data_uncompressed_bytes` (UInt64) — Total size of uncompressed data in the data part. All the auxiliary files (for example, files with marks) are not included.
- `marks_bytes` (UInt64) — The size of the file with marks.
- `parent_marks` (UInt64) — The number of marks in the source (parent) part.
- `parent_rows` (UInt64) — The number of rows in the source (parent) part.
- `parent_bytes_on_disk` (UInt64) — Total size of all the source (parent) data part files in bytes.
- `parent_data_compressed_bytes` (UInt64) — Total size of compressed data in the source (parent) data part.
- `parent_data_uncompressed_bytes` (UInt64) — Total size of uncompressed data in the source (parent) data part.
- `parent_marks_bytes` (UInt64) — The size of the file with marks in the source (parent) data part.
- `modification_time` (DateTime) — The time the directory with the data part was modified. This usually corresponds to the time of data part creation.
- `remove_time` (DateTime) — The time when the data part became inactive.
- `refcount` (UInt32) — The number of places where the data part is used. A value greater than 2 indicates that the data part is used in queries or merges.
- `min_date` (Date) — The minimum value of the date key in the data part.
- `max_date` (Date) — The maximum value of the date key in the data part.
- `min_time` (DateTime) — The minimum value of the date and time key in the data part. | {"source_file": "projection_parts.md"} | [
-0.015831122174859047,
-0.0155255775898695,
-0.06351086497306824,
0.011760089546442032,
0.06619132310152054,
-0.11602037400007248,
0.01734618842601776,
0.06312791258096695,
-0.01860138401389122,
0.018749691545963287,
0.06407743692398071,
0.039897698909044266,
0.05205122381448746,
-0.076590... |
db39d252-0b94-4556-8ec6-c58da2aeb19c | - `max_date` (Date) — The maximum value of the date key in the data part.
- `min_time` (DateTime) — The minimum value of the date and time key in the data part.
- `max_time` (DateTime) — The maximum value of the date and time key in the data part.
- `partition_id` (String) — ID of the partition.
- `min_block_number` (Int64) — The minimum number of data parts that make up the current part after merging.
- `max_block_number` (Int64) — The maximum number of data parts that make up the current part after merging.
- `level` (UInt32) — Depth of the merge tree. Zero means that the current part was created by insert rather than by merging other parts.
- `data_version` (UInt64) — Number that is used to determine which mutations should be applied to the data part (mutations with a version higher than `data_version`).
- `primary_key_bytes_in_memory` (UInt64) — The amount of memory (in bytes) used by primary key values.
- `primary_key_bytes_in_memory_allocated` (UInt64) — The amount of memory (in bytes) reserved for primary key values.
- `is_frozen` (UInt8) — Flag that shows that a partition data backup exists. 1, the backup exists. 0, the backup does not exist.
- `database` (String) — Name of the database.
- `table` (String) — Name of the table.
- `engine` (String) — Name of the table engine without parameters.
- `disk_name` (String) — Name of a disk that stores the data part.
- `path` (String) — Absolute path to the folder with data part files.
- `hash_of_all_files` (String) — sipHash128 of compressed files.
- `hash_of_uncompressed_files` (String) — sipHash128 of uncompressed files (files with marks, index file etc.).
- `uncompressed_hash_of_compressed_files` (String) — sipHash128 of data in the compressed files as if they were uncompressed.
- `delete_ttl_info_min` (DateTime) — The minimum value of the date and time key for TTL DELETE rule.
- `delete_ttl_info_max` (DateTime) — The maximum value of the date and time key for TTL DELETE rule.
- `move_ttl_info.expression` (Array(String)) — Array of expressions. Each expression defines a TTL MOVE rule.
- `move_ttl_info.min` (Array(DateTime)) — Array of date and time values. Each element describes the minimum key value for a TTL MOVE rule.
- `move_ttl_info.max` (Array(DateTime)) — Array of date and time values. Each element describes the maximum key value for a TTL MOVE rule.
- `default_compression_codec` (String) — The name of the codec used to compress this data part (in case when there is no explicit codec for columns).
- `recompression_ttl_info.expression` (Array(String)) — The TTL expression.
- `recompression_ttl_info.min` (Array(DateTime)) — The minimum value of the calculated TTL expression within this part. Used to understand whether we have at least one row with expired TTL.
- `recompression_ttl_info.max` (Array(DateTime)) — The maximum value of the calculated TTL expression within this part. Used to understand whether we have all rows with expired TTL. | {"source_file": "projection_parts.md"} | [
-0.0303315632045269,
-0.009817867539823055,
-0.055706266313791275,
-0.037432774901390076,
-0.02228544093668461,
-0.09654460102319717,
-0.0594082735478878,
0.13114218413829803,
0.008390357717871666,
-0.042991191148757935,
0.061790041625499725,
0.040755946189165115,
0.02019110880792141,
-0.0... |
b7d7b0ec-fd91-447d-b96c-b38938397aa6 | - `recompression_ttl_info.max` (Array(DateTime)) — The maximum value of the calculated TTL expression within this part. Used to understand whether we have all rows with expired TTL.
- `group_by_ttl_info.expression` (Array(String)) — The TTL expression.
- `group_by_ttl_info.min` (Array(DateTime)) — The minimum value of the calculated TTL expression within this part. Used to understand whether we have at least one row with expired TTL.
- `group_by_ttl_info.max` (Array(DateTime)) — The maximum value of the calculated TTL expression within this part. Used to understand whether we have all rows with expired TTL.
- `rows_where_ttl_info.expression` (Array(String)) — The TTL expression.
- `rows_where_ttl_info.min` (Array(DateTime)) — The minimum value of the calculated TTL expression within this part. Used to understand whether we have at least one row with expired TTL.
- `rows_where_ttl_info.max` (Array(DateTime)) — The maximum value of the calculated TTL expression within this part. Used to understand whether we have all rows with expired TTL.
- `is_broken` (UInt8) — Whether the projection part is broken.
- `exception_code` (Int32) — Exception code explaining the broken state of the projection part.
- `exception` (String) — Exception message explaining the broken state of the projection part. | {"source_file": "projection_parts.md"} | [
-0.046565502882003784,
0.0254508089274168,
-0.026530204340815544,
0.002975030802190304,
-0.026217862963676453,
-0.038781847804784775,
-0.03587658330798149,
0.06538458168506622,
0.02104666270315647,
-0.031732331961393356,
-0.013391016982495785,
0.039328787475824356,
0.027038641273975372,
-0... |
fcc22400-eb4f-43a9-9d13-2f259ccae146 | description: 'System table containing information about existing projections in all
tables.'
keywords: ['system table', 'projections']
slug: /operations/system-tables/projections
title: 'system.projections'
doc_type: 'reference'
system.projections
Contains information about existing projections in all tables.
Columns:
database
(
String
) β Database name.
table
(
String
) β Table name.
name
(
String
) β Projection name.
type
(
Enum8('Normal' = 0, 'Aggregate' = 1)
) β Projection type.
sorting_key
(
Array(String)
) β Projection sorting key.
query
(
String
) β Projection query.
Example
sql
SELECT * FROM system.projections LIMIT 2 FORMAT Vertical;
```text
Row 1:
ββββββ
database: default
table: landing
name: improved_sorting_key
type: Normal
sorting_key: ['user_id','date']
query: SELECT * ORDER BY user_id, date
Row 2:
ββββββ
database: default
table: landing
name: agg_no_key
type: Aggregate
sorting_key: []
query: SELECT count()
``` | {"source_file": "projections.md"} | [
0.04719656705856323,
-0.03260016813874245,
-0.07509057968854904,
0.02729833498597145,
-0.039699845016002655,
-0.06349559128284454,
0.03620326891541481,
0.043994225561618805,
-0.013454221189022064,
0.08839373290538788,
0.01830592192709446,
0.006349849049001932,
0.08675649762153625,
-0.05580... |
43ab5a6f-1016-403e-94d2-eb00e3e3854f | description: 'This table contains histogram metrics that can be calculated instantly
and exported in the Prometheus format. It is always up to date.'
keywords: ['system table', 'histogram_metrics']
slug: /operations/system-tables/histogram_metrics
title: 'system.histogram_metrics'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
histogram_metrics {#histogram_metrics}
This table contains histogram metrics that can be calculated instantly and exported in the Prometheus format. It is always up to date. Replaces the deprecated
system.latency_log
.
Columns:
metric
(
String
) β Metric name.
value
(
Int64
) β Metric value.
description
(
String
) β Metric description.
labels
(
Map(String, String)
) β Metric labels.
Example
You can use a query like this to export all the histogram metrics in the Prometheus format.
sql
SELECT
metric AS name,
toFloat64(value) AS value,
description AS help,
labels,
'histogram' AS type
FROM system.histogram_metrics
FORMAT Prometheus
Metric descriptions {#metric_descriptions}
keeper_response_time_ms_bucket {#keeper_response_time_ms_bucket}
The response time of Keeper, in milliseconds.
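As a sketch, the bucket metric above can be inspected directly; this assumes the usual Prometheus convention of an le (less-or-equal) label on histogram buckets:

```sql
SELECT labels['le'] AS le, value
FROM system.histogram_metrics
WHERE metric = 'keeper_response_time_ms_bucket'
ORDER BY value ASC
```

Since bucket counts are cumulative, sorting by value follows the bucket upper bounds.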
See Also
-
system.asynchronous_metrics
β Contains periodically calculated metrics.
-
system.events
β Contains a number of events that occurred.
-
system.metric_log
β Contains a history of metrics values from tables
system.metrics
and
system.events
.
-
Monitoring
β Base concepts of ClickHouse monitoring. | {"source_file": "histogram_metrics.md"} | [
0.06325060874223709,
-0.025587690994143486,
-0.08829192817211151,
-0.00440107099711895,
-0.017145449295639992,
-0.048536885529756546,
0.034973323345184326,
0.03703557699918747,
-0.017748259007930756,
0.06984832882881165,
-0.001614163862541318,
-0.08483512699604034,
0.05771814286708832,
-0.... |
8c2f0dd1-9997-403a-b121-6a391dfd53da | description: 'System table containing stack traces collected by the sampling query
profiler.'
keywords: ['system table', 'trace_log']
slug: /operations/system-tables/trace_log
title: 'system.trace_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.trace_log
Contains stack traces collected by the
sampling query profiler
.
ClickHouse creates this table when the
trace_log
server configuration section is set. Also see settings:
query_profiler_real_time_period_ns
,
query_profiler_cpu_time_period_ns
,
memory_profiler_step
,
memory_profiler_sample_probability
,
trace_profile_events
.
To analyze logs, use the
addressToLine
,
addressToLineWithInlines
,
addressToSymbol
and
demangle
introspection functions.
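As a hedged sketch, a typical aggregation over this table symbolizes each frame and groups identical stacks; the introspection functions require allow_introspection_functions to be enabled for the session:

```sql
SET allow_introspection_functions = 1;

SELECT
    count() AS samples,
    arrayStringConcat(arrayMap(x -> demangle(addressToSymbol(x)), trace), '\n') AS stack
FROM system.trace_log
WHERE trace_type = 'CPU'
GROUP BY trace
ORDER BY samples DESC
LIMIT 5
```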
Columns:
hostname
(
LowCardinality(String)
) β Hostname of the server executing the query.
event_date
(
Date
) β Date of sampling moment.
event_time
(
DateTime
) β Timestamp of the sampling moment.
event_time_microseconds
(
DateTime64
) β Timestamp of the sampling moment with microseconds precision.
timestamp_ns
(
UInt64
) β Timestamp of the sampling moment in nanoseconds.
revision
(
UInt32
) β ClickHouse server build revision.
When connecting to the server by
clickhouse-client
, you see the string similar to
Connected to ClickHouse server version 19.18.1.
. This field contains the
revision
, but not the
version
of a server.
trace_type
(
Enum8
) β Trace type:
Real
represents collecting stack traces by wall-clock time.
CPU
represents collecting stack traces by CPU time.
Memory
represents collecting allocations and deallocations when memory allocation exceeds the subsequent watermark.
MemorySample
represents collecting random allocations and deallocations.
MemoryPeak
represents collecting updates of peak memory usage.
ProfileEvent
represents collecting of increments of profile events.
JemallocSample
represents collecting of jemalloc samples.
MemoryAllocatedWithoutCheck
represents collection of significant allocations (>16MiB) performed while ignoring any memory limits (for ClickHouse developers only).
thread_id
(
UInt64
) β Thread identifier.
query_id
(
String
) β Query identifier that can be used to get details about a query that was running from the
query_log
system table.
trace
(
Array(UInt64)
) β Stack trace at the moment of sampling. Each element is a virtual memory address inside ClickHouse server process.
size
(
Int64
) - For trace types
Memory
,
MemorySample
or
MemoryPeak
is the amount of memory allocated, for other trace types is 0.
event
(
LowCardinality(String)
) - For trace type
ProfileEvent
is the name of updated profile event, for other trace types is an empty string.
increment
(
UInt64
) - For trace type
ProfileEvent
is the amount of increment of profile event, for other trace types is 0. | {"source_file": "trace_log.md"} | [
0.05420379340648651,
-0.07832018285989761,
-0.08779290318489075,
0.007103527896106243,
0.05003652349114418,
-0.13986463844776154,
0.057242389768362045,
0.03543728217482567,
0.0013351843226701021,
0.0654798150062561,
0.012311289086937904,
-0.024532297626137733,
0.0773041620850563,
-0.092207... |
9cf93dee-d944-441f-8a7e-a661a09432ca | increment
(
UInt64
) - For trace type
ProfileEvent
is the amount of increment of profile event, for other trace types is 0.
symbols
(
Array(LowCardinality(String))
) β If symbolization is enabled, contains demangled symbol names corresponding to the
trace
.
lines
(
Array(LowCardinality(String))
) β If symbolization is enabled, contains file names with line numbers corresponding to the
trace
.
Symbolization can be enabled or disabled with the
symbolize
setting under
trace_log
in the server configuration file.
Example
sql
SELECT * FROM system.trace_log LIMIT 1 \G
text
Row 1:
ββββββ
hostname: clickhouse.eu-central1.internal
event_date: 2020-09-10
event_time: 2020-09-10 11:23:09
event_time_microseconds: 2020-09-10 11:23:09.872924
timestamp_ns: 1599762189872924510
revision: 54440
trace_type: Memory
thread_id: 564963
query_id:
trace: [371912858,371912789,371798468,371799717,371801313,371790250,624462773,566365041,566440261,566445834,566460071,566459914,566459842,566459580,566459469,566459389,566459341,566455774,371993941,371988245,372158848,372187428,372187309,372187093,372185478,140222123165193,140222122205443]
size: 5244400 | {"source_file": "trace_log.md"} | [
0.0720023512840271,
-0.017640413716435432,
-0.06955336779356003,
0.025174878537654877,
-0.027054475620388985,
-0.020168041810393333,
0.10914064943790436,
0.014484301209449768,
0.022084034979343414,
-0.03298638015985489,
0.04720962792634964,
-0.009544780477881432,
0.0009694457985460758,
-0.... |
d2cdc312-ef4d-40a7-b194-fa7f786558e5 | description: 'System table containing all active roles at the moment, including the
current role of the current user and the granted roles for the current role'
keywords: ['system table', 'enabled_roles']
slug: /operations/system-tables/enabled_roles
title: 'system.enabled_roles'
doc_type: 'reference'
Contains all active roles at the moment, including the current role of the current user and granted roles for the current role.
Columns:
role_name
(
String
) β Role name.
with_admin_option
(
UInt8
) β Flag that shows whether
enabled_role
is a role with
ADMIN OPTION
privilege.
is_current
(
UInt8
) β Flag that shows whether
enabled_role
is the current role of the current user.
is_default
(
UInt8
) β Flag that shows whether
enabled_role
is a default role. | {"source_file": "enabled_roles.md"} | [
0.035222120583057404,
-0.03417126089334488,
-0.026170819997787476,
0.03488839790225029,
0.025923749431967735,
0.003779471619054675,
0.08384039998054504,
0.03201741352677345,
-0.16071867942810059,
-0.02249380201101303,
0.05014028027653694,
0.011754961684346199,
0.023951798677444458,
0.03516... |
1da994bb-735f-4b34-b2f3-6efe18c77740 | description: 'This table contains warning messages about clickhouse server.'
keywords: [ 'system table', 'warnings' ]
slug: /operations/system-tables/system_warnings
title: 'system.warnings'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.warnings
This table shows warnings about the ClickHouse server.
Warnings of the same type are combined into a single warning.
For example, if the number N of attached databases exceeds a configurable threshold T, a single entry containing the current value N is shown instead of N separate entries.
If the current value drops below the threshold, the entry is removed from the table.
The table can be configured with these settings:
max_table_num_to_warn
max_database_num_to_warn
max_dictionary_num_to_warn
max_view_num_to_warn
max_part_num_to_warn
max_pending_mutations_to_warn
max_pending_mutations_execution_time_to_warn
max_named_collection_num_to_warn
resource_overload_warnings
Columns:
message
(
String
) β Warning message.
message_format_string
(
LowCardinality(String)
) β The format string used to format the message.
Example
Query:
sql
SELECT * FROM system.warnings LIMIT 2 \G;
Result:
```text
Row 1:
ββββββ
message: The number of active parts is more than 10.
message_format_string: The number of active parts is more than {}.
Row 2:
ββββββ
message: The number of attached databases is more than 2.
message_format_string: The number of attached databases is more than {}.
``` | {"source_file": "system_warnings.md"} | [
0.04707257077097893,
-0.05916237086057663,
-0.07802391797304153,
0.02964741550385952,
0.0424458272755146,
-0.05109260976314545,
0.11430148780345917,
-0.00167759764008224,
-0.0017332349671050906,
0.038182228803634644,
0.014867868274450302,
-0.05180243402719498,
0.19088125228881836,
-0.06764... |
2658f5b7-10b2-4e89-ac4a-67d9876409b1 | description: 'System table containing logging entries with information about
BACKUP
and
RESTORE
operations.'
keywords: ['system table', 'backups']
slug: /operations/system-tables/backups
title: 'system.backups'
doc_type: 'reference'
system.backups
Contains a list of all
BACKUP
or
RESTORE
operations with their current states and other properties. Note that the table is not persistent; it shows only operations executed after the last server restart.
Columns: | {"source_file": "backups.md"} | [
-0.029416099190711975,
-0.044003620743751526,
-0.05154069513082504,
0.021885661408305168,
0.013227195478975773,
-0.030953820794820786,
0.04938647523522377,
0.013976102694869041,
-0.052317168563604355,
0.03586335480213165,
0.0008541453862562776,
0.03641821816563606,
0.0742180198431015,
-0.0... |
9bf3e238-255a-4dbd-9db6-51f7a4e97ae0 | Columns:
| Column | Description |
|---------------------|----------------------------------------------------------------------------------------------------------------------|
| id | Operation ID; can either be passed via SETTINGS id=... or be a randomly generated UUID. |
| name | Operation name, a string like Disk('backups', 'my_backup') |
| base_backup_name | Base backup operation name, a string like Disk('backups', 'my_base_backup') |
| query_id | Query ID of the query that started the backup. |
| status | Status of the backup or restore operation. |
| error | The error message, if any. |
| start_time | The time when the operation started. |
| end_time | The time when the operation finished. |
| num_files | The number of files stored in the backup. |
| total_size | The total size of files stored in the backup. |
| num_entries | The number of entries in the backup, i.e. the number of files inside the folder if the backup is stored as a folder. |
| uncompressed_size | The uncompressed size of the backup. |
| compressed_size | The compressed size of the backup. |
| files_read | The number of files read during RESTORE from this backup. |
| bytes_read | The total size of files read during RESTORE from this backup. |
| ProfileEvents | All the profile events captured during this operation. | | {"source_file": "backups.md"} | [
-0.058968063443899155,
0.03293747827410698,
-0.03959232568740845,
0.07056556642055511,
-0.006374670658260584,
-0.041307996958494186,
0.029189106076955795,
0.016059471294283867,
-0.025771012529730797,
-0.003245303640142083,
0.02613288350403309,
-0.044407982379198074,
0.12917965650558472,
-0... |
e01215e6-57e6-498d-a803-9ed74454b1bc | description: 'System table containing information about configured roles.'
keywords: ['system table', 'roles']
slug: /operations/system-tables/roles
title: 'system.roles'
doc_type: 'reference'
system.roles
Contains information about configured
roles
.
Columns:
name
(
String
) β Role name.
id
(
UUID
) β Role ID.
storage
(
String
) β Path to the storage of roles. Configured in the
access_control_path
parameter.
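For example, a minimal query listing the configured roles together with the storage they come from:

```sql
SELECT name, id, storage
FROM system.roles
ORDER BY name
```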
See Also {#see-also}
SHOW ROLES | {"source_file": "roles.md"} | [
0.049525193870067596,
-0.006662989035248756,
-0.11468435078859329,
0.05851642042398453,
0.009016837924718857,
-0.0351518839597702,
0.06014841049909592,
0.04617815092206001,
-0.12678030133247375,
-0.021955640986561775,
0.006512906402349472,
0.02349265106022358,
0.07199075073003769,
-0.03473... |
a1527c11-02a0-431a-a9dd-37925408616d | description: 'System table containing active roles for the current user.'
keywords: ['system table', 'current_roles']
slug: /operations/system-tables/current_roles
title: 'system.current_roles'
doc_type: 'reference'
Contains active roles of the current user.
SET ROLE
changes the contents of this table.
Columns:
role_name
(
String
) β Role name.
with_admin_option
(
UInt8
) β Flag that shows whether
current_role
is a role with
ADMIN OPTION
privilege.
is_default
(
UInt8
) β Flag that shows whether
current_role
is a default role. | {"source_file": "current_roles.md"} | [
0.012809502892196178,
-0.04115179553627968,
-0.04198308289051056,
0.01940031722187996,
-0.02095954492688179,
-0.03134043514728546,
0.10488935559988022,
0.0772276297211647,
-0.14336659014225006,
0.024554923176765442,
0.011120007373392582,
-0.010321994312107563,
0.03249931335449219,
0.032224... |
6c196a8e-03bf-4aaf-b4ae-b4e61c98e7cf | description: 'System table containing a list of user accounts configured on the server.'
keywords: ['system table', 'users']
slug: /operations/system-tables/users
title: 'system.users'
doc_type: 'reference'
system.users
Contains a list of
user accounts
configured on the server.
Columns:
name
(
String
) β User name.
id
(
UUID
) β User ID.
storage
(
String
) β Path to the storage of users. Configured in the access_control_path parameter.
auth_type
(
Array(Enum8('no_password' = 0, 'plaintext_password' = 1, 'sha256_password' = 2, 'double_sha1_password' = 3, 'ldap' = 4, 'kerberos' = 5, 'ssl_certificate' = 6, 'bcrypt_password' = 7, 'ssh_key' = 8, 'http' = 9, 'jwt' = 10, 'scram_sha256_password' = 11, 'no_authentication' = 12))
) β Shows the authentication types. There are multiple ways of user identification: with no password, with plain text password, with SHA256-encoded password, with double SHA-1-encoded password or with bcrypt-encoded password.
auth_params
(
Array(String)
) β Authentication parameters in the JSON format depending on the auth_type.
host_ip
(
Array(String)
) β IP addresses of hosts that are allowed to connect to the ClickHouse server.
host_names
(
Array(String)
) β Names of hosts that are allowed to connect to the ClickHouse server.
host_names_regexp
(
Array(String)
) β Regular expression for host names that are allowed to connect to the ClickHouse server.
host_names_like
(
Array(String)
) β Names of hosts that are allowed to connect to the ClickHouse server, set using the LIKE predicate.
default_roles_all
(
UInt8
) β Shows whether all granted roles are set for the user by default.
default_roles_list
(
Array(String)
) β List of granted roles provided by default.
default_roles_except
(
Array(String)
) β All granted roles are set as default except for the listed ones.
grantees_any
(
UInt8
) β The flag that indicates whether a user with any grant option can grant it to anyone.
grantees_list
(
Array(String)
) β The list of users or roles to which this user is allowed to grant privileges.
grantees_except
(
Array(String)
) β The list of users or roles to which this user is forbidden to grant privileges.
default_database
(
String
) β The name of the default database for this user.
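For example, a minimal sketch of a query summarizing the configured accounts, using the columns described above:

```sql
SELECT
    name,
    auth_type,
    default_roles_all,
    default_database
FROM system.users
FORMAT Vertical
```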
See Also {#see-also}
SHOW USERS | {"source_file": "users.md"} | [
0.009131780825555325,
0.00308447377756238,
-0.14317762851715088,
0.017882993444800377,
-0.02297464944422245,
-0.01250837929546833,
0.0915057584643364,
0.017726724967360497,
-0.021637042984366417,
-0.017676889896392822,
0.019011737778782845,
-0.023313969373703003,
0.05295058339834213,
-0.03... |
3390d489-8e17-4eb5-9b44-461c9e645ddf | description: 'System table containing information about events that occurred with
data parts in the MergeTree family tables, such as adding or merging of data.'
keywords: ['system table', 'part_log']
slug: /operations/system-tables/part_log
title: 'system.part_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.part_log
The
system.part_log
table is created only if the
part_log
server setting is specified.
This table contains information about events that occurred with
data parts
in the
MergeTree
family tables, such as adding or merging data.
The
system.part_log
table contains the following columns:
hostname
(
LowCardinality(String)
) β Hostname of the server executing the query.
query_id
(
String
) β Identifier of the
INSERT
query that created this data part.
event_type
(
Enum8
) β Type of the event that occurred with the data part. Can have one of the following values:
NewPart
β Inserting of a new data part.
MergePartsStart
β Merging of data parts has started.
MergeParts
β Merging of data parts has finished.
DownloadPart
β Downloading a data part.
RemovePart
β Removing or detaching a data part using
DETACH PARTITION
.
MutatePartStart
β Mutating of a data part has started.
MutatePart
β Mutating of a data part has finished.
MovePart
β Moving the data part from one disk to another.
merge_reason
(
Enum8
) β The reason for the event with type
MERGE_PARTS
. Can have one of the following values:
NotAMerge
β The current event has the type other than
MERGE_PARTS
.
RegularMerge
β Some regular merge.
TTLDeleteMerge
β Cleaning up expired data.
TTLRecompressMerge
β Recompressing the data part according to a recompression TTL rule.
merge_algorithm
(
Enum8
) β Merge algorithm for the event with type
MERGE_PARTS
. Can have one of the following values:
Undecided
Horizontal
Vertical
event_date
(
Date
) β Event date.
event_time
(
DateTime
) β Event time.
event_time_microseconds
(
DateTime64
) β Event time with microseconds precision.
duration_ms
(
UInt64
) β Duration.
database
(
String
) β Name of the database the data part is in.
table
(
String
) β Name of the table the data part is in.
table_uuid
(
UUID
) β UUID of the table the data part belongs to.
part_name
(
String
) β Name of the data part.
partition_id
(
String
) β ID of the partition that the data part was inserted to. The column takes the
all
value if the partitioning is by
tuple()
.
partition
(
String
) - The partition name.
part_type
(
String
) - The type of the part. Possible values: Wide and Compact.
disk_name
(
String
) - The name of the disk the data part lies on.
path_on_disk
(
String
) β Absolute path to the folder with data part files.
rows
(
UInt64
) β The number of rows in the data part.
size_in_bytes
(
UInt64
) β Size of the data part in bytes. | {"source_file": "part_log.md"} | [
0.04156169667840004,
-0.02629736065864563,
0.024978764355182648,
-0.007365113589912653,
0.05601079761981964,
-0.12315307557582855,
0.04218244180083275,
0.025322217494249344,
0.034976597875356674,
0.05708624795079231,
0.034853413701057434,
-0.030006660148501396,
0.08352424204349518,
-0.0837... |
28dab216-9113-4549-b25b-5650b92605b3 | path_on_disk
(
String
) β Absolute path to the folder with data part files.
rows
(
UInt64
) β The number of rows in the data part.
size_in_bytes
(
UInt64
) β Size of the data part in bytes.
merged_from
(
Array(String)
) β An array of names of the parts from which the current part was made up (after the merge).
bytes_uncompressed
(
UInt64
) β Size of uncompressed bytes.
read_rows
(
UInt64
) β The number of rows read during the merge.
read_bytes
(
UInt64
) β The number of bytes read during the merge.
peak_memory_usage
(
Int64
) β The maximum difference between the amount of allocated and freed memory in the context of this thread.
error
(
UInt16
) β The code number of the error that occurred.
exception
(
String
) β Text message of the error that occurred.
ProfileEvents
(
Map(String, UInt64)
) β ProfileEvents that measure different metrics. The description of them can be found in the table
system.events
.
The
system.part_log
table is created after the first inserting data to the
MergeTree
table.
Example
sql
SELECT * FROM system.part_log LIMIT 1 FORMAT Vertical; | {"source_file": "part_log.md"} | [
0.05962106212973595,
-0.04387491196393967,
-0.06272837519645691,
0.010537161491811275,
0.010035715065896511,
-0.09670240432024002,
0.014116930775344372,
0.12540876865386963,
-0.005592041183263063,
0.003912379499524832,
0.05336907505989075,
0.06100761517882347,
0.02977141924202442,
-0.07986... |
36b69d12-031d-414a-ab5c-bd374ef036b8 | Example
sql
SELECT * FROM system.part_log LIMIT 1 FORMAT Vertical;
text
Row 1:
ββββββ
hostname: clickhouse.eu-central1.internal
query_id:
event_type: MergeParts
merge_reason: RegularMerge
merge_algorithm: Vertical
event_date: 2025-07-19
event_time: 2025-07-19 23:54:19
event_time_microseconds: 2025-07-19 23:54:19.710761
duration_ms: 2158
database: default
table: github_events
table_uuid: 1ad33424-f5f5-402b-ac03-ec82282634ab
part_name: all_1_7_1
partition_id: all
partition: tuple()
part_type: Wide
disk_name: default
path_on_disk: ./data/store/1ad/1ad33424-f5f5-402b-ac03-ec82282634ab/all_1_7_1/
rows: 3285726 -- 3.29 million
size_in_bytes: 438968542 -- 438.97 million
merged_from: ['all_1_1_0','all_2_2_0','all_3_3_0','all_4_4_0','all_5_5_0','all_6_6_0','all_7_7_0']
bytes_uncompressed: 1373137767 -- 1.37 billion
read_rows: 3285726 -- 3.29 million
read_bytes: 1429206946 -- 1.43 billion
peak_memory_usage: 303611887 -- 303.61 million
error: 0
exception:
ProfileEvents: {'FileOpen':703,'ReadBufferFromFileDescriptorRead':3824,'ReadBufferFromFileDescriptorReadBytes':439601681,'WriteBufferFromFileDescriptorWrite':592,'WriteBufferFromFileDescriptorWriteBytes':438988500,'ReadCompressedBytes':439601681,'CompressedReadBufferBlocks':6314,'CompressedReadBufferBytes':1539835748,'OpenedFileCacheHits':50,'OpenedFileCacheMisses':484,'OpenedFileCacheMicroseconds':222,'IOBufferAllocs':1914,'IOBufferAllocBytes':319810140,'ArenaAllocChunks':8,'ArenaAllocBytes':131072,'MarkCacheMisses':7,'CreatedReadBufferOrdinary':534,'DiskReadElapsedMicroseconds':139058,'DiskWriteElapsedMicroseconds':51639,'AnalyzePatchRangesMicroseconds':28,'ExternalProcessingFilesTotal':1,'RowsReadByMainReader':170857759,'WaitMarksLoadMicroseconds':988,'LoadedMarksFiles':7,'LoadedMarksCount':14,'LoadedMarksMemoryBytes':728,'Merge':2,'MergeSourceParts':14,'MergedRows':3285733,'MergedColumns':4,'GatheredColumns':51,'MergedUncompressedBytes':1429207058,'MergeTotalMilliseconds':2158,'MergeExecuteMilliseconds':2155,'MergeHorizontalStageTotalMilliseconds':145,'MergeHorizontalStageExecuteMilliseconds':145,'MergeVerticalStageTotalMilliseconds':2008,'MergeVerticalStageExecuteMilliseconds':2006,'MergeProjectionStageTotalMilliseconds':5,'MergeProjectionStageExecuteMilliseconds':4,'MergingSortedMilliseconds':7,'GatheringColumnMilliseconds':56,'ContextLock':2091,'PartsLockHoldMicroseconds':77,'PartsLockWaitMicroseconds':1,'RealTimeMicroseconds':2157475,'CannotWriteToWriteBufferDiscard':36,'LogTrace':6,'LogDebug':59,'LoggerElapsedNanoseconds':514040,'ConcurrencyControlSlotsGranted':53,'ConcurrencyControlSlotsAcquired':53} | {"source_file": "part_log.md"} | [
0.044240064918994904,
-0.030187644064426422,
0.021147344261407852,
-0.008343660272657871,
0.03791220113635063,
-0.1072041466832161,
0.02476462721824646,
0.021573811769485474,
0.046082012355327606,
0.019752440974116325,
0.046107128262519836,
-0.028313331305980682,
-0.012603997252881527,
-0.... |
33f6e537-9354-4a05-a25e-57a53e3cda84 | description: 'System table containing information about and status of replicated tables
residing on the local server. Useful for monitoring.'
keywords: ['system table', 'replicas']
slug: /operations/system-tables/replicas
title: 'system.replicas'
doc_type: 'reference'
system.replicas
Contains information and status for replicated tables residing on the local server.
This table can be used for monitoring. The table contains a row for every Replicated* table.
Example:
sql
SELECT *
FROM system.replicas
WHERE table = 'test_table'
FORMAT Vertical
```text
Query id: dc6dcbcb-dc28-4df9-ae27-4354f5b3b13e
Row 1:
βββββββ
database: db
table: test_table
engine: ReplicatedMergeTree
is_leader: 1
can_become_leader: 1
is_readonly: 0
is_session_expired: 0
future_parts: 0
parts_to_check: 0
zookeeper_path: /test/test_table
replica_name: r1
replica_path: /test/test_table/replicas/r1
columns_version: -1
queue_size: 27
inserts_in_queue: 27
merges_in_queue: 0
part_mutations_in_queue: 0
queue_oldest_time: 2021-10-12 14:48:48
inserts_oldest_time: 2021-10-12 14:48:48
merges_oldest_time: 1970-01-01 03:00:00
part_mutations_oldest_time: 1970-01-01 03:00:00
oldest_part_to_get: 1_17_17_0
oldest_part_to_merge_to:
oldest_part_to_mutate_to:
log_max_index: 206
log_pointer: 207
last_queue_update: 2021-10-12 14:50:08
absolute_delay: 99
total_replicas: 5
active_replicas: 5
lost_part_count: 0
last_queue_update_exception:
zookeeper_exception:
replica_is_active: {'r1':1,'r2':1}
```
Columns:
database
(
String
) - Database name
table
(
String
) - Table name
engine
(
String
) - Table engine name
is_leader
(
UInt8
) - Whether the replica is the leader.
Multiple replicas can be leaders at the same time. A replica can be prevented from becoming a leader using the
merge_tree
setting
replicated_can_become_leader
. The leaders are responsible for scheduling background merges.
Note that writes can be performed to any replica that is available and has a session in ZK, regardless of whether it is a leader.
can_become_leader
(
UInt8
) - Whether the replica can be a leader.
is_readonly
(
UInt8
) - Whether the replica is in read-only mode.
This mode is turned on if the config does not have sections with ClickHouse Keeper, if an unknown error occurred when reinitializing sessions in ClickHouse Keeper, and during session reinitialization in ClickHouse Keeper.
is_session_expired
(
UInt8
) - Whether the session with ClickHouse Keeper has expired. Basically the same as
is_readonly
. | {"source_file": "replicas.md"} | [
-0.006903640925884247,
0.007495887111872435,
-0.10373303294181824,
0.03271428495645523,
0.014363802038133144,
-0.07591383159160614,
0.015845704823732376,
-0.013572847470641136,
-0.033522460609674454,
0.09979109466075897,
0.030373213812708855,
-0.02834390476346016,
0.08650681376457214,
-0.0... |
3e0b5cfe-d552-4157-a563-1efed968d83f | is_session_expired
(
UInt8
) - Whether the session with ClickHouse Keeper has expired. Basically the same as
is_readonly
.
future_parts
(
UInt32
) - The number of data parts that will appear as the result of INSERTs or merges that haven't been done yet.
parts_to_check
(
UInt32
) - The number of data parts in the queue for verification. A part is put in the verification queue if there is suspicion that it might be damaged.
zookeeper_path
(
String
) - Path to table data in ClickHouse Keeper.
replica_name
(
String
) - Replica name in ClickHouse Keeper. Different replicas of the same table have different names.
replica_path
(
String
) - Path to replica data in ClickHouse Keeper. The same as concatenating 'zookeeper_path/replicas/replica_path'.
columns_version
(
Int32
) - Version number of the table structure. Indicates how many times ALTER was performed. If replicas have different versions, it means some replicas haven't made all of the ALTERs yet.
queue_size
(
UInt32
) - Size of the queue for operations waiting to be performed. Operations include inserting blocks of data, merges, and certain other actions. It usually coincides with
future_parts
.
inserts_in_queue
(
UInt32
) - Number of inserts of blocks of data that need to be made. Insertions are usually replicated fairly quickly. If this number is large, it means something is wrong.
merges_in_queue
(
UInt32
) - The number of merges waiting to be made. Sometimes merges are lengthy, so this value may be greater than zero for a long time.
part_mutations_in_queue
(
UInt32
) - The number of mutations waiting to be made.
queue_oldest_time
(
DateTime
) - If
queue_size
is greater than 0, shows when the oldest operation was added to the queue.
inserts_oldest_time
(
DateTime
) - See
queue_oldest_time
merges_oldest_time
(
DateTime
) - See
queue_oldest_time
part_mutations_oldest_time
(
DateTime
) - See
queue_oldest_time
The next 4 columns have a non-zero value only where there is an active session with ZK.
log_max_index
(
UInt64
) - Maximum entry number in the log of general activity.
log_pointer
(
UInt64
) - Maximum entry number in the log of general activity that the replica copied to its execution queue, plus one. If
log_pointer
is much smaller than
log_max_index
, something is wrong.
last_queue_update
(
DateTime
) - When the queue was updated last time.
absolute_delay
(
UInt64
) - The lag of the current replica, in seconds.
total_replicas
(
UInt8
) - The total number of known replicas of this table.
active_replicas
(
UInt8
) - The number of replicas of this table that have a session in ClickHouse Keeper (i.e., the number of functioning replicas).
lost_part_count
(
UInt64
) - The number of data parts lost in the table by all replicas in total since table creation. Value is persisted in ClickHouse Keeper and can only increase. | {"source_file": "replicas.md"} | [
-0.02172747254371643,
-0.014100763015449047,
-0.059597164392471313,
0.011771748773753643,
0.01425464078783989,
-0.13052384555339813,
0.008527602069079876,
-0.019769158214330673,
0.01562116201967001,
0.06628338247537613,
0.05086376890540123,
-0.022810686379671097,
0.02746760845184326,
-0.00... |
efcd63e2-fc21-4bb8-9f6f-972d6d9eef7d | lost_part_count
(
UInt64
) - The number of data parts lost in the table by all replicas in total since table creation. Value is persisted in ClickHouse Keeper and can only increase.
last_queue_update_exception
(
String
) - The last exception message if the queue contains broken entries. Especially important when ClickHouse breaks backward compatibility between versions and log entries written by newer versions aren't parseable by old versions.
zookeeper_exception
(
String
) - The last exception message received when an error occurred while fetching the info from ClickHouse Keeper.
replica_is_active
(
Map(String, UInt8)
) β Map between replica name and is replica active.
If you request all the columns, the table may work a bit slowly, since several reads from ClickHouse Keeper are made for each row.
If you do not request the last 4 columns (log_max_index, log_pointer, total_replicas, active_replicas), the table works quickly.
For example, you can check that everything is working correctly like this:
sql
SELECT
database,
table,
is_leader,
is_readonly,
is_session_expired,
future_parts,
parts_to_check,
columns_version,
queue_size,
inserts_in_queue,
merges_in_queue,
log_max_index,
log_pointer,
total_replicas,
active_replicas
FROM system.replicas
WHERE
is_readonly
OR is_session_expired
OR future_parts > 20
OR parts_to_check > 10
OR queue_size > 20
OR inserts_in_queue > 10
OR log_max_index - log_pointer > 10
OR total_replicas < 2
OR active_replicas < total_replicas
If this query does not return anything, it means that everything is fine. | {"source_file": "replicas.md"} | [
0.03131233900785446,
-0.0519842728972435,
0.006240897811949253,
0.045697927474975586,
-0.0018750015879049897,
-0.09827584773302078,
-0.0326087549328804,
-0.04233178496360779,
-0.004302854649722576,
0.08929543197154999,
0.03573115915060043,
-0.0462670736014843,
0.051423199474811554,
-0.0297... |
365b0ce4-ff70-4b7c-8059-9e4a2f982922 | description: 'System table containing information about and status of replicated database.'
keywords: ['system table', 'database_replicas']
slug: /operations/system-tables/database_replicas
title: 'system.database_replicas'
doc_type: 'reference'
Contains information about the replicas of each Replicated database.
Columns:
database
(
String
) β The name of the Replicated database.
is_readonly
(
UInt8
) β Whether the database replica is in read-only mode.
max_log_ptr
(
Int32
) β Maximum entry number in the log of general activity.
replica_name
(
String
) β Replica name in ClickHouse Keeper.
replica_path
(
String
) β Path to replica data in ClickHouse Keeper.
zookeeper_path
(
String
) β Path to database data in ClickHouse Keeper.
shard_name
(
String
) β The name of the shard in the cluster.
log_ptr
(
Int32
) β Maximum entry number in the log of general activity that the replica copied to its execution queue, plus one.
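The replication lag of a database replica can be estimated from the gap between `max_log_ptr` and `log_ptr`. A minimal sketch using the columns described above (not part of the original docs):

```sql
SELECT database, replica_name, max_log_ptr - log_ptr AS entries_behind
FROM system.database_replicas
WHERE max_log_ptr - log_ptr > 0;
```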
total_replicas
(
UInt32
) β The total number of known replicas of this database.
zookeeper_exception
(
String
) β The last exception message received when an error occurred while fetching the info from ClickHouse Keeper.
is_session_expired
(
UInt8
) β Whether the session with ClickHouse Keeper has expired. Basically the same as
is_readonly
.
Example
sql
SELECT * FROM system.database_replicas FORMAT Vertical;
text
Row 1:
ββββββ
database: db_2
is_readonly: 0
max_log_ptr: 2
replica_name: replica1
replica_path: /test/db_2/replicas/shard1|replica1
zookeeper_path: /test/db_2
shard_name: shard1
log_ptr: 2
total_replicas: 1
zookeeper_exception:
is_session_expired: 0 | {"source_file": "database_replicas.md"} | [
0.03525771200656891,
-0.06437231600284576,
-0.09765760600566864,
-0.006817162968218327,
-0.024511581286787987,
-0.10621026158332825,
0.020851798355579376,
0.014026865363121033,
-0.030239267274737358,
0.07792805135250092,
0.010996261611580849,
-0.011782648973166943,
0.11975976824760437,
-0.... |
35e41cc7-5996-4aec-9d2d-c91ba6f97a6b | description: 'System table containing information about Refreshable Materialized Views.'
keywords: ['system table', 'view_refreshes']
slug: /operations/system-tables/view_refreshes
title: 'system.view_refreshes'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.view_refreshes
Information about
Refreshable Materialized Views
. Contains all refreshable materialized views, regardless of whether there's a refresh in progress or not.
Columns:
database
(
String
) β The name of the database the table is in.
view
(
String
) β Table name.
uuid
(
UUID
) β Table uuid (Atomic database).
status
(
String
) β Current state of the refresh.
last_success_time
(
Nullable(DateTime)
) β Time when the latest successful refresh started. NULL if no successful refreshes happened since server startup or table creation.
last_success_duration_ms
(
Nullable(UInt64)
) β How long the latest successful refresh took, in milliseconds.
last_refresh_time
(
Nullable(DateTime)
) β Time when the latest refresh attempt finished (if known) or started (if unknown or still running). NULL if no refresh attempts happened since server startup or table creation.
last_refresh_replica
(
String
) β If coordination is enabled, name of the replica that made the current (if running) or previous (if not running) refresh attempt.
next_refresh_time
(
Nullable(DateTime)
) β Time at which the next refresh is scheduled to start, if status = Scheduled.
exception
(
String
) β Error message from previous attempt if it failed.
retry
(
UInt64
) β How many failed attempts there were so far, for the current refresh. Not available if status is
RunningOnAnotherReplica
.
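To list refreshable views whose current refresh is failing, one can filter on the `retry` and `exception` columns described here (an illustrative query, not from the original docs):

```sql
SELECT database, view, status, retry, exception
FROM system.view_refreshes
WHERE retry > 0 OR exception != '';
```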
progress
(
Nullable(Float64)
) β Progress of the current running or most recently completed refresh at the given replica, between 0 and 1. NULL if status is
RunningOnAnotherReplica
or the refresh is not running.
read_rows
(
Nullable(UInt64)
) β Number of rows read by the current running or most recently completed refresh at the given replica. NULL if status is
RunningOnAnotherReplica
.
read_bytes
(
Nullable(UInt64)
) β Number of bytes read by the current running or most recently completed refresh at the given replica. NULL if status is
RunningOnAnotherReplica
.
total_rows
(
Nullable(UInt64)
) β Estimated total number of rows that need to be read by the current running or most recently completed refresh at the given replica. NULL if status is
RunningOnAnotherReplica
.
written_rows
(
Nullable(UInt64)
) β Number of rows written by the current running or most recently completed refresh at the given replica. NULL if status is
RunningOnAnotherReplica
.
written_bytes
(
Nullable(UInt64)
) β Number of bytes written by the current running or most recently completed refresh at the given replica. NULL if status is
RunningOnAnotherReplica
.
Example | {"source_file": "view_refreshes.md"} | [
0.01196164172142744,
-0.08951012045145035,
-0.06741032004356384,
0.006688795983791351,
0.029884053394198418,
-0.07394006848335266,
-0.009130513295531273,
-0.022869016975164413,
0.004319090861827135,
0.10435450822114944,
0.021288182586431503,
-0.007713197730481625,
0.07507314532995224,
-0.0... |
9a825ab7-3f3f-4b17-840f-634c25da50ae | Example
```sql
SELECT
database,
view,
status,
last_refresh_result,
last_refresh_time,
next_refresh_time
FROM system.view_refreshes
ββdatabaseββ¬βviewββββββββββββββββββββββββ¬βstatusβββββ¬βlast_refresh_resultββ¬βββlast_refresh_timeββ¬βββnext_refresh_timeββ
β default β hello_documentation_reader β Scheduled β Finished β 2023-12-01 01:24:00 β 2023-12-01 01:25:00 β
ββββββββββββ΄βββββββββββββββββββββββββββββ΄ββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ
``` | {"source_file": "view_refreshes.md"} | [
0.04039771109819412,
-0.09202982485294342,
-0.08411618322134018,
0.06537392735481262,
-0.06046360731124878,
-0.007921326905488968,
0.03592068329453468,
0.025080503895878792,
0.004175178706645966,
0.04152485355734825,
0.05566570162773132,
0.01918203942477703,
0.04504677280783653,
-0.0462723... |
20bb8c66-ae31-4ae7-a127-7d7114638b59 | description: 'System table containing information about tables that drop table has
been executed on but for which data cleanup has not yet been performed'
keywords: ['system table', 'dropped_tables']
slug: /operations/system-tables/dropped_tables
title: 'system.dropped_tables'
doc_type: 'reference'
Contains information about tables on which DROP TABLE has been executed but for which data cleanup has not yet been performed.
Columns:
index
(
UInt32
) β Index in marked_dropped_tables queue.
database
(
String
) β Database name.
table
(
String
) β Table name.
uuid
(
UUID
) β Table UUID.
engine
(
String
) β Table engine name.
metadata_dropped_path
(
String
) β Path of table's metadata file in metadata_dropped directory.
table_dropped_time
(
DateTime
) β The time when the next attempt to remove the table's data is scheduled. Usually it's the time when the table was dropped plus
database_atomic_delay_before_drop_table_sec
.
Example
The following example shows how to get information about
dropped_tables
.
sql
SELECT *
FROM system.dropped_tables\G
text
Row 1:
ββββββ
index: 0
database: default
table: test
uuid: 03141bb2-e97a-4d7c-a172-95cc066bb3bd
engine: MergeTree
metadata_dropped_path: /data/ClickHouse/build/programs/data/metadata_dropped/default.test.03141bb2-e97a-4d7c-a172-95cc066bb3bd.sql
table_dropped_time: 2023-03-16 23:43:31 | {"source_file": "dropped_tables.md"} | [
0.004625394474714994,
-0.023012353107333183,
-0.035429880023002625,
0.04686564579606056,
0.0845843032002449,
-0.123675636947155,
0.08862468600273132,
0.03822365403175354,
0.00921811442822218,
0.03936181962490082,
0.05337756127119064,
0.0033822916448116302,
0.03359246626496315,
-0.065559051... |
0ad6a2b5-14be-4301-927c-c82d33c48838 | description: 'Overview of what system tables are and why they are useful.'
keywords: ['system tables', 'overview']
pagination_next: operations/system-tables/asynchronous_metric_log
sidebar_label: 'Overview'
sidebar_position: 52
slug: /operations/system-tables/
title: 'System Tables'
doc_type: 'reference' | {"source_file": "index.md"} | [
-0.011981352232396603,
-0.043988186866045,
-0.08711706101894379,
-0.02627117931842804,
-0.0217881016433239,
-0.020457593724131584,
0.06151158735156059,
0.09490694105625153,
-0.029741233214735985,
0.053836628794670105,
-0.017468051984906197,
0.05190729722380638,
0.0596199706196785,
-0.03081... |
1453b3f4-b460-4b2f-8e6a-6e78a0f03084 | description: 'System table containing information about contributors.'
keywords: ['system table', 'contributors']
slug: /operations/system-tables/contributors
title: 'system.contributors'
doc_type: 'reference'
Contains information about contributors. The order is random at query execution time.
Columns:
name
(
String
) β Contributor (author) name from git log.
Example
sql
SELECT * FROM system.contributors LIMIT 10
text
ββnameββββββββββββββ
β Olga Khvostikova β
β Max Vetrov β
β LiuYangkuan β
β svladykin β
β zamulla β
β Ε imon PodlipskΓ½ β
β BayoNet β
β Ilya Khomutov β
β Amy Krishnevsky β
β Loud_Scream β
ββββββββββββββββββββ
To find yourself in the table, use this query:
sql
SELECT * FROM system.contributors WHERE name = 'Olga Khvostikova'
text
ββnameββββββββββββββ
β Olga Khvostikova β
ββββββββββββββββββββ | {"source_file": "contributors.md"} | [
0.08482580631971359,
-0.018216323107481003,
-0.023254137486219406,
0.010340367443859577,
0.004476351663470268,
-0.03591446951031685,
0.12746091187000275,
0.007423584349453449,
0.0210371445864439,
0.09113015979528427,
0.052500445395708084,
-0.0019958389457315207,
0.061190299689769745,
-0.11... |
19870658-f638-4627-ba16-8e9f887353d6 | description: 'System table containing information about parts of MergeTree dropped
tables from
system.dropped_tables
'
keywords: ['system table', 'dropped_tables_parts']
slug: /operations/system-tables/dropped_tables_parts
title: 'system.dropped_tables_parts'
doc_type: 'reference'
Contains information about parts of
MergeTree
dropped tables from
system.dropped_tables
The schema of this table is the same as
system.parts
See Also
MergeTree family
system.parts
system.dropped_tables | {"source_file": "dropped_tables_parts.md"} | [
0.023212943226099014,
-0.02109936997294426,
0.01732579804956913,
-0.011723168194293976,
0.054781049489974976,
-0.0943947434425354,
0.044063668698072433,
0.0855092853307724,
-0.07470200955867767,
-0.0014440539525821805,
0.028493087738752365,
-0.014723662286996841,
0.05425164848566055,
-0.04... |
90f0442a-310b-480e-a6c3-0e0fc33b03da | description: 'System table containing information about executed queries, for example,
start time, duration of processing, error messages.'
keywords: ['system table', 'query_log']
slug: /operations/system-tables/query_log
title: 'system.query_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.query_log
Stores metadata and statistics about executed queries, such as start time, duration, error messages, resource usage, and other execution details. It does not store the results of queries.
You can change settings of queries logging in the
query_log
section of the server configuration.
You can disable queries logging by setting
log_queries = 0
. We do not recommend turning off logging because the information in this table is important for solving issues.
The flushing period of data is set in
flush_interval_milliseconds
parameter of the
query_log
server settings section. To force flushing, use the
SYSTEM FLUSH LOGS
query.
ClickHouse does not delete data from the table automatically. See
Introduction
for more details.
The
system.query_log
table registers two kinds of queries:
Initial queries that were run directly by the client.
Child queries that were initiated by other queries (for distributed query execution). For these types of queries, information about the parent queries is shown in the
initial_*
columns.
Each query creates one or two rows in the
query_log
table, depending on the status (see the
type
column) of the query:
If the query execution was successful, two rows with the
QueryStart
and
QueryFinish
types are created.
If an error occurred during query processing, two events with the
QueryStart
and
ExceptionWhileProcessing
types are created.
If an error occurred before launching the query, a single event with the
ExceptionBeforeStart
type is created.
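The one-or-two-row pattern above can be checked directly. A sketch (not from the original docs) that finds queries which have a `QueryStart` row but no matching `QueryFinish` row in today's log:

```sql
SELECT
    query_id,
    countIf(type = 'QueryStart')  AS starts,
    countIf(type = 'QueryFinish') AS finishes
FROM system.query_log
WHERE event_date = today()
GROUP BY query_id
HAVING starts > finishes;
```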
You can use the
log_queries_probability
setting to reduce the number of queries registered in the
query_log
table.
You can use the
log_formatted_queries
setting to log formatted queries to the
formatted_query
column.
Columns {#columns}
hostname
(
LowCardinality(String)
) β Hostname of the server executing the query.
type
(
Enum8
) β Type of an event that occurred when executing the query. Values:
'QueryStart' = 1
β Successful start of query execution.
'QueryFinish' = 2
β Successful end of query execution.
'ExceptionBeforeStart' = 3
β Exception before the start of query execution.
'ExceptionWhileProcessing' = 4
β Exception during the query execution.
event_date
(
Date
) β Query starting date.
event_time
(
DateTime
) β Query starting time.
event_time_microseconds
(
DateTime64
) β Query starting time with microseconds precision.
query_start_time
(
DateTime
) β Start time of query execution.
query_start_time_microseconds
(
DateTime64
) β Start time of query execution with microsecond precision. | {"source_file": "query_log.md"} | [
0.0483911968767643,
0.0004301743465475738,
-0.05139689892530441,
0.01746797189116478,
-0.02293410710990429,
-0.08078084141016006,
0.06314995139837265,
-0.03345191851258278,
0.04164576530456543,
0.1036858931183815,
-0.016136206686496735,
0.008715087547898293,
0.12912622094154358,
-0.1054767... |
3c381e4f-53b1-4cf2-88dd-c8e4959abd1c | query_start_time
(
DateTime
) β Start time of query execution.
query_start_time_microseconds
(
DateTime64
) β Start time of query execution with microsecond precision.
query_duration_ms
(
UInt64
) β Duration of query execution in milliseconds.
read_rows
(
UInt64
) β Total number of rows read from all tables and table functions that participated in the query. It includes usual subqueries and subqueries for
IN
and
JOIN
. For distributed queries
read_rows
includes the total number of rows read at all replicas. Each replica sends its
read_rows
value, and the server-initiator of the query summarizes all received and local values. The cache volumes do not affect this value.
read_bytes
(
UInt64
) β Total number of bytes read from all tables and table functions that participated in the query. It includes usual subqueries and subqueries for
IN
and
JOIN
. For distributed queries
read_bytes
includes the total number of bytes read at all replicas. Each replica sends its
read_bytes
value, and the server-initiator of the query summarizes all received and local values. The cache volumes do not affect this value.
written_rows
(
UInt64
) β For
INSERT
queries, the number of written rows. For other queries, the column value is 0.
written_bytes
(
UInt64
) β For
INSERT
queries, the number of written bytes (uncompressed). For other queries, the column value is 0.
result_rows
(
UInt64
) β Number of rows in a result of the
SELECT
query, or a number of rows in the
INSERT
query.
result_bytes
(
UInt64
) β RAM volume in bytes used to store a query result.
memory_usage
(
UInt64
) β Memory consumption by the query.
current_database
(
String
) β Name of the current database.
query
(
String
) β Query string.
formatted_query
(
String
) β Formatted query string.
normalized_query_hash
(
UInt64
) β A numeric hash value that is identical for queries that differ only in the values of literals.
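This makes it easy to group repeated query shapes regardless of their literal values. An illustrative aggregation (not from the original docs):

```sql
SELECT
    normalized_query_hash,
    any(query) AS sample_query,
    count()    AS runs
FROM system.query_log
WHERE type = 'QueryFinish'
GROUP BY normalized_query_hash
ORDER BY runs DESC
LIMIT 10;
```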
query_kind
(
LowCardinality(String)
) β Type of the query.
databases
(
Array
(
LowCardinality(String)
)) β Names of the databases present in the query.
tables
(
Array
(
LowCardinality(String)
)) β Names of the tables present in the query.
columns
(
Array
(
LowCardinality(String)
)) β Names of the columns present in the query.
partitions
(
Array
(
LowCardinality(String)
)) β Names of the partitions present in the query.
projections
(
String
) β Names of the projections used during the query execution.
views
(
Array
(
LowCardinality(String)
)) β Names of the (materialized or live) views present in the query.
exception_code
(
Int32
) β Code of an exception.
exception
(
String
) β Exception message.
stack_trace
(
String
) β
Stack trace
. An empty string if the query completed successfully.
is_initial_query
(
UInt8
) β Query type. Possible values:
1 β Query was initiated by the client.
0 β Query was initiated by another query as part of distributed query execution. | {"source_file": "query_log.md"} | [
0.01922466978430748,
-0.02810850739479065,
-0.054822519421577454,
0.06693657487630844,
-0.04544451832771301,
-0.09169206023216248,
-0.0036143397446721792,
0.01779678463935852,
0.05645669996738434,
0.03296950086951256,
-0.07715164870023727,
0.04255058243870735,
0.04529808834195137,
-0.05291... |
3f682e5a-c2aa-4ac1-8199-2a47ea08bdbd | is_initial_query
(
UInt8
) β Query type. Possible values:
1 β Query was initiated by the client.
0 β Query was initiated by another query as part of distributed query execution.
user
(
String
) β Name of the user who initiated the current query.
query_id
(
String
) β ID of the query.
address
(
IPv6
) β IP address that was used to make the query.
port
(
UInt16
) β The client port that was used to make the query.
initial_user
(
String
) β Name of the user who ran the initial query (for distributed query execution).
initial_query_id
(
String
) β ID of the initial query (for distributed query execution).
initial_address
(
IPv6
) β IP address that the parent query was launched from.
initial_port
(
UInt16
) β The client port that was used to make the parent query.
initial_query_start_time
(
DateTime
) β Initial query starting time (for distributed query execution).
initial_query_start_time_microseconds
(
DateTime64
) β Initial query starting time with microseconds precision (for distributed query execution).
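The initial_* columns tie distributed child queries back to their parent. A sketch that lists every part of one distributed query (the `initial_query_id` value is hypothetical):

```sql
SELECT query_id, is_initial_query, query_duration_ms
FROM system.query_log
WHERE initial_query_id = '7c28bbbb-753b-4eba-98b1-efcbe2b9bdf6'
  AND type = 'QueryFinish';
```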
interface
(
UInt8
) β Interface that the query was initiated from. Possible values:
1 β TCP.
2 β HTTP.
os_user
(
String
) β Operating system username of the user who runs
clickhouse-client
.
client_hostname
(
String
) β Hostname of the client machine where the
clickhouse-client
or another TCP client is run.
client_name
(
String
) β The
clickhouse-client
or another TCP client name.
client_revision
(
UInt32
) β Revision of the
clickhouse-client
or another TCP client.
client_version_major
(
UInt32
) β Major version of the
clickhouse-client
or another TCP client.
client_version_minor
(
UInt32
) β Minor version of the
clickhouse-client
or another TCP client.
client_version_patch
(
UInt32
) β Patch component of the
clickhouse-client
or another TCP client version.
script_query_number
(
UInt32
) β The query number in a script with multiple queries for
clickhouse-client
.
script_line_number
(
UInt32
) β The line number of the query start in a script with multiple queries for
clickhouse-client
.
http_method
(UInt8) β HTTP method that initiated the query. Possible values:
0 β The query was launched from the TCP interface.
1 β
GET
method was used.
2 β
POST
method was used.
http_user_agent
(
String
) β HTTP header
UserAgent
passed in the HTTP query.
http_referer
(
String
) β HTTP header
Referer
passed in the HTTP query (contains an absolute or partial address of the page making the query).
forwarded_for
(
String
) β HTTP header
X-Forwarded-For
passed in the HTTP query.
quota_key
(
String
) β The
quota key
specified in the
quotas
setting (see
keyed
).
revision
(
UInt32
) β ClickHouse revision.
ProfileEvents
(
Map(String, UInt64)
) β ProfileEvents that measure different metrics. Their descriptions can be found in the table
system.events | {"source_file": "query_log.md"} | [
-0.023203637450933456,
0.010661059990525246,
-0.06091424450278282,
0.021774297580122948,
-0.100788414478302,
-0.10168386250734329,
-0.0294530987739563,
0.02344094216823578,
-0.051886655390262604,
-0.0027861774433404207,
-0.013481047004461288,
0.004361418075859547,
0.05097180977463722,
-0.0... |
c2743b44-70fb-4f73-96ca-45a37a234e34 | ProfileEvents
(
Map(String, UInt64)
) β ProfileEvents that measure different metrics. Their descriptions can be found in the table
system.events
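Individual counters can be read from the ProfileEvents map directly with subscript syntax. An illustrative query (counter names are examples from `system.events`):

```sql
SELECT
    query_id,
    ProfileEvents['SelectedRows']         AS selected_rows,
    ProfileEvents['RealTimeMicroseconds'] AS real_time_us
FROM system.query_log
WHERE type = 'QueryFinish'
LIMIT 5;
```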
Settings
(
Map(String, String)
) β Settings that were changed when the client ran the query. To enable logging changes to settings, set the
log_query_settings
parameter to 1.
log_comment
(
String
) β Log comment. It can be set to an arbitrary string no longer than
max_query_size
. An empty string if it is not defined.
thread_ids
(
Array(UInt64)
) β Thread ids that are participating in query execution. These threads may not have run simultaneously.
peak_threads_usage
(
UInt64
) β Maximum count of simultaneous threads executing the query.
used_aggregate_functions
(
Array(String)
) β Canonical names of
aggregate functions
, which were used during query execution.
used_aggregate_function_combinators
(
Array(String)
) β Canonical names of
aggregate functions combinators
, which were used during query execution.
used_database_engines
(
Array(String)
) β Canonical names of
database engines
, which were used during query execution.
used_data_type_families
(
Array(String)
) β Canonical names of
data type families
, which were used during query execution.
used_dictionaries
(
Array(String)
) β Canonical names of
dictionaries
, which were used during query execution. For dictionaries configured using an XML file this is the name of the dictionary, and for dictionaries created by an SQL statement, the canonical name is the fully qualified object name.
used_formats
(
Array(String)
) β Canonical names of
formats
, which were used during query execution.
used_functions
(
Array(String)
) β Canonical names of
functions
, which were used during query execution.
used_storages
(
Array(String)
) β Canonical names of
storages
, which were used during query execution.
used_table_functions
(
Array(String)
) β Canonical names of
table functions
, which were used during query execution.
used_executable_user_defined_functions
(
Array(String)
) β Canonical names of
executable user defined functions
, which were used during query execution.
used_sql_user_defined_functions
(
Array(String)
) β Canonical names of
sql user defined functions
, which were used during query execution.
used_privileges
(
Array(String)
) - Privileges which were successfully checked during query execution.
missing_privileges
(
Array(String)
) - Privileges that are missing during query execution.
query_cache_usage
(
Enum8
) β Usage of the
query cache
during query execution. Values:
'Unknown'
= Status unknown.
'None'
= The query result was neither written into nor read from the query cache.
'Write'
= The query result was written into the query cache.
'Read'
= The query result was read from the query cache.
Examples {#examples}
Basic example
sql
SELECT * FROM system.query_log WHERE type = 'QueryFinish' ORDER BY query_start_time DESC LIMIT 1 FORMAT Vertical; | {"source_file": "query_log.md"} | [
0.039328768849372864,
-0.061317771673202515,
-0.03730560094118118,
0.06563200056552887,
-0.07162369787693024,
-0.060665518045425415,
0.03365236893296242,
0.04098526015877724,
-0.004724834114313126,
0.014052102342247963,
-0.0017861708765849471,
-0.02516539767384529,
0.03327621519565582,
-0.... |
b4bd7975-a132-496e-84e2-b62802f17e7d | text
Row 1:
ββββββ
hostname: clickhouse.eu-central1.internal
type: QueryFinish
event_date: 2021-11-03
event_time: 2021-11-03 16:13:54
event_time_microseconds: 2021-11-03 16:13:54.953024
query_start_time: 2021-11-03 16:13:54
query_start_time_microseconds: 2021-11-03 16:13:54.952325
query_duration_ms: 0
read_rows: 69
read_bytes: 6187
written_rows: 0
written_bytes: 0
result_rows: 69
result_bytes: 48256
memory_usage: 0
current_database: default
query: DESCRIBE TABLE system.query_log
formatted_query:
normalized_query_hash: 8274064835331539124
query_kind:
databases: []
tables: []
columns: []
projections: []
views: []
exception_code: 0
exception:
stack_trace:
is_initial_query: 1
user: default
query_id: 7c28bbbb-753b-4eba-98b1-efcbe2b9bdf6
address: ::ffff:127.0.0.1
port: 40452
initial_user: default
initial_query_id: 7c28bbbb-753b-4eba-98b1-efcbe2b9bdf6
initial_address: ::ffff:127.0.0.1
initial_port: 40452
initial_query_start_time: 2021-11-03 16:13:54
initial_query_start_time_microseconds: 2021-11-03 16:13:54.952325
interface: 1
os_user: sevirov
client_hostname: clickhouse.eu-central1.internal
client_name: ClickHouse
client_revision: 54449
client_version_major: 21
client_version_minor: 10
client_version_patch: 1
http_method: 0
http_user_agent:
http_referer:
forwarded_for:
quota_key:
revision: 54456
log_comment:
thread_ids: [30776,31174]
ProfileEvents: {'Query':1,'NetworkSendElapsedMicroseconds':59,'NetworkSendBytes':2643,'SelectedRows':69,'SelectedBytes':6187,'ContextLock':9,'RWLockAcquiredReadLocks':1,'RealTimeMicroseconds':817,'UserTimeMicroseconds':427,'SystemTimeMicroseconds':212,'OSCPUVirtualTimeMicroseconds':639,'OSReadChars':894,'OSWriteChars':319}
Settings: {'load_balancing':'random','max_memory_usage':'10000000000'}
used_aggregate_functions: []
used_aggregate_function_combinators: [] | {"source_file": "query_log.md"} | [
0.0688166543841362,
-0.017892630770802498,
-0.07700635492801666,
0.06028037145733833,
-0.028059320524334908,
-0.09387940913438797,
0.036472465842962265,
-0.023276694118976593,
0.007538520731031895,
0.055277470499277115,
0.020149629563093185,
-0.0622541643679142,
0.031175924465060234,
-0.04... |
6a6d3231-0612-46a3-90be-4137958a9fc7 | Settings: {'load_balancing':'random','max_memory_usage':'10000000000'}
used_aggregate_functions: []
used_aggregate_function_combinators: []
used_database_engines: []
used_data_type_families: []
used_dictionaries: []
used_formats: []
used_functions: []
used_storages: []
used_table_functions: []
used_executable_user_defined_functions:[]
used_sql_user_defined_functions: []
used_privileges: []
missing_privileges: []
query_cache_usage: None | {"source_file": "query_log.md"} | [
0.055705804377794266,
-0.039590295404195786,
-0.12016945332288742,
0.028497550636529922,
-0.06918036192655563,
-0.0692397877573967,
0.01381010189652443,
0.046604596078395844,
-0.047023605555295944,
0.04969657585024834,
-0.009837350808084011,
-0.020280908793210983,
-0.012490551918745041,
-0... |
5a263f6c-2a38-41a5-944f-7902fa377238 | Cloud example
In ClickHouse Cloud,
system.query_log
is local to each node; to see all entries you must query via
clusterAllReplicas
.
For example, to aggregate query_log rows from every replica in the βdefaultβ cluster you can write:
sql
SELECT *
FROM clusterAllReplicas('default', system.query_log)
WHERE event_time >= now() - toIntervalHour(1)
LIMIT 10
SETTINGS skip_unavailable_shards = 1;
See Also
system.query_thread_log
β This table contains information about each query execution thread. | {"source_file": "query_log.md"} | [
0.09663444757461548,
-0.05459391698241234,
0.017652638256549835,
0.058268893510103226,
0.027897538617253304,
-0.028489060699939728,
0.010756297037005424,
-0.09759189188480377,
0.061308592557907104,
0.06818483024835587,
0.010698673315346241,
-0.040790583938360214,
0.03238032013177872,
-0.06... |
89460664-8cd3-4499-8e86-811bbee56c3a | description: 'System table containing logging entries.'
keywords: ['system table', 'text_log']
slug: /operations/system-tables/text_log
title: 'system.text_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.text_log
Contains logging entries. The logging level which goes to this table can be limited with the
text_log.level
server setting.
Columns:
hostname
(
LowCardinality(String)
) β Hostname of the server executing the query.
event_date
(Date) β Date of the entry.
event_time
(DateTime) β Time of the entry.
event_time_microseconds
(DateTime64) β Time of the entry with microseconds precision.
microseconds
(UInt32) β Microseconds of the entry.
thread_name
(String) β Name of the thread from which the logging was done.
thread_id
(UInt64) β OS thread ID.
level
(
Enum8
) β Entry level. Possible values:
1
or
'Fatal'
.
2
or
'Critical'
.
3
or
'Error'
.
4
or
'Warning'
.
5
or
'Notice'
.
6
or
'Information'
.
7
or
'Debug'
.
8
or
'Trace'
.
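Since lower enum values mean more severe levels, warnings and worse can be pulled with a simple comparison. An illustrative query (not from the original docs):

```sql
SELECT event_time, level, logger_name, message
FROM system.text_log
WHERE level <= 'Warning'
ORDER BY event_time DESC
LIMIT 20;
```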
query_id
(String) β ID of the query.
logger_name
(LowCardinality(String)) β Name of the logger (e.g.
DDLWorker
).
message
(String) β The message itself.
revision
(UInt32) β ClickHouse revision.
source_file
(LowCardinality(String)) β Source file from which the logging was done.
source_line
(UInt64) β Source line from which the logging was done.
message_format_string
(LowCardinality(String)) β A format string that was used to format the message.
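Because `message_format_string` groups messages that differ only in their arguments, it is handy for counting distinct message kinds. A minimal sketch:

```sql
SELECT message_format_string, count() AS occurrences
FROM system.text_log
GROUP BY message_format_string
ORDER BY occurrences DESC
LIMIT 10;
```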
value1
(String) - Argument 1 that was used to format the message.
value2
(String) - Argument 2 that was used to format the message.
value3
(String) - Argument 3 that was used to format the message.
value4
(String) - Argument 4 that was used to format the message.
value5
(String) - Argument 5 that was used to format the message.
value6
(String) - Argument 6 that was used to format the message.
value7
(String) - Argument 7 that was used to format the message.
value8
(String) - Argument 8 that was used to format the message.
value9
(String) - Argument 9 that was used to format the message.
value10
(String) - Argument 10 that was used to format the message.
Example
sql
SELECT * FROM system.text_log LIMIT 1 \G | {"source_file": "text_log.md"} | [
0.07269939035177231,
0.0010823890333995223,
-0.020158639177680016,
-0.009320493787527084,
0.013623991049826145,
-0.11722707748413086,
0.03329772502183914,
0.021875204518437386,
0.048547208309173584,
0.10724731534719467,
-0.012269244529306889,
-0.0487755686044693,
0.10816527903079987,
-0.07... |
6fada682-21fa-4f8e-89ae-7b03591753c8 | value9
(String) - Argument 9 that was used to format the message.
value10
(String) - Argument 10 that was used to format the message.
Example
sql
SELECT * FROM system.text_log LIMIT 1 \G
text
Row 1:
ββββββ
hostname: clickhouse.eu-central1.internal
event_date: 2020-09-10
event_time: 2020-09-10 11:23:07
event_time_microseconds: 2020-09-10 11:23:07.871397
microseconds: 871397
thread_name: clickhouse-serv
thread_id: 564917
level: Information
query_id:
logger_name: DNSCacheUpdater
message: Update period 15 seconds
revision: 54440
source_file: /ClickHouse/src/Interpreters/DNSCacheUpdater.cpp; void DB::DNSCacheUpdater::start()
source_line: 45
message_format_string: Update period {} seconds
value1: 15
value2:
value3:
value4:
value5:
value6:
value7:
value8:
value9:
value10:
description: 'System table containing information about normal and aggregate functions.'
keywords: ['system table', 'functions']
slug: /operations/system-tables/functions
title: 'system.functions'
doc_type: 'reference'
Contains information about normal and aggregate functions.
Columns:
name
(
String
) β The name of the function.
is_aggregate
(
UInt8
) β Whether the function is an aggregate function.
case_insensitive
(
UInt8
) β Whether the function name can be used case-insensitively.
alias_to
(
String
) β The original function name, if the function name is an alias.
create_query
(
String
) β Obsolete.
origin
(
Enum8('System' = 0, 'SQLUserDefined' = 1, 'ExecutableUserDefined' = 2)
) β Obsolete.
description
(
String
) β A high-level description of what the function does.
syntax
(
String
) β Signature of the function.
arguments
(
String
) β The function arguments.
parameters
(
String
) β The function parameters (only for aggregate functions).
returned_value
(
String
) β What the function returns.
examples
(
String
) β Usage example.
introduced_in
(
String
) β ClickHouse version in which the function was first introduced.
categories
(
String
) β The category of the function.
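The documentation columns make this table usable as an in-database reference. For example, to look up how a particular function is used (the function name here is just an illustration):

```sql
SELECT name, description, syntax, examples
FROM system.functions
WHERE name = 'plus';
```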
Example
sql
SELECT name, is_aggregate, is_deterministic, case_insensitive, alias_to FROM system.functions LIMIT 5;
```text
ββnameββββββββββββββββββββββ¬βis_aggregateββ¬βis_deterministicββ¬βcase_insensitiveββ¬βalias_toββ
β BLAKE3 β 0 β 1 β 0 β β
β sipHash128Reference β 0 β 1 β 0 β β
β mapExtractKeyLike β 0 β 1 β 0 β β
β sipHash128ReferenceKeyed β 0 β 1 β 0 β β
β mapPartialSort β 0 β 1 β 0 β β
ββββββββββββββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββββ΄βββββββββββ
5 rows in set. Elapsed: 0.002 sec.
```
description: 'System table containing historical values for
system.asynchronous_metrics
,
which are saved once per time interval (one second by default)'
keywords: ['system table', 'asynchronous_metric_log']
slug: /operations/system-tables/asynchronous_metric_log
title: 'system.asynchronous_metric_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains the historical values for
system.asynchronous_metrics
, which are saved once per time interval (one second by default). Enabled by default.
Columns:
hostname
(
LowCardinality(String)
) β Hostname of the server executing the query.
event_date
(
Date
) β Event date.
event_time
(
DateTime
) β Event time.
metric
(
String
) β Metric name.
value
(
Float64
) β Metric value.
Example
sql
SELECT * FROM system.asynchronous_metric_log LIMIT 3 \G
```text
Row 1:
ββββββ
hostname: clickhouse.eu-central1.internal
event_date: 2023-11-14
event_time: 2023-11-14 14:39:07
metric: AsynchronousHeavyMetricsCalculationTimeSpent
value: 0.001
Row 2:
ββββββ
hostname: clickhouse.eu-central1.internal
event_date: 2023-11-14
event_time: 2023-11-14 14:39:08
metric: AsynchronousHeavyMetricsCalculationTimeSpent
value: 0
Row 3:
ββββββ
hostname: clickhouse.eu-central1.internal
event_date: 2023-11-14
event_time: 2023-11-14 14:39:09
metric: AsynchronousHeavyMetricsCalculationTimeSpent
value: 0
```
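Since the table keeps one row per metric per interval, time-series aggregations are straightforward. A sketch computing an hourly average for today (the metric name is chosen only for illustration):

```sql
SELECT
    toStartOfHour(event_time) AS hour,
    avg(value) AS avg_value
FROM system.asynchronous_metric_log
WHERE metric = 'AsynchronousHeavyMetricsCalculationTimeSpent'
  AND event_date = today()
GROUP BY hour
ORDER BY hour;
```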
See Also
asynchronous_metric_log setting
β Enabling and disabling the setting.
system.asynchronous_metrics
β Contains metrics, calculated periodically in the background.
system.metric_log
β Contains history of metrics values from tables
system.metrics
and
system.events
, periodically flushed to disk.
description: 'System table containing information about in-progress data part moves
of MergeTree tables. Each data part movement is represented by a single row.'
keywords: ['system table', 'moves']
slug: /operations/system-tables/moves
title: 'system.moves'
doc_type: 'reference'
system.moves
The table contains information about in-progress
data part moves
of
MergeTree
tables. Each data part movement is represented by a single row.
Columns:
database
(
String
) β Name of the database.
table
(
String
) β Name of the table containing the moving data part.
elapsed
(
Float64
) β Time elapsed (in seconds) since data part movement started.
target_disk_name
(
String
) β Name of the disk to which the data part is being moved.
target_disk_path
(
String
) β Path to the mount point of the disk in the file system.
part_name
(
String
) β Name of the data part being moved.
part_size
(
UInt64
) β Data part size.
thread_id
(
UInt64
) β Identifier of a thread performing the movement.
Example
sql
SELECT * FROM system.moves
response
ββdatabaseββ¬βtableββ¬βββββelapsedββ¬βtarget_disk_nameββ¬βtarget_disk_pathββ¬βpart_nameββ¬βpart_sizeββ¬βthread_idββ
β default β test2 β 1.668056039 β s3 β ./disks/s3/ β all_3_3_0 β 136 β 296146 β
ββββββββββββ΄ββββββββ΄ββββββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββββ΄ββββββββββββ΄ββββββββββββ΄ββββββββββββ
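Beyond inspecting individual moves, the rows can be aggregated to see how much movement each table currently generates; a sketch:

```sql
SELECT database, table, count() AS moves, sum(part_size) AS bytes
FROM system.moves
GROUP BY database, table;
```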
See Also
MergeTree
table engine
Using Multiple Block Devices for Data Storage
ALTER TABLE ... MOVE PART
command
description: 'System table containing information about the databases that are available
to the current user.'
keywords: ['system table', 'databases']
slug: /operations/system-tables/databases
title: 'system.databases'
doc_type: 'reference'
Contains information about the databases that are available to the current user.
Columns:
name
(
String
) β Database name.
engine
(
String
) β Database engine.
data_path
(
String
) β Data path.
metadata_path
(
String
) β Metadata path.
uuid
(
UUID
) β Database UUID.
engine_full
(
String
) β Parameters of the database engine.
comment
(
String
) β Database comment.
is_external
(
UInt8
) β Whether the database is external (e.g. PostgreSQL/DataLakeCatalog).
The
name
column from this system table is used for implementing the
SHOW DATABASES
query.
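In other words, the following two queries return the same set of database names:

```sql
SHOW DATABASES;

SELECT name FROM system.databases;
```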
Example
Create a database.
sql
CREATE DATABASE test;
Check all of the available databases to the user.
sql
SELECT * FROM system.databases;
```text
ββnameβββββββββββββββββ¬βengineββββββ¬βdata_pathβββββββββββββββββββββ¬βmetadata_pathββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βuuidββββββββββββββββββββββββββββββββββ¬βengine_fullβββββββββββββββββββββββββββββββββββββββββββββ¬βcommentββ
β INFORMATION_SCHEMA β Memory β /data/clickhouse_data/ β β 00000000-0000-0000-0000-000000000000 β Memory β β
β default β Atomic β /data/clickhouse_data/store/ β /data/clickhouse_data/store/f97/f97a3ceb-2e8a-4912-a043-c536e826a4d4/ β f97a3ceb-2e8a-4912-a043-c536e826a4d4 β Atomic β β
β information_schema β Memory β /data/clickhouse_data/ β β 00000000-0000-0000-0000-000000000000 β Memory β β
β replicated_database β Replicated β /data/clickhouse_data/store/ β /data/clickhouse_data/store/da8/da85bb71-102b-4f69-9aad-f8d6c403905e/ β da85bb71-102b-4f69-9aad-f8d6c403905e β Replicated('some/path/database', 'shard1', 'replica1') β β
β system β Atomic β /data/clickhouse_data/store/ β /data/clickhouse_data/store/b57/b5770419-ac7a-4b67-8229-524122024076/ β b5770419-ac7a-4b67-8229-524122024076 β Atomic β β
βββββββββββββββββββββββ΄βββββββββββββ΄βββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββ
```
description: 'System table containing information about maximums for all intervals
of all quotas. Any number of rows or zero can correspond to one quota.'
keywords: ['system table', 'quota_limits']
slug: /operations/system-tables/quota_limits
title: 'system.quota_limits'
doc_type: 'reference'
system.quota_limits
Contains information about maximums for all intervals of all quotas. Zero or any number of rows can correspond to one quota.
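To inspect the limits configured for a single quota, filter by its name (the quota name here is illustrative):

```sql
SELECT *
FROM system.quota_limits
WHERE quota_name = 'default';
```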
Columns:
-
quota_name
(
String
) β Quota name.
-
duration
(
UInt32
) β Length of the time interval for calculating resource consumption, in seconds.
-
is_randomized_interval
(
UInt8
) β Logical value. It shows whether the interval is randomized. If the interval is not randomized, it always starts at the same time: for example, an interval of 1 minute always starts at an integer number of minutes (i.e. it can start at 11:20:00, but it never starts at 11:20:01), and an interval of one day always starts at midnight UTC. If the interval is randomized, the very first interval starts at a random time, and subsequent intervals start one after another. Values:
-
0
β Interval is not randomized.
-
1
β Interval is randomized.
-
max_queries
(
Nullable
(
UInt64
)) β Maximum number of queries.
-
max_query_selects
(
Nullable
(
UInt64
)) β Maximum number of select queries.
-
max_query_inserts
(
Nullable
(
UInt64
)) β Maximum number of insert queries.
-
max_errors
(
Nullable
(
UInt64
)) β Maximum number of errors.
-
max_result_rows
(
Nullable
(
UInt64
)) β Maximum number of result rows.
-
max_result_bytes
(
Nullable
(
UInt64
)) β Maximum amount of RAM in bytes used to store a query's result.
-
max_read_rows
(
Nullable
(
UInt64
)) β Maximum number of rows read from all tables and table functions that participated in queries.
-
max_read_bytes
(
Nullable
(
UInt64
)) β Maximum number of bytes read from all tables and table functions that participated in queries.
-
max_execution_time
(
Nullable
(
Float64
)) β Maximum of the query execution time, in seconds.
description: 'System table containing metrics which can be calculated instantly, or
have a current value.'
keywords: ['system table', 'metrics']
slug: /operations/system-tables/metrics
title: 'system.metrics'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.metrics
Contains metrics which can be calculated instantly, or have a current value. For example, the number of simultaneously processed queries or the current replica delay. This table is always up to date.
Columns:
metric
(
String
) β Metric name.
value
(
Int64
) β Metric value.
description
(
String
) β Metric description.
You can find all supported metrics in source file
src/Common/CurrentMetrics.cpp
.
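The description column also lets you search the metrics from SQL instead of the source file; for example:

```sql
SELECT metric, value, description
FROM system.metrics
WHERE metric ILIKE '%background%';
```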
Example
sql
SELECT * FROM system.metrics LIMIT 10
text
ββmetricββββββββββββββββββββββββββββββββ¬βvalueββ¬βdescriptionβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Query β 1 β Number of executing queries β
β Merge β 0 β Number of executing background merges β
β PartMutation β 0 β Number of mutations (ALTER DELETE/UPDATE) β
β ReplicatedFetch β 0 β Number of data parts being fetched from replicas β
β ReplicatedSend β 0 β Number of data parts being sent to replicas β
β ReplicatedChecks β 0 β Number of data parts checking for consistency β
β BackgroundMergesAndMutationsPoolTask β 0 β Number of active merges and mutations in an associated background pool β
β BackgroundFetchesPoolTask β 0 β Number of active fetches in an associated background pool β
β BackgroundCommonPoolTask β 0 β Number of active tasks in an associated background pool β
β BackgroundMovePoolTask β 0 β Number of active tasks in BackgroundProcessingPool for moves β
ββββββββββββββββββββββββββββββββββββββββ΄ββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Metric descriptions {#metric-descriptions}
AggregatorThreads {#aggregatorthreads}
Number of threads in the Aggregator thread pool.
AggregatorThreadsActive {#aggregatorthreadsactive}
Number of threads in the Aggregator thread pool running a task.
TablesLoaderForegroundThreads {#tablesloaderforegroundthreads}
Number of threads in the async loader foreground thread pool.
TablesLoaderForegroundThreadsActive {#tablesloaderforegroundthreadsactive}
Number of threads in the async loader foreground thread pool running a task.
TablesLoaderBackgroundThreads {#tablesloaderbackgroundthreads}
Number of threads in the async loader background thread pool.
TablesLoaderBackgroundThreadsActive {#tablesloaderbackgroundthreadsactive}
Number of threads in the async loader background thread pool running a task.
AsyncInsertCacheSize {#asyncinsertcachesize}
Number of async insert hash id in cache
AsynchronousInsertThreads {#asynchronousinsertthreads}
Number of threads in the AsynchronousInsert thread pool.
AsynchronousInsertThreadsActive {#asynchronousinsertthreadsactive}
Number of threads in the AsynchronousInsert thread pool running a task.
AsynchronousReadWait {#asynchronousreadwait}
Number of threads waiting for asynchronous read.
BackgroundBufferFlushSchedulePoolSize {#backgroundbufferflushschedulepoolsize}
Limit on number of tasks in BackgroundBufferFlushSchedulePool
BackgroundBufferFlushSchedulePoolTask {#backgroundbufferflushschedulepooltask}
Number of active tasks in BackgroundBufferFlushSchedulePool. This pool is used for periodic Buffer flushes
BackgroundCommonPoolSize {#backgroundcommonpoolsize}
Limit on number of tasks in an associated background pool
BackgroundCommonPoolTask {#backgroundcommonpooltask}
Number of active tasks in an associated background pool
BackgroundDistributedSchedulePoolSize {#backgrounddistributedschedulepoolsize}
Limit on number of tasks in BackgroundDistributedSchedulePool
BackgroundDistributedSchedulePoolTask {#backgrounddistributedschedulepooltask}
Number of active tasks in BackgroundDistributedSchedulePool. This pool is used for distributed sends that are done in the background.
BackgroundFetchesPoolSize {#backgroundfetchespoolsize}
Limit on number of simultaneous fetches in an associated background pool
BackgroundFetchesPoolTask {#backgroundfetchespooltask}
Number of active fetches in an associated background pool
BackgroundMergesAndMutationsPoolSize {#backgroundmergesandmutationspoolsize}
Limit on number of active merges and mutations in an associated background pool
BackgroundMergesAndMutationsPoolTask {#backgroundmergesandmutationspooltask}
Number of active merges and mutations in an associated background pool
BackgroundMessageBrokerSchedulePoolSize {#backgroundmessagebrokerschedulepoolsize}
Limit on number of tasks in BackgroundProcessingPool for message streaming
BackgroundMessageBrokerSchedulePoolTask {#backgroundmessagebrokerschedulepooltask}
Number of active tasks in BackgroundProcessingPool for message streaming
BackgroundMovePoolSize {#backgroundmovepoolsize}
Limit on number of tasks in BackgroundProcessingPool for moves
BackgroundMovePoolTask {#backgroundmovepooltask}
Number of active tasks in BackgroundProcessingPool for moves
BackgroundSchedulePoolSize {#backgroundschedulepoolsize}
Limit on number of tasks in BackgroundSchedulePool. This pool is used for periodic ReplicatedMergeTree tasks, like cleaning old data parts, altering data parts, replica re-initialization, etc.
BackgroundSchedulePoolTask {#backgroundschedulepooltask}
Number of active tasks in BackgroundSchedulePool. This pool is used for periodic ReplicatedMergeTree tasks, like cleaning old data parts, altering data parts, replica re-initialization, etc.
BackupsIOThreads {#backupsiothreads}
Number of threads in the BackupsIO thread pool.
BackupsIOThreadsActive {#backupsiothreadsactive}
Number of threads in the BackupsIO thread pool running a task.
BackupsThreads {#backupsthreads}
Number of threads in the thread pool for BACKUP.
BackupsThreadsActive {#backupsthreadsactive}
Number of threads in thread pool for BACKUP running a task.
BrokenDistributedFilesToInsert {#brokendistributedfilestoinsert}
Number of files for asynchronous insertion into Distributed tables that have been marked as broken. This metric starts from 0 on server start. The number of files for every shard is summed.
CacheDetachedFileSegments {#cachedetachedfilesegments}
Number of existing detached cache file segments
CacheDictionaryThreads {#cachedictionarythreads}
Number of threads in the CacheDictionary thread pool.
CacheDictionaryThreadsActive {#cachedictionarythreadsactive}
Number of threads in the CacheDictionary thread pool running a task.
CacheDictionaryUpdateQueueBatches {#cachedictionaryupdatequeuebatches}
Number of 'batches' (a set of keys) in update queue in CacheDictionaries.
CacheDictionaryUpdateQueueKeys {#cachedictionaryupdatequeuekeys}
Exact number of keys in update queue in CacheDictionaries.
CacheFileSegments {#cachefilesegments}
Number of existing cache file segments
ContextLockWait {#contextlockwait}
Number of threads waiting for lock in Context. This is global lock.
DDLWorkerThreads {#ddlworkerthreads}
Number of threads in the DDLWorker thread pool for ON CLUSTER queries.
DDLWorkerThreadsActive {#ddlworkerthreadsactive}
Number of threads in the DDLWORKER thread pool for ON CLUSTER queries running a task.
DatabaseCatalogThreads {#databasecatalogthreads}
Number of threads in the DatabaseCatalog thread pool.
DatabaseCatalogThreadsActive {#databasecatalogthreadsactive}
Number of threads in the DatabaseCatalog thread pool running a task.
DatabaseOnDiskThreads {#databaseondiskthreads}
Number of threads in the DatabaseOnDisk thread pool.
DatabaseOnDiskThreadsActive {#databaseondiskthreadsactive}
Number of threads in the DatabaseOnDisk thread pool running a task.
DelayedInserts {#delayedinserts}
Number of INSERT queries that are throttled due to high number of active data parts for partition in a MergeTree table.
DestroyAggregatesThreads {#destroyaggregatesthreads}
Number of threads in the thread pool for destroy aggregate states.
DestroyAggregatesThreadsActive {#destroyaggregatesthreadsactive}
Number of threads in the thread pool for destroy aggregate states running a task.
DictCacheRequests {#dictcacherequests}
Number of requests in flight to data sources of dictionaries of cache type.
DiskObjectStorageAsyncThreads {#diskobjectstorageasyncthreads}
Number of threads in the async thread pool for DiskObjectStorage.
DiskObjectStorageAsyncThreadsActive {#diskobjectstorageasyncthreadsactive}
Number of threads in the async thread pool for DiskObjectStorage running a task.
DiskSpaceReservedForMerge {#diskspacereservedformerge}
Disk space reserved for currently running background merges. It is slightly more than the total size of currently merging parts.
DistributedFilesToInsert {#distributedfilestoinsert}
Number of pending files to process for asynchronous insertion into Distributed tables. Number of files for every shard is summed.
DistributedSend {#distributedsend}
Number of connections to remote servers sending data that was INSERTed into Distributed tables. Both synchronous and asynchronous mode.
EphemeralNode {#ephemeralnode}
Number of ephemeral nodes hold in ZooKeeper.
FilesystemCacheElements {#filesystemcacheelements}
Filesystem cache elements (file segments)
FilesystemCacheReadBuffers {#filesystemcachereadbuffers}
Number of active cache buffers
FilesystemCacheSize {#filesystemcachesize}
Filesystem cache size in bytes
QueryCacheBytes {#querycachebytes}
Total size of the query cache in bytes.
QueryCacheEntries {#querycacheentries}
Total number of entries in the query cache.
UncompressedCacheBytes {#uncompressedcachebytes}
Total size of uncompressed cache in bytes. Uncompressed cache does not usually improve the performance and should be mostly avoided.
UncompressedCacheCells {#uncompressedcachecells}
CompiledExpressionCacheBytes {#compiledexpressioncachebytes}
Total bytes used for the cache of JIT-compiled code.
CompiledExpressionCacheCount {#compiledexpressioncachecount}
Total entries in the cache of JIT-compiled code.
MMapCacheCells {#mmapcachecells}
The number of files opened with
mmap
(mapped in memory). This is used for queries with the setting
local_filesystem_read_method
set to
mmap
. The files opened with
mmap
are kept in the cache to avoid costly TLB flushes.
MarkCacheBytes {#markcachebytes}
Total size of mark cache in bytes
MarkCacheFiles {#markcachefiles}
Total number of mark files cached in the mark cache
GlobalThread {#globalthread}
Number of threads in global thread pool.
GlobalThreadActive {#globalthreadactive}
Number of threads in global thread pool running a task.
HTTPConnection {#httpconnection}
Number of connections to HTTP server
HashedDictionaryThreads {#hasheddictionarythreads}
Number of threads in the HashedDictionary thread pool.
HashedDictionaryThreadsActive {#hasheddictionarythreadsactive}
Number of threads in the HashedDictionary thread pool running a task.
IOPrefetchThreads {#ioprefetchthreads}
Number of threads in the IO prefetch thread pool.
IOPrefetchThreadsActive {#ioprefetchthreadsactive}
Number of threads in the IO prefetch thread pool running a task.
IOThreads {#iothreads}
Number of threads in the IO thread pool.
IOThreadsActive {#iothreadsactive}
Number of threads in the IO thread pool running a task.
IOUringInFlightEvents {#iouringinflightevents}
Number of io_uring SQEs in flight
IOUringPendingEvents {#iouringpendingevents}
Number of io_uring SQEs waiting to be submitted
IOWriterThreads {#iowriterthreads}
Number of threads in the IO writer thread pool.
IOWriterThreadsActive {#iowriterthreadsactive}
Number of threads in the IO writer thread pool running a task.
InterserverConnection {#interserverconnection}
Number of connections from other replicas to fetch parts
KafkaAssignedPartitions {#kafkaassignedpartitions}
Number of partitions Kafka tables currently assigned to
KafkaBackgroundReads {#kafkabackgroundreads}
Number of background reads currently working (populating materialized views from Kafka)
KafkaConsumers {#kafkaconsumers}
Number of active Kafka consumers
KafkaConsumersInUse {#kafkaconsumersinuse}
Number of consumers which are currently used by direct or background reads
KafkaConsumersWithAssignment {#kafkaconsumerswithassignment}
Number of active Kafka consumers which have some partitions assigned.
KafkaLibrdkafkaThreads {#kafkalibrdkafkathreads}
Number of active librdkafka threads
KafkaProducers {#kafkaproducers}
Number of active Kafka producers created
KafkaWrites {#kafkawrites}
Number of currently running inserts to Kafka
KeeperAliveConnections {#keeperaliveconnections}
Number of alive connections
KeeperOutstandingRequests {#keeperoutstandingrequests}
Number of outstanding requests
LocalThread {#localthread}
Number of threads in local thread pools. The threads in local thread pools are taken from the global thread pool.
LocalThreadActive {#localthreadactive}
Number of threads in local thread pools running a task.
MMappedAllocBytes {#mmappedallocbytes}
Sum bytes of mmapped allocations
MMappedAllocs {#mmappedallocs}
Total number of mmapped allocations
MMappedFileBytes {#mmappedfilebytes}
Sum size of mmapped file regions.
MMappedFiles {#mmappedfiles}
Total number of mmapped files.
MarksLoaderThreads {#marksloaderthreads}
Number of threads in thread pool for loading marks.
MarksLoaderThreadsActive {#marksloaderthreadsactive}
Number of threads in the thread pool for loading marks running a task.
MaxDDLEntryID {#maxddlentryid}
Max processed DDL entry of DDLWorker.
MaxPushedDDLEntryID {#maxpushedddlentryid}
Max DDL entry of DDLWorker that was pushed to ZooKeeper.
MemoryTracking {#memorytracking}
Total amount of memory (bytes) allocated by the server.
Merge {#merge}
Number of executing background merges
MergeTreeAllRangesAnnouncementsSent {#mergetreeallrangesannouncementssent}
The current number of announcements being sent in flight from the remote server to the initiator server about the set of data parts (for MergeTree tables). Measured on the remote server side.
MergeTreeBackgroundExecutorThreads {#mergetreebackgroundexecutorthreads}
Number of threads in the MergeTreeBackgroundExecutor thread pool.
MergeTreeBackgroundExecutorThreadsActive {#mergetreebackgroundexecutorthreadsactive}
Number of threads in the MergeTreeBackgroundExecutor thread pool running a task.
MergeTreeDataSelectExecutorThreads {#mergetreedataselectexecutorthreads}
Number of threads in the MergeTreeDataSelectExecutor thread pool.
MergeTreeDataSelectExecutorThreadsActive {#mergetreedataselectexecutorthreadsactive}
Number of threads in the MergeTreeDataSelectExecutor thread pool running a task.
MergeTreePartsCleanerThreads {#mergetreepartscleanerthreads}
Number of threads in the MergeTree parts cleaner thread pool.
MergeTreePartsCleanerThreadsActive {#mergetreepartscleanerthreadsactive}
Number of threads in the MergeTree parts cleaner thread pool running a task.
MergeTreePartsLoaderThreads {#mergetreepartsloaderthreads}
Number of threads in the MergeTree parts loader thread pool.
MergeTreePartsLoaderThreadsActive {#mergetreepartsloaderthreadsactive}
Number of threads in the MergeTree parts loader thread pool running a task.
MergeTreeReadTaskRequestsSent {#mergetreereadtaskrequestssent}
The current number of callback requests in flight from the remote server back to the initiator server to choose the read task (for MergeTree tables). Measured on the remote server side.
Move {#move}
Number of currently executing moves
MySQLConnection {#mysqlconnection}
Number of client connections using MySQL protocol
NetworkReceive {#networkreceive}
Number of threads receiving data from network. Only ClickHouse-related network interaction is included, not by 3rd party libraries.
NetworkSend {#networksend}
Number of threads sending data to network. Only ClickHouse-related network interaction is included, not by 3rd party libraries.
OpenFileForRead {#openfileforread}
Number of files open for reading
OpenFileForWrite {#openfileforwrite}
Number of files open for writing
ParallelFormattingOutputFormatThreads {#parallelformattingoutputformatthreads}
Number of threads in the ParallelFormattingOutputFormatThreads thread pool.
ParallelFormattingOutputFormatThreadsActive {#parallelformattingoutputformatthreadsactive}
Number of threads in the ParallelFormattingOutputFormatThreads thread pool running a task.
PartMutation {#partmutation}
Number of mutations (ALTER DELETE/UPDATE)
PartsActive {#partsactive}
Active data part, used by current and upcoming SELECTs.
PartsCommitted {#partscommitted}
Deprecated. See PartsActive.
PartsCompact {#partscompact}
Compact parts.
PartsDeleteOnDestroy {#partsdeleteondestroy}
Part was moved to another disk and should be deleted in own destructor.
PartsDeleting {#partsdeleting}
Not active data part with identity refcounter; it is being deleted right now by a cleaner.
PartsOutdated {#partsoutdated}
Not active data part that can be used only by current SELECTs; it can be deleted after the SELECTs finish.
PartsPreActive {#partspreactive}
The part is in data_parts, but not used for SELECTs.
PartsPreCommitted {#partsprecommitted}
Deprecated. See PartsPreActive.
PartsTemporary {#partstemporary}
The part is being generated now; it is not in the data_parts list.
PartsWide {#partswide}
Wide parts.
PendingAsyncInsert {#pendingasyncinsert}
Number of asynchronous inserts that are waiting for flush.
PostgreSQLConnection {#postgresqlconnection}
Number of client connections using PostgreSQL protocol
Query {#query}
Number of executing queries
QueryPreempted {#querypreempted}
Number of queries that are stopped and waiting due to 'priority' setting.
QueryThread {#querythread}
Number of query processing threads
RWLockActiveReaders {#rwlockactivereaders}
Number of threads holding read lock in a table RWLock.
RWLockActiveWriters {#rwlockactivewriters}
Number of threads holding write lock in a table RWLock.
RWLockWaitingReaders {#rwlockwaitingreaders}
Number of threads waiting for read on a table RWLock.
RWLockWaitingWriters {#rwlockwaitingwriters}
Number of threads waiting for write on a table RWLock.
Read {#read}
Number of read (read, pread, io_getevents, etc.) syscalls in flight
ReadTaskRequestsSent {#readtaskrequestssent}
The current number of callback requests in flight from the remote server back to the initiator server to choose the read task (for s3Cluster table function and similar). Measured on the remote server side.
ReadonlyReplica {#readonlyreplica}
Number of Replicated tables that are currently in readonly state due to re-initialization after ZooKeeper session loss or due to startup without ZooKeeper configured.
RemoteRead {#remoteread}
Number of reads in flight with remote reader
ReplicatedChecks {#replicatedchecks}
Number of data parts checking for consistency | {"source_file": "metrics.md"}
54c291a6-5823-4e6c-81e8-19bad437fdea | RemoteRead {#remoteread}
Number of reads in flight with remote reader
ReplicatedChecks {#replicatedchecks}
Number of data parts checking for consistency
ReplicatedFetch {#replicatedfetch}
Number of data parts being fetched from replica
ReplicatedSend {#replicatedsend}
Number of data parts being sent to replicas
RestartReplicaThreads {#restartreplicathreads}
Number of threads in the RESTART REPLICA thread pool.
RestartReplicaThreadsActive {#restartreplicathreadsactive}
Number of threads in the RESTART REPLICA thread pool running a task.
RestoreThreads {#restorethreads}
Number of threads in the thread pool for RESTORE.
RestoreThreadsActive {#restorethreadsactive}
Number of threads in the thread pool for RESTORE running a task.
Revision {#revision}
Revision of the server. It is a number incremented for every release or release candidate except patch releases.
S3Requests {#s3requests}
S3 requests
SendExternalTables {#sendexternaltables}
Number of connections that are sending data for external tables to remote servers. External tables are used to implement GLOBAL IN and GLOBAL JOIN operators with distributed subqueries.
SendScalars {#sendscalars}
Number of connections that are sending data for scalars to remote servers.
StorageBufferBytes {#storagebufferbytes}
Number of bytes in buffers of Buffer tables
StorageBufferRows {#storagebufferrows}
Number of rows in buffers of Buffer tables
StorageDistributedThreads {#storagedistributedthreads}
Number of threads in the StorageDistributed thread pool.
StorageDistributedThreadsActive {#storagedistributedthreadsactive}
Number of threads in the StorageDistributed thread pool running a task.
StorageHiveThreads {#storagehivethreads}
Number of threads in the StorageHive thread pool.
StorageHiveThreadsActive {#storagehivethreadsactive}
Number of threads in the StorageHive thread pool running a task.
StorageS3Threads {#storages3threads}
Number of threads in the StorageS3 thread pool.
StorageS3ThreadsActive {#storages3threadsactive}
Number of threads in the StorageS3 thread pool running a task.
SystemReplicasThreads {#systemreplicasthreads}
Number of threads in the system.replicas thread pool.
SystemReplicasThreadsActive {#systemreplicasthreadsactive}
Number of threads in the system.replicas thread pool running a task.
TCPConnection {#tcpconnection}
Number of connections to the TCP server (clients with the native interface), including server-to-server distributed query connections
TablesToDropQueueSize {#tablestodropqueuesize}
Number of dropped tables that are waiting for background data removal.
TemporaryFilesForAggregation {#temporaryfilesforaggregation}
Number of temporary files created for external aggregation
TemporaryFilesForJoin {#temporaryfilesforjoin}
Number of temporary files created for JOIN
TemporaryFilesForSort {#temporaryfilesforsort}
Number of temporary files created for external sorting | {"source_file": "metrics.md"}
fd621c0f-6738-4e2a-852f-1eeef78430f5 | TemporaryFilesForJoin {#temporaryfilesforjoin}
Number of temporary files created for JOIN
TemporaryFilesForSort {#temporaryfilesforsort}
Number of temporary files created for external sorting
TemporaryFilesUnknown {#temporaryfilesunknown}
Number of temporary files created without known purpose
ThreadPoolFSReaderThreads {#threadpoolfsreaderthreads}
Number of threads in the thread pool for local_filesystem_read_method=threadpool.
ThreadPoolFSReaderThreadsActive {#threadpoolfsreaderthreadsactive}
Number of threads in the thread pool for local_filesystem_read_method=threadpool running a task.
ThreadPoolRemoteFSReaderThreads {#threadpoolremotefsreaderthreads}
Number of threads in the thread pool for remote_filesystem_read_method=threadpool.
ThreadPoolRemoteFSReaderThreadsActive {#threadpoolremotefsreaderthreadsactive}
Number of threads in the thread pool for remote_filesystem_read_method=threadpool running a task.
ThreadsInOvercommitTracker {#threadsinovercommittracker}
Number of waiting threads inside of OvercommitTracker
TotalTemporaryFiles {#totaltemporaryfiles}
Number of temporary files created
VersionInteger {#versioninteger}
Version of the server in a single integer number in base-1000. For example, version 11.22.33 is translated to 11022033.
Write {#write}
Number of write (write, pwrite, io_getevents, etc.) syscalls in flight
ZooKeeperRequest {#zookeeperrequest}
Number of requests to ZooKeeper in flight.
ZooKeeperSession {#zookeepersession}
Number of sessions (connections) to ZooKeeper. Should be no more than one, because using more than one connection to ZooKeeper may lead to bugs due to lack of linearizability (stale reads) that ZooKeeper consistency model allows.
ZooKeeperWatch {#zookeeperwatch}
Number of watches (event subscriptions) in ZooKeeper.
ConcurrencyControlAcquired {#concurrencycontrolacquired}
Total number of acquired CPU slots.
ConcurrencyControlSoftLimit {#concurrencycontrolsoftlimit}
Value of soft limit on number of CPU slots.
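All of the metrics above live in the system.metrics table and can be queried directly. As a sketch (assuming the standard metric, value, and description columns of that table), the following query lists the metrics that are currently nonzero:

```sql
-- Show the busiest current metrics first
SELECT metric, value, description
FROM system.metrics
WHERE value != 0
ORDER BY value DESC
LIMIT 10;
```

This is a convenient quick health check, since most gauge-style metrics sit at zero on an idle server.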
See Also
system.asynchronous_metrics
β Contains periodically calculated metrics.
system.events
β Contains a number of events that occurred.
system.metric_log
β Contains a history of metrics values from tables
system.metrics
and
system.events
.
Monitoring
β Base concepts of ClickHouse monitoring. | {"source_file": "metrics.md"}
3cf9a4fb-0e3b-4ae5-b254-7b3f55d4d881 | description: 'System table which shows the content of the query cache.'
keywords: ['system table', 'query_cache']
slug: /operations/system-tables/query_cache
title: 'system.query_cache'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.query_cache
Shows the content of the
query cache
.
Columns:
query
(
String
) β Query string.
query_id
(
String
) β ID of the query.
result_size
(
UInt64
) β Size of the query cache entry.
tag
(
LowCardinality(String)
) β Tag of the query cache entry.
stale
(
UInt8
) β If the query cache entry is stale.
shared
(
UInt8
) β If the query cache entry is shared between multiple users.
compressed
(
UInt8
) β If the query cache entry is compressed.
expires_at
(
DateTime
) β When the query cache entry becomes stale.
key_hash
(
UInt64
) β A hash of the query string, used as a key to find query cache entries.
Example
sql
SELECT * FROM system.query_cache FORMAT Vertical;
```text
Row 1:
ββββββ
query: SELECT 1 SETTINGS use_query_cache = 1
query_id: 7c28bbbb-753b-4eba-98b1-efcbe2b9bdf6
result_size: 128
tag:
stale: 0
shared: 0
compressed: 1
expires_at: 2023-10-13 13:35:45
key_hash: 12188185624808016954
1 row in set. Elapsed: 0.004 sec.
``` | {"source_file": "query_cache.md"}
2fa798e6-b301-4de2-a841-d6f1b9610bae | description: 'System table containing a single row with a single
dummy
UInt8 column
containing the value 0. Similar to the
DUAL
table found in other DBMSs.'
keywords: ['system table', 'one']
slug: /operations/system-tables/one
title: 'system.one'
doc_type: 'reference'
system.one
This table contains a single row with a single
dummy
UInt8 column containing the value 0.
This table is used if a
SELECT
query does not specify the
FROM
clause.
This is similar to the
DUAL
table found in other DBMSs.
Example
sql
SELECT * FROM system.one LIMIT 10;
```response
ββdummyββ
β 0 β
βββββββββ
1 rows in set. Elapsed: 0.001 sec.
``` | {"source_file": "one.md"}
4f44e920-4d23-4eb7-b147-792dd177f0d1 | description: 'System table containing information about pending asynchronous inserts
in queue.'
keywords: ['system table', 'asynchronous_inserts']
slug: /operations/system-tables/asynchronous_inserts
title: 'system.asynchronous_inserts'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains information about pending asynchronous inserts in the queue.
Columns:
query
(
String
) β Query text.
database
(
String
) β Database name.
table
(
String
) β Table name.
format
(
String
) β Format name.
first_update
(
DateTime64(6)
) β First insert time with microseconds resolution.
total_bytes
(
UInt64
) β Total number of bytes waiting in the queue.
entries.query_id
(
Array(String)
) β Array of query ids of the inserts waiting in the queue.
entries.bytes
(
Array(UInt64)
) β Array of bytes of each insert query waiting in the queue.
Example
Query:
sql
SELECT * FROM system.asynchronous_inserts LIMIT 1 \G;
Result:
text
Row 1:
ββββββ
query: INSERT INTO public.data_guess (user_id, datasource_id, timestamp, path, type, num, str) FORMAT CSV
database: public
table: data_guess
format: CSV
first_update: 2023-06-08 10:08:54.199606
total_bytes: 133223
entries.query_id: ['b46cd4c4-0269-4d0b-99f5-d27668c6102e']
entries.bytes: [133223]
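Because entries.query_id and entries.bytes are parallel arrays, they can be flattened to one row per queued insert with ARRAY JOIN; a sketch:

```sql
-- One output row per pending insert entry
SELECT table, format, query_id, bytes
FROM system.asynchronous_inserts
ARRAY JOIN
    entries.query_id AS query_id,
    entries.bytes AS bytes;
```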
See Also
system.query_log
β Description of the
query_log
system table which contains common information about queries execution.
system.asynchronous_insert_log
β This table contains information about async inserts performed. | {"source_file": "asynchronous_inserts.md"}
43106ec2-7bb6-4994-a446-d84b5e725d19 | description: 'System table containing a list of time zones that are supported by the
ClickHouse server.'
keywords: ['system table', 'time_zones']
slug: /operations/system-tables/time_zones
title: 'system.time_zones'
doc_type: 'reference'
system.time_zones
Contains a list of time zones that are supported by the ClickHouse server. This list of timezones might vary depending on the version of ClickHouse.
Columns:
time_zone
(
String
) β List of supported time zones.
Example
sql
SELECT * FROM system.time_zones LIMIT 10
text
ββtime_zoneβββββββββββ
β Africa/Abidjan β
β Africa/Accra β
β Africa/Addis_Ababa β
β Africa/Algiers β
β Africa/Asmara β
β Africa/Asmera β
β Africa/Bamako β
β Africa/Bangui β
β Africa/Banjul β
β Africa/Bissau β
ββββββββββββββββββββββ | {"source_file": "time_zones.md"}
24f592ed-56c1-49fc-966c-c26babd75e5f | description: 'System table containing information about all cached file schemas.'
keywords: ['system table', 'schema_inference_cache']
slug: /operations/system-tables/schema_inference_cache
title: 'system.schema_inference_cache'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.schema_inference_cache
Contains information about all cached file schemas.
Columns:
storage
(
String
) β Storage name: File, URL, S3 or HDFS.
source
(
String
) β File source.
format
(
String
) β Format name.
additional_format_info
(
String
) β Additional information required to identify the schema. For example, format-specific settings.
registration_time
(
DateTime
) β Timestamp when schema was added in cache.
schema
(
Nullable(String)
) β Cached schema.
number_of_rows
(
Nullable(UInt64)
) β Number of rows in the file in the given format. It is used for caching the trivial count() from data files and for caching the number of rows from the metadata during schema inference.
schema_inference_mode
(
Nullable(String)
) β Schema inference mode.
Example
Let's say we have a file
data.jsonl
with this content:
json
{"id" : 1, "age" : 25, "name" : "Josh", "hobbies" : ["football", "cooking", "music"]}
{"id" : 2, "age" : 19, "name" : "Alan", "hobbies" : ["tennis", "art"]}
{"id" : 3, "age" : 32, "name" : "Lana", "hobbies" : ["fitness", "reading", "shopping"]}
{"id" : 4, "age" : 47, "name" : "Brayan", "hobbies" : ["movies", "skydiving"]}
:::tip
Place
data.jsonl
in the
user_files_path
directory. You can find this by looking
in your ClickHouse configuration files. The default is:
sql
<user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
:::
Open
clickhouse-client
and run the
DESCRIBE
query:
sql
DESCRIBE file('data.jsonl') SETTINGS input_format_try_infer_integers=0;
response
ββnameβββββ¬βtypeβββββββββββββββββββββ¬βdefault_typeββ¬βdefault_expressionββ¬βcommentββ¬βcodec_expressionββ¬βttl_expressionββ
β id β Nullable(Float64) β β β β β β
β age β Nullable(Float64) β β β β β β
β name β Nullable(String) β β β β β β
β hobbies β Array(Nullable(String)) β β β β β β
βββββββββββ΄ββββββββββββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββ
Let's see the content of the
system.schema_inference_cache
table:
sql
SELECT *
FROM system.schema_inference_cache
FORMAT Vertical | {"source_file": "schema_inference_cache.md"}
7dffbf88-f806-45ca-9399-01bc49a9ba4f | Let's see the content of the
system.schema_inference_cache
table:
sql
SELECT *
FROM system.schema_inference_cache
FORMAT Vertical
response
Row 1:
ββββββ
storage: File
source: /home/droscigno/user_files/data.jsonl
format: JSONEachRow
additional_format_info: schema_inference_hints=, max_rows_to_read_for_schema_inference=25000, schema_inference_make_columns_nullable=true, try_infer_integers=false, try_infer_dates=true, try_infer_datetimes=true, try_infer_numbers_from_strings=true, read_bools_as_numbers=true, try_infer_objects=false
registration_time: 2022-12-29 17:49:52
schema: id Nullable(Float64), age Nullable(Float64), name Nullable(String), hobbies Array(Nullable(String))
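When a cached schema goes stale (for example, after the underlying file changes), the cache can be cleared with the SYSTEM DROP SCHEMA CACHE statement; a sketch:

```sql
-- Drop cached schemas for the File storage only
SYSTEM DROP SCHEMA CACHE FOR File;

-- Or drop the entire schema inference cache
SYSTEM DROP SCHEMA CACHE;
```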
See also
-
Automatic schema inference from input data | {"source_file": "schema_inference_cache.md"}
11d0df5e-2b6b-4988-a675-3d7459c831a3 | description: 'System table similar to
system.numbers
but reads are parallelized
and numbers can be returned in any order.'
keywords: ['system table', 'numbers_mt']
slug: /operations/system-tables/numbers_mt
title: 'system.numbers_mt'
doc_type: 'reference'
The same as
system.numbers
but reads are parallelized. The numbers can be returned in any order.
Used for tests.
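Because the scan is parallelized, row order is not deterministic, but aggregates over a fixed row count are stable; a sketch:

```sql
-- Row order may differ between runs:
SELECT * FROM system.numbers_mt LIMIT 5;

-- Aggregates are unaffected by the ordering:
SELECT count() FROM (SELECT number FROM system.numbers_mt LIMIT 1000000);
```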
Example
sql
SELECT * FROM system.numbers_mt LIMIT 10;
```response
ββnumberββ
β 0 β
β 1 β
β 2 β
β 3 β
β 4 β
β 5 β
β 6 β
β 7 β
β 8 β
β 9 β
ββββββββββ
10 rows in set. Elapsed: 0.001 sec.
``` | {"source_file": "numbers_mt.md"}
df7ef3c7-1db1-42ee-bdf7-370fa5c8a013 | description: 'System table containing history of metrics values from tables
system.metrics
and
system.events
, periodically flushed to disk.'
keywords: ['system table', 'metric_log']
slug: /operations/system-tables/metric_log
title: 'system.metric_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.metric_log
Contains history of metrics values from tables
system.metrics
and
system.events
, periodically flushed to disk.
Columns:
-
hostname
(
LowCardinality(String)
) β Hostname of the server executing the query.
-
event_date
(
Date
) β Event date.
-
event_time
(
DateTime
) β Event time.
-
event_time_microseconds
(
DateTime64
) β Event time with microseconds resolution.
Example
sql
SELECT * FROM system.metric_log LIMIT 1 FORMAT Vertical;
text
Row 1:
ββββββ
hostname: clickhouse.eu-central1.internal
event_date: 2020-09-05
event_time: 2020-09-05 16:22:33
event_time_microseconds: 2020-09-05 16:22:33.196807
milliseconds: 196
ProfileEvent_Query: 0
ProfileEvent_SelectQuery: 0
ProfileEvent_InsertQuery: 0
ProfileEvent_FailedQuery: 0
ProfileEvent_FailedSelectQuery: 0
...
...
CurrentMetric_Revision: 54439
CurrentMetric_VersionInteger: 20009001
CurrentMetric_RWLockWaitingReaders: 0
CurrentMetric_RWLockWaitingWriters: 0
CurrentMetric_RWLockActiveReaders: 0
CurrentMetric_RWLockActiveWriters: 0
CurrentMetric_GlobalThread: 74
CurrentMetric_GlobalThreadActive: 26
CurrentMetric_LocalThread: 0
CurrentMetric_LocalThreadActive: 0
CurrentMetric_DistributedFilesToInsert: 0
Schema
This table can be configured with different schema types using the XML tag
<schema_type>
. The default schema type is
wide
, where each metric or profile event is stored as a separate column. This schema is the most performant and efficient for single-column reads.
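A sketch of how the schema type might be selected in the server configuration; the surrounding metric_log tags mirror the usual log-table settings, but the exact layout may differ between versions:

```xml
<clickhouse>
    <metric_log>
        <database>system</database>
        <table>metric_log</table>
        <!-- 'wide' (default), 'transposed', or 'transposed_with_wide_view' -->
        <schema_type>transposed</schema_type>
    </metric_log>
</clickhouse>
```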
The
transposed
schema stores data in a format similar to
system.asynchronous_metric_log
, where metrics and events are stored as rows. This schema is useful for low-resource setups because it reduces resource consumption during merges. | {"source_file": "metric_log.md"}
c8d964e4-ee96-4846-9673-c5857dd79fe4 | There is also a compatibility schema,
transposed_with_wide_view
, which stores actual data in a table with the transposed schema (
system.transposed_metric_log
) and creates a view on top of it using the wide schema. This view queries the transposed table, making it useful for migrating from the
wide
schema to the
transposed
schema.
See also
metric_log setting
β Enabling and disabling the setting.
system.asynchronous_metrics
β Contains periodically calculated metrics.
system.events
β Contains a number of events that occurred.
system.metrics
β Contains instantly calculated metrics.
Monitoring
β Base concepts of ClickHouse monitoring. | {"source_file": "metric_log.md"}
f9d5dd5a-6a7d-490b-b496-7db0cd78982a | description: 'System table which describes the content of the settings profile: constraints,
roles and users that the setting applies to, parent settings profiles.'
keywords: ['system table', 'settings_profile_elements']
slug: /operations/system-tables/settings_profile_elements
title: 'system.settings_profile_elements'
doc_type: 'reference'
system.settings_profile_elements
Describes the content of the settings profile:
Constraints.
Roles and users that the setting applies to.
Parent settings profiles.
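For example, a sketch of a query that shows which profile elements constrain a particular setting (max_memory_usage here is only an illustrative setting name):

```sql
SELECT profile_name, setting_name, value, min, max, writability
FROM system.settings_profile_elements
WHERE setting_name = 'max_memory_usage';
```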
Columns:
profile_name
(
Nullable(String)
) β Setting profile name.
user_name
(
Nullable(String)
) β User name.
role_name
(
Nullable(String)
) β Role name.
index
(
UInt64
) β Sequential number of the settings profile element.
setting_name
(
Nullable(String)
) β Setting name.
value
(
Nullable(String)
) β Setting value.
min
(
Nullable(String)
) β The minimum value of the setting. NULL if not set.
max
(
Nullable(String)
) β The maximum value of the setting. NULL if not set.
writability
(
Nullable(Enum8('WRITABLE' = 0, 'CONST' = 1, 'CHANGEABLE_IN_READONLY' = 2))
) β The property which shows whether a setting can be changed or not.
inherit_profile
(
Nullable(String)
) β A parent profile for this setting profile. NULL if not set. Setting profile will inherit all the settings' values and constraints (min, max, readonly) from its parent profiles. | {"source_file": "settings_profile_elements.md"}
0a017cee-2643-40c2-ac3f-5716ee205ed2 | description: 'System table containing formation about global settings for the server,
which are specified in
config.xml
.'
keywords: ['system table', 'server_settings']
slug: /operations/system-tables/server_settings
title: 'system.server_settings'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.server_settings
Contains information about global settings for the server, which are specified in
config.xml
.
Currently, the table shows only settings from the first layer of
config.xml
and doesn't support nested configs (e.g.
logger
).
Columns:
name
(
String
) β Server setting name.
value
(
String
) β Server setting value.
default
(
String
) β Server setting default value.
changed
(
UInt8
) β Shows whether a setting was specified in
config.xml
description
(
String
) β Short server setting description.
type
(
String
) β Server setting value type.
changeable_without_restart
(
Enum8
) β Whether the setting can be changed at server runtime. Values:
'No'
'IncreaseOnly'
'DecreaseOnly'
'Yes'
is_obsolete
(
UInt8
) - Shows whether a setting is obsolete.
Example
The following example shows how to get information about server settings whose name contains
thread_pool
.
sql
SELECT *
FROM system.server_settings
WHERE name LIKE '%thread_pool%' | {"source_file": "server_settings.md"}
0e7cdeff-fe1f-4cd3-8294-f207566e6fb1 | ```text
ββnameβββββββββββββββββββββββββββββββββββββββββββ¬βvalueββ¬βdefaultββ¬βchangedββ¬βdescriptionββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βtypeββββ¬βchangeable_without_restartββ¬βis_obsoleteββ
β max_thread_pool_size β 10000 β 10000 β 0 β The maximum number of threads that could be allocated from the OS and used for query execution and background operations. β UInt64 β No β 0 β
β max_thread_pool_free_size β 1000 β 1000 β 0 β The maximum number of threads that will always stay in a global thread pool once allocated and remain idle in case of insufficient number of tasks. β UInt64 β No β 0 β
β thread_pool_queue_size β 10000 β 10000 β 0 β The maximum number of tasks that will be placed in a queue and wait for execution. β UInt64 β No β 0 β
β max_io_thread_pool_size β 100 β 100 β 0 β The maximum number of threads that would be used for IO operations β UInt64 β No β 0 β
β max_io_thread_pool_free_size β 0 β 0 β 0 β Max free size for IO thread pool. β UInt64 β No β 0 β
β io_thread_pool_queue_size β 10000 β 10000 β 0 β Queue size for IO thread pool. β UInt64 β No β 0 β
β max_active_parts_loading_thread_pool_size β 64 β 64 β 0 β The number of threads to load active set of data parts (Active ones) at startup. β UInt64 β No β 0 β
β max_outdated_parts_loading_thread_pool_size β 32 β 32 β 0 β The number of threads to load inactive set of data parts (Outdated ones) at startup. β UInt64 β No β 0 β
β max_unexpected_parts_loading_thread_pool_size β 32 β 32 β 0 β The number of threads to load inactive set of data parts (Unexpected ones) at startup. β UInt64 β No β 0 β | {"source_file": "server_settings.md"}
c6fc4f65-b9d7-408f-b8cc-b0dc5cf5a27b | β max_parts_cleaning_thread_pool_size β 128 β 128 β 0 β The number of threads for concurrent removal of inactive data parts. β UInt64 β No β 0 β
β max_backups_io_thread_pool_size β 1000 β 1000 β 0 β The maximum number of threads that would be used for IO operations for BACKUP queries β UInt64 β No β 0 β
β max_backups_io_thread_pool_free_size β 0 β 0 β 0 β Max free size for backups IO thread pool. β UInt64 β No β 0 β
β backups_io_thread_pool_queue_size β 0 β 0 β 0 β Queue size for backups IO thread pool. β UInt64 β No β 0 β
βββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββ΄ββββββββββ΄ββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββ΄βββββββββββββββββββββββββββββ΄ββββββββββββββ | {"source_file": "server_settings.md"}
ad6b50c4-c1e2-45f2-8392-0f4731a46efa | ```
Using
WHERE changed
can be useful, for example, when you want to check
whether settings in configuration files are loaded correctly and are in use.
sql
SELECT * FROM system.server_settings WHERE changed AND name='max_thread_pool_size'
See also
Settings
Configuration Files
Server Settings | {"source_file": "server_settings.md"}
e920b803-ab47-4f69-9642-4cae6bffe8de | description: 'System table containing information about each detached table.'
keywords: ['system table', 'detached_tables']
slug: /operations/system-tables/detached_tables
title: 'system.detached_tables'
doc_type: 'reference'
Contains information about each detached table.
Columns:
database
(
String
) β The name of the database the table is in.
table
(
String
) β Table name.
uuid
(
UUID
) β Table uuid (Atomic database).
metadata_path
(
String
) β Path to the table metadata in the file system.
is_permanently
(
UInt8
) β Table was detached permanently.
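A table shows up here after a DETACH statement; a sketch using a hypothetical table base.t1:

```sql
-- PERMANENTLY keeps the table detached across server restarts
DETACH TABLE base.t1 PERMANENTLY;

SELECT database, table, is_permanently
FROM system.detached_tables
WHERE table = 't1';
```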
Example
sql
SELECT * FROM system.detached_tables FORMAT Vertical;
text
Row 1:
ββββββ
database: base
table: t1
uuid: 81b1c20a-b7c6-4116-a2ce-7583fb6b6736
metadata_path: /var/lib/clickhouse/store/461/461cf698-fd0b-406d-8c01-5d8fd5748a91/t1.sql
is_permanently: 1 | {"source_file": "detached_tables.md"}
84e59c2f-74c0-4091-a83a-2df6bd21a48e | description: 'System table containing filters for one particular table, as well as
a list of roles and/or users which should use this row policy.'
keywords: ['system table', 'row_policies']
slug: /operations/system-tables/row_policies
title: 'system.row_policies'
doc_type: 'reference'
system.row_policies
Contains filters for one particular table, as well as a list of roles and/or users which should use this row policy.
Columns:
name
(
String
) β Name of a row policy.
short_name
(
String
) β Short name of a row policy. Names of row policies are compound, for example: myfilter ON mydb.mytable. Here 'myfilter ON mydb.mytable' is the name of the row policy, and 'myfilter' is its short name.
database
(
String
) β Database name.
table
(
String
) β Table name. Empty if the policy is for a database.
id
(
UUID
) β Row policy ID.
storage
(
String
) β Name of the directory where the row policy is stored.
select_filter
(
Nullable(String)
) β Expression which is used for filtering in SELECT queries.
is_restrictive
(
UInt8
) β Shows whether the row policy restricts access to rows. Values: β’ 0 β The row policy is defined with the
AS PERMISSIVE
clause, β’ 1 β The row policy is defined with the AS RESTRICTIVE clause.
apply_to_all
(
UInt8
) β Shows whether the row policy is set for all roles and/or users.
apply_to_list
(
Array(String)
) β List of the roles and/or users to which the row policy is applied.
apply_to_except
(
Array(String)
) β The row policy is applied to all roles and/or users except the listed ones.
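For example, a sketch of a query listing the policies that affect a hypothetical user named analyst:

```sql
SELECT name, database, table, select_filter, is_restrictive
FROM system.row_policies
WHERE apply_to_all
   OR has(apply_to_list, 'analyst');
```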
See Also {#see-also}
SHOW POLICIES | {"source_file": "row_policies.md"}
6f3a0559-57c2-4d47-943c-79b94cbf49a8 | description: 'System table showing which privileges are granted to ClickHouse user
accounts.'
keywords: ['system table', 'grants']
slug: /operations/system-tables/grants
title: 'system.grants'
doc_type: 'reference'
Privileges granted to ClickHouse user accounts.
Columns:
-
user_name
(
Nullable
(
String
)) β User name.
role_name
(
Nullable
(
String
)) β Role assigned to user account.
access_type
(
Enum8
) β Access parameters for ClickHouse user account.
database
(
Nullable
(
String
)) β Name of a database.
table
(
Nullable
(
String
)) β Name of a table.
column
(
Nullable
(
String
)) β Name of a column to which access is granted.
is_partial_revoke
(
UInt8
) β Logical value. It shows whether some privileges have been revoked. Possible values:
0
β The row describes a grant.
1
β The row describes a partial revoke.
grant_option
(
UInt8
) β Permission is granted
WITH GRANT OPTION
, see
GRANT
.
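A sketch of how these columns combine in practice (the `default` user is just an example):

```sql
-- Direct grants for one user; partial revokes show up as separate rows
SELECT access_type, database, table, column, is_partial_revoke, grant_option
FROM system.grants
WHERE user_name = 'default'
ORDER BY access_type;
```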
description: 'System table containing the history of error values from table
system.errors
,
periodically flushed to disk.'
keywords: ['system table', 'error_log']
slug: /operations/system-tables/system-error-log
title: 'system.error_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains history of error values from table
system.errors
, periodically flushed to disk.
Columns:
-
hostname
(
LowCardinality(String)
) β Hostname of the server executing the query.
-
event_date
(
Date
) β Event date.
-
event_time
(
DateTime
) β Event time.
-
code
(
Int32
) β Code number of the error.
-
error
(
LowCardinality(String)
) - Name of the error.
-
value
(
UInt64
) β The number of times this error happened.
-
remote
(
UInt8
) β Remote exception (i.e. received during one of the distributed queries).
Example
sql
SELECT * FROM system.error_log LIMIT 1 FORMAT Vertical;
text
Row 1:
ββββββ
hostname: clickhouse.eu-central1.internal
event_date: 2024-06-18
event_time: 2024-06-18 07:32:39
code: 999
error: KEEPER_EXCEPTION
value: 2
remote: 0
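Beyond inspecting single rows, the table lends itself to aggregation, for example totalling each error over a recent window (illustrative):

```sql
-- Total occurrences per error code over the last 24 hours
SELECT code, error, sum(value) AS total
FROM system.error_log
WHERE event_time > now() - INTERVAL 1 DAY
GROUP BY code, error
ORDER BY total DESC
LIMIT 10;
```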
See also
error_log setting
β Enabling and disabling the setting.
system.errors
β Contains error codes with the number of times they have been triggered.
Monitoring
β Base concepts of ClickHouse monitoring.
description: 'System table containing information about messages
received via a streaming engine and parsed with errors.'
keywords: ['system table', 'dead_letter_queue']
slug: /operations/system-tables/dead_letter_queue
title: 'system.dead_letter_queue'
doc_type: 'reference'
Contains information about messages received via a streaming engine and parsed with errors. Currently implemented for Kafka and RabbitMQ.
Logging is enabled by specifying
dead_letter_queue
for the engine specific
handle_error_mode
setting.
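For the Kafka engine the relevant setting is `kafka_handle_error_mode`; a sketch of a table definition (broker, topic, and column names are placeholders):

```sql
-- Unparsable messages go to system.dead_letter_queue instead of
-- interrupting consumption (assumes the dead_letter_queue mode is available)
CREATE TABLE default.kafka_input
(
    key UInt64,
    value String
)
ENGINE = Kafka
SETTINGS
    kafka_broker_list = 'localhost:9092',
    kafka_topic_list = 'events',
    kafka_group_name = 'clickhouse-consumer',
    kafka_format = 'TSV',
    kafka_handle_error_mode = 'dead_letter_queue';
```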
The flushing period of data is set in
flush_interval_milliseconds
parameter of the
dead_letter_queue
server settings section. To force flushing, use the
SYSTEM FLUSH LOGS
query.
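Putting the two together, a typical inspection session might look like this (sketch):

```sql
-- Make sure buffered entries are on disk, then look at today's failures
SYSTEM FLUSH LOGS;

SELECT table_engine, table, error, raw_message
FROM system.dead_letter_queue
WHERE event_date = today();
```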
ClickHouse does not delete data from the table automatically. See
Introduction
for more details.
Columns:
table_engine
(
Enum8
) - Stream type. Possible values:
Kafka
and
RabbitMQ
.
event_date
(
Date
) - Message consuming date.
event_time
(
DateTime
) - Message consuming date and time.
event_time_microseconds
(
DateTime64
) - Message consuming time with microseconds precision.
database
(
LowCardinality(String)
) - ClickHouse database the streaming table belongs to.
table
(
LowCardinality(String)
) - ClickHouse table name.
error
(
String
) - Error text.
raw_message
(
String
) - Message body.
kafka_topic_name
(
String
) - Kafka topic name.
kafka_partition
(
UInt64
) - Kafka partition of the topic.
kafka_offset
(
UInt64
) - Kafka offset of the message.
kafka_key
(
String
) - Kafka key of the message.
rabbitmq_exchange_name
(
String
) - RabbitMQ exchange name.
rabbitmq_message_id
(
String
) - RabbitMQ message id.
rabbitmq_message_timestamp
(
DateTime
) - RabbitMQ message timestamp.
rabbitmq_message_redelivered
(
UInt8
) - RabbitMQ redelivered flag.
rabbitmq_message_delivery_tag
(
UInt64
) - RabbitMQ delivery tag.
rabbitmq_channel_id
(
String
) - RabbitMQ channel id.
Example
Query:
sql
SELECT * FROM system.dead_letter_queue LIMIT 3 \G
Result:
``` text
Row 1:
ββββββ
table_engine: Kafka
event_date: 2025-05-01
event_time: 2025-05-01 10:34:53
event_time_microseconds: 2025-05-01 10:34:53.910773
database: default
table: kafka
error: Cannot parse input: expected '\t' before: 'qwertyuiop': (at row 1)
:
Row 1:
Column 0, name: key, type: UInt64, ERROR: text "qwertyuiop" is not like UInt64
raw_message: qwertyuiop
kafka_topic_name: TSV_dead_letter_queue_err_1746095689
kafka_partition: 0
kafka_offset: 0
kafka_key:
rabbitmq_exchange_name:
rabbitmq_message_id:
rabbitmq_message_timestamp: 1970-01-01 00:00:00
rabbitmq_message_redelivered: 0
rabbitmq_message_delivery_tag: 0
rabbitmq_channel_id:
Row 2:
ββββββ
table_engine: Kafka
event_date: 2025-05-01
event_time: 2025-05-01 10:34:53
event_time_microseconds: 2025-05-01 10:34:53.910944
database: default
table: kafka
error: Cannot parse input: expected '\t' before: 'asdfghjkl': (at row 1)
:
Row 1:
Column 0, name: key, type: UInt64, ERROR: text "asdfghjkl" is not like UInt64
raw_message: asdfghjkl
kafka_topic_name: TSV_dead_letter_queue_err_1746095689
kafka_partition: 0
kafka_offset: 0
kafka_key:
rabbitmq_exchange_name:
rabbitmq_message_id:
rabbitmq_message_timestamp: 1970-01-01 00:00:00
rabbitmq_message_redelivered: 0
rabbitmq_message_delivery_tag: 0
rabbitmq_channel_id:
Row 3:
ββββββ
table_engine: Kafka
event_date: 2025-05-01
event_time: 2025-05-01 10:34:53
event_time_microseconds: 2025-05-01 10:34:53.911092
database: default
table: kafka
error: Cannot parse input: expected '\t' before: 'zxcvbnm': (at row 1)
:
Row 1:
Column 0, name: key, type: UInt64, ERROR: text "zxcvbnm" is not like UInt64
raw_message: zxcvbnm
kafka_topic_name: TSV_dead_letter_queue_err_1746095689
kafka_partition: 0
kafka_offset: 0
kafka_key:
rabbitmq_exchange_name:
rabbitmq_message_id:
rabbitmq_message_timestamp: 1970-01-01 00:00:00
rabbitmq_message_redelivered: 0
rabbitmq_message_delivery_tag: 0
rabbitmq_channel_id:
```
See Also
Kafka
- Kafka Engine
system.kafka_consumers
β Description of the
kafka_consumers
system table which contains information like statistics and errors about Kafka consumers.
description: 'System table containing information about settings for MergeTree tables.'
keywords: ['system table', 'merge_tree_settings']
slug: /operations/system-tables/merge_tree_settings
title: 'system.merge_tree_settings'
doc_type: 'reference'
system.merge_tree_settings
Contains information about settings for
MergeTree
tables.
Columns:
name
(
String
) β Setting name.
value
(
String
) β Setting value.
default
(
String
) β Setting default value.
changed
(
UInt8
) β 1 if the setting was explicitly defined in the config or explicitly changed.
description
(
String
) β Setting description.
min
(
Nullable(String)
) β Minimum value of the setting, if any is set via constraints. If the setting has no minimum value, contains NULL.
max
(
Nullable(String)
) β Maximum value of the setting, if any is set via constraints. If the setting has no maximum value, contains NULL.
disallowed_values
(
Array(String)
) β List of disallowed values for the setting, if any are set via constraints.
readonly
(
UInt8
) β Shows whether the current user can change the setting: 0 β Current user can change the setting, 1 β Current user can't change the setting.
type
(
String
) β Setting type (implementation specific string value).
is_obsolete
(
UInt8
) β Shows whether a setting is obsolete.
tier
(
Enum8('Production' = 0, 'Obsolete' = 4, 'Experimental' = 8, 'Beta' = 12)
) β
Support level for this feature. ClickHouse features are organized in tiers, varying depending on the current status of their
development and the expectations one might have when using them:
PRODUCTION: The feature is stable, safe to use and does not have issues interacting with other PRODUCTION features.
BETA: The feature is stable and safe. The outcome of using it together with other features is unknown and correctness is not guaranteed. Testing and reports are welcome.
EXPERIMENTAL: The feature is under development. Only intended for developers and ClickHouse enthusiasts. The feature might or might not work and could be removed at any time.
OBSOLETE: No longer supported. Either it is already removed or it will be removed in future releases.
Example
sql
SELECT * FROM system.merge_tree_settings LIMIT 3 FORMAT Vertical;
```response
SELECT *
FROM system.merge_tree_settings
LIMIT 3
FORMAT Vertical
Query id: 2580779c-776e-465f-a90c-4b7630d0bb70
Row 1:
ββββββ
name: min_compress_block_size
value: 0
default: 0
changed: 0
description: When granule is written, compress the data in buffer if the size of pending uncompressed data is larger or equal than the specified threshold. If this setting is not set, the corresponding global setting is used.
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
is_obsolete: 0
tier: Production
Row 2:
ββββββ
name: max_compress_block_size
value: 0
default: 0
changed: 0
description: Compress the pending uncompressed data in buffer if its size is larger or equal than the specified threshold. Block of data will be compressed even if the current granule is not finished. If this setting is not set, the corresponding global setting is used.
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
is_obsolete: 0
tier: Production
Row 3:
ββββββ
name: index_granularity
value: 8192
default: 8192
changed: 0
description: How many rows correspond to one primary key value.
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
is_obsolete: 0
tier: Production
3 rows in set. Elapsed: 0.001 sec.
```
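The values shown here are server-level defaults; an individual table can override a setting in the `SETTINGS` clause of its definition (the table layout below is illustrative):

```sql
-- Per-table override of a MergeTree setting listed in system.merge_tree_settings
CREATE TABLE t
(
    id UInt64,
    payload String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS index_granularity = 4096;
```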
description: 'System table containing a single UInt64 column named
number
that contains
almost all the natural numbers starting from zero.'
keywords: ['system table', 'numbers']
slug: /operations/system-tables/numbers
title: 'system.numbers'
doc_type: 'reference'
system.numbers
This table contains a single UInt64 column named
number
that contains almost all the natural numbers starting from zero.
You can use this table for tests, or if you need to do a brute force search.
Reads from this table are not parallelized.
Example
sql
SELECT * FROM system.numbers LIMIT 10;
```response
ββnumberββ
β 0 β
β 1 β
β 2 β
β 3 β
β 4 β
β 5 β
β 6 β
β 7 β
β 8 β
β 9 β
ββββββββββ
10 rows in set. Elapsed: 0.001 sec.
```
You can also limit the output by predicates.
sql
SELECT * FROM system.numbers WHERE number < 10;
```response
ββnumberββ
β 0 β
β 1 β
β 2 β
β 3 β
β 4 β
β 5 β
β 6 β
β 7 β
β 8 β
β 9 β
ββββββββββ
10 rows in set. Elapsed: 0.001 sec.
```
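Because the sequence is dense and starts at zero, the table is also convenient for generating test data, for example (sketch):

```sql
-- Generate ten timestamps, one per hour, from a fixed starting point
SELECT toDateTime('2024-01-01 00:00:00') + number * 3600 AS ts
FROM system.numbers
LIMIT 10;
```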