id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
17715edb-9a52-4379-9d68-10f32deb77bd | description: 'System table containing information about metadata files read from Iceberg tables. Each entry
represents either a root metadata file, metadata extracted from an Avro file, or an entry of some Avro file.'
keywords: ['system table', 'iceberg_metadata_log']
slug: /operations/system-tables/iceberg_metadata_log
title: 'system.iceberg_metadata_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.iceberg_metadata_log
The `system.iceberg_metadata_log` table records metadata access and parsing events for Iceberg tables read by ClickHouse. It provides detailed information about each metadata file or entry processed, which is useful for debugging, auditing, and understanding Iceberg table structure evolution.
Purpose {#purpose}
This table logs every metadata file and entry read from Iceberg tables, including root metadata files, manifest lists, and manifest entries. It helps users trace how ClickHouse interprets Iceberg table metadata and diagnose issues related to schema evolution, file resolution, or query planning.
:::note
This table is primarily intended for debugging purposes.
:::
Columns {#columns}

| Name | Type | Description |
|----------------|--------------------|-----------------------------------------------------------------------------------------------------------------|
| `event_date` | `Date` | Date of the log entry. |
| `event_time` | `DateTime` | Timestamp of the event. |
| `query_id` | `String` | Query ID that triggered the metadata read. |
| `content_type` | `Enum8` | Type of metadata content (see below). |
| `table_path` | `String` | Path to the Iceberg table. |
| `file_path` | `String` | Path to the root metadata JSON file, Avro manifest list, or manifest file. |
| `content` | `String` | Content in JSON format (raw metadata from `.json`, Avro metadata, or Avro entry). |
| `row_in_file` | `Nullable(UInt64)` | Row number in the file, if applicable. Present for `ManifestListEntry` and `ManifestFileEntry` content types. |
`content_type` values {#content-type-values}

- `None`: No content.
- `Metadata`: Root metadata file.
- `ManifestListMetadata`: Manifest list metadata.
- `ManifestListEntry`: Entry in a manifest list.
- `ManifestFileMetadata`: Manifest file metadata.
- `ManifestFileEntry`: Entry in a manifest file.
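To see which kinds of metadata entries a query produced, you can aggregate on this column. A minimal sketch (it assumes logging was already enabled for the queries of interest):

```sql
SELECT content_type, count() AS entries
FROM system.iceberg_metadata_log
GROUP BY content_type
ORDER BY content_type;
```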
Controlling log verbosity {#controlling-log-verbosity}
You can control which metadata events are logged using the `iceberg_metadata_log_level` setting. | {"source_file": "iceberg_metadata_log.md"} | [
0.0011563424486666918,
-0.042706720530986786,
-0.10138051211833954,
-0.011605422012507915,
0.09458322823047638,
-0.08592931181192398,
0.05041633918881416,
0.07482784241437912,
-0.0065919640474021435,
0.048735808581113815,
-0.012506159953773022,
0.030201489105820656,
0.013475680723786354,
-... |
e15ede56-4352-42a8-921d-ebd57b536ef5 | Controlling log verbosity {#controlling-log-verbosity}
You can control which metadata events are logged using the `iceberg_metadata_log_level` setting.
To log all metadata used in the current query:
```sql
SELECT * FROM my_iceberg_table SETTINGS iceberg_metadata_log_level = 'manifest_file_entry';
SYSTEM FLUSH LOGS iceberg_metadata_log;
SELECT content_type, file_path, row_in_file
FROM system.iceberg_metadata_log
WHERE query_id = '{previous_query_id}';
```
To log only the root metadata JSON file used in the current query:
```sql
SELECT * FROM my_iceberg_table SETTINGS iceberg_metadata_log_level = 'metadata';
SYSTEM FLUSH LOGS iceberg_metadata_log;
SELECT content_type, file_path, row_in_file
FROM system.iceberg_metadata_log
WHERE query_id = '{previous_query_id}';
```
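The `{previous_query_id}` placeholder must be filled with the ID of the query that read the table. One way to look it up is via `system.query_log`; this is a sketch that assumes the query text is distinctive enough to match on (`my_iceberg_table` is the placeholder table name from the examples above):

```sql
SELECT query_id
FROM system.query_log
WHERE query LIKE '%my_iceberg_table%' AND type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 1;
```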
See more information in the description of the `iceberg_metadata_log_level` setting.
Good To Know {#good-to-know}

- Use `iceberg_metadata_log_level` at the query level only when you need to investigate your Iceberg table in detail. Otherwise, you may populate the log table with excessive metadata and experience performance degradation.
- The table may contain duplicate entries, as it is intended primarily for debugging and does not guarantee uniqueness per entity.
- If you use a `content_type` more verbose than `ManifestListMetadata`, the Iceberg metadata cache is disabled for manifest lists.
- Similarly, if you use a `content_type` more verbose than `ManifestFileMetadata`, the Iceberg metadata cache is disabled for manifest files.
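Because of the performance and caching caveats above, the verbose setting is best applied per query rather than at the session or profile level. A sketch, assuming a table named `my_iceberg_table`:

```sql
-- The setting applies only to this query; other queries keep the default verbosity.
SELECT count() FROM my_iceberg_table
SETTINGS iceberg_metadata_log_level = 'manifest_file_entry';
```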
See also {#see-also}

- Iceberg Table Engine
- Iceberg Table Function
- system.iceberg_history | {"source_file": "iceberg_metadata_log.md"} | [
0.056006066501140594,
0.008369078859686852,
-0.031222987920045853,
0.059535298496484756,
0.05280929058790207,
-0.035593025386333466,
0.07529228180646896,
0.06445024907588959,
-0.015895353630185127,
0.05305853113532066,
-0.021854998543858528,
0.03111111745238304,
0.031512875109910965,
-0.01... |
9e1f2dfc-c078-457d-bf67-517d4be9bdd1 | description: 'System table containing information for workloads residing on the local
server.'
keywords: ['system table', 'workloads']
slug: /operations/system-tables/workloads
title: 'system.workloads'
doc_type: 'reference'
system.workloads
Contains information for workloads residing on the local server. The table contains a row for every workload.
Example:

```sql
SELECT *
FROM system.workloads
FORMAT Vertical
```

```text
Row 1:
──────
name:         production
parent:       all
create_query: CREATE WORKLOAD production IN `all` SETTINGS weight = 9

Row 2:
──────
name:         development
parent:       all
create_query: CREATE WORKLOAD development IN `all`

Row 3:
──────
name:         all
parent:
create_query: CREATE WORKLOAD `all`
```
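The `parent` column links each workload to the one above it, so the hierarchy can be inspected directly; a minimal sketch:

```sql
-- Top-level workloads have an empty parent.
SELECT name, parent
FROM system.workloads
ORDER BY parent, name;
```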
Columns:

- `name` (`String`) — The name of the workload.
- `parent` (`String`) — The name of the parent workload.
- `create_query` (`String`) — CREATE query of the workload. | {"source_file": "workloads.md"} | [
0.005859010387212038,
0.03614802286028862,
-0.06819960474967957,
0.06516315042972565,
-0.015200451947748661,
-0.10462523251771927,
0.04422461986541748,
0.02881568670272827,
-0.007783121895045042,
0.061644021421670914,
0.031208796426653862,
-0.016766877844929695,
0.046449143439531326,
-0.07... |
3de8dc4f-a69d-4e24-9d90-d5a551792cfd | description: 'System table which contains stack traces of all server threads. Allows
developers to introspect the server state.'
keywords: ['system table', 'stack_trace']
slug: /operations/system-tables/stack_trace
title: 'system.stack_trace'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.stack_trace
Contains stack traces of all server threads. Allows developers to introspect the server state.
To analyze stack frames, use the `addressToLine`, `addressToLineWithInlines`, `addressToSymbol` and `demangle` introspection functions.
Columns:

- `thread_name` (`String`) — Thread name.
- `thread_id` (`UInt64`) — Thread identifier.
- `query_id` (`String`) — Query identifier that can be used to get details about a query that was running from the `query_log` system table.
- `trace` (`Array(UInt64)`) — A stack trace which represents a list of physical addresses where the called methods are stored.
:::tip
Check out the Knowledge Base for some handy queries, including how to see what threads are currently running and useful queries for troubleshooting.
:::
Example

Enabling introspection functions:

```sql
SET allow_introspection_functions = 1;
```

Getting symbols from ClickHouse object files:

```sql
WITH arrayMap(x -> demangle(addressToSymbol(x)), trace) AS all SELECT thread_name, thread_id, query_id, arrayStringConcat(all, '\n') AS res FROM system.stack_trace LIMIT 1 \G;
```
| {"source_file": "stack_trace.md"} | [
0.0038083402905613184,
-0.08177150785923004,
-0.07152826339006424,
-0.030833952128887177,
-0.004783723037689924,
-0.09998688846826553,
0.019480280578136444,
-0.023423561826348305,
0.011254549957811832,
0.06834185123443604,
-0.017213668674230576,
0.023969359695911407,
0.013238721527159214,
... |
3c795628-0238-4780-8d81-b0be0bbfa68e | text
Row 1:
──────
thread_name: QueryPipelineEx
thread_id: 743490
query_id: dc55a564-febb-4e37-95bb-090ef182c6f1
res: memcpy
large_ralloc
arena_ralloc
do_rallocx
Allocator<true, true>::realloc(void*, unsigned long, unsigned long, unsigned long)
HashTable<unsigned long, HashMapCell<unsigned long, char*, HashCRC32<unsigned long>, HashTableNoState, PairNoInit<unsigned long, char*>>, HashCRC32<unsigned long>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>::resize(unsigned long, unsigned long)
void DB::Aggregator::executeImplBatch<false, false, true, DB::AggregationMethodOneNumber<unsigned long, HashMapTable<unsigned long, HashMapCell<unsigned long, char*, HashCRC32<unsigned long>, HashTableNoState, PairNoInit<unsigned long, char*>>, HashCRC32<unsigned long>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>, true, false>>(DB::AggregationMethodOneNumber<unsigned long, HashMapTable<unsigned long, HashMapCell<unsigned long, char*, HashCRC32<unsigned long>, HashTableNoState, PairNoInit<unsigned long, char*>>, HashCRC32<unsigned long>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>, true, false>&, DB::AggregationMethodOneNumber<unsigned long, HashMapTable<unsigned long, HashMapCell<unsigned long, char*, HashCRC32<unsigned long>, HashTableNoState, PairNoInit<unsigned long, char*>>, HashCRC32<unsigned long>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>, true, false>::State&, DB::Arena*, unsigned long, unsigned long, DB::Aggregator::AggregateFunctionInstruction*, bool, char*) const
DB::Aggregator::executeImpl(DB::AggregatedDataVariants&, unsigned long, unsigned long, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>&, DB::Aggregator::AggregateFunctionInstruction*, bool, bool, char*) const
DB::Aggregator::executeOnBlock(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>, unsigned long, unsigned long, DB::AggregatedDataVariants&, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>&, std::__1::vector<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>, std::__1::allocator<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>>>&, bool&) const
DB::AggregatingTransform::work()
DB::ExecutionThreadContext::executeTask()
DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*)
void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__1::__function::__policy_storage const*)
ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__1::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) | {"source_file": "stack_trace.md"} | [
-0.010576908476650715,
0.018400471657514572,
-0.13316166400909424,
-0.01732601970434189,
-0.05059845373034477,
-0.06053157150745392,
0.042895976454019547,
0.030485045164823532,
-0.030287524685263634,
0.014594245702028275,
0.02189357951283455,
-0.033595722168684006,
0.052228353917598724,
-0... |
ff42fefc-4df4-4062-87fc-828584d252f5 | ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__1::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>)
void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*)
void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) | {"source_file": "stack_trace.md"} | [
-0.09846265614032745,
0.05036745220422745,
-0.029621925204992294,
-0.018521856516599655,
-0.011448467150330544,
-0.05610259994864464,
0.03167629987001419,
-0.028414038941264153,
0.023833177983760834,
0.02446589432656765,
0.02421422488987446,
-0.02638341300189495,
-0.08242706954479218,
-0.0... |
fec5e82d-9598-4160-a362-c7b409705c01 | Getting filenames and line numbers in ClickHouse source code:
```sql
WITH arrayMap(x -> addressToLine(x), trace) AS all, arrayFilter(x -> x LIKE '%/dbms/%', all) AS dbms SELECT thread_name, thread_id, query_id, arrayStringConcat(notEmpty(dbms) ? dbms : all, '\n') AS res FROM system.stack_trace LIMIT 1 \G;
```
```text
Row 1:
──────
thread_name: clickhouse-serv
thread_id: 686
query_id: cad353e7-1c29-4b2e-949f-93e597ab7a54
res: /lib/x86_64-linux-gnu/libc-2.27.so
/build/obj-x86_64-linux-gnu/../src/Storages/System/StorageSystemStackTrace.cpp:182
/build/obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:656
/build/obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectQuery.cpp:1338
/build/obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectQuery.cpp:751
/build/obj-x86_64-linux-gnu/../contrib/libcxx/include/optional:224
/build/obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:192
/build/obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:384
/build/obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:643
/build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:251
/build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1197
/build/obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:57
/build/obj-x86_64-linux-gnu/../contrib/libcxx/include/atomic:856
/build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/include/Poco/Mutex_POSIX.h:59
/build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/include/Poco/AutoPtr.h:223
/lib/x86_64-linux-gnu/libpthread-2.27.so
/lib/x86_64-linux-gnu/libc-2.27.so
```
See Also

- Introspection Functions — Which introspection functions are available and how to use them.
- system.trace_log — Contains stack traces collected by the sampling query profiler.
- arrayMap — Description and usage example of the `arrayMap` function.
- arrayFilter — Description and usage example of the `arrayFilter` function. | {"source_file": "stack_trace.md"} | [
-0.015863481909036636,
-0.021273411810398102,
-0.08699947595596313,
0.027341611683368683,
-0.06700703501701355,
-0.06542780250310898,
0.12139031291007996,
-0.01166278962045908,
-0.021585730835795403,
0.017052100971341133,
0.02996763400733471,
-0.028330394998192787,
0.032001469284296036,
-0... |
cd355945-aa85-4ca2-8f10-9c1f85cee4dd | description: 'System table containing information about clusters available in the
config file and the servers defined in them.'
keywords: ['system table', 'clusters']
slug: /operations/system-tables/clusters
title: 'system.clusters'
doc_type: 'reference'
Contains information about clusters available in the config file and the servers in them.
Columns:
- `cluster` (`String`) — The cluster name.
- `shard_num` (`UInt32`) — The shard number in the cluster, starting from 1.
- `shard_name` (`String`) — The name of the shard in the cluster.
- `shard_weight` (`UInt32`) — The relative weight of the shard when writing data.
- `internal_replication` (`UInt8`) — Flag that indicates whether this host is part of an ensemble which can replicate the data on its own.
- `replica_num` (`UInt32`) — The replica number in the shard, starting from 1.
- `host_name` (`String`) — The host name, as specified in the config.
- `host_address` (`String`) — The host IP address obtained from DNS.
- `port` (`UInt16`) — The port to use for connecting to the server.
- `is_local` (`UInt8`) — Flag that indicates whether the host is local.
- `user` (`String`) — The name of the user for connecting to the server.
- `default_database` (`String`) — The default database name.
- `errors_count` (`UInt32`) — The number of times this host failed to reach a replica.
- `slowdowns_count` (`UInt32`) — The number of slowdowns that led to changing the replica when establishing a connection with hedged requests.
- `estimated_recovery_time` (`UInt32`) — Seconds remaining until the replica error count is zeroed and it is considered to be back to normal.
- `database_shard_name` (`String`) — The name of the `Replicated` database shard (for clusters that belong to a `Replicated` database).
- `database_replica_name` (`String`) — The name of the `Replicated` database replica (for clusters that belong to a `Replicated` database).
- `is_shared_catalog_cluster` (`UInt8`) — Bool indicating whether the cluster belongs to a shared catalog.
- `is_active` (`Nullable(UInt8)`) — The status of the `Replicated` database replica (for clusters that belong to a `Replicated` database): 1 means 'replica is online', 0 means 'replica is offline', NULL means 'unknown'.
- `unsynced_after_recovery` (`Nullable(UInt8)`) — Indicates whether a `Replicated` database replica has replication lag greater than `max_replication_lag_to_enqueue` after creating or recovering the replica.
- `replication_lag` (`Nullable(UInt32)`) — The replication lag of the `Replicated` database replica (for clusters that belong to a `Replicated` database).
- `recovery_time` (`Nullable(UInt64)`) — The recovery time of the `Replicated` database replica (for clusters that belong to a `Replicated` database), in milliseconds.
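The error-tracking columns above make it easy to spot unhealthy hosts; a minimal sketch:

```sql
SELECT cluster, host_name, errors_count, slowdowns_count, estimated_recovery_time
FROM system.clusters
WHERE errors_count > 0
ORDER BY errors_count DESC;
```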
Example

Query:

```sql
SELECT * FROM system.clusters LIMIT 2 FORMAT Vertical;
```

Result: | {"source_file": "clusters.md"} | [
0.005583568941801786,
-0.09863551706075668,
-0.09168699383735657,
0.03964332491159439,
-0.006589727476239204,
-0.09821777790784836,
-0.028011472895741463,
-0.005159743130207062,
-0.019795989617705345,
0.008722478523850441,
0.01659551076591015,
-0.0295756496489048,
0.0819907858967781,
-0.08... |
a1bd74a7-c514-45fc-9c17-5d0c8c4a5f46 | Example

Query:

```sql
SELECT * FROM system.clusters LIMIT 2 FORMAT Vertical;
```

Result:

```text
Row 1:
──────
cluster: test_cluster_two_shards
shard_num: 1
shard_name: shard_01
shard_weight: 1
replica_num: 1
host_name: 127.0.0.1
host_address: 127.0.0.1
port: 9000
is_local: 1
user: default
default_database:
errors_count: 0
slowdowns_count: 0
estimated_recovery_time: 0
database_shard_name:
database_replica_name:
is_active: NULL
Row 2:
──────
cluster: test_cluster_two_shards
shard_num: 2
shard_name: shard_02
shard_weight: 1
replica_num: 1
host_name: 127.0.0.2
host_address: 127.0.0.2
port: 9000
is_local: 0
user: default
default_database:
errors_count: 0
slowdowns_count: 0
estimated_recovery_time: 0
database_shard_name:
database_replica_name:
is_active: NULL
```
See Also

- Table engine Distributed
- distributed_replica_error_cap setting
- distributed_replica_error_half_life setting | {"source_file": "clusters.md"} | [
0.12567807734012604,
-0.02772490866482258,
-0.06778371334075928,
0.07473411411046982,
0.002068019937723875,
-0.04374484717845917,
-0.032469820231199265,
-0.023195521906018257,
0.015660589560866356,
0.0010512028820812702,
0.04976559057831764,
-0.05086300149559975,
0.07474172860383987,
-0.07... |
9b9eb3e6-b5c6-4374-b172-68f683a0a4ff | description: 'System table containing information about the number of events that
have occurred in the system.'
keywords: ['system table', 'events']
slug: /operations/system-tables/events
title: 'system.events'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains information about the number of events that have occurred in the system. For example, in the table, you can find how many `SELECT` queries were processed since the ClickHouse server started.

Columns:

- `event` (`String`) — Event name.
- `value` (`UInt64`) — Number of events occurred.
- `description` (`String`) — Event description.

You can find all supported events in the source file `src/Common/ProfileEvents.cpp`.

Example

```sql
SELECT * FROM system.events LIMIT 5
```
| {"source_file": "events.md"} | [
0.07901830226182938,
-0.04528259113430977,
-0.05723198875784874,
-0.013904646039009094,
0.02176511660218239,
-0.012751446105539799,
0.06463676691055298,
0.0014378527412191033,
0.01235173549503088,
0.07717922329902649,
-0.0016985840629786253,
-0.04377551004290581,
0.10108323395252228,
-0.09... |
c3783a7f-ee0c-473f-bb18-49bc967253f8 | `description` (`String`) — Event description.

You can find all supported events in the source file `src/Common/ProfileEvents.cpp`.

Example

```sql
SELECT * FROM system.events LIMIT 5
```
```text
┌─event─────────────────────────────────┬─value─┬─description────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Query                                 │    12 │ Number of queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries. │
│ SelectQuery                           │     8 │ Same as Query, but only for SELECT queries.                                                                                                                                                                                                                          │
│ FileOpen                              │    73 │ Number of files opened.                                                                                                                                                                                                                                              │
│ ReadBufferFromFileDescriptorRead      │   155 │ Number of reads (read/pread) from a file descriptor. Does not include sockets.                                                                                                                                                                                       │
│ ReadBufferFromFileDescriptorReadBytes │  9931 │ Number of bytes read from file descriptors. If the file is compressed, this will show the compressed data size.                                                                                                                                                      │
└───────────────────────────────────────┴───────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
See Also

- system.asynchronous_metrics — Contains periodically calculated metrics.
- system.metrics — Contains instantly calculated metrics.
- system.metric_log — Contains a history of metrics values from tables system.metrics and system.events.
- Monitoring — Base concepts of ClickHouse monitoring. | {"source_file": "events.md"} | [
-0.0372549407184124,
-0.03059060499072075,
-0.09994729608297348,
0.08051590621471405,
-0.00958158541470766,
-0.059050481766462326,
0.07502556592226028,
0.07286333292722702,
0.007287491578608751,
0.014794121496379375,
-0.04218636825680733,
-0.014500461518764496,
0.04727685824036598,
-0.0552... |
332c900e-fefe-4230-9468-54d8d5e90b7a | description: 'System table containing information about mutations of MergeTree tables
and their progress. Each mutation command is represented by a single row.'
keywords: ['system table', 'mutations']
slug: /operations/system-tables/mutations
title: 'system.mutations'
doc_type: 'reference'
system.mutations

The table contains information about mutations of MergeTree tables and their progress. Each mutation command is represented by a single row.

Columns: {#columns}

- `database` (`String`) — The name of the database to which the mutation was applied.
- `table` (`String`) — The name of the table to which the mutation was applied.
- `mutation_id` (`String`) — The ID of the mutation. For replicated tables these IDs correspond to znode names in the `<table_path_in_clickhouse_keeper>/mutations/` directory in ClickHouse Keeper. For non-replicated tables the IDs correspond to file names in the data directory of the table.
- `command` (`String`) — The mutation command string (the part of the query after `ALTER TABLE [db.]table`).
- `create_time` (`DateTime`) — Date and time when the mutation command was submitted for execution.
- `block_numbers.partition_id` (`Array(String)`) — For mutations of replicated tables, the array contains the partitions' IDs (one record for each partition). For mutations of non-replicated tables the array is empty.
- `block_numbers.number` (`Array(Int64)`) — For mutations of replicated tables, the array contains one record for each partition, with the block number that was acquired by the mutation. Only parts that contain blocks with numbers less than this number will be mutated in the partition. In non-replicated tables, block numbers in all partitions form a single sequence, so for mutations of non-replicated tables the column contains one record with a single block number acquired by the mutation.
- `parts_to_do_names` (`Array(String)`) — An array of names of data parts that need to be mutated for the mutation to complete.
- `parts_to_do` (`Int64`) — The number of data parts that need to be mutated for the mutation to complete.
- `is_killed` (`UInt8`) — Indicates whether a mutation has been killed. Only available in ClickHouse Cloud.

:::note
`is_killed=1` does not necessarily mean the mutation is completely finalized. It is possible for a mutation to remain in a state where `is_killed=1` and `is_done=0` for an extended period. This can happen if another long-running mutation is blocking the killed mutation. This is a normal situation.
:::

- `is_done` (`UInt8`) — The flag whether the mutation is done or not. Possible values: `1` if the mutation is completed, `0` if the mutation is still in process.

:::note
Even if `parts_to_do = 0` it is possible that a mutation of a replicated table is not completed yet because of a long-running `INSERT` query that will create a new data part that needs to be mutated.
::: | {"source_file": "mutations.md"} | [
-0.06424466520547867,
-0.005144523456692696,
-0.08613745123147964,
-0.006001683883368969,
0.01176292821764946,
-0.15466555953025818,
0.022506389766931534,
0.05888231098651886,
-0.020082509145140648,
0.029148828238248825,
0.06322894245386124,
0.009945500642061234,
0.11490532010793686,
-0.10... |
6fd2d992-2593-4bfa-9b5c-1c3d1b1bd486 | If there were problems with mutating some data parts, the following columns contain additional information:

- `latest_failed_part` (`String`) — The name of the most recent part that could not be mutated.
- `latest_fail_time` (`DateTime`) — The date and time of the most recent part mutation failure.
- `latest_fail_reason` (`String`) — The exception message that caused the most recent part mutation failure.
Monitoring Mutations {#monitoring-mutations}

To track progress on the `system.mutations` table, use the following query:
```sql
SELECT * FROM clusterAllReplicas('cluster_name', 'system', 'mutations')
WHERE is_done = 0 AND table = 'tmp';
-- or
SELECT * FROM clusterAllReplicas('cluster_name', 'system.mutations')
WHERE is_done = 0 AND table = 'tmp';
```
Note: this requires read permissions on the `system.*` tables.

:::tip Cloud usage
In ClickHouse Cloud the `system.mutations` table on each node has all the mutations in the cluster, and there is no need for `clusterAllReplicas`.
:::
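If monitoring reveals a mutation stuck on a failing part (see `latest_fail_reason`), it can be cancelled with the standard `KILL MUTATION` statement. A sketch, where the database, table, and mutation ID are placeholders to be taken from the monitoring query's output:

```sql
KILL MUTATION
WHERE database = 'default' AND table = 'tmp' AND mutation_id = 'mutation_3.txt';
```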
See Also

- Mutations
- MergeTree table engine
- ReplicatedMergeTree family | {"source_file": "mutations.md"} | [
0.03519897907972336,
-0.013442233204841614,
-0.037463366985321045,
-0.009836464188992977,
0.10798300057649612,
-0.12555895745754242,
0.026328621432185173,
0.028290532529354095,
-0.04624870419502258,
0.029912399128079414,
0.08594655990600586,
-0.058192115277051926,
0.06763333827257156,
-0.0... |
7b2933af-ca67-4c40-bcac-53feb2918310 | description: 'System table containing information about setting changes in previous
ClickHouse versions.'
keywords: ['system table', 'settings_changes']
slug: /operations/system-tables/settings_changes
title: 'system.settings_changes'
doc_type: 'reference'
system.settings_changes

Contains information about setting changes in previous ClickHouse versions.

Columns:

- `type` (`Enum8('Session' = 0, 'MergeTree' = 1)`) — The group of settings (Session, MergeTree, ...).
- `version` (`String`) — The ClickHouse server version.
- `changes` (`Array(Tuple(name String, previous_value String, new_value String, reason String))`) — The list of changes in settings which changed the behaviour of ClickHouse.
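Because `changes` is an array of tuples, it can be unnested with `ARRAY JOIN` to find when a given setting changed its default. A sketch using positional tuple element access (the setting name is taken from the example below):

```sql
SELECT
    version,
    change.1 AS name,
    change.2 AS previous_value,
    change.3 AS new_value
FROM system.settings_changes
ARRAY JOIN changes AS change
WHERE change.1 = 'parallelize_output_from_storages';
```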
Example

```sql
SELECT *
FROM system.settings_changes
WHERE version = '23.5'
FORMAT Vertical
```

```text
Row 1:
──────
type:    Core
version: 23.5
changes: [('input_format_parquet_preserve_order','1','0','Allow Parquet reader to reorder rows for better parallelism.'),('parallelize_output_from_storages','0','1','Allow parallelism when executing queries that read from file/url/s3/etc. This may reorder rows.'),('use_with_fill_by_sorting_prefix','0','1','Columns preceding WITH FILL columns in ORDER BY clause form sorting prefix. Rows with different values in sorting prefix are filled independently'),('output_format_parquet_compliant_nested_types','0','1','Change an internal field name in output Parquet file schema.')]
```
See also

- Settings
- system.settings | {"source_file": "settings_changes.md"} | [
0.04648808017373085,
-0.015099155716598034,
-0.03073827363550663,
-0.024206651374697685,
-0.03678038343787193,
-0.046601612120866776,
0.007502972614020109,
0.00002176374982809648,
-0.07673166692256927,
0.06456902623176575,
0.07281466573476791,
0.013819152489304543,
0.023158883675932884,
-0... |
4223afcb-11c3-458a-8b94-2adfdefe5388 | description: 'System table containing information about parts and columns of MergeTree
tables.'
keywords: ['system table', 'parts_columns']
slug: /operations/system-tables/parts_columns
title: 'system.parts_columns'
doc_type: 'reference'
system.parts_columns

Contains information about parts and columns of MergeTree tables.

Each row describes one data part.

Columns:

- `partition` (`String`) — The partition name. To learn what a partition is, see the description of the `ALTER` query. Formats: `YYYYMM` for automatic partitioning by month; `any_string` when partitioning manually.
- `name` (`String`) — Name of the data part.
- `part_type` (`String`) — The data part storing format. Possible values: `Wide` — each column is stored in a separate file in a filesystem; `Compact` — all columns are stored in one file in a filesystem. The data storing format is controlled by the `min_bytes_for_wide_part` and `min_rows_for_wide_part` settings of the MergeTree table.
- `active` (`UInt8`) — Flag that indicates whether the data part is active. If a data part is active, it's used in a table. Otherwise, it's deleted. Inactive data parts remain after merging.
- `marks` (`UInt64`) — The number of marks. To get the approximate number of rows in a data part, multiply `marks` by the index granularity (usually 8192) (this hint does not work for adaptive granularity).
- `rows` (`UInt64`) — The number of rows.
- `bytes_on_disk` (`UInt64`) — Total size of all the data part files in bytes.
- `data_compressed_bytes` (`UInt64`) — Total size of compressed data in the data part. All the auxiliary files (for example, files with marks) are not included.
- `data_uncompressed_bytes` (`UInt64`) — Total size of uncompressed data in the data part. All the auxiliary files (for example, files with marks) are not included.
- `marks_bytes` (`UInt64`) — The size of the file with marks.
- `modification_time` (`DateTime`) — The time the directory with the data part was modified. This usually corresponds to the time of data part creation.
- `remove_time` (`DateTime`) — The time when the data part became inactive.
- `refcount` (`UInt32`) — The number of places where the data part is used. A value greater than 2 indicates that the data part is used in queries or merges.
- `min_date` (`Date`) — The minimum value of the date key in the data part.
- `max_date` (`Date`) — The maximum value of the date key in the data part.
- `partition_id` (`String`) — ID of the partition.
- `min_block_number` (`UInt64`) — The minimum number of data parts that make up the current part after merging.
- `max_block_number` (`UInt64`) — The maximum number of data parts that make up the current part after merging.
- `level` (`UInt32`) — Depth of the merge tree. Zero means that the current part was created by insert rather than by merging other parts. | {"source_file": "parts_columns.md"} | [
0.019621592015028,
-0.02284170314669609,
-0.05450213700532913,
0.042718030512332916,
0.029523545876145363,
-0.09063582867383957,
0.035674743354320526,
0.1123661920428276,
-0.03411904349923134,
-0.006485622841864824,
0.03314453735947609,
0.035563353449106216,
0.0042983549647033215,
-0.07178... |
f6b1ad23-1f03-48ef-9a39-2c5d16b30c9d | level
(
UInt32
) β Depth of the merge tree. Zero means that the current part was created by insert rather than by merging other parts.
- `data_version` (`UInt64`) — Number that is used to determine which mutations should be applied to the data part (mutations with a version higher than `data_version`).
- `primary_key_bytes_in_memory` (`UInt64`) — The amount of memory (in bytes) used by primary key values.
- `primary_key_bytes_in_memory_allocated` (`UInt64`) — The amount of memory (in bytes) reserved for primary key values.
- `database` (`String`) — Name of the database.
- `table` (`String`) — Name of the table.
- `engine` (`String`) — Name of the table engine without parameters.
- `disk_name` (`String`) — Name of a disk that stores the data part.
- `path` (`String`) — Absolute path to the folder with data part files.
- `column` (`String`) — Name of the column.
- `type` (`String`) — Column type.
- `column_position` (`UInt64`) — Ordinal position of a column in a table starting with 1.
- `default_kind` (`String`) — Expression type (`DEFAULT`, `MATERIALIZED`, `ALIAS`) for the default value, or an empty string if it is not defined.
- `default_expression` (`String`) — Expression for the default value, or an empty string if it is not defined.
- `column_bytes_on_disk` (`UInt64`) — Total size of the column in bytes.
- `column_data_compressed_bytes` (`UInt64`) — Total size of compressed data in the column, in bytes.
- `column_data_uncompressed_bytes` (`UInt64`) — Total size of the decompressed data in the column, in bytes.
- `column_marks_bytes` (`UInt64`) — The size of the column with marks, in bytes.
- `bytes` (`UInt64`) — Alias for `bytes_on_disk`.
- `marks_size` (`UInt64`) — Alias for `marks_bytes`.
Example

```sql
SELECT * FROM system.parts_columns LIMIT 1 FORMAT Vertical;
```
```text
Row 1:
──────
partition: tuple()
name: all_1_2_1
part_type: Wide
active: 1
marks: 2
rows: 2
bytes_on_disk: 155
data_compressed_bytes: 56
data_uncompressed_bytes: 4
marks_bytes: 96
modification_time: 2020-09-23 10:13:36
remove_time: 2106-02-07 06:28:15
refcount: 1
min_date: 1970-01-01
max_date: 1970-01-01
partition_id: all
min_block_number: 1
max_block_number: 2
level: 1
data_version: 1
primary_key_bytes_in_memory: 2
primary_key_bytes_in_memory_allocated: 64
database: default
table: 53r93yleapyears
engine: MergeTree
disk_name: default
path: /var/lib/clickhouse/data/default/53r93yleapyears/all_1_2_1/
column: id
type: Int8
column_position: 1
default_kind:
default_expression:
column_bytes_on_disk: 76
column_data_compressed_bytes: 28
column_data_uncompressed_bytes: 2
column_marks_bytes: 48
```
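Beyond inspecting a single row, the per-column sizes in this table can be aggregated to find the columns that consume the most disk space. A sketch (the `LIMIT` and the restriction to active parts are illustrative choices, not required):

```sql
SELECT
    database,
    table,
    column,
    formatReadableSize(sum(column_bytes_on_disk)) AS size_on_disk
FROM system.parts_columns
WHERE active
GROUP BY database, table, column
ORDER BY sum(column_bytes_on_disk) DESC
LIMIT 10
```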
See Also
MergeTree family | {"source_file": "parts_columns.md"} | [
description: 'System table which exists only if ZooKeeper is configured. Shows current
connections to ZooKeeper (including auxiliary ZooKeepers).'
keywords: ['system table', 'zookeeper_connection']
slug: /operations/system-tables/zookeeper_connection
title: 'system.zookeeper_connection'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.zookeeper_connection
This table does not exist if ZooKeeper is not configured. The 'system.zookeeper_connection' table shows current connections to ZooKeeper (including auxiliary ZooKeepers). Each row shows information about one connection.
Columns:
- `name` (`String`) — ZooKeeper cluster's name.
- `host` (`String`) — The hostname/IP of the ZooKeeper node that ClickHouse connected to.
- `port` (`UInt16`) — The port of the ZooKeeper node that ClickHouse connected to.
- `index` (`Nullable(UInt8)`) — The index of the ZooKeeper node that ClickHouse connected to. The index is from the ZooKeeper config. If not connected, this column is NULL.
- `connected_time` (`DateTime`) — When the connection was established.
- `session_uptime_elapsed_seconds` (`UInt64`) — Seconds elapsed since the connection was established.
- `is_expired` (`UInt8`) — Whether the current connection is expired.
- `keeper_api_version` (`UInt8`) — Keeper API version.
- `client_id` (`Int64`) — Session id of the connection.
- `xid` (`Int64`) — XID of the current session.
- `enabled_feature_flags` (`Array(Enum16)`) — Feature flags which are enabled. Only applicable to ClickHouse Keeper. Possible values are `FILTERED_LIST`, `MULTI_READ`, `CHECK_NOT_EXISTS`, `CREATE_IF_NOT_EXISTS`, `REMOVE_RECURSIVE`.
- `availability_zone` (`String`) — Availability zone.
Example:
```sql
SELECT * FROM system.zookeeper_connection;
```

```text
┌─name────┬─host──────┬─port─┬─index─┬──────connected_time─┬─session_uptime_elapsed_seconds─┬─is_expired─┬─keeper_api_version─┬─client_id─┬─xid─┬─enabled_feature_flags────────────────────────────────────────────────────┬─availability_zone─┐
│ default │ 127.0.0.1 │ 2181 │     0 │ 2025-04-10 14:30:00 │                            943 │          0 │                  0 │       420 │  69 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS'] │ eu-west-1b        │
└─────────┴───────────┴──────┴───────┴─────────────────────┴────────────────────────────────┴────────────┴────────────────────┴───────────┴─────┴──────────────────────────────────────────────────────────────────────────┴───────────────────┘
```
description: 'Contains queries used by the /dashboard page accessible through the HTTP
interface. Useful for monitoring and troubleshooting.'
keywords: ['system table', 'dashboards', 'monitoring', 'troubleshooting']
slug: /operations/system-tables/dashboards
title: 'system.dashboards'
doc_type: 'reference'
Contains queries used by the /dashboard page accessible through the HTTP interface.
This table can be useful for monitoring and troubleshooting. The table contains a row for every chart in a dashboard.
:::note
/dashboard
page can render queries not only from
system.dashboards
, but from any table with the same schema.
This can be useful to create custom dashboards.
:::
Example:
```sql
SELECT *
FROM system.dashboards
WHERE title ILIKE '%CPU%'
```
```text
Row 1:
ββββββ
dashboard: overview
title: CPU Usage (cores)
query: SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_OSCPUVirtualTimeMicroseconds) / 1000000
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t WITH FILL STEP {rounding:UInt32}
Row 2:
ββββββ
dashboard: overview
title: CPU Wait
query: SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_OSCPUWaitMicroseconds) / 1000000
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t WITH FILL STEP {rounding:UInt32}
Row 3:
ββββββ
dashboard: overview
title: OS CPU Usage (Userspace)
query: SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(value)
FROM system.asynchronous_metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32} AND metric = 'OSUserTimeNormalized'
GROUP BY t
ORDER BY t WITH FILL STEP {rounding:UInt32}
Row 4:
ββββββ
dashboard: overview
title: OS CPU Usage (Kernel)
query: SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(value)
FROM system.asynchronous_metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32} AND metric = 'OSSystemTimeNormalized'
GROUP BY t
ORDER BY t WITH FILL STEP {rounding:UInt32}
```
Columns:
- `dashboard` (`String`) — The dashboard name.
- `title` (`String`) — The title of a chart.
- `query` (`String`) — The query to obtain data to be displayed.
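Since each row is one chart, the charts belonging to a particular dashboard can be listed together. A sketch, using the `overview` dashboard shown in the example above:

```sql
SELECT title, query
FROM system.dashboards
WHERE dashboard = 'overview'
FORMAT Vertical
```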
description: 'Shows the history of ZooKeeper connections (including auxiliary ZooKeepers).'
keywords: ['system table', 'zookeeper_connection_log']
slug: /operations/system-tables/zookeeper_connection_log
title: 'system.zookeeper_connection_log'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.zookeeper_connection_log
The 'system.zookeeper_connection_log' table shows the history of ZooKeeper connections (including auxiliary ZooKeepers). Each row shows information about one event regarding connections.
:::note
The table doesn't contain events for disconnections caused by server shutdown.
:::
Columns:
- `hostname` (`LowCardinality(String)`) — Hostname of the server which is connected to or disconnected from ZooKeeper.
- `type` (`Enum8`) — The type of the event. Possible values: `Connected`, `Disconnected`.
- `event_date` (`Date`) — Date of the entry.
- `event_time` (`DateTime`) — Time of the entry.
- `event_time_microseconds` (`DateTime64`) — Time of the entry with microseconds precision.
- `name` (`String`) — ZooKeeper cluster's name.
- `host` (`String`) — The hostname/IP of the ZooKeeper node that ClickHouse connected to.
- `port` (`UInt16`) — The port of the ZooKeeper node that ClickHouse connected to.
- `index` (`UInt8`) — The index of the ZooKeeper node that ClickHouse connected to or disconnected from. The index is from the ZooKeeper config.
- `client_id` (`Int64`) — Session id of the connection.
- `keeper_api_version` (`UInt8`) — Keeper API version.
- `enabled_feature_flags` (`Array(Enum16)`) — Feature flags which are enabled. Only applicable to ClickHouse Keeper. Possible values are `FILTERED_LIST`, `MULTI_READ`, `CHECK_NOT_EXISTS`, `CREATE_IF_NOT_EXISTS`, `REMOVE_RECURSIVE`.
- `availability_zone` (`String`) — Availability zone.
- `reason` (`String`) — Reason for the connection or disconnection.
Example:
```sql
SELECT * FROM system.zookeeper_connection_log;
```
text
┌─hostname─┬─type─────────┬─event_date─┬──────────event_time─┬────event_time_microseconds─┬─name───────────────┬─host─┬─port─┬─index─┬─client_id─┬─keeper_api_version─┬─enabled_feature_flags───────────────────────────────────────────────────────────────────────┬─availability_zone─┬─reason──────────────┐
 1. │ node │ Connected    │ 2025-05-12 │ 2025-05-12 19:49:35 │ 2025-05-12 19:49:35.713067 │ zk_conn_log_test_4 │ zoo2 │ 2181 │ 0 │ 10 │ 0 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS','REMOVE_RECURSIVE'] │ │ Initialization      │
 2. │ node │ Connected    │ 2025-05-12 │ 2025-05-12 19:49:23 │ 2025-05-12 19:49:23.981570 │ default            │ zoo1 │ 2181 │ 0 │ 4  │ 0 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS','REMOVE_RECURSIVE'] │ │ Initialization      │
 3. │ node │ Connected    │ 2025-05-12 │ 2025-05-12 19:49:28 │ 2025-05-12 19:49:28.104021 │ default            │ zoo1 │ 2181 │ 0 │ 5  │ 0 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS','REMOVE_RECURSIVE'] │ │ Initialization      │
 4. │ node │ Connected    │ 2025-05-12 │ 2025-05-12 19:49:29 │ 2025-05-12 19:49:29.459251 │ zk_conn_log_test_2 │ zoo2 │ 2181 │ 0 │ 6  │ 0 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS','REMOVE_RECURSIVE'] │ │ Initialization      │
 5. │ node │ Connected    │ 2025-05-12 │ 2025-05-12 19:49:29 │ 2025-05-12 19:49:29.574312 │ zk_conn_log_test_3 │ zoo3 │ 2181 │ 0 │ 7  │ 0 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS','REMOVE_RECURSIVE'] │ │ Initialization      │
 6. │ node │ Disconnected │ 2025-05-12 │ 2025-05-12 19:49:29 │ 2025-05-12 19:49:29.909890 │ default            │ zoo1 │ 2181 │ 0 │ 5  │ 0 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS','REMOVE_RECURSIVE'] │ │ Config changed      │
 7. │ node │ Connected    │ 2025-05-12 │ 2025-05-12 19:49:29 │ 2025-05-12 19:49:29.909895 │ default            │ zoo2 │ 2181 │ 0 │ 8  │ 0 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS','REMOVE_RECURSIVE'] │ │ Config changed      │
 8. │ node │ Disconnected │ 2025-05-12 │ 2025-05-12 19:49:29 │ 2025-05-12 19:49:29.912010 │ zk_conn_log_test_2 │ zoo2 │ 2181 │ 0 │ 6  │ 0 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS','REMOVE_RECURSIVE'] │ │ Config changed      │
 9. │ node │ Connected    │ 2025-05-12 │ 2025-05-12 19:49:29 │ 2025-05-12 19:49:29.912014 │ zk_conn_log_test_2 │ zoo3 │ 2181 │ 0 │ 9  │ 0 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS','REMOVE_RECURSIVE'] │ │ Config changed      │
10. │ node │ Disconnected │ 2025-05-12 │ 2025-05-12 19:49:29 │ 2025-05-12 19:49:29.912061 │ zk_conn_log_test_3 │ zoo3 │ 2181 │ 0 │ 7  │ 0 │ ['FILTERED_LIST','MULTI_READ','CHECK_NOT_EXISTS','CREATE_IF_NOT_EXISTS','REMOVE_RECURSIVE'] │ │ Removed from config │
└──────────┴──────────────┴────────────┴─────────────────────┴────────────────────────────┴────────────────────┴──────┴──────┴───────┴───────────┴────────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────────────────┘
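The event history lends itself to summarization, for example to see why connections were dropped recently. A sketch (the one-day window is an arbitrary illustrative choice):

```sql
SELECT type, reason, count() AS events
FROM system.zookeeper_connection_log
WHERE event_date >= today() - 1
GROUP BY type, reason
ORDER BY events DESC
```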
description: 'System table containing information about ClickHouse server''s build options.'
slug: /operations/system-tables/build_options
title: 'system.build_options'
keywords: ['system table', 'build_options']
doc_type: 'reference'
Contains information about the ClickHouse server's build options.
Columns:
- `name` (`String`) — Name of the build option.
- `value` (`String`) — Value of the build option.
Example
```sql
SELECT * FROM system.build_options LIMIT 5
```

```text
┌─name─────────────┬─value─┐
│ USE_BROTLI       │ 1     │
│ USE_BZIP2        │ 1     │
│ USE_CAPNP        │ 1     │
│ USE_CASSANDRA    │ 1     │
│ USE_DATASKETCHES │ 1     │
└──────────────────┴───────┘
```
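To check whether a specific capability was compiled in, the table can be filtered by option name. A sketch (the `USE_%` prefix convention follows the example output above):

```sql
SELECT name, value
FROM system.build_options
WHERE name LIKE 'USE_%' AND value = '1'
```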
description: 'System table containing metrics that are calculated periodically in
the background. For example, the amount of RAM in use.'
keywords: ['system table', 'asynchronous_metrics']
slug: /operations/system-tables/asynchronous_metrics
title: 'system.asynchronous_metrics'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
system.asynchronous_metrics
Contains metrics that are calculated periodically in the background. For example, the amount of RAM in use.
Columns:
- `metric` (`String`) — Metric name.
- `value` (`Float64`) — Metric value.
- `description` (`String`) — Metric description.
Example
```sql
SELECT * FROM system.asynchronous_metrics LIMIT 10
```
text
┌─metric──────────────────────────────────┬──────value─┬─description─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ AsynchronousMetricsCalculationTimeSpent │ 0.00179053 │ Time in seconds spent for calculation of asynchronous metrics (this is the overhead of asynchronous metrics). │
│ NumberOfDetachedByUserParts             │          0 │ The total number of parts detached from MergeTree tables by users with the `ALTER TABLE DETACH` query (as opposed to unexpected, broken or ignored parts). The server does not care about detached parts and they can be removed. │
│ NumberOfDetachedParts                   │          0 │ The total number of parts detached from MergeTree tables. A part can be detached by a user with the `ALTER TABLE DETACH` query or by the server itself if the part is broken, unexpected or unneeded. The server does not care about detached parts and they can be removed. │
│ TotalRowsOfMergeTreeTables              │    2781309 │ Total amount of rows (records) stored in all tables of MergeTree family. │
│ TotalBytesOfMergeTreeTables             │    7741926 │ Total amount of bytes (compressed, including data and indices) stored in all tables of MergeTree family. │
│ NumberOfTables                          │         93 │ Total number of tables summed across the databases on the server, excluding the databases that cannot contain MergeTree tables. The excluded database engines are those that generate the set of tables on the fly, like `Lazy`, `MySQL`, `PostgreSQL`, `SQLite`. │
│ NumberOfDatabases                       │          6 │ Total number of databases on the server. │
│ MaxPartCountForPartition                │          6 │ Maximum number of parts per partition across all partitions of all tables of MergeTree family. Values larger than 300 indicate misconfiguration, overload, or massive data loading. │
│ ReplicasSumMergesInQueue                │          0 │ Sum of merge operations in the queue (still to be applied) across Replicated tables. │
│ ReplicasSumInsertsInQueue               │          0 │ Sum of INSERT operations in the queue (still to be replicated) across Replicated tables. │
└─────────────────────────────────────────┴────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
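Individual metrics can be selected by name or by pattern rather than scanning the whole table. A sketch (the `%memory%` pattern is just an illustrative filter):

```sql
SELECT metric, value, description
FROM system.asynchronous_metrics
WHERE metric ILIKE '%memory%'
ORDER BY metric
```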
Metric descriptions {#metric-descriptions}
AsynchronousHeavyMetricsCalculationTimeSpent {#asynchronousheavymetricscalculationtimespent}
Time in seconds spent for calculation of asynchronous heavy (tables related) metrics (this is the overhead of asynchronous metrics).
AsynchronousHeavyMetricsUpdateInterval {#asynchronousheavymetricsupdateinterval}
Heavy (tables related) metrics update interval
AsynchronousMetricsCalculationTimeSpent {#asynchronousmetricscalculationtimespent}
Time in seconds spent for calculation of asynchronous metrics (this is the overhead of asynchronous metrics).
AsynchronousMetricsUpdateInterval {#asynchronousmetricsupdateinterval}
Metrics update interval
BlockActiveTime_*name* {#blockactivetime_name}
Time in seconds the block device had the IO requests queued. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockDiscardBytes_*name* {#blockdiscardbytes_name}
Number of discarded bytes on the block device. These operations are relevant for SSD. Discard operations are not used by ClickHouse, but can be used by other processes on the system. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockDiscardMerges_*name* {#blockdiscardmerges_name}
Number of discard operations requested from the block device and merged together by the OS IO scheduler. These operations are relevant for SSD. Discard operations are not used by ClickHouse, but can be used by other processes on the system. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockDiscardOps_*name* {#blockdiscardops_name}
Number of discard operations requested from the block device. These operations are relevant for SSD. Discard operations are not used by ClickHouse, but can be used by other processes on the system. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockDiscardTime_*name* {#blockdiscardtime_name}
Time in seconds spent in discard operations requested from the block device, summed across all the operations. These operations are relevant for SSD. Discard operations are not used by ClickHouse, but can be used by other processes on the system. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockInFlightOps_*name* {#blockinflightops_name}
This value counts the number of I/O requests that have been issued to the device driver but have not yet completed. It does not include IO requests that are in the queue but not yet issued to the device driver. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockQueueTime_*name* {#blockqueuetime_name}
This value counts the number of milliseconds that IO requests have waited on this block device. If there are multiple IO requests waiting, this value will increase as the product of the number of milliseconds times the number of requests waiting. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockReadBytes_*name* {#blockreadbytes_name}
Number of bytes read from the block device. It can be lower than the number of bytes read from the filesystem due to the usage of the OS page cache, which saves IO. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockReadMerges_*name* {#blockreadmerges_name}
Number of read operations requested from the block device and merged together by the OS IO scheduler. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockReadOps_*name* {#blockreadops_name}
Number of read operations requested from the block device. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockReadTime_*name* {#blockreadtime_name}
Time in seconds spent in read operations requested from the block device, summed across all the operations. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockWriteBytes_*name* {#blockwritebytes_name}
Number of bytes written to the block device. It can be lower than the number of bytes written to the filesystem due to the usage of the OS page cache, which saves IO. A write to the block device may happen later than the corresponding write to the filesystem due to write-through caching. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockWriteMerges_*name* {#blockwritemerges_name}
Number of write operations requested from the block device and merged together by the OS IO scheduler. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockWriteOps_*name* {#blockwriteops_name}
Number of write operations requested from the block device. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
BlockWriteTime_*name* {#blockwritetime_name}
Time in seconds spent in write operations requested from the block device, summed across all the operations. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Source: `/sys/block`. See https://www.kernel.org/doc/Documentation/block/stat.txt
CPUFrequencyMHz_*name* {#cpufrequencymhz_name}
The current frequency of the CPU, in MHz. Most of the modern CPUs adjust the frequency dynamically for power saving and Turbo Boosting.
DictionaryMaxUpdateDelay {#dictionarymaxlastsuccessfulupdatetime}
The maximum delay (in seconds) of dictionary update.
DictionaryTotalFailedUpdates {#dictionaryloadfailed}
Number of errors since last successful loading in all dictionaries.
DiskAvailable_*name* {#diskavailable_name}
Available bytes on the disk (virtual filesystem). Remote filesystems can show a large value like 16 EiB.
DiskTotal_*name* {#disktotal_name}
The total size in bytes of the disk (virtual filesystem). Remote filesystems can show a large value like 16 EiB.
DiskUnreserved_*name* {#diskunreserved_name}
Available bytes on the disk (virtual filesystem) without the reservations for merges, fetches, and moves. Remote filesystems can show a large value like 16 EiB.
DiskUsed_*name* {#diskused_name}
Used bytes on the disk (virtual filesystem). Remote filesystems do not always provide this information.
FilesystemCacheBytes {#filesystemcachebytes}
Total bytes in the `cache` virtual filesystem. This cache is held on disk.
FilesystemCacheFiles {#filesystemcachefiles}
Total number of cached file segments in the `cache` virtual filesystem. This cache is held on disk.
FilesystemLogsPathAvailableBytes {#filesystemlogspathavailablebytes}
Available bytes on the volume where the ClickHouse logs path is mounted. If this value approaches zero, you should tune the log rotation in the configuration file.
FilesystemLogsPathAvailableINodes {#filesystemlogspathavailableinodes}
The number of available inodes on the volume where the ClickHouse logs path is mounted.
FilesystemLogsPathTotalBytes {#filesystemlogspathtotalbytes}
The size of the volume where the ClickHouse logs path is mounted, in bytes. It's recommended to have at least 10 GB for logs.
FilesystemLogsPathTotalINodes {#filesystemlogspathtotalinodes}
The total number of inodes on the volume where ClickHouse logs path is mounted.
FilesystemLogsPathUsedBytes {#filesystemlogspathusedbytes}
Used bytes on the volume where ClickHouse logs path is mounted.
FilesystemLogsPathUsedINodes {#filesystemlogspathusedinodes}
The number of used inodes on the volume where ClickHouse logs path is mounted.
FilesystemMainPathAvailableBytes {#filesystemmainpathavailablebytes}
Available bytes on the volume where the main ClickHouse path is mounted.
FilesystemMainPathAvailableINodes {#filesystemmainpathavailableinodes}
The number of available inodes on the volume where the main ClickHouse path is mounted. If it is close to zero, it indicates a misconfiguration, and you will get 'no space left on device' even when the disk is not full.
FilesystemMainPathTotalBytes {#filesystemmainpathtotalbytes}
The size of the volume where the main ClickHouse path is mounted, in bytes.
FilesystemMainPathTotalINodes {#filesystemmainpathtotalinodes}
The total number of inodes on the volume where the main ClickHouse path is mounted. If it is less than 25 million, it indicates a misconfiguration.
FilesystemMainPathUsedBytes {#filesystemmainpathusedbytes}
Used bytes on the volume where the main ClickHouse path is mounted.
FilesystemMainPathUsedINodes {#filesystemmainpathusedinodes}
The number of used inodes on the volume where the main ClickHouse path is mounted. This value mostly corresponds to the number of files.
HTTPThreads {#httpthreads}
Number of threads in the server of the HTTP interface (without TLS).
InterserverThreads {#interserverthreads}
Number of threads in the server of the replicas communication protocol (without TLS).
Jitter {#jitter}
The difference in time the thread for calculation of the asynchronous metrics was scheduled to wake up and the time it was in fact, woken up. A proxy-indicator of overall system latency and responsiveness.
LoadAverage_*N* {#loadaveragen}
The whole system load, averaged with exponential smoothing over 1 minute. The load represents the number of threads across all the processes (the scheduling entities of the OS kernel) that are currently running on a CPU, waiting for IO, or ready to run but not being scheduled at this point of time. This number includes all the processes, not only clickhouse-server. The number can be greater than the number of CPU cores if the system is overloaded and many processes are ready to run but waiting for CPU or IO.
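These values can be read directly from `system.asynchronous_metrics`; the concrete metric names depend on the smoothing windows the server exports (typical instances are `LoadAverage1`, `LoadAverage5`, `LoadAverage15`):

```sql
-- Read the exponentially smoothed system load values
SELECT metric, value
FROM system.asynchronous_metrics
WHERE metric LIKE 'LoadAverage%'
ORDER BY metric
```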
MaxPartCountForPartition {#maxpartcountforpartition}
Maximum number of parts per partition across all partitions of all tables of MergeTree family. A value larger than 300 indicates misconfiguration, overload, or massive data loading.
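To find which partition is approaching the limit, you can query `system.parts` directly; a diagnostic sketch (adjust the `LIMIT` to your needs):

```sql
-- Partitions with the most active parts, worst offenders first
SELECT
    database,
    table,
    partition_id,
    count() AS parts
FROM system.parts
WHERE active
GROUP BY database, table, partition_id
ORDER BY parts DESC
LIMIT 10
```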
MemoryCode {#memorycode}
The amount of virtual memory mapped for the pages of machine code of the server process, in bytes.
MemoryDataAndStack {#memorydataandstack}
The amount of virtual memory mapped for the use of stack and for the allocated memory, in bytes. It is unspecified whether it includes the per-thread stacks and most of the allocated memory that is allocated with the 'mmap' system call. This metric exists only for completeness reasons. We recommend using the `MemoryResident` metric for monitoring.
MemoryResidentMax {#memoryresidentmax}
Maximum amount of physical memory used by the server process, in bytes.
MemoryResident {#memoryresident}
The amount of physical memory used by the server process, in bytes.
MemoryShared {#memoryshared}
The amount of memory used by the server process that is also shared with other processes, in bytes. ClickHouse does not use shared memory, but some memory can be labeled by the OS as shared for its own reasons. This metric does not make a lot of sense to watch, and it exists only for completeness reasons.
MemoryVirtual {#memoryvirtual}
The size of the virtual address space allocated by the server process, in bytes. The size of the virtual address space is usually much greater than the physical memory consumption and should not be used as an estimate for the memory consumption. Large values of this metric are totally normal and of only technical interest.
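To watch these memory metrics together, a simple query against `system.asynchronous_metrics` works:

```sql
-- Compare the main process-memory metrics in human-readable form
SELECT metric, formatReadableSize(value) AS size
FROM system.asynchronous_metrics
WHERE metric IN ('MemoryResident', 'MemoryResidentMax', 'MemoryVirtual', 'MemoryCode')
ORDER BY value DESC
```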
MySQLThreads {#mysqlthreads}
Number of threads in the server of the MySQL compatibility protocol.
NetworkReceiveBytes_*name* {#networkreceivebytes_name}
Number of bytes received via the network interface. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
NetworkReceiveDrop_*name* {#networkreceivedrop_name}
Number of times a packet was dropped while being received via the network interface. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
NetworkReceiveErrors_*name* {#networkreceiveerrors_name}
Number of times an error happened while receiving data via the network interface. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
NetworkReceivePackets_*name* {#networkreceivepackets_name}
Number of network packets received via the network interface. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
NetworkSendBytes_*name* {#networksendbytes_name}
Number of bytes sent via the network interface. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
NetworkSendDrop_*name* {#networksenddrop_name}
Number of times a packet was dropped while being sent via the network interface. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
NetworkSendErrors_*name* {#networksenderrors_name}
Number of times an error (e.g. TCP retransmit) happened while sending via the network interface. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
NetworkSendPackets_*name* {#networksendpackets_name}
Number of network packets sent via the network interface. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
NumberOfDatabases {#numberofdatabases}
Total number of databases on the server.
NumberOfDetachedByUserParts {#numberofdetachedbyuserparts}
The total number of parts detached from MergeTree tables by users with the `ALTER TABLE DETACH` query (as opposed to unexpected, broken or ignored parts). The server does not care about detached parts, and they can be removed.
NumberOfDetachedParts {#numberofdetachedparts}
The total number of parts detached from MergeTree tables. A part can be detached by a user with the `ALTER TABLE DETACH` query, or by the server itself if the part is broken, unexpected or unneeded. The server does not care about detached parts, and they can be removed.
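Detached parts themselves can be inspected through the `system.detached_parts` table; the `reason` column reflects the prefix the server put on the directory name (`broken`, `unexpected`, ...), and is empty for user-detached parts:

```sql
-- List detached parts together with the reason they were detached
SELECT database, table, partition_id, name, reason
FROM system.detached_parts
```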
NumberOfTables {#numberoftables}
Total number of tables summed across the databases on the server, excluding the databases that cannot contain MergeTree tables. The excluded database engines are those that generate the set of tables on the fly, like `Lazy`, `MySQL`, `PostgreSQL`, `SQLite`.
OSContextSwitches {#oscontextswitches}
The number of context switches that the system underwent on the host machine. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSGuestNiceTime {#osguestnicetime}
The ratio of time spent running a virtual CPU for guest operating systems under the control of the Linux kernel, when a guest was set to a higher priority (see `man procfs`). This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. This metric is irrelevant for ClickHouse, but still exists for completeness. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSGuestNiceTimeCPU_*N* {#osguestnicetimecpu_n}
The ratio of time spent running a virtual CPU for guest operating systems under the control of the Linux kernel, when a guest was set to a higher priority (see `man procfs`). This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. This metric is irrelevant for ClickHouse, but still exists for completeness. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSGuestNiceTimeNormalized {#osguestnicetimenormalized}
The value is similar to `OSGuestNiceTime` but divided by the number of CPU cores so it is measured in the [0..1] interval regardless of the number of cores. This allows you to average the values of this metric across multiple servers in a cluster even if the number of cores is non-uniform, and still get the average resource utilization metric.
OSGuestTime {#osguesttime}
The ratio of time spent running a virtual CPU for guest operating systems under the control of the Linux kernel (see `man procfs`). This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. This metric is irrelevant for ClickHouse, but still exists for completeness. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSGuestTimeCPU_*N* {#osguesttimecpu_n}
The ratio of time spent running a virtual CPU for guest operating systems under the control of the Linux kernel (see `man procfs`). This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. This metric is irrelevant for ClickHouse, but still exists for completeness. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSGuestTimeNormalized {#osguesttimenormalized}
The value is similar to `OSGuestTime` but divided by the number of CPU cores so it is measured in the [0..1] interval regardless of the number of cores. This allows you to average the values of this metric across multiple servers in a cluster even if the number of cores is non-uniform, and still get the average resource utilization metric.
OSIOWaitTime {#osiowaittime}
The ratio of time the CPU core was not running the code but when the OS kernel did not run any other process on this CPU as the processes were waiting for IO. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSIOWaitTimeCPU_*N* {#osiowaittimecpu_n}
The ratio of time the CPU core was not running the code but when the OS kernel did not run any other process on this CPU as the processes were waiting for IO. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSIOWaitTimeNormalized {#osiowaittimenormalized}
The value is similar to `OSIOWaitTime` but divided by the number of CPU cores so it is measured in the [0..1] interval regardless of the number of cores. This allows you to average the values of this metric across multiple servers in a cluster even if the number of cores is non-uniform, and still get the average resource utilization metric.
OSIdleTime {#osidletime}
The ratio of time the CPU core was idle (not even ready to run a process waiting for IO) from the OS kernel standpoint. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. This does not include the time when the CPU was under-utilized due to the reasons internal to the CPU (memory loads, pipeline stalls, branch mispredictions, running another SMT core). The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSIdleTimeCPU_*N* {#osidletimecpu_n}
The ratio of time the CPU core was idle (not even ready to run a process waiting for IO) from the OS kernel standpoint. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. This does not include the time when the CPU was under-utilized due to the reasons internal to the CPU (memory loads, pipeline stalls, branch mispredictions, running another SMT core). The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSIdleTimeNormalized {#osidletimenormalized}
The value is similar to `OSIdleTime` but divided by the number of CPU cores so it is measured in the [0..1] interval regardless of the number of cores. This allows you to average the values of this metric across multiple servers in a cluster even if the number of cores is non-uniform, and still get the average resource utilization metric.
OSInterrupts {#osinterrupts}
The number of interrupts on the host machine. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSIrqTime {#osirqtime}
The ratio of time spent for running hardware interrupt requests on the CPU. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. A high number of this metric may indicate hardware misconfiguration or a very high network load. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSIrqTimeCPU_*N* {#osirqtimecpu_n}
The ratio of time spent for running hardware interrupt requests on the CPU. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. A high number of this metric may indicate hardware misconfiguration or a very high network load. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSIrqTimeNormalized {#osirqtimenormalized}
The value is similar to `OSIrqTime` but divided by the number of CPU cores so it is measured in the [0..1] interval regardless of the number of cores. This allows you to average the values of this metric across multiple servers in a cluster even if the number of cores is non-uniform, and still get the average resource utilization metric.
OSMemoryAvailable {#osmemoryavailable}
The amount of memory available to be used by programs, in bytes. This is very similar to the `OSMemoryFreePlusCached` metric. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSMemoryBuffers {#osmemorybuffers}
The amount of memory used by OS kernel buffers, in bytes. This should be typically small, and large values may indicate a misconfiguration of the OS. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSMemoryCached {#osmemorycached}
The amount of memory used by the OS page cache, in bytes. Typically, almost all available memory is used by the OS page cache - high values of this metric are normal and expected. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSMemoryFreePlusCached {#osmemoryfreepluscached}
The amount of free memory plus OS page cache memory on the host system, in bytes. This memory is available to be used by programs. The value should be very similar to `OSMemoryAvailable`. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSMemoryFreeWithoutCached {#osmemoryfreewithoutcached}
The amount of free memory on the host system, in bytes. This does not include the memory used by the OS page cache. The page cache memory is also available for usage by programs, so the value of this metric can be confusing. See the `OSMemoryAvailable` metric instead. For convenience, we also provide the `OSMemoryFreePlusCached` metric, which should be somewhat similar to OSMemoryAvailable. See also https://www.linuxatemyram.com/. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSMemoryTotal {#osmemorytotal}
The total amount of memory on the host system, in bytes.
OSNiceTime {#osnicetime}
The ratio of time the CPU core was running userspace code with higher priority. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSNiceTimeCPU_*N* {#osnicetimecpu_n}
The ratio of time the CPU core was running userspace code with higher priority. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSNiceTimeNormalized {#osnicetimenormalized}
The value is similar to `OSNiceTime` but divided by the number of CPU cores so it is measured in the [0..1] interval regardless of the number of cores. This allows you to average the values of this metric across multiple servers in a cluster even if the number of cores is non-uniform, and still get the average resource utilization metric.
OSOpenFiles {#osopenfiles}
The total number of opened files on the host machine. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSProcessesBlocked {#osprocessesblocked}
Number of threads blocked waiting for I/O to complete (`man procfs`). This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSProcessesCreated {#osprocessescreated}
The number of processes created. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSProcessesRunning {#osprocessesrunning}
The number of runnable (running or ready to run) threads by the operating system. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server.
OSSoftIrqTime {#ossoftirqtime}
The ratio of time spent for running software interrupt requests on the CPU. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. A high number of this metric may indicate inefficient software running on the system. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSSoftIrqTimeCPU_*N* {#ossoftirqtimecpu_n}
The ratio of time spent for running software interrupt requests on the CPU. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. A high number of this metric may indicate inefficient software running on the system. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSSoftIrqTimeNormalized {#ossoftirqtimenormalized}
The value is similar to `OSSoftIrqTime` but divided by the number of CPU cores so it is measured in the [0..1] interval regardless of the number of cores. This allows you to average the values of this metric across multiple servers in a cluster even if the number of cores is non-uniform, and still get the average resource utilization metric.
OSStealTime {#osstealtime}
The ratio of time spent in other operating systems by the CPU when running in a virtualized environment. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Not every virtualized environment presents this metric, and most of them don't. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSStealTimeCPU_*N* {#osstealtimecpu_n}
The ratio of time spent in other operating systems by the CPU when running in a virtualized environment. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. Not every virtualized environment presents this metric, and most of them don't. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSStealTimeNormalized {#osstealtimenormalized}
The value is similar to `OSStealTime` but divided by the number of CPU cores so it is measured in the [0..1] interval regardless of the number of cores. This allows you to average the values of this metric across multiple servers in a cluster even if the number of cores is non-uniform, and still get the average resource utilization metric.
OSSystemTime {#ossystemtime}
The ratio of time the CPU core was running OS kernel (system) code. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSSystemTimeCPU_*N* {#ossystemtimecpu_n}
The ratio of time the CPU core was running OS kernel (system) code. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSSystemTimeNormalized {#ossystemtimenormalized}
The value is similar to `OSSystemTime` but divided by the number of CPU cores so it is measured in the [0..1] interval regardless of the number of cores. This allows you to average the values of this metric across multiple servers in a cluster even if the number of cores is non-uniform, and still get the average resource utilization metric.
OSThreadsRunnable {#osthreadsrunnable}
The total number of 'runnable' threads, as the OS kernel scheduler sees them.
OSThreadsTotal {#osthreadstotal}
The total number of threads, as the OS kernel scheduler sees them.
OSUptime {#osuptime}
The uptime of the host server (the machine where ClickHouse is running), in seconds.
OSUserTime {#osusertime}
The ratio of time the CPU core was running userspace code. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. This includes also the time when the CPU was under-utilized due to the reasons internal to the CPU (memory loads, pipeline stalls, branch mispredictions, running another SMT core). The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSUserTimeCPU_*N* {#osusertimecpu_n}
The ratio of time the CPU core was running userspace code. This is a system-wide metric, it includes all the processes on the host machine, not just clickhouse-server. This includes also the time when the CPU was under-utilized due to the reasons internal to the CPU (memory loads, pipeline stalls, branch mispredictions, running another SMT core). The value for a single CPU core will be in the interval [0..1]. The value for all CPU cores is calculated as a sum across them [0..num cores].
OSUserTimeNormalized {#osusertimenormalized}
The value is similar to `OSUserTime` but divided by the number of CPU cores so it is measured in the [0..1] interval regardless of the number of cores. This allows you to average the values of this metric across multiple servers in a cluster even if the number of cores is non-uniform, and still get the average resource utilization metric.
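Because the normalized variants are comparable across servers with different core counts, they are convenient for cluster-wide dashboards. A sketch, assuming a configured cluster named `default` (substitute your own cluster name):

```sql
-- CPU utilization per host, comparable regardless of core count
SELECT hostName() AS host, metric, round(value, 3) AS value
FROM clusterAllReplicas('default', system.asynchronous_metrics)
WHERE metric IN ('OSUserTimeNormalized', 'OSSystemTimeNormalized', 'OSIOWaitTimeNormalized')
ORDER BY host, metric
```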
PostgreSQLThreads {#postgresqlthreads}
Number of threads in the server of the PostgreSQL compatibility protocol.
ReplicasMaxAbsoluteDelay {#replicasmaxabsolutedelay}
Maximum difference in seconds between the most fresh replicated part and the most fresh data part still to be replicated, across Replicated tables. A very high value indicates a replica with no data.
ReplicasMaxInsertsInQueue {#replicasmaxinsertsinqueue}
Maximum number of INSERT operations in the queue (still to be replicated) across Replicated tables.
ReplicasMaxMergesInQueue {#replicasmaxmergesinqueue}
Maximum number of merge operations in the queue (still to be applied) across Replicated tables.
ReplicasMaxQueueSize {#replicasmaxqueuesize}
Maximum queue size (in the number of operations like get, merge) across Replicated tables.
ReplicasMaxRelativeDelay {#replicasmaxrelativedelay}
Maximum difference between the replica delay and the delay of the most up-to-date replica of the same table, across Replicated tables.
ReplicasSumInsertsInQueue {#replicassuminsertsinqueue}
Sum of INSERT operations in the queue (still to be replicated) across Replicated tables.
ReplicasSumMergesInQueue {#replicassummergesinqueue}
Sum of merge operations in the queue (still to be applied) across Replicated tables.
ReplicasSumQueueSize {#replicassumqueuesize}
Sum of queue sizes (in the number of operations like get, merge) across Replicated tables.
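These cluster-wide aggregates can be broken down per table with the `system.replicas` table, e.g.:

```sql
-- Replication lag and queue pressure per Replicated table
SELECT database, table, absolute_delay, queue_size, inserts_in_queue, merges_in_queue
FROM system.replicas
ORDER BY absolute_delay DESC
LIMIT 10
```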
TCPThreads {#tcpthreads}
Number of threads in the server of the TCP protocol (without TLS).
Temperature_*N* {#temperature_n}
The temperature of the corresponding device in ℃. A sensor can return an unrealistic value. Source: `/sys/class/thermal`
Temperature_*name* {#temperature_name}
The temperature reported by the corresponding hardware monitor and the corresponding sensor in ℃. A sensor can return an unrealistic value. Source: `/sys/class/hwmon`
TotalBytesOfMergeTreeTables {#totalbytesofmergetreetables}
Total amount of bytes (compressed, including data and indices) stored in all tables of MergeTree family.
TotalPartsOfMergeTreeTables {#totalpartsofmergetreetables}
Total amount of data parts in all tables of MergeTree family. A number larger than 10 000 will negatively affect the server startup time, and it may indicate an unreasonable choice of the partition key.
TotalPrimaryKeyBytesInMemory {#totalprimarykeybytesinmemory}
The total amount of memory (in bytes) used by primary key values (only takes active parts into account).
TotalPrimaryKeyBytesInMemoryAllocated {#totalprimarykeybytesinmemoryallocated}
The total amount of memory (in bytes) reserved for primary key values (only takes active parts into account).
TotalRowsOfMergeTreeTables {#totalrowsofmergetreetables}
Total amount of rows (records) stored in all tables of MergeTree family.
Uptime {#uptime}
The server uptime in seconds. It includes the time spent for server initialization before accepting connections.
jemalloc.active {#jemallocactive}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.allocated {#jemallocallocated}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.arenas.all.dirty_purged {#jemallocarenasalldirty_purged}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.arenas.all.muzzy_purged {#jemallocarenasallmuzzy_purged}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.arenas.all.pactive {#jemallocarenasallpactive}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.arenas.all.pdirty {#jemallocarenasallpdirty}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.arenas.all.pmuzzy {#jemallocarenasallpmuzzy}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.background_thread.num_runs {#jemallocbackground_threadnum_runs}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.background_thread.num_threads {#jemallocbackground_threadnum_threads}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.background_thread.run_intervals {#jemallocbackground_threadrun_intervals}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.epoch {#jemallocepoch}
An internal incremental update number of the statistics of jemalloc (Jason Evans' memory allocator), used in all other
jemalloc
metrics.
jemalloc.mapped {#jemallocmapped}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.metadata {#jemallocmetadata}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.metadata_thp {#jemallocmetadata_thp}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.resident {#jemallocresident}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.retained {#jemallocretained}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
jemalloc.prof.active {#jemallocprofactive}
An internal metric of the low-level memory allocator (jemalloc). See https://jemalloc.net/jemalloc.3.html
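All jemalloc metrics can be listed at once with a `LIKE` filter:

```sql
-- Dump every jemalloc statistic exported by the server
SELECT metric, value
FROM system.asynchronous_metrics
WHERE metric LIKE 'jemalloc%'
ORDER BY metric
```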
See Also
Monitoring - Base concepts of ClickHouse monitoring.
`system.metrics` - Contains instantly calculated metrics.
`system.events` - Contains a number of events that have occurred.
`system.metric_log` - Contains a history of metrics values from tables `system.metrics` and `system.events`.
337b5527-b984-455d-a9fb-8a7d9b063e0b | description: 'System table containing information about Kafka consumers.'
keywords: ['system table', 'kafka_consumers']
slug: /operations/system-tables/kafka_consumers
title: 'system.kafka_consumers'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
Contains information about Kafka consumers.
Applicable for the Kafka table engine (native ClickHouse integration).
Columns:
- `database` (String) — Database of the table with Kafka Engine.
- `table` (String) — Name of the table with Kafka Engine.
- `consumer_id` (String) — Kafka consumer identifier. Note that a table can have many consumers. Specified by the `kafka_num_consumers` parameter.
- `assignments.topic` (Array(String)) — Kafka topic.
- `assignments.partition_id` (Array(Int32)) — Kafka partition id. Note that only one consumer can be assigned to a partition.
- `assignments.current_offset` (Array(Int64)) — Current offset.
- `assignments.intent_size` (Array(Nullable(Int64))) — The number of pushed but not yet committed messages in the new StorageKafka.
- `exceptions.time` (Array(DateTime)) — Timestamps when the 10 most recent exceptions were generated.
- `exceptions.text` (Array(String)) — Text of the 10 most recent exceptions.
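To read the recent exceptions per consumer, the two parallel arrays can be flattened with `ARRAY JOIN` (a sketch over the columns described above):

```sql
SELECT database, table, exc_time, exc_text
FROM system.kafka_consumers
ARRAY JOIN
    exceptions.time AS exc_time,
    exceptions.text AS exc_text
ORDER BY exc_time DESC
LIMIT 10
```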
- `last_poll_time` (DateTime) — Timestamp of the most recent poll.
- `num_messages_read` (UInt64) — Number of messages read by the consumer.
- `last_commit_time` (DateTime) — Timestamp of the most recent commit.
- `num_commits` (UInt64) — Total number of commits for the consumer.
- `last_rebalance_time` (DateTime) — Timestamp of the most recent Kafka rebalance.
- `num_rebalance_revocations` (UInt64) — Number of times the consumer was revoked its partitions.
- `num_rebalance_assignments` (UInt64) — Number of times the consumer was assigned to the Kafka cluster.
- `is_currently_used` (UInt8) — The flag which shows whether the consumer is in use.
- `last_used` (DateTime64(6)) — The last time this consumer was in use.
- `rdkafka_stat` (String) — Library internal statistics. Set `statistics_interval_ms` to 0 to disable; the default is 3000 (once every three seconds).
Example:
```sql
SELECT *
FROM system.kafka_consumers
FORMAT Vertical
```
```text
Row 1:
──────
database: test
table: kafka
consumer_id: ClickHouse-instance-test-kafka-1caddc7f-f917-4bb1-ac55-e28bd103a4a0
assignments.topic: ['system_kafka_cons']
assignments.partition_id: [0]
assignments.current_offset: [18446744073709550615]
exceptions.time: []
exceptions.text: []
last_poll_time: 2006-11-09 18:47:47
num_messages_read: 4
last_commit_time: 2006-11-10 04:39:40
num_commits: 1
last_rebalance_time: 1970-01-01 00:00:00
num_rebalance_revocations: 0
num_rebalance_assignments: 1
is_currently_used: 1
rdkafka_stat: {...}
``` | {"source_file": "kafka_consumers.md"} | [
0.018040278926491737,
-0.053347148001194,
-0.10103437304496765,
-0.0336047001183033,
0.02722330018877983,
-0.036256078630685806,
0.05412757769227028,
-0.0015427174512296915,
-0.03916677460074425,
0.042380936443805695,
0.003459006315097213,
-0.104105643928051,
0.072187140583992,
-0.12644203... |
dcd24f38-298e-43d2-9848-1a40a5c24cf2 | description: 'System table which contains properties of configured setting profiles.'
keywords: ['system table', 'settings_profiles']
slug: /operations/system-tables/settings_profiles
title: 'system.settings_profiles'
doc_type: 'reference'
system.settings_profiles
Contains properties of configured setting profiles.
Columns:
- `name` (String) — Setting profile name.
- `id` (UUID) — Setting profile ID.
- `storage` (String) — Path to the storage of setting profiles. Configured in the `access_control_path` parameter.
- `num_elements` (UInt64) — Number of elements for this profile in the `system.settings_profile_elements` table.
- `apply_to_all` (UInt8) — Shows whether the settings profile is set for all roles and/or users.
- `apply_to_list` (Array(String)) — List of the roles and/or users to which the setting profile is applied.
- `apply_to_except` (Array(String)) — The setting profile is applied to all roles and/or users except the listed ones.
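To see which settings each profile actually carries, this table can be joined to `system.settings_profile_elements` (a sketch; the `profile_name`, `setting_name`, and `value` column names are taken from that table):

```sql
SELECT p.name, e.setting_name, e.value
FROM system.settings_profiles AS p
INNER JOIN system.settings_profile_elements AS e
    ON e.profile_name = p.name
ORDER BY p.name
```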
See Also {#see-also}
SHOW PROFILES | {"source_file": "settings_profiles.md"} | [
0.05356574431061745,
0.0028096111491322517,
-0.06399458646774292,
0.07516693323850632,
0.0009778497042134404,
-0.020401252433657646,
0.06819866597652435,
0.06507285684347153,
-0.192064106464386,
-0.041970182210206985,
0.03465624526143074,
-0.029598418623209,
0.11142724007368088,
-0.0644185... |
ea337809-36ba-47f1-b6e5-5c7ef43e2df0 | description: 'System table containing information about available compression and encryption codecs.'
keywords: ['system table', 'codecs', 'compression']
slug: /operations/system-tables/codecs
title: 'system.codecs'
doc_type: 'reference'
Contains information about compression and encryption codecs.
You can use this table to get information about the available compression and encryption codecs.
The `system.codecs` table contains the following columns (the column type is shown in brackets):
- `name` (String) — Codec name.
- `method_byte` (UInt8) — Byte which indicates the codec in a compressed file.
- `is_compression` (UInt8) — True if this codec compresses something. Otherwise it can be just a transformation that helps compression.
- `is_generic_compression` (UInt8) — The codec is a generic compression algorithm like lz4, zstd.
- `is_encryption` (UInt8) — The codec encrypts.
- `is_timeseries_codec` (UInt8) — The codec is a floating-point time-series codec.
- `is_experimental` (UInt8) — The codec is experimental.
- `description` (String) — A high-level description of the codec.
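For example, to list only the general-purpose compression codecs, filter on the flag columns described above (a sketch):

```sql
SELECT name, method_byte, description
FROM system.codecs
WHERE is_generic_compression = 1
ORDER BY name
```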
Example
Query:
```sql
SELECT * FROM system.codecs WHERE name = 'LZ4'
```
Result:
text
Row 1:
──────
name: LZ4
method_byte: 130
is_compression: 1
is_generic_compression: 1
is_encryption: 0
is_timeseries_codec: 0
is_experimental: 0
description: Extremely fast; good compression; balanced speed and efficiency. | {"source_file": "codecs.md"} | [
-0.09845499694347382,
0.00730785122141242,
-0.13067272305488586,
0.01971898227930069,
0.0281952116638422,
-0.09519436210393906,
0.0313999205827713,
0.042062584310770035,
0.000602038053330034,
0.0281013622879982,
-0.05916288122534752,
-0.022331317886710167,
0.02024168334901333,
-0.047809217... |
a38b7cc5-cdeb-4feb-a827-42b862ca79b5 | description: 'System table which exists only if ClickHouse Keeper or ZooKeeper are
configured. It exposes data from the Keeper cluster defined in the config.'
keywords: ['system table', 'zookeeper']
slug: /operations/system-tables/zookeeper
title: 'system.zookeeper'
doc_type: 'reference'
system.zookeeper
The table does not exist unless ClickHouse Keeper or ZooKeeper is configured. The `system.zookeeper` table exposes data from the Keeper clusters defined in the config.
The query must either have a `path =` condition or a `path IN` condition set with the `WHERE` clause as shown below. This corresponds to the path of the children that you want to get data for.
The query `SELECT * FROM system.zookeeper WHERE path = '/clickhouse'` outputs data for all children on the `/clickhouse` node.
To output data for all root nodes, write path = '/'.
If the path specified in 'path' does not exist, an exception will be thrown.
The query `SELECT * FROM system.zookeeper WHERE path IN ('/', '/clickhouse')` outputs data for all children on the `/` and `/clickhouse` nodes.
If any path in the specified 'path' collection does not exist, an exception will be thrown.
It can be used to do a batch of Keeper path queries.
The query `SELECT * FROM system.zookeeper WHERE path = '/clickhouse' AND zookeeperName = 'auxiliary_cluster'` outputs data from the `auxiliary_cluster` ZooKeeper cluster.
If the specified 'auxiliary_cluster' does not exist, an exception will be thrown.
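Written out, the three query forms described above look like this (a sketch; `auxiliary_cluster` is just the example name used on this page):

```sql
-- children of a single node
SELECT * FROM system.zookeeper WHERE path = '/clickhouse';

-- batch query for several paths
SELECT * FROM system.zookeeper WHERE path IN ('/', '/clickhouse');

-- read from an auxiliary Keeper cluster
SELECT * FROM system.zookeeper
WHERE path = '/clickhouse' AND zookeeperName = 'auxiliary_cluster';
```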
Columns:
- `name` (String) — The name of the node.
- `path` (String) — The path to the node.
- `value` (String) — Node value.
- `zookeeperName` (String) — The name of the default or one of the auxiliary ZooKeeper clusters.
- `dataLength` (Int32) — Size of the value.
- `numChildren` (Int32) — Number of descendants.
- `czxid` (Int64) — ID of the transaction that created the node.
- `mzxid` (Int64) — ID of the transaction that last changed the node.
- `pzxid` (Int64) — ID of the transaction that last deleted or added descendants.
- `ctime` (DateTime) — Time of node creation.
- `mtime` (DateTime) — Time of the last modification of the node.
- `version` (Int32) — Node version: the number of times the node was changed.
- `cversion` (Int32) — Number of added or removed descendants.
- `aversion` (Int32) — Number of changes to the ACL.
- `ephemeralOwner` (Int64) — For ephemeral nodes, the ID of the session that owns this node.
Example:
```sql
SELECT *
FROM system.zookeeper
WHERE path = '/clickhouse/tables/01-08/visits/replicas'
FORMAT Vertical
```
```text
Row 1:
──────
name: example01-08-1
value:
czxid: 932998691229
mzxid: 932998691229
ctime: 2015-03-27 16:49:51
mtime: 2015-03-27 16:49:51
version: 0
cversion: 47
aversion: 0
ephemeralOwner: 0
dataLength: 0
numChildren: 7
pzxid: 987021031383
path: /clickhouse/tables/01-08/visits/replicas | {"source_file": "zookeeper.md"} | [
0.06545151770114899,
-0.01722128875553608,
-0.033242322504520416,
0.05844949558377266,
0.028026822954416275,
-0.08630403876304626,
0.08576637506484985,
0.00010437268065288663,
-0.0457003116607666,
0.0486127994954586,
0.06648124754428864,
-0.10772182792425156,
0.10738611966371536,
-0.028321... |
d0dd98c9-5f42-4cc9-a591-855905c15919 | Row 2:
──────
name: example01-08-2
value:
czxid: 933002738135
mzxid: 933002738135
ctime: 2015-03-27 16:57:01
mtime: 2015-03-27 16:57:01
version: 0
cversion: 37
aversion: 0
ephemeralOwner: 0
dataLength: 0
numChildren: 7
pzxid: 987021252247
path: /clickhouse/tables/01-08/visits/replicas
``` | {"source_file": "zookeeper.md"} | [
-0.07009188830852509,
0.037865083664655685,
-0.08806916326284409,
0.02023446373641491,
-0.013154571875929832,
-0.044056136161088943,
-0.009590225294232368,
-0.03657842054963112,
0.028366710990667343,
0.04752938449382782,
0.1126287579536438,
-0.06535983830690384,
0.023845229297876358,
-0.06... |
431882a8-99df-4650-8391-079b2e1e2380 | description: 'System table containing information about tasks from replication queues
stored in ClickHouse Keeper, or ZooKeeper, for tables in the
ReplicatedMergeTree
family.'
keywords: ['system table', 'replication_queue']
slug: /operations/system-tables/replication_queue
title: 'system.replication_queue'
doc_type: 'reference'
system.replication_queue
Contains information about tasks from replication queues stored in ClickHouse Keeper, or ZooKeeper, for tables in the
ReplicatedMergeTree
family.
Columns:
- `database` (String) — Name of the database.
- `table` (String) — Name of the table.
- `replica_name` (String) — Replica name in ClickHouse Keeper. Different replicas of the same table have different names.
- `position` (UInt32) — Position of the task in the queue.
- `node_name` (String) — Node name in ClickHouse Keeper.
- `type` (String) — Type of the task in the queue, one of:
  - `GET_PART` — Get the part from another replica.
  - `ATTACH_PART` — Attach the part, possibly from our own replica (if found in the `detached` folder). You may think of it as a `GET_PART` with some optimizations as they're nearly identical.
  - `MERGE_PARTS` — Merge the parts.
  - `DROP_RANGE` — Delete the parts in the specified partition in the specified number range.
  - `CLEAR_COLUMN` — NOTE: Deprecated. Drop the specified column from the specified partition.
  - `CLEAR_INDEX` — NOTE: Deprecated. Drop the specified index from the specified partition.
  - `REPLACE_RANGE` — Drop a certain range of parts and replace them with new ones.
  - `MUTATE_PART` — Apply one or several mutations to the part.
  - `ALTER_METADATA` — Apply alter modification according to global /metadata and /columns paths.
- `create_time` (DateTime) — Date and time when the task was submitted for execution.
- `required_quorum` (UInt32) — The number of replicas waiting for the task to complete with confirmation of completion. This column is only relevant for the `GET_PARTS` task.
- `source_replica` (String) — Name of the source replica.
- `new_part_name` (String) — Name of the new part.
- `parts_to_merge` (Array(String)) — Names of parts to merge or update.
- `is_detach` (UInt8) — The flag indicates whether the `DETACH_PARTS` task is in the queue.
- `is_currently_executing` (UInt8) — The flag indicates whether a specific task is being performed right now.
- `num_tries` (UInt32) — The number of failed attempts to complete the task.
- `last_exception` (String) — Text message about the last error that occurred (if any).
- `last_attempt_time` (DateTime) — Date and time when the task was last attempted.
- `num_postponed` (UInt32) — The number of times the action was postponed.
- `postpone_reason` (String) — The reason why the task was postponed.
- `last_postpone_time` (DateTime) — Date and time when the task was last postponed.
- `merge_type` (String) — Type of the current merge. Empty if it's a mutation.
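A quick way to see what the queue is busy with is to aggregate tasks by type (a sketch over the columns above):

```sql
SELECT database, table, type, count() AS tasks, max(num_tries) AS max_tries
FROM system.replication_queue
GROUP BY database, table, type
ORDER BY tasks DESC
```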
Example | {"source_file": "replication_queue.md"} | [
-0.046444252133369446,
-0.04095204919576645,
-0.09527590125799179,
0.01853766478598118,
-0.0004766320053022355,
-0.1269969493150711,
0.05897064507007599,
-0.030651424080133438,
-0.030918225646018982,
0.07366983592510223,
0.009895024821162224,
-0.003293936140835285,
0.09456492960453033,
-0.... |
12e529af-f74d-458b-b90c-d911d8c6287a | - `last_postpone_time` (DateTime) — Date and time when the task was last postponed.
- `merge_type` (String) — Type of the current merge. Empty if it's a mutation.
Example
```sql
SELECT * FROM system.replication_queue LIMIT 1 FORMAT Vertical;
```

```text
Row 1:
──────
database: merge
table: visits_v2
replica_name: mtgiga001-1t
position: 15
node_name: queue-0009325559
type: MERGE_PARTS
create_time: 2020-12-07 14:04:21
required_quorum: 0
source_replica: mtgiga001-1t
new_part_name: 20201130_121373_121384_2
parts_to_merge: ['20201130_121373_121378_1','20201130_121379_121379_0','20201130_121380_121380_0','20201130_121381_121381_0','20201130_121382_121382_0','20201130_121383_121383_0','20201130_121384_121384_0']
is_detach: 0
is_currently_executing: 0
num_tries: 36
last_exception: Code: 226, e.displayText() = DB::Exception: Marks file '/opt/clickhouse/data/merge/visits_v2/tmp_fetch_20201130_121373_121384_2/CounterID.mrk' does not exist (version 20.8.7.15 (official build))
last_attempt_time: 2020-12-08 17:35:54
num_postponed: 0
postpone_reason:
last_postpone_time: 1970-01-01 03:00:00
```
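Entries like the one above, with a non-empty `last_exception`, can be found directly (a sketch):

```sql
SELECT database, table, type, num_tries, last_exception
FROM system.replication_queue
WHERE last_exception != ''
ORDER BY num_tries DESC
```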
See Also
Managing ReplicatedMergeTree Tables | {"source_file": "replication_queue.md"} | [
-0.06074927747249603,
-0.02208145707845688,
0.006452289875596762,
0.02465183660387993,
-0.03346852585673332,
-0.08865752816200256,
-0.015803907066583633,
0.015520359389483929,
0.006220502778887749,
0.03797492757439613,
0.0623297318816185,
-0.004957964178174734,
0.007070758379995823,
-0.048... |
56930d94-6bf3-41d0-ba94-c35d0a0af7b5 | description: 'This table contains dimensional metrics that can be calculated instantly
and exported in the Prometheus format. It is always up to date.'
keywords: ['system table', 'dimensional_metrics']
slug: /operations/system-tables/dimensional_metrics
title: 'system.dimensional_metrics'
doc_type: 'reference'
import SystemTableCloud from '@site/docs/_snippets/_system_table_cloud.md';
dimensional_metrics {#dimensional_metrics}
This table contains dimensional metrics that can be calculated instantly and exported in the Prometheus format. It is always up to date.
Columns:
- `metric` (String) — Metric name.
- `value` (Int64) — Metric value.
- `description` (String) — Metric description.
- `labels` (Map(String, String)) — Metric labels.
- `name` (String) — Alias for `metric`.
Example
You can use a query like this to export all the dimensional metrics in the Prometheus format.
```sql
SELECT
    metric AS name,
    toFloat64(value) AS value,
    description AS help,
    labels,
    'gauge' AS type
FROM system.dimensional_metrics
FORMAT Prometheus
```
Metric descriptions {#metric_descriptions}
merge_failures {#merge_failures}
Number of all failed merges since startup.
startup_scripts_failure_reason {#startup_scripts_failure_reason}
Indicates startup scripts failures by error type. Set to 1 when a startup script fails, labelled with the error name.
merge_tree_parts {#merge_tree_parts}
Number of merge tree data parts, labelled by part state, part type, and whether it is a projection part.
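Because labels are exposed as a `Map(String, String)`, individual label values can be queried directly; for example, to break down `merge_tree_parts` by part state (a sketch; the `part_state` label key is an assumption, so check the `labels` column for the real keys):

```sql
-- 'part_state' is an assumed label key
SELECT labels['part_state'] AS part_state, sum(value) AS parts
FROM system.dimensional_metrics
WHERE metric = 'merge_tree_parts'
GROUP BY part_state
```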
See Also
- system.asynchronous_metrics — Contains periodically calculated metrics.
- system.events — Contains a number of events that occurred.
- system.metric_log — Contains a history of metrics values from tables system.metrics and system.events.
- Monitoring — Base concepts of ClickHouse monitoring. | {"source_file": "dimensional_metrics.md"} | [
0.02766108512878418,
-0.04803554713726044,
-0.05617966130375862,
0.015092547051608562,
-0.02896074950695038,
-0.05061381682753563,
0.013653845526278019,
0.019383534789085388,
-0.004428449552506208,
0.06030993536114693,
-0.0012368048774078488,
-0.07872708141803741,
0.0566040500998497,
-0.05... |
2ae0ab43-e8c0-449e-a06f-65b054ad7807 | slug: /integrations/misc
keywords: ['Retool', 'Easypanel', 'Splunk']
title: 'Tools'
description: 'Landing page for the Tools section'
doc_type: 'landing-page'
Tools
| Page              |
|-------------------|
| Visual Interfaces |
| Proxies           |
| Integrations      | | {"source_file": "index.md"} | [
-0.021481435745954514,
-0.03805594518780708,
-0.025832412764430046,
0.001997864106670022,
0.0016416861908510327,
-0.08029704540967941,
0.008113239891827106,
0.0260403361171484,
-0.04693497717380524,
0.0696302130818367,
0.03758877143263817,
-0.027140775695443153,
0.028067970648407936,
0.015... |
404d828b-c0a9-4571-b25e-fd9356f165c5 | slug: /integrations/tools
keywords: ['Retool', 'Easypanel', 'Splunk']
title: 'Tools'
description: 'Landing page for the tools section'
doc_type: 'landing-page'
Tools
| Page              | Description                                                                                           |
|-------------------|-------------------------------------------------------------------------------------------------------|
| SQL Client        | How to integrate ClickHouse with various common database management, analysis and visualization tools |
| Data Integrations | Data integrations for ClickHouse                                                                      |
| Misc              | Miscellaneous tooling for ClickHouse                                                                  | | {"source_file": "index.md"} | [
-0.0012312554754316807,
-0.04003845900297165,
-0.08917487412691116,
0.044826678931713104,
-0.061329618096351624,
-0.02142094261944294,
0.04137468710541725,
0.01626625470817089,
-0.02729969471693039,
0.043381452560424805,
0.05424023047089577,
-0.06002781540155411,
0.08014103025197983,
-0.02... |
86a99baa-ee17-4179-843b-a0b7f0eb5339 | sidebar_label: 'Metabase'
sidebar_position: 131
slug: /integrations/metabase
keywords: ['Metabase']
description: 'Metabase is an easy-to-use, open source UI tool for asking questions about your data.'
title: 'Connecting Metabase to ClickHouse'
show_related_blogs: true
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_visualization'
- website: 'https://github.com/clickhouse/metabase-clickhouse-driver'
import Image from '@theme/IdealImage';
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import metabase_01 from '@site/static/images/integrations/data-visualization/metabase_01.png';
import metabase_02 from '@site/static/images/integrations/data-visualization/metabase_02.png';
import metabase_03 from '@site/static/images/integrations/data-visualization/metabase_03.png';
import metabase_04 from '@site/static/images/integrations/data-visualization/metabase_04.png';
import metabase_06 from '@site/static/images/integrations/data-visualization/metabase_06.png';
import metabase_07 from '@site/static/images/integrations/data-visualization/metabase_07.png';
import metabase_08 from '@site/static/images/integrations/data-visualization/metabase_08.png';
import PartnerBadge from '@theme/badges/PartnerBadge';
Connecting Metabase to ClickHouse
Metabase is an easy-to-use, open source UI tool for asking questions about your data. Metabase is a Java application that can be run by simply downloading the JAR file and running it with `java -jar metabase.jar`. Metabase connects to ClickHouse using a JDBC driver that you download and put in the `plugins` folder:
Goal {#goal}
In this guide you will ask some questions of your ClickHouse data with Metabase and visualize the answers. One of the answers will look like this:
:::tip Add some data
If you do not have a dataset to work with you can add one of the examples. This guide uses the UK Price Paid dataset, so you might choose that one. There are several others to look at in the same documentation category.
:::
1. Gather your connection details {#1-gather-your-connection-details}
2. Download the ClickHouse plugin for Metabase {#2--download-the-clickhouse-plugin-for-metabase}
If you do not have a `plugins` folder, create one as a subfolder of where you have `metabase.jar` saved.
The plugin is a JAR file named `clickhouse.metabase-driver.jar`. Download the latest version of the JAR file at https://github.com/clickhouse/metabase-clickhouse-driver/releases/latest
Save `clickhouse.metabase-driver.jar` in your `plugins` folder.
Start (or restart) Metabase so that the driver gets loaded properly.
Access Metabase at `http://hostname:3000`. On the initial startup, you will see a welcome screen and have to work your way through a list of questions. If prompted to select a database, select "I'll add my data later":
3. Connect Metabase to ClickHouse {#3--connect-metabase-to-clickhouse} | {"source_file": "metabase-and-clickhouse.md"} | [
0.06482890248298645,
0.006151668261736631,
-0.03228330984711647,
0.010771268047392368,
0.05637713149189949,
0.006074387580156326,
-0.0025528008118271828,
0.09519679844379425,
-0.1104581207036972,
-0.017150798812508583,
0.007867562584578991,
-0.015067349188029766,
0.053328342735767365,
-0.0... |
9c72d0fa-d22e-4ce4-8004-686155a856c9 | 3. Connect Metabase to ClickHouse {#3--connect-metabase-to-clickhouse}
Click on the gear icon in the top-right corner and select Admin Settings to visit your Metabase admin page.
Click on Add a database. Alternatively, you can click on the Databases tab and select the Add database button.
If your driver installation worked, you will see ClickHouse in the dropdown menu for Database type:
Give your database a Display name, which is a Metabase setting, so use any name you like.
Enter the connection details of your ClickHouse database. Enable a secure connection if your ClickHouse server is configured to use SSL. For example:
Click the Save button and Metabase will scan your database for tables.
4. Run a SQL query {#4-run-a-sql-query}
Exit the Admin settings by clicking the Exit admin button in the top-right corner.
In the top-right corner, click the + New menu and notice you can ask questions, run SQL queries, and build a dashboard:
For example, here is a SQL query run on a table named `uk_price_paid` that returns the average price paid by year from 1995 to 2022:
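Such a query might look like the following (a sketch; it assumes the `price` and `date` columns of the UK Price Paid example dataset):

```sql
SELECT
    toYear(date) AS year,
    round(avg(price)) AS avg_price
FROM uk_price_paid
WHERE year BETWEEN 1995 AND 2022
GROUP BY year
ORDER BY year
```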
5. Ask a question {#5-ask-a-question}
Click on + New and select Question. Notice you can build a question by starting with a database and table. For example, the following question is being asked of a table named `uk_price_paid` in the `default` database. Here is a simple question that calculates the average price by town, within the county of Greater Manchester:
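The SQL equivalent of this question would be roughly (a sketch; it assumes the `town`, `county`, and `price` columns of the UK Price Paid dataset, with county values stored in upper case):

```sql
SELECT town, round(avg(price)) AS avg_price
FROM uk_price_paid
WHERE county = 'GREATER MANCHESTER'
GROUP BY town
ORDER BY avg_price DESC
```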
Click the Visualize button to see the results in a tabular view.
Below the results, click the Visualization button to change the visualization to a bar chart (or any of the other options available):
Learn more {#learn-more}
Find more information about Metabase and how to build dashboards by visiting the Metabase documentation. | {"source_file": "metabase-and-clickhouse.md"} | [
0.09300848841667175,
-0.07134497165679932,
-0.07284160703420639,
0.013673460111021996,
-0.0744812935590744,
0.009312557056546211,
-0.005465359427034855,
0.04026417434215546,
-0.09462040662765503,
-0.051620736718177795,
-0.04163496568799019,
-0.05656450241804123,
0.05377738177776337,
0.0200... |
bb27c98a-6714-4d03-a1f9-53cb991b3f3d | sidebar_label: 'Lightdash'
sidebar_position: 131
slug: /integrations/lightdash
keywords: ['clickhouse', 'lightdash', 'data visualization', 'BI', 'semantic layer', 'dbt', 'self-serve analytics', 'connect']
description: 'Lightdash is a modern open-source BI tool built on top of dbt, enabling teams to explore and visualize data from ClickHouse through a semantic layer. Learn how to connect Lightdash to ClickHouse for fast, governed analytics powered by dbt.'
title: 'Connecting Lightdash to ClickHouse'
doc_type: 'guide'
integration:
- support_level: 'partner'
- category: 'data_visualization'
import lightdash_01 from '@site/static/images/integrations/data-visualization/lightdash_01.png';
import lightdash_02 from '@site/static/images/integrations/data-visualization/lightdash_02.png';
import lightdash_03 from '@site/static/images/integrations/data-visualization/lightdash_03.png';
import lightdash_04 from '@site/static/images/integrations/data-visualization/lightdash_04.png';
import lightdash_05 from '@site/static/images/integrations/data-visualization/lightdash_05.png';
import lightdash_06 from '@site/static/images/integrations/data-visualization/lightdash_06.png';
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import Image from '@theme/IdealImage';
import PartnerBadge from '@theme/badges/PartnerBadge';
Lightdash
Lightdash is the AI-first BI platform built for modern data teams, combining the openness of dbt with the performance of ClickHouse. By connecting ClickHouse to Lightdash, teams get an AI-powered self-serve analytics experience grounded in their dbt semantic layer, so every question is answered with governed, consistent metrics.
Developers love Lightdash for its open architecture, version-controlled YAML models, and integrations that fit directly into their workflow, from GitHub to the IDE.
This partnership brings together ClickHouse's speed and Lightdash's developer experience, making it easier than ever to explore, visualize, and automate insights with AI.
Build an interactive dashboard with Lightdash and ClickHouse {#build-an-interactive-dashboard}
In this guide, you'll see how Lightdash connects to ClickHouse to explore your dbt models and build interactive dashboards.
The example below shows a finished dashboard powered by data from ClickHouse.
Gather connection data {#connection-data-required}
When setting up your connection between Lightdash and ClickHouse, you'll need the following details:
- Host: The address where your ClickHouse database is running
- User: Your ClickHouse database username
- Password: Your ClickHouse database password
- DB name: The name of your ClickHouse database
- Schema: The default schema used by dbt to compile and run your project (found in your `profiles.yml`)
- Port: The ClickHouse HTTPS interface port (default: `8443`)
- Secure: Enable this option to use HTTPS/SSL for secure connections
-0.00340197142213583,
-0.04800896346569061,
-0.07071314752101898,
0.058701369911432266,
0.012098339386284351,
-0.07899823039770126,
0.020644834265112877,
-0.00577460927888751,
-0.07298555225133896,
0.004756249953061342,
0.07515169680118561,
-0.03895784914493561,
0.016437554731965065,
-0.00... |
e3e70a89-7a26-48ec-a6bc-4500b27d6831 | - Port: The ClickHouse HTTPS interface port (default: `8443`)
- Secure: Enable this option to use HTTPS/SSL for secure connections
- Retries: Number of times Lightdash retries failed ClickHouse queries (default: `3`)
- Start of week: Choose which day your reporting week starts; defaults to your warehouse setting
Configure your dbt profile for ClickHouse {#configuring-your-dbt-profile-for-clickhouse}
In Lightdash, connections are based on your existing dbt project.
To connect ClickHouse, make sure your local `~/.dbt/profiles.yml` file contains a valid ClickHouse target configuration.
For example:
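A minimal ClickHouse target for the dbt-clickhouse adapter looks roughly like this (a sketch; the profile name and credential values are placeholders to replace with your own):

```yaml
lightdash_project:
  target: prod
  outputs:
    prod:
      type: clickhouse
      host: your-clickhouse-host   # placeholder hostname
      port: 8443
      user: default
      password: your-password       # placeholder credential
      schema: default
      secure: true
```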
Create a Lightdash project connected to ClickHouse {#creating-a-lightdash-project-connected-to-clickhouse}
Once your dbt profile is configured for ClickHouse, you'll also need to connect your dbt project to Lightdash.
Because this process is the same for all data warehouses, we won't go into detail here; you can follow the official Lightdash guide for importing a dbt project: Import a dbt project — Lightdash Docs
After connecting your dbt project, Lightdash will automatically detect your ClickHouse configuration from the `profiles.yml` file. Once the connection test succeeds, you'll be able to start exploring your dbt models and building dashboards powered by ClickHouse.
Explore your ClickHouse data in Lightdash {#exploring-your-clickhouse-data-in-lightdash}
Once connected, Lightdash automatically syncs your dbt models and exposes:
- Dimensions and measures defined in YAML
- Semantic layer logic, such as metrics, joins, and explores
- Dashboards powered by real-time ClickHouse queries
You can now build dashboards, share insights, and even use Ask AI to generate visualizations directly on top of ClickHouse, with no manual SQL required.
Define metrics and dimensions in Lightdash {#defining-metrics-and-dimensions-in-lightdash}
In Lightdash, all metrics and dimensions are defined directly in your dbt model `.yml` files. This makes your business logic version-controlled, consistent, and fully transparent.
Defining these in YAML ensures your team is using the same definitions across dashboards and analyses. For example, you can create reusable metrics like `total_order_count`, `total_revenue`, or `avg_order_value` right next to your dbt models, with no duplication required in the UI.
To learn more about how to define these, see the following Lightdash guides:
- How to create metrics
- How to create dimensions
Query your data from tables {#querying-your-data-from-tables}
Once your dbt project is connected and synced with Lightdash, you can start exploring data directly from your tables (or "explores").
Each table represents a dbt model and includes the metrics and dimensions you've defined in YAML.
The Explore page is made up of five main areas:
- Dimensions and Metrics — all fields available on the selected table
-0.0011173330713063478,
-0.06071114167571068,
-0.08159726113080978,
0.03237038105726242,
-0.03493169695138931,
-0.052656929939985275,
-0.06222910061478615,
-0.016747716814279556,
-0.056574415415525436,
-0.023982945829629898,
0.06318601965904236,
-0.04688729718327522,
0.0289456807076931,
0.... |
0dc5c4aa-b114-4335-8380-ddfc1db3a9db | The Explore page is made up of five main areas:
- Dimensions and Metrics — all fields available on the selected table
- Filters — restrict the data returned by your query
- Chart — visualize your query results
- Results — view the raw data returned from your ClickHouse database
- SQL — inspect the generated SQL query behind your results
From here, you can build and adjust queries interactively: dragging and dropping fields, adding filters, and switching between visualization types such as tables, bar charts, or time series.
For a deeper look at explores and how to query from your tables, see: An intro to tables and the Explore page — Lightdash Docs
Build dashboards {#building-dashboards}
Once youβve explored your data and saved visualizations, you can combine them into
dashboards
to share with your team.
Dashboards in Lightdash are fully interactive β you can apply filters, add tabs, and view charts powered by real-time ClickHouse queries.
You can also create new charts
directly from within a dashboard
, which helps keep your projects organized and clutter-free. Charts created this way are
exclusive to that dashboard
β they canβt be reused elsewhere in the project.
To create a dashboard-only chart:
1. Click
Add tile
2. Select
New chart
3. Build your visualization in the chart builder
4. Save it - it will appear at the bottom of your dashboard
Learn more about how to create and organize dashboards here:
Building dashboards - Lightdash Docs
Ask AI: self-serve analytics powered by dbt {#ask-ai}
AI Agents
in Lightdash make data exploration truly self-serve.
Instead of writing queries, users can simply ask questions in plain language - like
"What was our monthly revenue growth?"
- and the AI Agent automatically generates the right visualization, referencing your dbt-defined metrics and models to ensure accuracy and consistency.
It's powered by the same semantic layer you use in dbt, meaning every answer stays governed, explainable, and fast - all backed by ClickHouse.
:::tip
Learn more about AI Agents here:
AI Agents - Lightdash Docs
:::
Learn more {#learn-more}
To learn more about connecting dbt projects to Lightdash, visit the
Lightdash Docs - ClickHouse setup
.
sidebar_label: 'QuickSight'
slug: /integrations/quicksight
keywords: ['clickhouse', 'aws', 'amazon', 'QuickSight', 'mysql', 'connect', 'integrate', 'ui']
description: 'Amazon QuickSight powers data-driven organizations with unified business intelligence (BI).'
title: 'QuickSight'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_visualization'
import MySQLOnPremiseSetup from '@site/docs/_snippets/_clickhouse_mysql_on_premise_setup.mdx';
import Image from '@theme/IdealImage';
import quicksight_01 from '@site/static/images/integrations/data-visualization/quicksight_01.png';
import quicksight_02 from '@site/static/images/integrations/data-visualization/quicksight_02.png';
import quicksight_03 from '@site/static/images/integrations/data-visualization/quicksight_03.png';
import quicksight_04 from '@site/static/images/integrations/data-visualization/quicksight_04.png';
import quicksight_05 from '@site/static/images/integrations/data-visualization/quicksight_05.png';
import quicksight_06 from '@site/static/images/integrations/data-visualization/quicksight_06.png';
import quicksight_07 from '@site/static/images/integrations/data-visualization/quicksight_07.png';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
QuickSight
QuickSight can connect to an on-premise ClickHouse setup (23.11+) via the MySQL interface, using the official MySQL data source and Direct Query mode.
On-premise ClickHouse server setup {#on-premise-clickhouse-server-setup}
Please refer to
the official documentation
on how to set up a ClickHouse server with enabled MySQL interface.
Aside from adding an entry to the server's
config.xml
xml
<clickhouse>
<mysql_port>9004</mysql_port>
</clickhouse>
it is also
required
to use
Double SHA1 password encryption
for the user that will be using the MySQL interface.
Generating a random password encrypted with Double SHA1 from the shell:
shell
PASSWORD=$(base64 < /dev/urandom | head -c16); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
The output should look like the following:
text
LZOQYnqQN4L/T6L0
fbc958cc745a82188a51f30de69eebfc67c40ee4
The first line is the generated password, and the second line is the hash we could use to configure ClickHouse.
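The same Double SHA1 hash can be reproduced in a few lines of Python, which is handy for checking the shell pipeline above. This is a sketch using only the standard library; the sample password is the one generated above:

```python
import hashlib

def double_sha1_hex(password: str) -> str:
    # ClickHouse's password_double_sha1_hex is sha1(sha1(password)), hex-encoded,
    # matching the sha1sum | xxd -r -p | sha1sum pipeline above.
    inner = hashlib.sha1(password.encode("utf-8")).digest()
    return hashlib.sha1(inner).hexdigest()

print(double_sha1_hex("LZOQYnqQN4L/T6L0"))
```

The printed value should match the hash on the second line of the shell output for the same password.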
Here is an example configuration for
mysql_user
that uses the generated hash:
/etc/clickhouse-server/users.d/mysql_user.xml
xml
<users>
<mysql_user>
<password_double_sha1_hex>fbc958cc745a82188a51f30de69eebfc67c40ee4</password_double_sha1_hex>
<networks>
<ip>::/0</ip>
</networks>
<profile>default</profile>
<quota>default</quota>
</mysql_user>
</users>
Replace
password_double_sha1_hex
entry with your own generated Double SHA1 hash.
QuickSight requires several additional settings in the MySQL user's profile.
/etc/clickhouse-server/users.d/mysql_user.xml
xml
<profiles>
<default>
<prefer_column_name_to_alias>1</prefer_column_name_to_alias>
<mysql_map_string_to_text_in_show_columns>1</mysql_map_string_to_text_in_show_columns>
<mysql_map_fixed_string_to_text_in_show_columns>1</mysql_map_fixed_string_to_text_in_show_columns>
</default>
</profiles>
However, it is recommended to apply these settings to a separate profile assigned to your MySQL user, rather than modifying the default one.
Finally, configure the ClickHouse server to listen on the desired IP address(es).
In
config.xml
, uncomment the following to listen on all addresses:
xml
<listen_host>::</listen_host>
If you have the
mysql
binary available, you can test the connection from the command line.
Using the sample username (
mysql_user
) and password (
LZOQYnqQN4L/T6L0
) from above, the command would be:
bash
mysql --protocol tcp -h localhost -u mysql_user -P 9004 --password=LZOQYnqQN4L/T6L0
response
mysql> show databases;
+--------------------+
| name |
+--------------------+
| INFORMATION_SCHEMA |
| default |
| information_schema |
| system |
+--------------------+
4 rows in set (0.00 sec)
Read 4 rows, 603.00 B in 0.00156 sec., 2564 rows/sec., 377.48 KiB/sec.
Connecting QuickSight to ClickHouse {#connecting-quicksight-to-clickhouse}
First of all, go to
https://quicksight.aws.amazon.com
, navigate to Datasets and click "New dataset":
Search for the official MySQL connector bundled with QuickSight (named just
MySQL
):
Specify your connection details. Please note that MySQL interface port is 9004 by default,
and it might be different depending on your server configuration.
Now, you have two options on how to fetch the data from ClickHouse. First, you could select a table from the list:
Alternatively, you could specify a custom SQL to fetch your data:
By clicking "Edit/Preview data", you should be able to see the introspected table structure or adjust your custom SQL, if that's how you decided to access the data:
Make sure you have "Direct Query" mode selected in the bottom left corner of the UI:
Now you can proceed with publishing your dataset and creating a new visualization!
Known limitations {#known-limitations}
SPICE import doesn't work as expected; please use Direct Query mode instead. See
#58553
.
sidebar_label: 'Superset'
sidebar_position: 198
slug: /integrations/superset
keywords: ['superset']
description: 'Apache Superset is an open-source data exploration and visualization platform.'
title: 'Connect Superset to ClickHouse'
show_related_blogs: true
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_visualization'
- website: 'https://github.com/ClickHouse/clickhouse-connect'
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import Image from '@theme/IdealImage';
import superset_01 from '@site/static/images/integrations/data-visualization/superset_01.png';
import superset_02 from '@site/static/images/integrations/data-visualization/superset_02.png';
import superset_03 from '@site/static/images/integrations/data-visualization/superset_03.png';
import superset_04 from '@site/static/images/integrations/data-visualization/superset_04.png';
import superset_05 from '@site/static/images/integrations/data-visualization/superset_05.png';
import superset_06 from '@site/static/images/integrations/data-visualization/superset_06.png';
import superset_08 from '@site/static/images/integrations/data-visualization/superset_08.png';
import superset_09 from '@site/static/images/integrations/data-visualization/superset_09.png';
import superset_10 from '@site/static/images/integrations/data-visualization/superset_10.png';
import superset_11 from '@site/static/images/integrations/data-visualization/superset_11.png';
import superset_12 from '@site/static/images/integrations/data-visualization/superset_12.png';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Connect Superset to ClickHouse
Apache Superset
is an open-source data exploration and visualization platform written in Python. Superset connects to ClickHouse using a Python driver provided by ClickHouse. Let's see how it works...
Goal {#goal}
In this guide you will build a dashboard in Superset with data from a ClickHouse database. The dashboard will look like this:
:::tip Add some data
If you do not have a dataset to work with you can add one of the examples. This guide uses the
UK Price Paid
dataset, so you might choose that one. There are several others to look at in the same documentation category.
:::
1. Gather your connection details {#1-gather-your-connection-details}
2. Install the Driver {#2-install-the-driver}
Superset uses the
clickhouse-connect
driver to connect to ClickHouse. The details of
clickhouse-connect
are at
https://pypi.org/project/clickhouse-connect/
and it can be installed with the following command:
console
pip install clickhouse-connect
Start (or restart) Superset.
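Under the hood, Superset addresses databases by SQLAlchemy URI, and clickhouse-connect registers a dialect for this purpose. Below is a sketch of how such a URI could be assembled from the connection details gathered earlier; the `clickhousedb` scheme name, hostname, and credentials here are assumptions to verify against the clickhouse-connect documentation:

```python
from urllib.parse import quote_plus

def clickhouse_uri(user: str, password: str, host: str,
                   port: int = 8443, database: str = "default") -> str:
    # Build a SQLAlchemy-style URI; quote the password in case it
    # contains URL-special characters such as '@' or '/'.
    return f"clickhousedb://{user}:{quote_plus(password)}@{host}:{port}/{database}"

print(clickhouse_uri("default", "p@ss/word", "my-host.clickhouse.cloud"))
# -> clickhousedb://default:p%40ss%2Fword@my-host.clickhouse.cloud:8443/default
```

A URI in this shape can be pasted into Superset's database connection dialog when configuring the connection manually instead of via the form fields.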
3. Connect Superset to ClickHouse {#3-connect-superset-to-clickhouse}
Within Superset, select
Data
from the top menu and then
Databases
from the drop-down menu. Add a new database by clicking the
+ Database
button:
In the first step, select
ClickHouse Connect
as the type of database:
In the second step:
Set SSL on or off.
Enter the connection information that you collected earlier
Specify the
DISPLAY NAME
: this can be any name you prefer. If you will be connecting to multiple ClickHouse databases then make the name more descriptive.
Click the
CONNECT
and then
FINISH
buttons to complete the setup wizard, and you should see your database in the list of databases.
4. Add a Dataset {#4-add-a-dataset}
To interact with your ClickHouse data with Superset, you need to define a
dataset
. From the top menu in Superset, select
Data
, then
Datasets
from the drop-down menu.
Click the button for adding a dataset. Select your new database as the datasource and you should see the tables defined in your database:
Click the
ADD
button at the bottom of the dialog window and your table appears in the list of datasets. You are ready to build a dashboard and analyze your ClickHouse data!
5. Creating charts and a dashboard in Superset {#5--creating-charts-and-a-dashboard-in-superset}
If you are familiar with Superset, then you will feel right at home with this next section. If you are new to Superset, well...it's like a lot of the other cool visualization tools out there in the world - it doesn't take long to get started, but the details and nuances get learned over time as you use the tool.
You start with a dashboard. From the top menu in Superset, select
Dashboards
. Click the button in the upper-right to add a new dashboard. The following dashboard is named
UK property prices
:
To create a new chart, select
Charts
from the top menu and click the button to add a new chart. You will be shown a lot of options. The following example shows a
Pie Chart
chart using the
uk_price_paid
dataset from the
CHOOSE A DATASET
drop-down:
Superset pie charts need a
Dimension
and a
Metric
, the rest of the settings are optional. You can pick your own fields for the dimension and metric; this example uses the ClickHouse field
district
as the dimension and
AVG(price)
as the metric.
If you prefer doughnut charts over pie, then you can set that and other options under
CUSTOMIZE
:
Click the
SAVE
button to save the chart, then select
UK property prices
under the
ADD TO DASHBOARD
drop-down, then
SAVE & GO TO DASHBOARD
saves the chart and adds it to the dashboard:
That's it. Building dashboards in Superset based on data in ClickHouse opens up a whole world of blazing fast data analytics!
sidebar_label: 'Looker Studio'
slug: /integrations/lookerstudio
keywords: ['clickhouse', 'looker', 'studio', 'connect', 'mysql', 'integrate', 'ui']
description: 'Looker Studio, formerly Google Data Studio, is an online tool for converting data into customizable informative reports and dashboards.'
title: 'Looker Studio'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_visualization'
import Image from '@theme/IdealImage';
import MySQLCloudSetup from '@site/docs/_snippets/_clickhouse_mysql_cloud_setup.mdx';
import MySQLOnPremiseSetup from '@site/docs/_snippets/_clickhouse_mysql_on_premise_setup.mdx';
import looker_studio_01 from '@site/static/images/integrations/data-visualization/looker_studio_01.png';
import looker_studio_02 from '@site/static/images/integrations/data-visualization/looker_studio_02.png';
import looker_studio_03 from '@site/static/images/integrations/data-visualization/looker_studio_03.png';
import looker_studio_04 from '@site/static/images/integrations/data-visualization/looker_studio_04.png';
import looker_studio_05 from '@site/static/images/integrations/data-visualization/looker_studio_05.png';
import looker_studio_06 from '@site/static/images/integrations/data-visualization/looker_studio_06.png';
import looker_studio_enable_mysql from '@site/static/images/integrations/data-visualization/looker_studio_enable_mysql.png';
import looker_studio_mysql_cloud from '@site/static/images/integrations/data-visualization/looker_studio_mysql_cloud.png';
import PartnerBadge from '@theme/badges/PartnerBadge';
Looker Studio
Looker Studio can connect to ClickHouse via the MySQL interface using the official Google MySQL data source.
ClickHouse Cloud setup {#clickhouse-cloud-setup}
On-premise ClickHouse server setup {#on-premise-clickhouse-server-setup}
Connecting Looker Studio to ClickHouse {#connecting-looker-studio-to-clickhouse}
First, log in to https://lookerstudio.google.com using your Google account and create a new Data Source:
Search for the official MySQL connector provided by Google (named just
MySQL
):
Specify your connection details. Please note that MySQL interface port is 9004 by default,
and it might be different depending on your server configuration.
Now, you have two options on how to fetch the data from ClickHouse. First, you could use the Table Browser feature:
Alternatively, you could specify a custom query to fetch your data:
Finally, you should be able to see the introspected table structure and adjust the data types if necessary.
Now you can proceed with exploring your data or creating a new report!
Using Looker Studio with ClickHouse Cloud {#using-looker-studio-with-clickhouse-cloud}
When using ClickHouse Cloud, you need to enable the MySQL interface first. You can do that in the connection dialog, "MySQL" tab.
In the Looker Studio UI, choose the "Enable SSL" option. ClickHouse Cloud's SSL certificate is signed by
Let's Encrypt
. You can download this root cert
here
.
The rest of the steps are the same as listed above in the previous section.
sidebar_label: 'Splunk'
sidebar_position: 198
slug: /integrations/splunk
keywords: ['Splunk', 'integration', 'data visualization']
description: 'Connect Splunk dashboards to ClickHouse'
title: 'Connecting Splunk to ClickHouse'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import splunk_1 from '@site/static/images/integrations/splunk/splunk-1.png';
import splunk_2 from '@site/static/images/integrations/splunk/splunk-2.png';
import splunk_3 from '@site/static/images/integrations/splunk/splunk-3.png';
import splunk_4 from '@site/static/images/integrations/splunk/splunk-4.png';
import splunk_5 from '@site/static/images/integrations/splunk/splunk-5.png';
import splunk_6 from '@site/static/images/integrations/splunk/splunk-6.png';
import splunk_7 from '@site/static/images/integrations/splunk/splunk-7.png';
import splunk_8 from '@site/static/images/integrations/splunk/splunk-8.png';
import splunk_9 from '@site/static/images/integrations/splunk/splunk-9.png';
import splunk_10 from '@site/static/images/integrations/splunk/splunk-10.png';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Connecting Splunk to ClickHouse
:::tip
Looking to store ClickHouse audit logs to Splunk? Follow the
"Storing ClickHouse Cloud Audit logs into Splunk"
guide.
:::
Splunk is a popular technology for security and observability. It is also a powerful search and dashboarding engine. There are hundreds of Splunk apps available to address different use cases.
For ClickHouse specifically, we are leveraging the
Splunk DB Connect App
which has a simple integration to the highly performant ClickHouse JDBC driver to query tables in ClickHouse directly.
The ideal use case for this integration is when you are using ClickHouse for large data sources such as NetFlow, Avro or Protobuf binary data, DNS, VPC flow logs, and other OTEL logs that can be shared with your team on Splunk to search and create dashboards. By using this approach, the data is not ingested into the Splunk index layer and is simply queried directly from ClickHouse similarly to other visualization integrations such as
Metabase
or
Superset
.
Goal {#goal}
In this guide, we will use the ClickHouse JDBC driver to connect ClickHouse to Splunk. We will install a local version of Splunk Enterprise but we are not indexing any data. Instead, we are using the search functions through the DB Connect query engine.
With this guide, you will be able to create a dashboard connected to ClickHouse similar to this:
:::note
This guide uses the
New York City Taxi dataset
. There are many other datasets that you can use from
our docs
.
:::
Prerequisites {#prerequisites}
Before you get started you will need:
- Splunk Enterprise to use search head functions
-
Java Runtime Environment (JRE)
requirements installed on your OS or container
-
Splunk DB Connect
- Admin or SSH access to your Splunk Enterprise OS Instance
- ClickHouse connection details (see
here
if you're using ClickHouse Cloud)
Install and configure DB Connect on Splunk Enterprise {#install-and-configure-db-connect-on-splunk-enterprise}
You must first install the Java Runtime Environment on your Splunk Enterprise instance. If you're using Docker, you can use the command
microdnf install java-11-openjdk
.
Note down the
java_home
path:
java -XshowSettings:properties -version
.
Ensure that the DB Connect App is installed on Splunk Enterprise. You can find it in the Apps section of the Splunk Web UI:
- Log in to Splunk Web and go to Apps > Find More Apps
- Use the search box to find DB Connect
- Click the green "Install" button next to Splunk DB Connect
- Click "Restart Splunk"
If you're having issues installing the DB Connect App, please see
this link
for additional instructions.
Once you've verified that the DB Connect App is installed, add the java_home path to the DB Connect App in Configuration -> Settings, and click save then reset.
Configure JDBC for ClickHouse {#configure-jdbc-for-clickhouse}
Download the
ClickHouse JDBC driver
to the DB Connect Drivers folder such as:
bash
$SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers
You must then edit the connection types configuration at
$SPLUNK_HOME/etc/apps/splunk_app_db_connect/default/db_connection_types.conf
to add the ClickHouse JDBC Driver class details.
Add the following stanza to the file:
text
[ClickHouse]
displayName = ClickHouse
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcUrlFormat = jdbc:ch://<host>:<port>/<database>
jdbcUrlSSLFormat = jdbc:ch://<host>:<port>/<database>?ssl=true
jdbcDriverClass = com.clickhouse.jdbc.ClickHouseDriver
ui_default_catalog = $database$
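The `<host>`, `<port>`, and `<database>` placeholders in the two URL formats are filled in from the connection you define later in DB Connect. As a quick sketch of what the resulting JDBC URLs look like (the hostname below is a placeholder):

```python
def jdbc_url(host: str, port: int, database: str, ssl: bool = False) -> str:
    # Mirrors jdbcUrlFormat / jdbcUrlSSLFormat from the stanza above.
    url = f"jdbc:ch://{host}:{port}/{database}"
    return url + "?ssl=true" if ssl else url

print(jdbc_url("my-host.clickhouse.cloud", 8443, "default", ssl=True))
# -> jdbc:ch://my-host.clickhouse.cloud:8443/default?ssl=true
```

For ClickHouse Cloud, SSL is required, so the `?ssl=true` form applies.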
Restart Splunk using
$SPLUNK_HOME/bin/splunk restart
.
Navigate back to the DB Connect App and go to Configuration > Settings > Drivers. You should see a green tick next to ClickHouse:
Connect Splunk search to ClickHouse {#connect-splunk-search-to-clickhouse}
Navigate to DB Connect App Configuration -> Databases -> Identities and create an Identity for your ClickHouse instance.
Create a new Connection to ClickHouse from Configuration -> Databases -> Connections and select "New Connection".
Add ClickHouse host details and ensure "Enable SSL" is ticked:
After saving the connection, you will have successfully connected Splunk to ClickHouse!
:::note
If you receive an error, make sure that you have added the IP address of your Splunk instance to the ClickHouse Cloud IP Access List. See
the docs
for more info.
:::
Run a SQL query {#run-a-sql-query}
We will now run a SQL query to test that everything works.
Select your connection details in the SQL Explorer from the DataLab section of the DB Connect App. We are using the
trips
table for this demo:
Execute a SQL query on the
trips
table that returns the count of all the records in the table:
If your query is successful, you should see the results.
Create a dashboard {#create-a-dashboard}
Let's create a dashboard that leverages a combination of SQL plus the powerful Splunk Processing Language (SPL).
Before proceeding, you must first
Deactivate SPL Safeguards
.
Run the following query that shows us the top 10 neighborhoods that have the most frequent pickups:
sql
dbxquery query="SELECT pickup_ntaname, count(*) AS count
FROM default.trips GROUP BY pickup_ntaname
ORDER BY count DESC LIMIT 10;" connection="chc"
Select the visualization tab to view the column chart created:
We will now create a dashboard by clicking Save As > Save to a Dashboard.
Let's add another query that shows the average fare based on the number of passengers.
sql
dbxquery query="SELECT passenger_count,avg(total_amount)
FROM default.trips GROUP BY passenger_count;" connection="chc"
This time, let's create a bar chart visualization and save it to the previous dashboard.
Finally, let's add one more query that shows the correlation between the number of passengers and the distance of the trip:
sql
dbxquery query="SELECT passenger_count, toYear(pickup_datetime) AS year,
round(trip_distance) AS distance, count(*)
FROM default.trips
GROUP BY passenger_count, year, distance
ORDER BY year, count(*) DESC;" connection="chc"
Our final dashboard should look like this:
Time series data {#time-series-data}
Splunk has hundreds of built-in functions that dashboards can use for visualization and presentation of time series data. This example combines SQL and SPL to create a query that can work with time series data in Splunk.
sql
dbxquery query="SELECT time, orig_h, duration
FROM demo.conn WHERE time >= now() - INTERVAL 1 HOUR" connection="chc"
| eval time = strptime(time, "%Y-%m-%d %H:%M:%S.%3Q")
| eval _time=time
| timechart avg(duration) as duration by orig_h
| eval duration=round(duration/60)
| sort - duration
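The `strptime` step in the SPL pipeline parses ClickHouse's millisecond timestamps into epoch time (`%3Q` is Splunk's millisecond directive). For reference, the equivalent parse in Python, where `%f` plays the role of `%3Q`:

```python
from datetime import datetime

# Parse a ClickHouse-style timestamp string with millisecond precision,
# as the SPL strptime(...) step does before assigning it to _time.
ts = datetime.strptime("2024-01-01 12:00:00.123", "%Y-%m-%d %H:%M:%S.%f")
print(ts.timestamp())  # epoch seconds, analogous to Splunk's _time
```

The timestamp format string itself ("%Y-%m-%d %H:%M:%S" plus the fractional part) is shared between the two; only the sub-second directive differs.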
Learn more {#learn-more}
If you'd like to find more information about Splunk DB Connect and how to build dashboards, please visit the
Splunk documentation
.
sidebar_label: 'Power BI'
slug: /integrations/powerbi
keywords: ['clickhouse', 'Power BI', 'connect', 'integrate', 'ui']
description: 'Microsoft Power BI is an interactive data visualization software product developed by Microsoft with a primary focus on business intelligence.'
title: 'Power BI'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_visualization'
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import Image from '@theme/IdealImage';
import powerbi_odbc_install from '@site/static/images/integrations/data-visualization/powerbi_odbc_install.png';
import powerbi_odbc_search from '@site/static/images/integrations/data-visualization/powerbi_odbc_search.png';
import powerbi_odbc_verify from '@site/static/images/integrations/data-visualization/powerbi_odbc_verify.png';
import powerbi_get_data from '@site/static/images/integrations/data-visualization/powerbi_get_data.png';
import powerbi_search_clickhouse from '@site/static/images/integrations/data-visualization/powerbi_search_clickhouse.png';
import powerbi_connect_db from '@site/static/images/integrations/data-visualization/powerbi_connect_db.png';
import powerbi_connect_user from '@site/static/images/integrations/data-visualization/powerbi_connect_user.png';
import powerbi_table_navigation from '@site/static/images/integrations/data-visualization/powerbi_table_navigation.png';
import powerbi_add_dsn from '@site/static/images/integrations/data-visualization/powerbi_add_dsn.png';
import powerbi_select_unicode from '@site/static/images/integrations/data-visualization/powerbi_select_unicode.png';
import powerbi_connection_details from '@site/static/images/integrations/data-visualization/powerbi_connection_details.png';
import powerbi_select_odbc from '@site/static/images/integrations/data-visualization/powerbi_select_odbc.png';
import powerbi_select_dsn from '@site/static/images/integrations/data-visualization/powerbi_select_dsn.png';
import powerbi_dsn_credentials from '@site/static/images/integrations/data-visualization/powerbi_dsn_credentials.png';
import powerbi_16 from '@site/static/images/integrations/data-visualization/powerbi_16.png';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Power BI
Microsoft Power BI can query or load into memory data from
ClickHouse Cloud
or a self-managed deployment.
There are several flavours of Power BI that you can use to visualise your data:
Power BI Desktop: A Windows desktop application for creating Dashboards and Visualisations
Power BI Service: Available within Azure as a SaaS to host the Dashboards created on Power BI Desktop
Power BI requires you to create your dashboards within the Desktop version and publish them to Power BI Service.
This tutorial will guide you through the process of:
Installing the ClickHouse ODBC Driver
Installing the ClickHouse Power BI Connector into Power BI Desktop
Querying data from ClickHouse for visualization in Power BI Desktop
Setting up an on-premise data gateway for Power BI Service
Prerequisites {#prerequisites}
Power BI Installation {#power-bi-installation}
This tutorial assumes you have Microsoft Power BI Desktop installed on your Windows machine. You can download and install Power BI Desktop
here
We recommend updating to the latest version of Power BI. The ClickHouse Connector is available by default from version
2.137.751.0
.
Gather your ClickHouse connection details {#gather-your-clickhouse-connection-details}
You'll need the following details for connecting to your ClickHouse instance:
Hostname - The hostname or address of your ClickHouse instance
Username - User credentials
Password - Password of the user
Database - Name of the database on the instance you want to connect to
Power BI desktop {#power-bi-desktop}
To get started with querying data in Power BI Desktop, you'll need to complete the following steps:
Install the ClickHouse ODBC Driver
Find the ClickHouse Connector
Connect to ClickHouse
Query and Visualize your data
Install the ODBC Driver {#install-the-odbc-driver}
Download the most recent
ClickHouse ODBC release
.
Execute the supplied
.msi
installer and follow the wizard.
:::note
Debug symbols
are optional and not required
:::
Verify ODBC driver {#verify-odbc-driver}
When the driver installation is completed, you can verify the installation was successful by:
Searching for ODBC in the Start menu and selecting "ODBC Data Sources
(64-bit)
".
Verifying that the ClickHouse Driver is listed.
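For scripted checks, the installed driver can also be exercised from Python via pyodbc with a DSN-less connection string. The sketch below only builds the string; the driver name and host are assumptions, so use the exact name shown in your ODBC Data Sources list:

```python
def odbc_conn_str(host: str, port: int, user: str, password: str,
                  database: str = "default") -> str:
    # Driver name assumed from the ClickHouse ODBC installer defaults;
    # adjust it to match your ODBC Data Sources (64-bit) listing.
    return (
        "Driver={ClickHouse ODBC Driver (Unicode)};"
        f"Server={host};Port={port};Database={database};"
        f"UID={user};PWD={password}"
    )

print(odbc_conn_str("my-host.clickhouse.cloud", 8443, "default", "secret"))
```

A string in this shape could then be passed to `pyodbc.connect(...)` on a machine where the driver is installed.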
Find the ClickHouse Connector {#find-the-clickhouse-connector}
:::note
Available in version
2.137.751.0
of Power BI Desktop
:::
On the Power BI Desktop start screen, click "Get Data".
Search for "ClickHouse"
Connect to ClickHouse {#connect-to-clickhouse}
Select the connector, and enter in the ClickHouse instance credentials:
Host (required) - Your instance domain/address. Make sure to add it with no prefixes/suffixes.
Port (required) - Your instance port.
Database - Your database name.
Options - Any ODBC option, as listed on the ClickHouse ODBC GitHub Page
Data Connectivity mode - DirectQuery
:::note
We advise selecting DirectQuery for querying ClickHouse directly.
If your use case involves only a small amount of data, you can choose Import mode instead, and the entire dataset will be loaded into Power BI.
:::
Specify username and password
Query and Visualise Data {#query-and-visualise-data}
Finally, you should see the databases and tables in the Navigator view. Select the desired table and click "Load" to
import the data from ClickHouse.
Once the import is complete, your ClickHouse Data should be accessible in Power BI as usual. | {"source_file": "powerbi-and-clickhouse.md"} | [
… |
e5c7d8e9-9a74-4ab9-9fa5-7e37f6737d12 | Once the import is complete, your ClickHouse Data should be accessible in Power BI as usual.
Power BI service {#power-bi-service}
In order to use Microsoft Power BI Service, you need to create an
on-premise data gateway
.
For more details on how to setup custom connectors, please refer to Microsoft's documentation on how to
use custom data connectors with an on-premises data gateway
.
ODBC driver (import only) {#odbc-driver-import-only}
We recommend using the ClickHouse Connector that uses DirectQuery.
Install the
ODBC Driver
onto the on-premise data gateway instance and
verify
as outlined above.
Create a new User DSN {#create-a-new-user-dsn}
When the driver installation is complete, an ODBC data source can be created. Search for ODBC in the Start menu and select "ODBC Data Sources (64-bit)".
We need to add a new User DSN here. Click "Add" button on the left.
Choose the Unicode version of the ODBC driver.
Fill in the connection details.
:::note
If you are using a deployment that has SSL enabled (e.g. ClickHouse Cloud or a self-managed instance), in the
SSLMode
field you should supply
require
.
Host
should always have the protocol (i.e.
http://
or
https://
) omitted.
Timeout
is an integer representing seconds. Default value:
30 seconds
.
:::
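For reference, the same DSN fields can be assembled into an ODBC-style connection string. This is only a sketch — the driver name, host, and credentials below are illustrative placeholders, not values from this guide:

```python
# Sketch: building an ODBC connection string from the DSN fields above.
# Every value here is a hypothetical placeholder.
params = {
    "Driver": "ClickHouse ODBC Driver (Unicode)",  # assumed driver name
    "Host": "my-instance.example.com",             # protocol (http:// / https://) omitted
    "Port": "8443",
    "Database": "default",
    "UID": "default",
    "PWD": "secret",
    "SSLMode": "require",                          # needed for SSL-enabled deployments
    "Timeout": "30",                               # seconds (the default)
}
conn_str = ";".join(f"{k}={v}" for k, v in params.items())
print(conn_str)
```

The same key/value pairs apply whether you store them in a DSN or pass them directly as driver options.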
Get data into Power BI {#get-data-into-power-bi}
If you don't have Power BI installed yet, download and install Power BI Desktop.
On the Power BI Desktop start screen, click "Get Data".
Select "Other" -> "ODBC".
Select your previously created data source from the list.
:::note
If you did not specify credentials during the data source creation, you will be prompted to specify username and password.
:::
Finally, you should see the databases and tables in the Navigator view. Select the desired table and click "Load" to import the data from ClickHouse.
Once the import is complete, your ClickHouse Data should be accessible in Power BI as usual.
Known limitations {#known-limitations}
UInt64 {#uint64}
Unsigned integer types such as UInt64 or larger won't be loaded into the dataset automatically, as Int64 is the largest whole-number type supported by Power BI.
:::note
To import the data properly, before hitting the "Load" button in the Navigator, click "Transform Data" first.
:::
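The overflow behind this limitation is easy to check with plain Python arithmetic:

```python
# Power BI's widest whole-number type is Int64; ClickHouse UInt64 exceeds it.
INT64_MAX = 2**63 - 1    # 9,223,372,036,854,775,807
UINT64_MAX = 2**64 - 1   # 18,446,744,073,709,551,615

value = 18_000_000_000_000_000_000  # fits in UInt64, overflows Int64
assert value <= UINT64_MAX
assert value > INT64_MAX

# Re-typing the column as Text preserves the value exactly:
assert int(str(value)) == value
```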
In this example, the pageviews table has a UInt64 column, which is recognized as "Binary" by default.
"Transform Data" opens Power Query Editor, where we can reassign the type of the column, setting it as, for example,
Text.
Once finished, click "Close & Apply" in the top left corner, and proceed with loading the data. | {"source_file": "powerbi-and-clickhouse.md"} | [
… |
ac1689fe-88b9-439a-8c61-44638eecd494 | sidebar_label: 'Looker'
slug: /integrations/looker
keywords: ['clickhouse', 'looker', 'connect', 'integrate', 'ui']
description: 'Looker is an enterprise platform for BI, data applications, and embedded analytics that helps you explore and share insights in real time.'
title: 'Looker'
doc_type: 'guide'
integration:
- support_level: 'partner'
- category: 'data_visualization'
import Image from '@theme/IdealImage';
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import looker_01 from '@site/static/images/integrations/data-visualization/looker_01.png';
import looker_02 from '@site/static/images/integrations/data-visualization/looker_02.png';
import looker_03 from '@site/static/images/integrations/data-visualization/looker_03.png';
import looker_04 from '@site/static/images/integrations/data-visualization/looker_04.png';
import PartnerBadge from '@theme/badges/PartnerBadge';
Looker
Looker can connect to ClickHouse Cloud or on-premise deployment via the official ClickHouse data source.
1. Gather your connection details {#1-gather-your-connection-details}
2. Create a ClickHouse data source {#2-create-a-clickhouse-data-source}
Navigate to Admin -> Database -> Connections and click the "Add Connection" button in the top right corner.
Choose a name for your data source, and select
ClickHouse
from the dialect drop-down. Enter your credentials in the form.
If you are using ClickHouse Cloud or your deployment requires SSL, make sure you have SSL turned on in the additional settings.
Test your connection first, and, once it is done, connect to your new ClickHouse data source.
Now you should be able to attach the ClickHouse data source to your Looker project.
3. Known limitations {#3-known-limitations}
The following data types are handled as strings by default:
Array - serialization does not work as expected due to JDBC driver limitations
Decimal* - can be changed to number in the model
LowCardinality(...) - can be changed to a proper type in the model
Enum8, Enum16
UUID
Tuple
Map
JSON
Nested
FixedString
Geo types
MultiPolygon
Polygon
Point
Ring
Symmetric aggregate feature
is not supported
Full outer join
is not yet implemented in the driver | {"source_file": "looker-and-clickhouse.md"} | [
… |
22b0bcdc-4c08-4804-9df7-fc6a49d9debe | sidebar_label: 'Omni'
slug: /integrations/omni
keywords: ['clickhouse', 'Omni', 'connect', 'integrate', 'ui']
description: 'Omni is an enterprise platform for BI, data applications, and embedded analytics that helps you explore and share insights in real time.'
title: 'Omni'
doc_type: 'guide'
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import Image from '@theme/IdealImage';
import omni_01 from '@site/static/images/integrations/data-visualization/omni_01.png';
import omni_02 from '@site/static/images/integrations/data-visualization/omni_02.png';
import PartnerBadge from '@theme/badges/PartnerBadge';
Omni
Omni can connect to ClickHouse Cloud or on-premise deployment via the official ClickHouse data source.
1. Gather your connection details {#1-gather-your-connection-details}
2. Create a ClickHouse data source {#2-create-a-clickhouse-data-source}
Navigate to Admin -> Connections and click the "Add Connection" button in the top right corner.
Select
ClickHouse
. Enter your credentials in the form.
Now you can query and visualize data from ClickHouse in Omni.
… |
5dfb461e-d78b-4850-abc3-dd7d1078abed | sidebar_label: 'Overview'
sidebar_position: 1
keywords: ['ClickHouse', 'connect', 'Luzmo', 'Explo', 'Fabi.ai', 'Tableau', 'Grafana', 'Metabase', 'Mitzu', 'superset', 'Databrain','Deepnote', 'Draxlr', 'RocketBI', 'Omni', 'bi', 'visualization', 'tool', 'lightdash']
title: 'Visualizing Data in ClickHouse'
slug: /integrations/data-visualization
description: 'Learn about Visualizing Data in ClickHouse'
doc_type: 'guide'
Visualizing data in ClickHouse
Now that your data is in ClickHouse, it's time to analyze it, which often involves building visualizations using a BI tool. Many of the popular BI and visualization tools connect to ClickHouse. Some connect to ClickHouse out-of-the-box, while others require a connector to be installed. We have docs for some of the tools, including:
Apache Superset
Astrato
Chartbrew
Databrain
Deepnote
Dot
Draxlr
Embeddable
Explo
Fabi.ai
Grafana
Lightdash
Looker
Luzmo
Metabase
Mitzu
Omni
Rill
Rocket BI
Tableau
Zing Data
ClickHouse Cloud compatibility with data visualization tools {#clickhouse-cloud-compatibility-with-data-visualization-tools} | {"source_file": "index.md"} | [
… |
21d6c74a-3a3e-4efd-a24d-3c1e679a15e9 | | Tool | Supported via | Tested | Documented | Comment |
|-------------------------------------------------------------------------|-------------------------------|--------|------------|-----------------------------------------------------------------------------------------------------------------------------------------|
| Apache Superset | ClickHouse official connector | ✅ | ✅ | |
| Astrato | Native connector | ✅ | ✅ | Works natively using pushdown SQL (direct query only). |
| AWS QuickSight | MySQL interface | ✅ | ✅ | Works with some limitations, see the documentation for more details |
| Chartbrew | ClickHouse official connector | ✅ | ✅ | |
| Databrain | Native connector | ✅ | ✅ | |
| Deepnote | Native connector | ✅ | ✅ | |
| Dot | Native connector | ✅ | ✅ | |
| Explo | Native connector | ✅ | ✅ | |
| Fabi.ai | Native connector | ✅ | ✅ | |
| Grafana | ClickHouse official connector | ✅ | ✅ | |
| Hashboard | Native connector | ✅ | ✅ | |
| Lightdash | Native connector | ✅ | ✅ | |
| Looker | {"source_file": "index.md"} | [
… |
35fd3758-b6e9-4be3-8059-7714fa3b68e8 | | Lightdash | Native connector | ✅ | ✅ | |
| Looker | Native connector | ✅ | ✅ | Works with some limitations, see the documentation for more details |
| Looker | MySQL interface | 🚧 | ❌ | |
| Luzmo | ClickHouse official connector | ✅ | ✅ | |
| Looker Studio | MySQL interface | ✅ | ✅ | |
| Metabase | ClickHouse official connector | ✅ | ✅ | |
| Mitzu | Native connector | ✅ | ✅ | |
| Omni | Native connector | ✅ | ✅ | |
| Power BI Desktop | ClickHouse official connector | ✅ | ✅ | Via ODBC, supports direct query mode |
| Power BI service | ClickHouse official connector | ✅ | ✅ | A Microsoft Data Gateway setup is required |
| Rill | Native connector | ✅ | ✅ | |
| Rocket BI | Native connector | ✅ | ✅ | |
| Tableau Desktop | ClickHouse official connector | ✅ | ✅ | |
| Tableau Online | MySQL interface | ✅ | ✅ | Works with some limitations, see the documentation for more details |
| Zing Data | Native connector | ✅ | ✅ | | | {"source_file": "index.md"} | [
… |
51e96a6f-b13d-4b14-89d4-7c9000d8adb6 | slug: /integrations/marimo
sidebar_label: 'marimo'
description: 'marimo is a next-generation Python notebook for interacting with data'
title: 'Using marimo with ClickHouse'
doc_type: 'guide'
keywords: ['marimo', 'notebook', 'data analysis', 'python', 'visualization']
import Image from '@theme/IdealImage';
import marimo_connect from '@site/static/images/integrations/sql-clients/marimo/clickhouse-connect.gif';
import add_db_panel from '@site/static/images/integrations/sql-clients/marimo/panel-arrow.png';
import add_db_details from '@site/static/images/integrations/sql-clients/marimo/add-db-details.png';
import run_cell from '@site/static/images/integrations/sql-clients/marimo/run-cell.png';
import choose_sql_engine from '@site/static/images/integrations/sql-clients/marimo/choose-sql-engine.png';
import results from '@site/static/images/integrations/sql-clients/marimo/results.png';
import dropdown_cell_chart from '@site/static/images/integrations/sql-clients/marimo/dropdown-cell-chart.png';
import run_app_view from '@site/static/images/integrations/sql-clients/marimo/run-app-view.png';
import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained';
Using marimo with ClickHouse
marimo
is an open-source reactive notebook for Python with SQL built-in. When you run a cell or interact with a UI element, marimo automatically runs affected cells (or marks them as stale), keeping code and outputs consistent and preventing bugs before they happen. Every marimo notebook is stored as pure Python, executable as a script, and deployable as an app.
1. Install marimo with SQL support {#install-marimo-sql}
```shell
pip install "marimo[sql]" clickhouse_connect
marimo edit clickhouse_demo.py
```
This should open up a web browser running on localhost.
2. Connect to ClickHouse {#connect-to-clickhouse}
Navigate to the datasources panel on the left side of the marimo editor and click on 'Add database'.
You will be prompted to fill in the database details.
You will then have a cell that can be run to establish a connection.
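Under the hood the generated cell is roughly equivalent to the sketch below (the host and credentials are placeholder assumptions — use the values you entered in the panel, and note clickhouse_connect must be installed as shown in step 1):

```python
def make_client(host="localhost", port=8123, username="default", password=""):
    """Build a clickhouse-connect client; mirrors the datasource panel fields."""
    # Imported lazily so the sketch is importable without the package installed.
    import clickhouse_connect
    return clickhouse_connect.get_client(
        host=host, port=port, username=username, password=password
    )
```

Calling make_client() returns a client you can run queries through; marimo's SQL cells manage this connection for you.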
3. Run SQL {#run-sql}
Once you have set up a connection, you can create a new SQL cell and choose the clickhouse engine.
For this guide, we will use the New York Taxi dataset. | {"source_file": "marimo.md"} | [
… |
0d4661bf-520e-43ec-b3ee-6a642c9bd641 | 3. Run SQL {#run-sql}
Once you have set up a connection, you can create a new SQL cell and choose the clickhouse engine.
For this guide, we will use the New York Taxi dataset.
```sql
CREATE TABLE trips (
    trip_id UInt32,
    pickup_datetime DateTime,
    dropoff_datetime DateTime,
    pickup_longitude Nullable(Float64),
    pickup_latitude Nullable(Float64),
    dropoff_longitude Nullable(Float64),
    dropoff_latitude Nullable(Float64),
    passenger_count UInt8,
    trip_distance Float32,
    fare_amount Float32,
    extra Float32,
    tip_amount Float32,
    tolls_amount Float32,
    total_amount Float32,
    payment_type Enum('CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4, 'UNK' = 5),
    pickup_ntaname LowCardinality(String),
    dropoff_ntaname LowCardinality(String)
)
ENGINE = MergeTree
PRIMARY KEY (pickup_datetime, dropoff_datetime);
```
```sql
INSERT INTO trips
SELECT
    trip_id,
    pickup_datetime,
    dropoff_datetime,
    pickup_longitude,
    pickup_latitude,
    dropoff_longitude,
    dropoff_latitude,
    passenger_count,
    trip_distance,
    fare_amount,
    extra,
    tip_amount,
    tolls_amount,
    total_amount,
    payment_type,
    pickup_ntaname,
    dropoff_ntaname
FROM gcs(
    'https://storage.googleapis.com/clickhouse-public-datasets/nyc-taxi/trips_0.gz',
    'TabSeparatedWithNames'
);
```
```sql
SELECT * FROM trips LIMIT 1000;
```
Now, you are able to view the results in a dataframe. I would like to visualize the most expensive drop-offs from a given pickup location. marimo provides several UI components to help you. I will use a dropdown to select the location and altair for charting.
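The selection logic behind that chart can be sketched in plain Python with toy rows (in the notebook, the dropdown value would instead parameterize a SQL query and altair would render the result):

```python
trips = [  # toy stand-ins for rows from the trips table
    {"pickup_ntaname": "Midtown", "dropoff_ntaname": "JFK Airport", "fare_amount": 52.0},
    {"pickup_ntaname": "Midtown", "dropoff_ntaname": "Harlem", "fare_amount": 18.5},
    {"pickup_ntaname": "SoHo", "dropoff_ntaname": "Midtown", "fare_amount": 12.0},
]

def top_dropoffs(rows, pickup, n=2):
    """Most expensive drop-offs for the selected pickup neighborhood."""
    selected = [r for r in rows if r["pickup_ntaname"] == pickup]
    return sorted(selected, key=lambda r: r["fare_amount"], reverse=True)[:n]

assert [r["dropoff_ntaname"] for r in top_dropoffs(trips, "Midtown")] == ["JFK Airport", "Harlem"]
```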
marimo's reactive execution model extends into SQL queries, so changes to your SQL will automatically trigger downstream computations for dependent cells (or optionally mark cells as stale for expensive computations). Hence the chart and table changes when the query is updated.
You can also toggle App View to have a clean interface for exploring your data. | {"source_file": "marimo.md"} | [
… |
86d84a37-9866-4240-baeb-15d89eeab1e6 | sidebar_label: 'SQL Console'
sidebar_position: 1
title: 'SQL Console'
slug: /integrations/sql-clients/sql-console
description: 'Learn about SQL Console'
doc_type: 'guide'
keywords: ['sql console', 'query interface', 'web ui', 'sql editor', 'cloud console'] | {"source_file": "sql-console.md"} | [
… |
aac8b52e-e514-4d8f-8c29-c6902cd7737c | import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import Image from '@theme/IdealImage';
import table_list_and_schema from '@site/static/images/cloud/sqlconsole/table-list-and-schema.png';
import view_columns from '@site/static/images/cloud/sqlconsole/view-columns.png';
import abc from '@site/static/images/cloud/sqlconsole/abc.png';
import inspecting_cell_content from '@site/static/images/cloud/sqlconsole/inspecting-cell-content.png';
import sort_descending_on_column from '@site/static/images/cloud/sqlconsole/sort-descending-on-column.png';
import filter_on_radio_column_equal_gsm from '@site/static/images/cloud/sqlconsole/filter-on-radio-column-equal-gsm.png';
import add_more_filters from '@site/static/images/cloud/sqlconsole/add-more-filters.png';
import filtering_and_sorting_together from '@site/static/images/cloud/sqlconsole/filtering-and-sorting-together.png';
import create_a_query_from_sorts_and_filters from '@site/static/images/cloud/sqlconsole/create-a-query-from-sorts-and-filters.png';
import creating_a_query from '@site/static/images/cloud/sqlconsole/creating-a-query.png';
import run_selected_query from '@site/static/images/cloud/sqlconsole/run-selected-query.png';
import run_at_cursor_2 from '@site/static/images/cloud/sqlconsole/run-at-cursor-2.png';
import run_at_cursor from '@site/static/images/cloud/sqlconsole/run-at-cursor.png';
import cancel_a_query from '@site/static/images/cloud/sqlconsole/cancel-a-query.png';
import sql_console_save_query from '@site/static/images/cloud/sqlconsole/sql-console-save-query.png';
import sql_console_rename from '@site/static/images/cloud/sqlconsole/sql-console-rename.png';
import sql_console_share from '@site/static/images/cloud/sqlconsole/sql-console-share.png';
import sql_console_edit_access from '@site/static/images/cloud/sqlconsole/sql-console-edit-access.png';
import sql_console_add_team from '@site/static/images/cloud/sqlconsole/sql-console-add-team.png';
import sql_console_edit_member from '@site/static/images/cloud/sqlconsole/sql-console-edit-member.png';
import sql_console_access_queries from '@site/static/images/cloud/sqlconsole/sql-console-access-queries.png';
import search_hn from '@site/static/images/cloud/sqlconsole/search-hn.png';
import match_in_body from '@site/static/images/cloud/sqlconsole/match-in-body.png';
import pagination from '@site/static/images/cloud/sqlconsole/pagination.png';
import pagination_nav from '@site/static/images/cloud/sqlconsole/pagination-nav.png';
import download_as_csv from '@site/static/images/cloud/sqlconsole/download-as-csv.png';
import tabular_query_results from '@site/static/images/cloud/sqlconsole/tabular-query-results.png';
import switch_from_query_to_chart from '@site/static/images/cloud/sqlconsole/switch-from-query-to-chart.png';
import trip_total_by_week from '@site/static/images/cloud/sqlconsole/trip-total-by-week.png';
import bar_chart from '@site/static/images/cloud/sqlconsole/bar-chart.png'; | {"source_file": "sql-console.md"} | [
… |
99d977d0-b49d-40b5-8a75-9c139b5ede54 | import trip_total_by_week from '@site/static/images/cloud/sqlconsole/trip-total-by-week.png';
import bar_chart from '@site/static/images/cloud/sqlconsole/bar-chart.png';
import change_from_bar_to_area from '@site/static/images/cloud/sqlconsole/change-from-bar-to-area.png';
import update_query_name from '@site/static/images/cloud/sqlconsole/update-query-name.png';
import update_subtitle_etc from '@site/static/images/cloud/sqlconsole/update-subtitle-etc.png';
import adjust_axis_scale from '@site/static/images/cloud/sqlconsole/adjust-axis-scale.png';
import give_a_query_a_name from '@site/static/images/cloud/sqlconsole/give-a-query-a-name.png'
import save_the_query from '@site/static/images/cloud/sqlconsole/save-the-query.png' | {"source_file": "sql-console.md"} | [
… |
40996761-9245-43f9-9177-ff3c7b458909 | SQL Console
SQL console is the fastest and easiest way to explore and query your databases in ClickHouse Cloud. You can use the SQL console to:
Connect to your ClickHouse Cloud Services
View, filter, and sort table data
Execute queries and visualize result data in just a few clicks
Share queries with team members and collaborate more effectively.
Exploring tables {#exploring-tables}
Viewing table list and schema info {#viewing-table-list-and-schema-info}
An overview of tables contained in your ClickHouse instance can be found in the left sidebar area. Use the database selector at the top of the left bar to view the tables in a specific database.
Tables in the list can also be expanded to view columns and types.
Exploring table data {#exploring-table-data}
Click on a table in the list to open it in a new tab. In the Table View, data can be easily viewed, selected, and copied. Note that structure and formatting are preserved when copy-pasting to spreadsheet applications such as Microsoft Excel and Google Sheets. You can flip between pages of table data (paginated in 30-row increments) using the navigation in the footer.
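Conceptually, the footer pagination maps each page number to a fixed 30-row window, much like LIMIT/OFFSET in SQL (a sketch of the arithmetic, not the console's actual implementation):

```python
PAGE_SIZE = 30  # rows per page in the Table View

def page_window(page):
    """Return (offset, limit) for a 1-based page number."""
    return (page - 1) * PAGE_SIZE, PAGE_SIZE

assert page_window(1) == (0, 30)   # first page: rows 0-29
assert page_window(3) == (60, 30)  # third page: rows 60-89
```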
Inspecting cell data {#inspecting-cell-data}
The Cell Inspector tool can be used to view large amounts of data contained within a single cell. To open it, right-click on a cell and select 'Inspect Cell'. The contents of the cell inspector can be copied by clicking the copy icon in the top right corner of the inspector contents.
Filtering and sorting tables {#filtering-and-sorting-tables}
Sorting a table {#sorting-a-table}
To sort a table in the SQL console, open a table and select the 'Sort' button in the toolbar. This button will open a menu that will allow you to configure your sort. You can choose a column by which to sort and configure the ordering of the sort (ascending or descending). Select 'Apply' or press Enter to sort your table
The SQL console also allows you to add multiple sorts to a table. Click the 'Sort' button again to add another sort. Note: sorts are applied in the order that they appear in the sort pane (top to bottom). To remove a sort, simply click the 'x' button next to the sort.
Filtering a table {#filtering-a-table}
To filter a table in the SQL console, open a table and select the 'Filter' button. Just like sorting, this button will open a menu that will allow you to configure your filter. You can choose a column by which to filter and select the necessary criteria. The SQL console intelligently displays filter options that correspond to the type of data contained in the column.
When you're happy with your filter, you can select 'Apply' to filter your data. You can also add additional filters as shown below.
Similar to the sort functionality, click the 'x' button next to a filter to remove it.
Filtering and sorting together {#filtering-and-sorting-together} | {"source_file": "sql-console.md"} | [
… |
49a47660-50e2-41f0-852c-aa9faf7e061c | Similar to the sort functionality, click the 'x' button next to a filter to remove it.
Filtering and sorting together {#filtering-and-sorting-together}
The SQL console allows you to filter and sort a table at the same time. To do this, add all desired filters and sorts using the steps described above and click the 'Apply' button.
Creating a query from filters and sorts {#creating-a-query-from-filters-and-sorts}
The SQL console can convert your sorts and filters directly into queries with one click. Simply select the 'Create Query' button from the toolbar with the sort and filter parameters of your choosing. After clicking 'Create query', a new query tab will open pre-populated with the SQL command corresponding to the data contained in your table view.
:::note
Filters and sorts are not mandatory when using the 'Create Query' feature.
:::
You can learn more about querying in the SQL console by reading the (link) query documentation.
Creating and running a query {#creating-and-running-a-query}
Creating a query {#creating-a-query}
There are two ways to create a new query in the SQL console.
Click the '+' button in the tab bar
Select the 'New Query' button from the left sidebar query list
Running a query {#running-a-query}
To run a query, type your SQL command(s) into the SQL Editor and click the 'Run' button or use the shortcut
cmd / ctrl + enter
. To write and run multiple commands sequentially, make sure to add a semicolon after each command.
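As a toy illustration of why the semicolons matter — each semicolon-terminated statement is executed in order (real SQL splitting must also respect string literals and comments, which this naive version does not):

```python
editor_text = """
CREATE TABLE t (x UInt8) ENGINE = Memory;
INSERT INTO t VALUES (1), (2);
SELECT count() FROM t;
"""

# Naive split: one statement per trailing semicolon.
statements = [s.strip() for s in editor_text.split(";") if s.strip()]
assert len(statements) == 3
assert statements[-1] == "SELECT count() FROM t"
```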
Query Execution Options
By default, clicking the run button will run all commands contained in the SQL Editor. The SQL console supports two other query execution options:
Run selected command(s)
Run command at the cursor
To run selected command(s), highlight the desired command or sequence of commands and click the 'Run' button (or use the
cmd / ctrl + enter
shortcut). You can also select 'Run selected' from the SQL Editor context menu (opened by right-clicking anywhere within the editor) when a selection is present.
Running the command at the current cursor position can be achieved in two ways:
Select 'At Cursor' from the extended run options menu (or use the corresponding
cmd / ctrl + shift + enter
keyboard shortcut)
Select 'Run at cursor' from the SQL Editor context menu
:::note
The command present at the cursor position will flash yellow on execution.
:::
Canceling a query {#canceling-a-query}
While a query is running, the 'Run' button in the Query Editor toolbar will be replaced with a 'Cancel' button. Simply click this button or press
Esc
to cancel the query. Note: Any results that have already been returned will persist after cancellation.
Saving a query {#saving-a-query}
If not previously named, your query should be called 'Untitled Query'. Click on the query name to change it. Renaming a query will cause the query to be saved. | {"source_file": "sql-console.md"} | [
… |
dd1904a8-bd48-4f35-8db3-0f6fec56ea78 | Saving a query {#saving-a-query}
If not previously named, your query should be called 'Untitled Query'. Click on the query name to change it. Renaming a query will cause the query to be saved.
You can also use the save button or
cmd / ctrl + s
keyboard shortcut to save a query.
Using GenAI to manage queries {#using-genai-to-manage-queries}
This feature allows users to write queries as natural language questions and have the query console create SQL queries based on the context of the available tables. GenAI can also help users debug their queries.
For more information on GenAI, check out the
Announcing GenAI powered query suggestions in ClickHouse Cloud blog post
.
Table setup {#table-setup}
Let's import the UK Price Paid example dataset and use that to create some GenAI queries.
Open a ClickHouse Cloud service.
Create a new query by clicking the
+
icon.
Paste and run the following code:
```sql
CREATE TABLE uk_price_paid
(
    price UInt32,
    date Date,
    postcode1 LowCardinality(String),
    postcode2 LowCardinality(String),
    type Enum8('terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4, 'other' = 0),
    is_new UInt8,
    duration Enum8('freehold' = 1, 'leasehold' = 2, 'unknown' = 0),
    addr1 String,
    addr2 String,
    street LowCardinality(String),
    locality LowCardinality(String),
    town LowCardinality(String),
    district LowCardinality(String),
    county LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY (postcode1, postcode2, addr1, addr2);
```
This query should take around 1 second to complete. Once it's done, you should have an empty table called `uk_price_paid`.
Create a new query and paste the following query:
```sql
INSERT INTO uk_price_paid
WITH
    splitByChar(' ', postcode) AS p
SELECT
    toUInt32(price_string) AS price,
    parseDateTimeBestEffortUS(time) AS date,
    p[1] AS postcode1,
    p[2] AS postcode2,
    transform(a, ['T', 'S', 'D', 'F', 'O'], ['terraced', 'semi-detached', 'detached', 'flat', 'other']) AS type,
    b = 'Y' AS is_new,
    transform(c, ['F', 'L', 'U'], ['freehold', 'leasehold', 'unknown']) AS duration,
    addr1,
    addr2,
    street,
    locality,
    town,
    district,
    county
FROM url(
    'http://prod.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv',
    'CSV',
    'uuid_string String,
    price_string String,
    time String,
    postcode String,
    a String,
    b String,
    c String,
    addr1 String,
    addr2 String,
    street String,
    locality String,
    town String,
    district String,
    county String,
    d String,
    e String'
) SETTINGS max_http_get_redirects=10;
```
 | {"source_file": "sql-console.md"} | [
… |
e996377f-2312-40ff-8cd8-cbb5f9ced895 | This query grabs the dataset from the
gov.uk
website. This file is ~4GB, so this query will take a few minutes to complete. Once ClickHouse has processed the query, you should have the entire dataset within the
uk_price_paid
table.
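The row-level transforms in that INSERT can be mirrored in plain Python to see what they do (toy values; note that ClickHouse arrays are 1-indexed, so p[1]/p[2] in the SQL correspond to p[0]/p[1] here):

```python
def ch_transform(value, keys, values):
    """Rough analogue of ClickHouse transform(x, from, to): unmatched values pass through."""
    return dict(zip(keys, values)).get(value, value)

p = "SW19 2AZ".split(" ")  # splitByChar(' ', postcode)
assert (p[0], p[1]) == ("SW19", "2AZ")

assert ch_transform("T", ["T", "S", "D", "F", "O"],
                    ["terraced", "semi-detached", "detached", "flat", "other"]) == "terraced"
assert ch_transform("X", ["F", "L", "U"],
                    ["freehold", "leasehold", "unknown"]) == "X"  # no match: passed through

is_new = "Y" == "Y"  # b = 'Y' AS is_new evaluates to 1/0 in ClickHouse
assert is_new is True
```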
Query creation {#query-creation}
Let's create a query using natural language.
Select the
uk_price_paid
table, and then click
Create Query
.
Click
Generate SQL
. You may be asked to accept that your queries are sent to ChatGPT. You must select
I agree
to continue.
You can now use this prompt to enter a natural language query and have ChatGPT convert it into an SQL query. In this example we're going to enter:
Show me the total price and total number of all uk_price_paid transactions by year.
The console will generate the query we're looking for and display it in a new tab. In our example, GenAI created the following query:
```sql
-- Show me the total price and total number of all uk_price_paid transactions by year.
SELECT year(date), sum(price) as total_price, Count(*) as total_transactions
FROM uk_price_paid
GROUP BY year(date)
```
Once you've verified that the query is correct, click **Run** to execute it.
## Debugging {#debugging}
Now, let's test the query debugging capabilities of GenAI.
Create a new query by clicking the **+** icon and paste the following code:
```sql
-- Show me the total price and total number of all uk_price_paid transactions by year.
SELECT year(date), sum(pricee) as total_price, Count(*) as total_transactions
FROM uk_price_paid
GROUP BY year(date)
```
Click **Run**. The query fails since we're trying to get values from `pricee` instead of `price`.
Click **Fix Query**.
GenAI will attempt to fix the query. In this case, it changed `pricee` to `price`. It also realised that `toYear` is a better function to use in this scenario.
Select **Apply** to add the suggested changes to your query and click **Run**.
Keep in mind that GenAI is an experimental feature. Use caution when running GenAI-generated queries against any dataset.
## Advanced querying features {#advanced-querying-features}
### Searching query results {#searching-query-results}
After a query is executed, you can quickly search through the returned result set using the search input in the result pane. This feature assists in previewing the results of an additional `WHERE` clause or simply checking that specific data is included in the result set. After inputting a value into the search input, the result pane will update and return records containing an entry that matches the inputted value. In this example, we'll look for all instances of `breakfast` in the `hackernews` table for comments that contain `ClickHouse` (case-insensitive):
Note: Any field matching the inputted value will be returned. For example, the third record in the above screenshot does not match 'breakfast' in the `by` field, but the `text` field does:
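Conceptually, the search matches the input against every field of every returned record, case-insensitively. A minimal Python sketch of that behaviour (illustrative only — this is not the console's actual implementation):

```python
def search_results(rows, needle):
    """Return rows where any field contains `needle`, case-insensitively."""
    needle = needle.lower()
    return [row for row in rows
            if any(needle in str(value).lower() for value in row.values())]

rows = [
    {'by': 'alice', 'text': 'ClickHouse for breakfast'},
    {'by': 'breakfastfan', 'text': 'I use ClickHouse daily'},
    {'by': 'bob', 'text': 'unrelated comment'},
]
# Both the 'text' match and the 'by' match are returned.
print(search_results(rows, 'breakfast'))
```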
### Adjusting pagination settings {#adjusting-pagination-settings}
By default, the query result pane will display every result record on a single page. For larger result sets, it may be preferable to paginate results for easier viewing. This can be accomplished using the pagination selector in the bottom right corner of the result pane:
Selecting a page size will immediately apply pagination to the result set, and navigation options will appear in the middle of the result pane footer.
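Pagination simply splits the result set into fixed-size chunks. A tiny sketch of the idea in Python:

```python
def paginate(rows, page_size):
    """Split a result set into pages of at most `page_size` records."""
    return [rows[i:i + page_size] for i in range(0, len(rows), page_size)]

pages = paginate(list(range(25)), 10)
print(len(pages))      # 3
print(len(pages[-1]))  # 5 — the last page holds the remaining rows
```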
### Exporting query result data {#exporting-query-result-data}
Query result sets can be easily exported to CSV format directly from the SQL console. To do so, open the `•••` menu on the right side of the result pane toolbar and select 'Download as CSV'.
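The export is a plain CSV serialization of the rows on screen. For reference, the same shape can be produced with Python's standard library (the sample rows are made up):

```python
import csv
import io

def to_csv(rows):
    """Serialize a list of dicts (a query result set) to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

rows = [{'week': '2015-06-29', 'trip_total': 422549},
        {'week': '2015-07-06', 'trip_total': 751293}]
print(to_csv(rows))
```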
## Visualizing query data {#visualizing-query-data}
Some data can be more easily interpreted in chart form. You can quickly create visualizations from query result data directly from the SQL console in just a few clicks. As an example, we'll use a query that calculates weekly statistics for NYC taxi trips:
```sql
SELECT
   toStartOfWeek(pickup_datetime) AS week,
   sum(total_amount) AS fare_total,
   sum(trip_distance) AS distance_total,
   count(*) AS trip_total
FROM
   nyc_taxi
GROUP BY
   1
ORDER BY
   1 ASC
```
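The query truncates each pickup time to the start of its week with `toStartOfWeek` and aggregates per week. A Python sketch of the same aggregation (assuming `toStartOfWeek`'s default mode, where weeks start on Sunday; the trips are made up):

```python
from datetime import date, timedelta
from collections import defaultdict

def to_start_of_week(d):
    # toStartOfWeek's default mode rounds down to the most recent Sunday.
    return d - timedelta(days=(d.weekday() + 1) % 7)

# (pickup date, total_amount, trip_distance)
trips = [
    (date(2015, 7, 1), 12.50, 2.5),
    (date(2015, 7, 3), 8.75, 1.5),
    (date(2015, 7, 8), 20.00, 5.0),
]

weekly = defaultdict(lambda: [0.0, 0.0, 0])  # fare_total, distance_total, trip_total
for d, amount, dist in trips:
    week = to_start_of_week(d)
    weekly[week][0] += amount
    weekly[week][1] += dist
    weekly[week][2] += 1

for week in sorted(weekly):
    fare_total, distance_total, trip_total = weekly[week]
    print(week, fare_total, distance_total, trip_total)
```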
Without visualization, these results are difficult to interpret. Let's turn them into a chart.
### Creating charts {#creating-charts}
To begin building your visualization, select the 'Chart' option from the query result pane toolbar. A chart configuration pane will appear:
We'll start by creating a simple bar chart tracking `trip_total` by `week`. To accomplish this, we'll drag the `week` field to the x-axis and the `trip_total` field to the y-axis:
Most chart types support multiple fields on numeric axes. To demonstrate, we'll drag the `fare_total` field onto the y-axis:
### Customizing charts {#customizing-charts}
The SQL console supports ten chart types that can be selected from the chart type selector in the chart configuration pane. For example, we can easily change the previous chart type from Bar to an Area:
Chart titles match the name of the query supplying the data. Updating the name of the query will cause the Chart title to update as well:
A number of more advanced chart characteristics can also be adjusted in the 'Advanced' section of the chart configuration pane. To begin, we'll adjust the following settings:

- Subtitle
- Axis titles
- Label orientation for the x-axis

Our chart will be updated accordingly:
In some scenarios, it may be necessary to adjust the axis scales for each field independently. This can also be accomplished in the 'Advanced' section of the chart configuration pane by specifying min and max values for an axis range. As an example, the above chart looks good, but in order to demonstrate the correlation between our `trip_total` and `fare_total` fields, the axis ranges need some adjustment:
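Picking an explicit min and max for an axis amounts to computing a range per field, usually with some padding so points don't sit on the chart border. A small illustrative sketch (the 5% padding fraction is an arbitrary choice, not what the console uses):

```python
def axis_range(values, pad=0.05):
    """Return a (min, max) axis range padded by a fraction of the data span."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return lo - span * pad, hi + span * pad

trip_total = [422549, 751293, 688333]  # made-up weekly trip counts
print(axis_range(trip_total))
```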
## Sharing queries {#sharing-queries}
The SQL console enables you to share queries with your team. When a query is shared, all members of the team can see and edit the query. Shared queries are a great way to collaborate with your team.
To share a query, click the 'Share' button in the query toolbar.
A dialog will open, allowing you to share the query with all members of a team. If you have multiple teams, you can select which team to share the query with.
sidebar_label: 'DataGrip'
slug: /integrations/datagrip
description: 'DataGrip is a database IDE that supports ClickHouse out of the box.'
title: 'Connecting DataGrip to ClickHouse'
doc_type: 'guide'
integration:
- support_level: 'partner'
- category: 'sql_client'
- website: 'https://www.jetbrains.com/datagrip/'
keywords: ['DataGrip', 'database IDE', 'JetBrains', 'SQL client', 'integrated development environment']
import Image from '@theme/IdealImage';
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import datagrip_1 from '@site/static/images/integrations/sql-clients/datagrip-1.png';
import datagrip_5 from '@site/static/images/integrations/sql-clients/datagrip-5.png';
import datagrip_6 from '@site/static/images/integrations/sql-clients/datagrip-6.png';
import datagrip_7 from '@site/static/images/integrations/sql-clients/datagrip-7.png';
import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained';
# Connecting DataGrip to ClickHouse
## Start or download DataGrip {#start-or-download-datagrip}
DataGrip is available at https://www.jetbrains.com/datagrip/
## 1. Gather your connection details {#1-gather-your-connection-details}
## 2. Load the ClickHouse driver {#2-load-the-clickhouse-driver}
Launch DataGrip, and on the **Data Sources** tab in the **Data Sources and Drivers** dialog, click the **+** icon.

Select **ClickHouse**.
:::tip
As you establish connections, the order of this list changes; ClickHouse may not be at the top of your list yet.
:::
Switch to the **Drivers** tab and load the ClickHouse driver.

DataGrip does not ship with drivers in order to minimize the download size. On the **Drivers** tab, select **ClickHouse** from the **Complete Support** list, and expand the **+** sign. Choose the **Latest stable** driver from the **Provided Driver** option:
## 3. Connect to ClickHouse {#3-connect-to-clickhouse}
Specify your database connection details, and click **Test Connection**:

In step one you gathered your connection details. Fill in the host URL, port, username, password, and database name, then test the connection.
:::tip
The **HOST** entry in the DataGrip dialog is actually a URL, see the image below.

For more details on JDBC URL settings, please refer to the ClickHouse JDBC driver repository.
:::
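A ClickHouse JDBC URL generally takes the form `jdbc:clickhouse://<host>:<port>/<database>`, with options such as `ssl=true` appended as query parameters (consult the driver repository for the authoritative list). A small sketch that assembles one — the host and database values are placeholders:

```python
def jdbc_url(host, port, database, **params):
    """Assemble a jdbc:clickhouse:// URL; extra params become ?key=value pairs."""
    url = f"jdbc:clickhouse://{host}:{port}/{database}"
    if params:
        url += "?" + "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return url

print(jdbc_url("abc.clickhouse.cloud", 8443, "default", ssl="true"))
```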
## Learn more {#learn-more}

To find more information about DataGrip, visit the DataGrip documentation.
slug: /integrations/sql-clients/
description: 'Overview page for ClickHouse SQL clients.'
keywords: ['integrations', 'DataGrip', 'DBeaver', 'DbVisualizer', 'Jupyter Notebooks', 'QStudio', 'TABLUM.IO', 'marimo']
title: 'SQL Client Integrations'
doc_type: 'landing-page'
# SQL client integrations
This section describes how to integrate ClickHouse with various common database management, analysis and visualization tools.
| Tool | Description |
|-----------------------|-------------------------------------------------------------|
| DataGrip | Powerful database IDE |
| DBeaver | Database administration and development tool |
| DbVisualizer | Database management tool for developers, DBAs, and analysts |
| Jupyter Notebooks | Interactive notebooks for code, visualizations, and text |
| QStudio | Free, open-source SQL GUI client |
| TABLUM.IO | Cloud-based data visualization platform |
| marimo | Open-source reactive notebook for Python with SQL built-in |
slug: /integrations/dbeaver
sidebar_label: 'DBeaver'
description: 'DBeaver is a multi-platform database tool.'
title: 'Connect DBeaver to ClickHouse'
doc_type: 'guide'
integration:
- support_level: 'partner'
- category: 'sql_client'
- website: 'https://github.com/dbeaver/dbeaver'
keywords: ['DBeaver', 'database management', 'SQL client', 'JDBC connection', 'multi-platform']
import Image from '@theme/IdealImage';
import dbeaver_add_database from '@site/static/images/integrations/sql-clients/dbeaver-add-database.png';
import dbeaver_host_port from '@site/static/images/integrations/sql-clients/dbeaver-host-port.png';
import dbeaver_use_ssl from '@site/static/images/integrations/sql-clients/dbeaver-use-ssl.png';
import dbeaver_test_connection from '@site/static/images/integrations/sql-clients/dbeaver-test-connection.png';
import dbeaver_download_driver from '@site/static/images/integrations/sql-clients/dbeaver-download-driver.png';
import dbeaver_sql_editor from '@site/static/images/integrations/sql-clients/dbeaver-sql-editor.png';
import dbeaver_query_log_select from '@site/static/images/integrations/sql-clients/dbeaver-query-log-select.png';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
# Connect DBeaver to ClickHouse
DBeaver is available in multiple offerings. In this guide **DBeaver Community** is used. See the various offerings and capabilities here. DBeaver connects to ClickHouse using JDBC.

:::note
Please use DBeaver version 23.1.0 or above for improved support of `Nullable` columns in ClickHouse.
:::
## 1. Gather your ClickHouse details {#1-gather-your-clickhouse-details}

DBeaver uses JDBC over HTTP(S) to connect to ClickHouse; you need:

- endpoint
- port number
- username
- password
## 2. Download DBeaver {#2-download-dbeaver}
DBeaver is available at https://dbeaver.io/download/
## 3. Add a database {#3-add-a-database}
Either use the **Database > New Database Connection** menu or the **New Database Connection** icon in the **Database Navigator** to bring up the **Connect to a database** dialog:

Select **Analytical** and then **ClickHouse**:
Build the JDBC URL. On the **Main** tab set the Host, Port, Username, Password, and Database:

By default the **SSL > Use SSL** property will be unset. If you are connecting to ClickHouse Cloud or a server that requires SSL on the HTTP port, then set **SSL > Use SSL** on:
Test the connection:
If DBeaver detects that you do not have the ClickHouse driver installed, it will offer to download it for you:
After downloading the driver, **Test** the connection again:
## 4. Query ClickHouse {#4-query-clickhouse}
Open a query editor and run a query.
Right click on your connection and choose **SQL Editor > Open SQL Script** to open a query editor:

An example query against `system.query_log`:

## Next steps {#next-steps}
See the DBeaver wiki to learn about the capabilities of DBeaver, and the ClickHouse documentation to learn about the capabilities of ClickHouse.
slug: /integrations/jupysql
sidebar_label: 'Jupyter notebooks'
description: 'JupySQL is a multi-platform database tool for Jupyter.'
title: 'Using JupySQL with ClickHouse'
keywords: ['JupySQL', 'Jupyter notebook', 'Python', 'data analysis', 'interactive SQL']
doc_type: 'guide'
integration:
- support_level: 'community'
- category: 'sql_client'
import Image from '@theme/IdealImage';
import jupysql_plot_1 from '@site/static/images/integrations/sql-clients/jupysql-plot-1.png';
import jupysql_plot_2 from '@site/static/images/integrations/sql-clients/jupysql-plot-2.png';
import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained';
# Using JupySQL with ClickHouse
In this guide we'll show an integration with ClickHouse.
We will use JupySQL to run queries on top of ClickHouse.
Once the data is loaded, we'll visualize it via SQL plotting.
The integration between JupySQL and ClickHouse is made possible by the clickhouse_sqlalchemy library. This library allows for easy communication between the two systems, and enables users to connect to ClickHouse and pass the SQL dialect. Once connected, users can run SQL queries from the ClickHouse native UI or directly from a Jupyter notebook.
```python
# Install required packages
%pip install --quiet jupysql clickhouse_sqlalchemy
```
Note: you may need to restart the kernel to use updated packages.
```python
import pandas as pd
from sklearn_evaluation import plot
# Import jupysql Jupyter extension to create SQL cells
%load_ext sql
%config SqlMagic.autocommit=False
```
You'd need to make sure your ClickHouse instance is up and reachable for the next stages. You can use either the local or the cloud version.

**Note:** you will need to adjust the connection string according to the instance type you're trying to connect to (url, user, password). In the example below we've used a local instance. To learn more about it, check out this guide.
```python
%sql clickhouse://default:@localhost:8123/default
```
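The `%sql` magic takes an SQLAlchemy-style URL of the form `clickhouse://<user>:<password>@<host>:<port>/<database>`. For clarity, here is the same string pulled apart with the standard library:

```python
from urllib.parse import urlparse

# The local-instance connection string used above.
url = urlparse("clickhouse://default:@localhost:8123/default")
print(url.scheme, url.username, url.hostname, url.port, url.path.lstrip("/"))
```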
```sql
%%sql
CREATE TABLE trips
(
`trip_id` UInt32,
`vendor_id` Enum8('1' = 1, '2' = 2, '3' = 3, '4' = 4, 'CMT' = 5, 'VTS' = 6, 'DDS' = 7, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14, '' = 15),
`pickup_date` Date,
`pickup_datetime` DateTime,
`dropoff_date` Date,
`dropoff_datetime` DateTime,
`store_and_fwd_flag` UInt8,
`rate_code_id` UInt8,
`pickup_longitude` Float64,
`pickup_latitude` Float64,
`dropoff_longitude` Float64,
`dropoff_latitude` Float64,
`passenger_count` UInt8,
`trip_distance` Float64,
`fare_amount` Float32,
`extra` Float32,
`mta_tax` Float32,
`tip_amount` Float32,
`tolls_amount` Float32,
`ehail_fee` Float32,
`improvement_surcharge` Float32,
`total_amount` Float32,
`payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4),
`trip_type` UInt8,
`pickup` FixedString(25),
`dropoff` FixedString(25),
`cab_type` Enum8('yellow' = 1, 'green' = 2, 'uber' = 3),
`pickup_nyct2010_gid` Int8,
`pickup_ctlabel` Float32,
`pickup_borocode` Int8,
`pickup_ct2010` String,
`pickup_boroct2010` String,
`pickup_cdeligibil` String,
`pickup_ntacode` FixedString(4),
`pickup_ntaname` String,
`pickup_puma` UInt16,
`dropoff_nyct2010_gid` UInt8,
`dropoff_ctlabel` Float32,
`dropoff_borocode` UInt8,
`dropoff_ct2010` String,
`dropoff_boroct2010` String,
`dropoff_cdeligibil` String,
`dropoff_ntacode` FixedString(4),
`dropoff_ntaname` String,
`dropoff_puma` UInt16
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime;
```
* clickhouse://default:***@localhost:8123/default
Done. | {"source_file": "jupysql.md"} | [
```sql
%%sql
INSERT INTO trips
SELECT * FROM s3(
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_{1..2}.gz',
'TabSeparatedWithNames', "
`trip_id` UInt32,
`vendor_id` Enum8('1' = 1, '2' = 2, '3' = 3, '4' = 4, 'CMT' = 5, 'VTS' = 6, 'DDS' = 7, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14, '' = 15),
`pickup_date` Date,
`pickup_datetime` DateTime,
`dropoff_date` Date,
`dropoff_datetime` DateTime,
`store_and_fwd_flag` UInt8,
`rate_code_id` UInt8,
`pickup_longitude` Float64,
`pickup_latitude` Float64,
`dropoff_longitude` Float64,
`dropoff_latitude` Float64,
`passenger_count` UInt8,
`trip_distance` Float64,
`fare_amount` Float32,
`extra` Float32,
`mta_tax` Float32,
`tip_amount` Float32,
`tolls_amount` Float32,
`ehail_fee` Float32,
`improvement_surcharge` Float32,
`total_amount` Float32,
`payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4),
`trip_type` UInt8,
`pickup` FixedString(25),
`dropoff` FixedString(25),
`cab_type` Enum8('yellow' = 1, 'green' = 2, 'uber' = 3),
`pickup_nyct2010_gid` Int8,
`pickup_ctlabel` Float32,
`pickup_borocode` Int8,
`pickup_ct2010` String,
`pickup_boroct2010` String,
`pickup_cdeligibil` String,
`pickup_ntacode` FixedString(4),
`pickup_ntaname` String,
`pickup_puma` UInt16,
`dropoff_nyct2010_gid` UInt8,
`dropoff_ctlabel` Float32,
`dropoff_borocode` UInt8,
`dropoff_ct2010` String,
`dropoff_boroct2010` String,
`dropoff_cdeligibil` String,
`dropoff_ntacode` FixedString(4),
`dropoff_ntaname` String,
`dropoff_puma` UInt16
") SETTINGS input_format_try_infer_datetimes = 0
```
* clickhouse://default:***@localhost:8123/default
Done.
```python
%sql SELECT count() FROM trips limit 5;
```
* clickhouse://default:***@localhost:8123/default
Done.
| count() |
|---------|
| 1999657 |
```python
%sql SELECT DISTINCT(pickup_ntaname) FROM trips limit 5;
```
* clickhouse://default:***@localhost:8123/default
Done.
| pickup_ntaname |
|----------------|
| Morningside Heights |
| Hudson Yards-Chelsea-Flatiron-Union Square |
| Midtown-Midtown South |
| SoHo-Tribeca-Civic Center-Little Italy |
| Murray Hill-Kips Bay |
```python
%sql SELECT round(avg(tip_amount), 2) FROM trips
```
* clickhouse://default:***@localhost:8123/default
Done.
| round(avg(tip_amount), 2) |
|---------------------------|
| 1.68 |
```sql
%%sql
SELECT
   passenger_count,
   ceil(avg(total_amount),2) AS average_total_amount
FROM trips
GROUP BY passenger_count
```
* clickhouse://default:***@localhost:8123/default
Done.
| passenger_count | average_total_amount |
|-----------------|----------------------|
| 0 | 22.69 |
| 1 | 15.97 |
| 2 | 17.15 |
| 3 | 16.76 |
| 4 | 17.33 |
| 5 | 16.35 |
| 6 | 16.04 |
| 7 | 59.8 |
| 8 | 36.41 |
| 9 | 9.81 |
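The query above uses ClickHouse's two-argument `ceil(avg(total_amount), 2)`, which rounds the average up at the second decimal place. The same grouping can be sketched in plain Python (the fares below are made up):

```python
import math
from collections import defaultdict

def ceil2(x):
    # Emulates ClickHouse ceil(x, 2): round up at the second decimal place.
    return math.ceil(x * 100) / 100

totals = defaultdict(list)
for passengers, amount in [(1, 15.0), (1, 16.0), (2, 17.001)]:
    totals[passengers].append(amount)

averages = {p: ceil2(sum(a) / len(a)) for p, a in totals.items()}
print(averages)
```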
```sql
%%sql
SELECT
   pickup_date,
   pickup_ntaname,
   SUM(1) AS number_of_trips
FROM trips
GROUP BY pickup_date, pickup_ntaname
ORDER BY pickup_date ASC
limit 5;
```
* clickhouse://default:***@localhost:8123/default
Done.
| pickup_date | pickup_ntaname | number_of_trips |
|-------------|----------------|-----------------|
| 2015-07-01 | Bushwick North | 2 |
| 2015-07-01 | Brighton Beach | 1 |
| 2015-07-01 | Briarwood-Jamaica Hills | 3 |
| 2015-07-01 | Williamsburg | 1 |
| 2015-07-01 | Queensbridge-Ravenswood-Long Island City | 9 |
```python
%sql DESCRIBE trips;
```
```python
%sql SELECT DISTINCT(trip_distance) FROM trips limit 50;
```
```sql
%%sql --save short-trips --no-execute
SELECT *
FROM trips
WHERE trip_distance < 6.3
```

* clickhouse://default:***@localhost:8123/default
Skipping execution...
```python
%sqlplot histogram --table short-trips --column trip_distance --bins 10 --with short-trips
```
response
<AxesSubplot: title={'center': "'trip_distance' from 'short-trips'"}, xlabel='trip_distance', ylabel='Count'>
```python
ax = %sqlplot histogram --table short-trips --column trip_distance --bins 50 --with short-trips
ax.grid()
ax.set_title("Trip distance from trips < 6.3")
_ = ax.set_xlabel("Trip distance")
```
<Image img={jupysql_plot_2} size="md" alt="Histogram showing distribution of trip distances with 50 bins and grid, titled 'Trip distance from trips < 6.3'" border /> | {"source_file": "jupysql.md"} | [
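`%sqlplot histogram --bins N` buckets `trip_distance` into N equal-width bins over the data range. The underlying binning idea can be sketched in plain Python (this mirrors the concept, not JupySQL's actual implementation):

```python
def histogram(values, bins):
    """Count values into `bins` equal-width buckets spanning [min, max]."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        # The maximum value is folded into the last bin.
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    return counts

print(histogram([0.5, 1.0, 1.5, 2.0, 5.9, 6.2], 3))
```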
sidebar_label: 'DbVisualizer'
slug: /integrations/dbvisualizer
description: 'DbVisualizer is a database tool with extended support for ClickHouse.'
title: 'Connecting DbVisualizer to ClickHouse'
keywords: ['DbVisualizer', 'database visualization', 'SQL client', 'JDBC driver', 'database tool']
doc_type: 'guide'
integration:
- support_level: 'partner'
- category: 'sql_client'
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import Image from '@theme/IdealImage';
import dbvisualizer_driver_manager from '@site/static/images/integrations/sql-clients/dbvisualizer-driver-manager.png';
import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained';
# Connecting DbVisualizer to ClickHouse

## Start or download DbVisualizer {#start-or-download-dbvisualizer}

DbVisualizer is available at https://www.dbvis.com/download/
## 1. Gather your connection details {#1-gather-your-connection-details}
## 2. Built-in JDBC driver management {#2-built-in-jdbc-driver-management}
DbVisualizer has the most up-to-date JDBC drivers for ClickHouse included. It has full JDBC driver management built right in that points to the latest releases as well as historical versions for the drivers.
## 3. Connect to ClickHouse {#3-connect-to-clickhouse}
To connect a database with DbVisualizer, you must first create and set up a Database Connection.
Create a new connection from **Database->Create Database Connection** and select a driver for your database from the popup menu.

An **Object View** tab for the new connection is opened.

Enter a name for the connection in the **Name** field, and optionally enter a description of the connection in the **Notes** field.

Leave the **Database Type** as **Auto Detect**.

If the selected driver in **Driver Type** is marked with a green check mark then it is ready to use. If it is not marked with a green check mark, you may have to configure the driver in the **Driver Manager**.
Enter information about the database server in the remaining fields.
Verify that a network connection can be established to the specified address and port by clicking the **Ping Server** button.

If the result from Ping Server shows that the server can be reached, click **Connect** to connect to the database server.
:::tip
See Fixing Connection Issues for some tips if you have problems connecting to the database.
:::
## Learn more {#learn-more}

To find more information about DbVisualizer, visit the DbVisualizer documentation.
sidebar_label: 'TABLUM.IO'
slug: /integrations/tablumio
description: 'TABLUM.IO is a data management SaaS that supports ClickHouse out of the box.'
title: 'Connecting TABLUM.IO to ClickHouse'
doc_type: 'guide'
integration:
- support_level: 'partner'
- category: 'sql_client'
keywords: ['tablum', 'sql client', 'database tool', 'query tool', 'desktop app']
import Image from '@theme/IdealImage';
import tablum_ch_0 from '@site/static/images/integrations/sql-clients/tablum-ch-0.png';
import tablum_ch_1 from '@site/static/images/integrations/sql-clients/tablum-ch-1.png';
import tablum_ch_2 from '@site/static/images/integrations/sql-clients/tablum-ch-2.png';
import tablum_ch_3 from '@site/static/images/integrations/sql-clients/tablum-ch-3.png';
import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained';
# Connecting TABLUM.IO to ClickHouse

## Open the TABLUM.IO startup page {#open-the-tablumio-startup-page}
:::note
You can install a self-hosted version of TABLUM.IO on your Linux server in Docker.
:::
## 1. Sign up or sign in to the service {#1-sign-up-or-sign-in-to-the-service}
First, sign up to TABLUM.IO using your email or use a quick-login via accounts in Google or Facebook.
## 2. Add a ClickHouse connector {#2-add-a-clickhouse-connector}
Gather your ClickHouse connection details, navigate to the **Connector** tab, and fill in the host URL, port, username, password, database name, and connector's name. After completing these fields, click the **Test connection** button to validate the details, and then click **Save connector for me** to make it persistent.
:::tip
Make sure that you specify the correct **HTTP** port and toggle **SSL** mode according to your connection details.
:::
:::tip
Typically, the port is 8443 when using TLS or 8123 when not using TLS.
:::
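That rule of thumb can be expressed directly (a trivial sketch; the ports are ClickHouse's default HTTP interface ports):

```python
def default_http_port(tls: bool) -> int:
    # 8443 is ClickHouse's default HTTPS port, 8123 the plain-HTTP port.
    return 8443 if tls else 8123

print(default_http_port(True), default_http_port(False))
```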
## 3. Select the connector {#3-select-the-connector}
Navigate to the **Dataset** tab. Select the recently created ClickHouse connector in the dropdown. In the right panel, you will see the list of available tables and schemas.
## 4. Input a SQL query and run it {#4-input-a-sql-query-and-run-it}
Type a query in the SQL Console and press **Run Query**. The results will be displayed as a spreadsheet.
:::tip
Right-click on the column name to open the dropdown menu with sort, filter and other actions.
:::
:::note
With TABLUM.IO you can:
* create and utilise multiple ClickHouse connectors within your TABLUM.IO account,
* run queries on any loaded data regardless of the data source,
* share the results as a new ClickHouse database.
:::
## Learn more {#learn-more}

Find more information about TABLUM.IO at https://tablum.io.
slug: /integrations/qstudio
sidebar_label: 'QStudio'
description: 'QStudio is a free SQL tool.'
title: 'Connect QStudio to ClickHouse'
doc_type: 'guide'
keywords: ['qstudio', 'sql client', 'database tool', 'query tool', 'ide']
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import qstudio_add_connection from '@site/static/images/integrations/sql-clients/qstudio-add-connection.png';
import qstudio_running_query from '@site/static/images/integrations/sql-clients/qstudio-running-query.png';
import Image from '@theme/IdealImage';
import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained';
# Connect QStudio to ClickHouse

QStudio is a free SQL GUI; it allows running SQL scripts, easy browsing of tables, charting, and exporting of results. It works on every operating system, with every database.

QStudio connects to ClickHouse using JDBC.
## 1. Gather your ClickHouse details {#1-gather-your-clickhouse-details}

QStudio uses JDBC over HTTP(S) to connect to ClickHouse; you need:

- endpoint
- port number
- username
- password
## 2. Download QStudio {#2-download-qstudio}
QStudio is available at https://www.timestored.com/qstudio/download/
## 3. Add a database {#3-add-a-database}
When you first open QStudio, click on the menu options **Server->Add Server** or on the add server button on the toolbar.

Then set the details:

- Server Type: Clickhouse.com
- Host: https://abc.def.clickhouse.cloud (note: for Host you MUST include https://)
- Port: 8443
- Username: default
- Password: `XXXXXXXXXXX`

Click Add.
If QStudio detects that you do not have the ClickHouse JDBC driver installed, it will offer to download it for you:
## 4. Query ClickHouse {#4-query-clickhouse}

Open a query editor and run a query. You can run queries by:

- Ctrl + e - Runs highlighted text
- Ctrl + Enter - Runs the current line

An example query:
## Next steps {#next-steps}

See QStudio to learn about the capabilities of QStudio, and the ClickHouse documentation to learn about the capabilities of ClickHouse.
0.03420175611972809,
-0.05180143192410469,
-0.08561048656702042,
-0.0023282812908291817,
-0.10585841536521912,
0.05954337120056152,
0.05114208534359932,
0.028017591685056686,
-0.0035806060768663883,
0.011172203347086906,
0.009629284031689167,
0.0012771988986060023,
0.10046910494565964,
0.0... |
49c26c1c-779a-462c-bf26-90b1f2c28e4a | sidebar_label: 'C#'
sidebar_position: 6
keywords: ['clickhouse', 'cs', 'c#', '.net', 'dotnet', 'csharp', 'client', 'driver', 'connect', 'integrate']
slug: /integrations/csharp
description: 'The official C# client for connecting to ClickHouse.'
title: 'ClickHouse C# Driver'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'language_client'
- website: 'https://github.com/ClickHouse/clickhouse-cs'
ClickHouse C# client
The official C# client for connecting to ClickHouse.
The client source code is available in the GitHub repository. Originally developed by Oleg V. Kozlyuk.
Migration guide {#migration-guide}
1. Update your `.csproj` file with the new package name `ClickHouse.Driver` and the latest version on NuGet.
2. Update all `ClickHouse.Client` references to `ClickHouse.Driver` in your codebase.
Supported .NET versions {#supported-net-versions}
`ClickHouse.Driver` supports the following .NET versions:
* .NET Framework 4.6.2
* .NET Framework 4.8
* .NET Standard 2.1
* .NET 6.0
* .NET 8.0
* .NET 9.0
Installation {#installation}
Install the package from NuGet:
```bash
dotnet add package ClickHouse.Driver
```
Or using the NuGet Package Manager:
```bash
Install-Package ClickHouse.Driver
```
Quick start {#quick-start}
```csharp
using ClickHouse.Driver.ADO;
using (var connection = new ClickHouseConnection("Host=my.clickhouse;Protocol=https;Port=8443;Username=user"))
{
var version = await connection.ExecuteScalarAsync("SELECT version()");
Console.WriteLine(version);
}
```
Using
Dapper
:
```csharp
using Dapper;
using ClickHouse.Driver.ADO;
using (var connection = new ClickHouseConnection("Host=my.clickhouse"))
{
    var result = await connection.QueryAsync<string>("SELECT name FROM system.databases");
Console.WriteLine(string.Join('\n', result));
}
```
Usage {#usage}
Connection string parameters {#connection-string} | {"source_file": "csharp.md"} | [
-0.05072968080639839,
-0.01587488129734993,
-0.05272218957543373,
-0.009226893074810505,
-0.025756515562534332,
0.02208799123764038,
0.034294456243515015,
0.01015828549861908,
-0.1005115881562233,
-0.00833838526159525,
-0.007465831935405731,
-0.038743115961551666,
0.06916709989309311,
-0.0... |
790e82f2-ffa8-4e2c-b54e-1ca92b92813e | Usage {#usage}
Connection string parameters {#connection-string}
| Parameter | Description | Default |
| ------------------- | ----------------------------------------------- | ------------------- |
| `Host` | ClickHouse server address | `localhost` |
| `Port` | ClickHouse server port | `8123` or `8443` (depending on `Protocol`) |
| `Database` | Initial database | `default` |
| `Username` | Authentication username | `default` |
| `Password` | Authentication password | (empty) |
| `Protocol` | Connection protocol (`http` or `https`) | `http` |
| `Compression` | Enables Gzip compression | `true` |
| `UseSession` | Enables persistent server session | `false` |
| `SessionId` | Custom session ID | Random GUID |
| `Timeout` | HTTP timeout (seconds) | `120` |
| `UseServerTimezone` | Use server timezone for datetime columns | `true` |
| `UseCustomDecimals` | Use `ClickHouseDecimal` for decimals | `false` |
Example:
```
Host=clickhouse;Port=8123;Username=default;Password=;Database=default
```
:::note Sessions
The `UseSession` flag enables persistence of the server session, allowing the use of `SET` statements and temporary tables. The session is reset after 60 seconds of inactivity (the default timeout); its lifetime can be extended by setting session settings via ClickHouse statements.
The `ClickHouseConnection` class normally allows parallel operation (multiple threads can run queries concurrently). However, enabling the `UseSession` flag limits this to one active query per connection at any moment in time (a server-side limitation).
:::
Connection lifetime and pooling {#connection-lifetime}
`ClickHouse.Driver` uses `System.Net.Http.HttpClient` under the hood. `HttpClient` has a per-endpoint connection pool. As a consequence:
* A `ClickHouseConnection` object does not map 1:1 to a TCP connection: multiple database sessions are multiplexed over several (2 by default) TCP connections per server.
* Connections can stay alive after the `ClickHouseConnection` object is disposed.
This behavior can be tweaked by passing a bespoke `HttpClient` with a custom `HttpClientHandler`. For DI environments, there is a dedicated constructor `ClickHouseConnection(string connectionString, IHttpClientFactory httpClientFactory, string httpClientName = "")` which allows HTTP client settings to be managed centrally.
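As a sketch of how that constructor might be wired up in a DI setup (the client name "clickhouse", the connection string, and the handler settings below are illustrative assumptions, not driver defaults):

```csharp
using System;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;
using ClickHouse.Driver.ADO;

// Hypothetical connection string; substitute your own.
var connectionString = "Host=localhost;Protocol=http;Port=8123";

// Register a named HttpClient whose primary handler controls pooling behavior.
var services = new ServiceCollection();
services.AddHttpClient("clickhouse", client =>
{
    client.Timeout = TimeSpan.FromSeconds(120);
}).ConfigurePrimaryHttpMessageHandler(() => new SocketsHttpHandler
{
    // Raise the per-server connection limit if 2 multiplexed connections are not enough.
    MaxConnectionsPerServer = 4
});

var provider = services.BuildServiceProvider();
var factory = provider.GetRequiredService<IHttpClientFactory>();

// The driver resolves its HttpClient from the factory by name.
using var connection = new ClickHouseConnection(connectionString, factory, "clickhouse");
connection.Open();
```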
Recommendations: | {"source_file": "csharp.md"} | [
-0.005011194851249456,
-0.023981191217899323,
-0.18447762727737427,
-0.010061634704470634,
-0.061525266617536545,
-0.019330468028783798,
-0.0013583910185843706,
0.002781977178528905,
-0.021876655519008636,
0.027391470968723297,
0.007045796141028404,
0.011302300728857517,
0.08059219270944595,... |
c2bd02b0-0bcf-4cbf-b7b9-31d84b772afa | Recommendations:
* A `ClickHouseConnection` represents a "session" with the server. It performs feature discovery by querying the server version (so there is a minor overhead on opening), but it is generally safe to create and destroy such objects multiple times.
* The recommended lifetime for a connection is one connection object per large "transaction" spanning multiple queries. Because of the minor startup overhead, creating a connection object for each query is not recommended.
* If an application operates on large volumes of transactions and needs to create/destroy `ClickHouseConnection` objects often, it is recommended to use `IHttpClientFactory` or a static instance of `HttpClient` to manage connections.
Creating a table {#creating-a-table}
Create a table using standard SQL syntax:
```csharp
using ClickHouse.Driver.ADO;
using (var connection = new ClickHouseConnection(connectionString))
{
connection.Open();
using (var command = connection.CreateCommand())
{
command.CommandText = "CREATE TABLE IF NOT EXISTS default.my_table (id Int64, name String) ENGINE = Memory";
command.ExecuteNonQuery();
}
}
```
Inserting data {#inserting-data}
Insert data using parameterized queries:
```csharp
using ClickHouse.Driver.ADO;
using (var connection = new ClickHouseConnection(connectionString))
{
connection.Open();
using (var command = connection.CreateCommand())
{
command.AddParameter("id", "Int64", 1);
command.AddParameter("name", "String", "test");
command.CommandText = "INSERT INTO default.my_table (id, name) VALUES ({id:Int64}, {name:String})";
command.ExecuteNonQuery();
}
}
```
Bulk insert {#bulk-insert}
Using `ClickHouseBulkCopy` requires:
* Target connection (a `ClickHouseConnection` instance)
* Target table name (the `DestinationTableName` property)
* Data source (`IDataReader` or `IEnumerable<object[]>`)
```csharp
using ClickHouse.Driver.ADO;
using ClickHouse.Driver.Copy;
using var connection = new ClickHouseConnection(connectionString);
connection.Open();
using var bulkCopy = new ClickHouseBulkCopy(connection)
{
DestinationTableName = "default.my_table",
BatchSize = 100000,
MaxDegreeOfParallelism = 2
};
await bulkCopy.InitAsync(); // Prepares ClickHouseBulkCopy instance by loading target column types
var values = Enumerable.Range(0, 1000000)
.Select(i => new object[] { (long)i, "value" + i });
await bulkCopy.WriteToServerAsync(values);
Console.WriteLine($"Rows written: {bulkCopy.RowsWritten}");
``` | {"source_file": "csharp.md"} | [
-0.02905004844069481,
-0.02970128133893013,
-0.10824187099933624,
0.05924312770366669,
-0.14008527994155884,
-0.05615021660923958,
0.07327257096767426,
-0.014884254895150661,
-0.008838318288326263,
0.024578187614679337,
-0.002453066408634186,
-0.009983071126043797,
0.07848493754863739,
-0.... |
582bf23f-218a-45a6-a29c-7116f9410c01 | await bulkCopy.WriteToServerAsync(values);
Console.WriteLine($"Rows written: {bulkCopy.RowsWritten}");
```
:::note
* For optimal performance, ClickHouseBulkCopy uses the Task Parallel Library (TPL) to process batches of data, with up to 4 parallel insertion tasks (this can be tuned).
* Column names can optionally be provided via the `ColumnNames` property if the source data has fewer columns than the target table.
* Configurable parameters: `Columns`, `BatchSize`, `MaxDegreeOfParallelism`.
* Before copying, a `SELECT * FROM <table> LIMIT 0` query is performed to obtain the target table structure. Types of the provided objects must reasonably match the target table.
* Sessions are not compatible with parallel insertion. The connection passed to `ClickHouseBulkCopy` must have sessions disabled, or `MaxDegreeOfParallelism` must be set to `1`.
:::
Performing SELECT queries {#performing-select-queries}
Execute SELECT queries and process results:
```csharp
using ClickHouse.Driver.ADO;
using System.Data;
using (var connection = new ClickHouseConnection(connectionString))
{
connection.Open();
using (var command = connection.CreateCommand())
{
command.AddParameter("id", "Int64", 10);
command.CommandText = "SELECT * FROM default.my_table WHERE id < {id:Int64}";
using var reader = command.ExecuteReader();
while (reader.Read())
{
Console.WriteLine($"select: Id: {reader.GetInt64(0)}, Name: {reader.GetString(1)}");
}
}
}
```
Raw streaming {#raw-streaming}
```csharp
using var command = connection.CreateCommand();
command.CommandText = "SELECT * FROM default.my_table LIMIT 100 FORMAT JSONEachRow";
using var result = await command.ExecuteRawResultAsync(CancellationToken.None);
using var stream = await result.ReadAsStreamAsync();
using var reader = new StreamReader(stream);
var json = reader.ReadToEnd();
```
Nested columns support {#nested-columns}
ClickHouse nested types (`Nested(...)`) can be read and written using array semantics.
```sql
CREATE TABLE test.nested (
    id UInt32,
    params Nested (param_id UInt8, param_val String)
) ENGINE = Memory
```
```csharp
using var bulkCopy = new ClickHouseBulkCopy(connection)
{
DestinationTableName = "test.nested"
};
var row1 = new object[] { 1, new[] { 1, 2, 3 }, new[] { "v1", "v2", "v3" } };
var row2 = new object[] { 2, new[] { 4, 5, 6 }, new[] { "v4", "v5", "v6" } };
await bulkCopy.WriteToServerAsync(new[] { row1, row2 });
```
AggregateFunction columns {#aggregatefunction-columns}
Columns of type `AggregateFunction(...)` cannot be queried or inserted directly.
To insert:
```sql
INSERT INTO t VALUES (uniqState(1));
```
To select:
```sql
SELECT uniqMerge(c) FROM t;
```
SQL parameters {#sql-parameters}
To pass parameters in a query, ClickHouse parameter formatting must be used, in the following form:
```sql
{<name>:<data type>}
```
Examples:
```sql
SELECT {value:Array(UInt16)} as value
```
```sql
SELECT * FROM table WHERE val = {tuple_in_tuple:Tuple(UInt8, Tuple(String, UInt8))}
```
 | {"source_file": "csharp.md"} | [
0.005340615287423134,
-0.06877580285072327,
-0.09839366376399994,
0.05684272199869156,
-0.04175247624516487,
-0.055565543472766876,
-0.011613554321229458,
-0.020928584039211273,
-0.042350996285676956,
0.034459397196769714,
0.05932772904634476,
-0.06590267270803452,
0.020877515897154808,
-0... |
ac119539-403b-4a76-b3d4-ea1973483e72 |
```sql
{<name>:<data type>}
```
Examples:
```sql
SELECT {value:Array(UInt16)} as value
```
```sql
SELECT * FROM table WHERE val = {tuple_in_tuple:Tuple(UInt8, Tuple(String, UInt8))}
```
```sql
INSERT INTO table VALUES ({val1:Int32}, {val2:Array(UInt8)})
```
:::note
* SQL 'bind' parameters are passed as HTTP URI query parameters, so using too many of them may result in a "URL too long" exception.
* To insert a large volume of records, consider using the Bulk Insert functionality.
:::
Supported data types {#supported-data-types}
ClickHouse.Driver
supports the following ClickHouse data types with their corresponding .NET type mappings:
Boolean types {#boolean-types}
* `Bool` → `bool`
Numeric types {#numeric-types}
Signed Integers:
* `Int8` → `sbyte`
* `Int16` → `short`
* `Int32` → `int`
* `Int64` → `long`
* `Int128` → `BigInteger`
* `Int256` → `BigInteger`
Unsigned Integers:
* `UInt8` → `byte`
* `UInt16` → `ushort`
* `UInt32` → `uint`
* `UInt64` → `ulong`
* `UInt128` → `BigInteger`
* `UInt256` → `BigInteger`
Floating Point:
* `Float32` → `float`
* `Float64` → `double`
Decimal:
* `Decimal` → `decimal`
* `Decimal32` → `decimal`
* `Decimal64` → `decimal`
* `Decimal128` → `decimal`
* `Decimal256` → `BigDecimal`
String types {#string-types}
* `String` → `string`
* `FixedString` → `string`
Date and time types {#date-time-types}
* `Date` → `DateTime`
* `Date32` → `DateTime`
* `DateTime` → `DateTime`
* `DateTime32` → `DateTime`
* `DateTime64` → `DateTime`
Network types {#network-types}
* `IPv4` → `IPAddress`
* `IPv6` → `IPAddress`
Geographic types {#geographic-types}
* `Point` → `Tuple`
* `Ring` → array of `Point`s
* `Polygon` → array of `Ring`s
Complex types {#complex-types}
* `Array(T)` → array of any supported type
* `Tuple(T1, T2, ...)` → tuple of any supported types
* `Nullable(T)` → nullable version of any supported type
* `Map(K, V)` → `Dictionary<K, V>`
DateTime handling {#datetime-handling}
`ClickHouse.Driver` tries to handle timezones and the `DateTime.Kind` property correctly. Specifically:
* `DateTime` values are returned as UTC. Users can convert them themselves or use the `ToLocalTime()` method on the `DateTime` instance.
* When inserting, `DateTime` values are handled in the following way:
  * UTC `DateTime`s are inserted as-is, because ClickHouse stores them in UTC internally.
  * Local `DateTime`s are converted to UTC according to the user's local timezone settings.
  * Unspecified `DateTime`s are considered to be in the target column's timezone, and hence are converted to UTC according to that timezone. For columns without a timezone specified, the client timezone is used by default (legacy behavior); the `UseServerTimezone` flag in the connection string can be used to use the server timezone instead.
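The three `DateTimeKind` cases above can be illustrated with a small sketch (the table `default.events`, its `ts` column, and the connection string are hypothetical):

```csharp
using System;
using ClickHouse.Driver.ADO;

// Hypothetical connection string; substitute your own.
using var connection = new ClickHouseConnection("Host=localhost");
connection.Open();
using var command = connection.CreateCommand();

// Kind = Utc: inserted as-is, since ClickHouse stores DateTime values in UTC.
var utc = new DateTime(2024, 1, 1, 12, 0, 0, DateTimeKind.Utc);

// Kind = Local: converted to UTC using this machine's local timezone.
var local = new DateTime(2024, 1, 1, 12, 0, 0, DateTimeKind.Local);

// Kind = Unspecified: interpreted in the target column's timezone
// (or the client/server timezone for columns without one, per the flag above).
var unspecified = new DateTime(2024, 1, 1, 12, 0, 0, DateTimeKind.Unspecified);

command.AddParameter("ts", "DateTime", utc);
command.CommandText = "INSERT INTO default.events (ts) VALUES ({ts:DateTime})";
command.ExecuteNonQuery();
```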
Environment variables {#environment-variables}
You can set defaults using environment variables: | {"source_file": "csharp.md"} | [
0.002113230060786009,
-0.07404393702745438,
-0.0929090827703476,
0.04954903945326805,
-0.11486044526100159,
-0.06972174346446991,
-0.0094459168612957,
0.0003511605900712311,
-0.08767111599445343,
0.02143656276166439,
-0.013136088848114014,
-0.026262659579515457,
0.08865687996149063,
-0.054... |
a1bc4c49-5f01-4261-ac7d-039e3d85670c | Environment variables {#environment-variables}
You can set defaults using environment variables:
| Variable | Purpose |
| --------------------- | ---------------- |
| `CLICKHOUSE_DB` | Default database |
| `CLICKHOUSE_USER` | Default username |
| `CLICKHOUSE_PASSWORD` | Default password |
:::note
Values provided explicitly to the `ClickHouseConnection` constructor will take priority over environment variables.
:::
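For example, these could be exported in a shell before launching the application (the values below are placeholders, not recommended settings):

```shell
# Placeholder values; substitute your own deployment's settings
export CLICKHOUSE_DB=default
export CLICKHOUSE_USER=default
export CLICKHOUSE_PASSWORD=changeme
```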
ORM & Dapper support {#orm-support}
ClickHouse.Driver
supports Dapper (with limitations).
Working example:
```csharp
connection.QueryAsync<string>(
    "SELECT {p1:Int32}",
    new Dictionary<string, object> { { "p1", 42 } }
);
```
Not supported:
```csharp
connection.QueryAsync<string>(
    "SELECT {p1:Int32}",
    new { p1 = 42 }
);
```
 | {"source_file": "csharp.md"} | [
-0.03622229024767876,
-0.03174779936671257,
-0.06474671512842178,
0.0684325248003006,
-0.1297024041414261,
-0.03527991846203804,
0.05062974989414215,
0.006091176066547632,
-0.08964040130376816,
-0.0005898149101994932,
-0.01132582500576973,
-0.10106191039085388,
0.06645156443119049,
0.00817... |