| id (string, 36 chars) | document (string, 3–3k chars) | metadata (string, 23–69 chars) | embeddings (list, 384 floats) |
|---|---|---|---|
7f668312-8a37-46a7-bae3-b6444fd1bce1 | | Setting | Description | Default value |
|---|---|---|
| `fsync_after_insert` | Do the `fsync` for the file data after a background insert to Distributed. Guarantees that the OS flushed the whole inserted data to a file on the **initiator node** disk. | `false` |
| `fsync_directories` | Do the `fsync` for directories. Guarantees that the OS refreshed directory metadata after operations related to background inserts on the Distributed table (after insert, after sending the data to a shard, etc.). | `false` |
| `skip_unavailable_shards` | If true, ClickHouse silently skips unavailable shards. A shard is marked as unavailable when: 1) the shard cannot be reached due to a connection failure; 2) the shard is unresolvable through DNS; 3) the table does not exist on the shard. | `false` |
| `bytes_to_throw_insert` | If more than this number of compressed bytes is pending for a background `INSERT`, an exception is thrown. `0` - do not throw. | `0` |
| `bytes_to_delay_insert` | If more than this number of compressed bytes is pending for a background `INSERT`, the query is delayed. `0` - do not delay. | `0` |
| `max_delay_to_insert` | Max delay of inserting data into a Distributed table in seconds, if there are a lot of pending bytes for background send. | `60` |
| `background_insert_batch` | The same as `distributed_background_insert_batch`. | `0` |
| `background_insert_split_batch_on_failure` | The same as `distributed_background_insert_split_batch_on_failure`. | `0` |
| `background_insert_sleep_time_ms` | The same as `distributed_background_insert_sleep_time_ms`. | `0` | | {"source_file": "distributed.md"} | [
0.04714751988649368,
0.06623343378305435,
-0.021558238193392754,
0.0017757569439709187,
-0.07634888589382172,
0.07680146396160126,
0.03045761026442051,
0.0635383278131485,
0.032603006809949875,
-0.04860292747616768,
0.022856781259179115,
-0.07949528098106384,
-0.0143241286277771,
-0.040719... |
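The two `fsync_*` settings above map onto the usual OS-level durability calls: `fsync` on the file descriptor for the data, and `fsync` on the containing directory for the metadata. A minimal Python sketch of that pattern (the `durable_write` helper is hypothetical, and the directory `fsync` as shown works on POSIX systems only):

```python
import os
import tempfile

def durable_write(path: str, data: bytes, fsync_file: bool = True, fsync_dir: bool = True) -> None:
    """Write data, then optionally fsync the file and its directory,
    mirroring what fsync_after_insert / fsync_directories guarantee."""
    with open(path, "wb") as f:
        f.write(data)
        if fsync_file:
            f.flush()
            os.fsync(f.fileno())  # force file data to stable storage
    if fsync_dir:
        dir_fd = os.open(os.path.dirname(path), os.O_RDONLY)
        try:
            os.fsync(dir_fd)      # force the new directory entry to disk
        finally:
            os.close(dir_fd)

tmp_dir = tempfile.mkdtemp()
durable_write(os.path.join(tmp_dir, "insert.bin"), b"rows")
```

Both calls add latency per insert, which is why the documentation warns that these settings may significantly decrease `INSERT` performance.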
015b6017-cf02-4477-9c05-b265c2a16fa5 | | The same as `distributed_background_insert_sleep_time_ms`. | `0` |
| `background_insert_max_sleep_time_ms` | The same as `distributed_background_insert_max_sleep_time_ms`. | `0` |
| `flush_on_detach` | Flush data to remote nodes on `DETACH`/`DROP`/server shutdown. | `true` | | {"source_file": "distributed.md"} | [
0.0561349131166935,
-0.029623430222272873,
-0.06822426617145538,
0.06182194873690605,
0.03472646698355675,
-0.020783698186278343,
-0.048006098717451096,
-0.019107520580291748,
0.004532132763415575,
0.018825536593794823,
0.08430399000644684,
0.05910854786634445,
0.09452630579471588,
-0.0877... |
013f5334-0c59-4c0d-a659-32b84a5087f7 | :::note
**Durability settings** (`fsync_...`):
- Affect only background `INSERT`s (i.e. `distributed_foreground_insert=false`), when the data is first stored on the initiator node disk and later, in the background, sent to the shards.
- May significantly decrease `INSERT` performance.
- Affect writing the data stored inside the Distributed table folder into the **node which accepted your insert**. If you need guarantees of writing data to the underlying MergeTree tables, see the durability settings (`...fsync...`) in `system.merge_tree_settings`.

For **insert limit settings** (`..._insert`) see also:
- `distributed_foreground_insert` setting
- `prefer_localhost_replica` setting
- `bytes_to_throw_insert` is handled before `bytes_to_delay_insert`, so you should not set it to a value less than `bytes_to_delay_insert`.
:::

**Example**

```sql
CREATE TABLE hits_all AS hits
ENGINE = Distributed(logs, default, hits[, sharding_key[, policy_name]])
SETTINGS
    fsync_after_insert=0,
    fsync_directories=0;
```

Data will be read from all servers in the `logs` cluster, from the `default.hits` table located on every server in the cluster. Data is not only read but is partially processed on the remote servers (to the extent that this is possible). For example, for a query with `GROUP BY`, data will be aggregated on remote servers, and the intermediate states of aggregate functions will be sent to the requesting server. Then the data will be further aggregated.
Instead of the database name, you can use a constant expression that returns a string. For example: `currentDatabase()`.
## Clusters {#distributed-clusters}

Clusters are configured in the server configuration file:
```xml
<!-- Inter-server per-cluster secret for Distributed queries
default: no secret (no authentication will be performed)
If set, then Distributed queries will be validated on shards, so at least:
- such cluster should exist on the shard,
- such cluster should have the same secret.
And also (and which is more important), the initial_user will
be used as current user for the query.
-->
<!-- <secret></secret> -->
<!-- Optional. Whether distributed DDL queries (ON CLUSTER clause) are allowed for this cluster. Default: true (allowed). -->
<!-- <allow_distributed_ddl_queries>true</allow_distributed_ddl_queries> -->
```
 | {"source_file": "distributed.md"} | [
0.0675327330827713,
-0.06821397691965103,
-0.03885531798005104,
0.0616462416946888,
0.007714917417615652,
-0.11257950216531754,
-0.06705895066261292,
0.05562878027558327,
0.024607911705970764,
0.0708441436290741,
0.031915124505758286,
0.013546423986554146,
0.09990272670984268,
-0.068149454... |
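The two-phase aggregation described above (partial aggregate states computed on the shards, then a final merge on the initiator) can be sketched in Python; the shard data and helper names here are illustrative, not ClickHouse internals:

```python
from collections import Counter

# Each "shard" holds part of the data; hypothetical rows of (key, hits).
shard_1 = [("a", 1), ("b", 2), ("a", 3)]
shard_2 = [("b", 4), ("c", 5)]

def partial_aggregate(rows):
    """Remote server: aggregate its local data into an intermediate state."""
    state = Counter()
    for key, value in rows:
        state[key] += value
    return state

def merge_states(states):
    """Initiator: merge the intermediate states sent back by the shards."""
    total = Counter()
    for state in states:
        total.update(state)
    return dict(total)

result = merge_states([partial_aggregate(shard_1), partial_aggregate(shard_2)])
print(result)  # {'a': 4, 'b': 6, 'c': 5}
```

Only the compact intermediate states cross the network, not the raw rows, which is what makes distributed `GROUP BY` efficient.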
fbeeb1ad-55b1-4fb8-bf21-084374afa01a | ```xml
<shard>
<!-- Optional. Shard weight when writing data. Default: 1. -->
<weight>1</weight>
<!-- Optional. The shard name. Must be non-empty and unique among shards in the cluster. If not specified, will be empty. -->
<name>shard_01</name>
<!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
<internal_replication>false</internal_replication>
<replica>
<!-- Optional. Priority of the replica for load balancing (see also load_balancing setting). Default: 1 (less value has more priority). -->
<priority>1</priority>
<host>example01-01-1</host>
<port>9000</port>
</replica>
<replica>
<host>example01-01-2</host>
<port>9000</port>
</replica>
</shard>
<shard>
<weight>2</weight>
<name>shard_02</name>
<internal_replication>false</internal_replication>
<replica>
<host>example01-02-1</host>
<port>9000</port>
</replica>
<replica>
<host>example01-02-2</host>
<secure>1</secure>
<port>9440</port>
</replica>
</shard>
</logs>
```
Here a cluster is defined with the name `logs` that consists of two shards, each of which contains two replicas. Shards refer to the servers that contain different parts of the data (in order to read all the data, you must access all the shards). Replicas are duplicating servers (in order to read all the data, you can access the data on any one of the replicas).
Cluster names must not contain dots.
The parameters `host`, `port`, and optionally `user`, `password`, `secure`, `compression`, and `bind_host` are specified for each server: | {"source_file": "distributed.md"} | [
0.04081007093191147,
-0.04108165577054024,
0.0024188326206058264,
0.012595501728355885,
-0.061342671513557434,
-0.05432233214378357,
-0.07219541817903519,
-0.03363003954291344,
0.008040365763008595,
0.022780517116189003,
0.017725206911563873,
-0.035609859973192215,
0.10954858362674713,
-0.... |
0c7d69c5-17ab-4b4f-a2ad-576d349925b0 | | Parameter | Description | Default Value |
|---|---|---|
| `host` | The address of the remote server. You can use either the domain or the IPv4 or IPv6 address. If you specify the domain, the server makes a DNS request when it starts, and the result is stored as long as the server is running. If the DNS request fails, the server does not start. If you change the DNS record, restart the server. | - |
| `port` | The TCP port for messenger activity (`tcp_port` in the config, usually set to 9000). Not to be confused with `http_port`. | - |
| `user` | Name of the user for connecting to a remote server. This user must have access to connect to the specified server. Access is configured in the `users.xml` file. For more information, see the section Access rights. | `default` |
| `password` | The password for connecting to a remote server (not masked). | `''` |
| `secure` | Whether to use a secure SSL/TLS connection. Usually also requires specifying the port (the default secure port is `9440`). The server should listen on `<tcp_port_secure>9440</tcp_port_secure>` and be configured with correct certificates. | `false` |
| `compression` | Use data compression. | `true` |
| `bind_host` | {"source_file": "distributed.md"} | [
0.042534567415714264,
0.08093276619911194,
-0.06253547221422195,
0.007405699696391821,
-0.09870737791061401,
0.04604874551296234,
0.03231581673026085,
0.06422016024589539,
-0.0005844180122949183,
-0.05037815496325493,
0.039196230471134186,
-0.08023396879434586,
0.0016821924364194274,
-0.06... |
cefcaf30-64d1-440a-9454-9496d60bdf83 | true |
| `bind_host` | The source address to use when connecting to the remote server from this node. Only IPv4 addresses are supported. Intended for advanced deployment use cases where the source IP address used by ClickHouse distributed queries needs to be set. | - | | {"source_file": "distributed.md"} | [
0.015183391980826855,
-0.024640817195177078,
-0.07721133530139923,
-0.00016478955512866378,
-0.011923103593289852,
-0.028205212205648422,
-0.00943364854902029,
-0.074575275182724,
-0.013277123682200909,
0.041607923805713654,
0.007806520443409681,
0.024605272337794304,
0.0757402554154396,
-... |
901a519c-07fb-4a92-a329-526009ec6ddd | When specifying replicas, one of the available replicas will be selected for each of the shards when reading. You can configure the algorithm for load balancing (the preference for which replica to access) β see the
load_balancing
setting. If the connection with the server is not established, there will be an attempt to connect with a short timeout. If the connection failed, the next replica will be selected, and so on for all the replicas. If the connection attempt failed for all the replicas, the attempt will be repeated the same way, several times. This works in favour of resiliency, but does not provide complete fault tolerance: a remote server might accept the connection, but might not work, or work poorly.
You can specify just one of the shards (in this case, query processing should be called remote, rather than distributed) or up to any number of shards. In each shard, you can specify from one to any number of replicas. You can specify a different number of replicas for each shard.
You can specify as many clusters as you wish in the configuration.
To view your clusters, use the
system.clusters
table.
The
Distributed
engine allows working with a cluster like a local server. However, the cluster's configuration cannot be specified dynamically, it has to be configured in the server config file. Usually, all servers in a cluster will have the same cluster config (though this is not required). Clusters from the config file are updated on the fly, without restarting the server.
If you need to send a query to an unknown set of shards and replicas each time, you do not need to create a
Distributed
table β use the
remote
table function instead. See the section
Table functions
.
## Writing data {#distributed-writing-data}

There are two methods for writing data to a cluster:
First, you can define which servers to write which data to and perform the write directly on each shard. In other words, perform direct `INSERT` statements on the remote tables in the cluster that the `Distributed` table is pointing to. This is the most flexible solution, as you can use any sharding scheme, even one that is non-trivial due to the requirements of the subject area. This is also the most optimal solution, since data can be written to different shards completely independently.
Second, you can perform `INSERT` statements on a `Distributed` table. In this case, the table will distribute the inserted data across the servers itself. In order to write to a `Distributed` table, it must have the `sharding_key` parameter configured (except if there is only one shard). | {"source_file": "distributed.md"} | [
0.034380365163087845,
-0.03986341506242752,
0.00045990594662725925,
0.02871391549706459,
-0.06561513245105743,
-0.015225442126393318,
-0.06633166968822479,
-0.025197677314281464,
0.03799377381801605,
0.01696356013417244,
-0.08260311186313629,
0.03719817101955414,
0.09440824389457703,
-0.02... |
347fc2c5-e549-4c67-846c-ca6763eb46d3 | Each shard can have a `<weight>` defined in the config file. By default, the weight is `1`. Data is distributed across shards in an amount proportional to the shard weight. All shard weights are summed up, then each shard's weight is divided by the total to determine each shard's proportion. For example, if there are two shards and the first has a weight of 1 while the second has a weight of 2, the first will be sent one third (1 / 3) of the inserted rows and the second will be sent two thirds (2 / 3).
Each shard can have the `internal_replication` parameter defined in the config file. If this parameter is set to `true`, the write operation selects the first healthy replica and writes data to it. Use this if the tables underlying the `Distributed` table are replicated tables (e.g. any of the `Replicated*MergeTree` table engines). One of the table replicas will receive the write, and it will be replicated to the other replicas automatically.
If `internal_replication` is set to `false` (the default), data is written to all replicas. In this case, the `Distributed` table replicates the data itself. This is worse than using replicated tables because the consistency of replicas is not checked and, over time, they will contain slightly different data.
To select the shard that a row of data is sent to, the sharding expression is analyzed, and the remainder is taken from dividing it by the total weight of the shards. The row is sent to the shard that corresponds to the half-interval of the remainders from `prev_weights` to `prev_weights + weight`, where `prev_weights` is the total weight of the shards with the smallest numbers, and `weight` is the weight of this shard. For example, if there are two shards, and the first has a weight of 9 while the second has a weight of 10, the row will be sent to the first shard for the remainders from the range [0, 9), and to the second for the remainders from the range [9, 19).
The sharding expression can be any expression from constants and table columns that returns an integer. For example, you can use the expression `rand()` for random distribution of data, or `UserID` for distribution by the remainder from dividing the user's ID (then the data of a single user will reside on a single shard, which simplifies running `IN` and `JOIN` by users). If one of the columns is not distributed evenly enough, you can wrap it in a hash function, e.g. `intHash64(UserID)`.
A simple remainder from the division is a limited solution for sharding and isn't always appropriate. It works for medium and large volumes of data (dozens of servers), but not for very large volumes of data (hundreds of servers or more). In the latter case, use the sharding scheme required by the subject area rather than using entries in `Distributed` tables.
You should be concerned about the sharding scheme in the following cases: | {"source_file": "distributed.md"} | [
0.03243476152420044,
-0.07878731936216354,
-0.0233160313218832,
0.049758780747652054,
-0.04365444928407669,
-0.08538928627967834,
-0.05023125931620598,
-0.01913071796298027,
0.05651368200778961,
0.053935762494802475,
0.010275052860379219,
0.004224696196615696,
0.10471944510936737,
-0.05873... |
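The half-interval selection rule above can be sketched in Python; `pick_shard` is a hypothetical helper, not ClickHouse code:

```python
def pick_shard(sharding_value: int, weights: list) -> int:
    """Return the shard index for a row, following the half-interval rule:
    the remainder of the sharding expression modulo the total weight falls
    into [prev_weights, prev_weights + weight) for exactly one shard."""
    remainder = sharding_value % sum(weights)
    prev_weights = 0
    for shard, weight in enumerate(weights):
        if prev_weights <= remainder < prev_weights + weight:
            return shard
        prev_weights += weight
    raise AssertionError("unreachable: remainder always falls in some interval")

# Two shards with weights 9 and 10 (total 19), as in the text:
print(pick_shard(5, [9, 10]))   # 0  (remainder 5 is in [0, 9))
print(pick_shard(12, [9, 10]))  # 1  (remainder 12 is in [9, 19))
```

Because the intervals partition `[0, total_weight)`, each row lands on exactly one shard, and the expected share of rows per shard is proportional to its weight.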
8b1aeedf-56e2-439b-8b5a-da0eaf89dab7 | You should be concerned about the sharding scheme in the following cases:
- Queries are used that require joining data (`IN` or `JOIN`) by a specific key. If data is sharded by this key, you can use local `IN` or `JOIN` instead of `GLOBAL IN` or `GLOBAL JOIN`, which is much more efficient.
- A large number of servers is used (hundreds or more) with a large number of small queries, for example, queries for data of individual clients (e.g. websites, advertisers, or partners). In order for the small queries to not affect the entire cluster, it makes sense to locate data for a single client on a single shard. Alternatively, you can set up bi-level sharding: divide the entire cluster into "layers", where a layer may consist of multiple shards. Data for a single client is located on a single layer, but shards can be added to a layer as necessary, and data is randomly distributed within them. `Distributed` tables are created for each layer, and a single shared distributed table is created for global queries.

Data is written in the background. When data is inserted into the table, the data block is just written to the local file system. The data is sent to the remote servers in the background as soon as possible. The periodicity for sending data is managed by the `distributed_background_insert_sleep_time_ms` and `distributed_background_insert_max_sleep_time_ms` settings. The `Distributed` engine sends each file with inserted data separately, but you can enable batch sending of files with the `distributed_background_insert_batch` setting. This setting improves cluster performance by better utilizing local server and network resources. You should check whether data is sent successfully by checking the list of files (data waiting to be sent) in the table directory: `/var/lib/clickhouse/data/database/table/`. The number of threads performing background tasks can be set by the `background_distributed_schedule_pool_size` setting.
If the server ceased to exist or had a rough restart (for example, due to a hardware failure) after an `INSERT` to a `Distributed` table, the inserted data might be lost. If a damaged data part is detected in the table directory, it is transferred to the `broken` subdirectory and no longer used.

## Reading data {#distributed-reading-data}

When querying a `Distributed` table, `SELECT` queries are sent to all shards and work regardless of how data is distributed across the shards (they can be distributed completely randomly). When you add a new shard, you do not have to transfer old data into it. Instead, you can write new data to it with a heavier weight; the data will be distributed slightly unevenly, but queries will work correctly and efficiently.
When the `max_parallel_replicas` option is enabled, query processing is parallelized across all replicas within a single shard. For more information, see the section `max_parallel_replicas`. | {"source_file": "distributed.md"} | [
0.06917402148246765,
0.013368726707994938,
0.025213882327079773,
0.022403722628951073,
-0.021023986861109734,
-0.013589314185082912,
-0.04443889856338501,
-0.03076479397714138,
0.06393042206764221,
-0.025328727439045906,
-0.02855433151125908,
-0.000005342230451788055,
0.09505389630794525,
... |
7bf46edb-9258-48ba-b3f9-e9bd4e2225b0 | When the `max_parallel_replicas` option is enabled, query processing is parallelized across all replicas within a single shard. For more information, see the section `max_parallel_replicas`.
To learn more about how distributed `in` and `global in` queries are processed, refer to this documentation.

## Virtual columns {#virtual-columns}

### _shard_num {#_shard_num}

`_shard_num`: Contains the `shard_num` value from the table `system.clusters`. Type: `UInt32`.
:::note
Since the `remote` and [cluster](../../../sql-reference/table-functions/cluster.md) table functions internally create a temporary Distributed table, `_shard_num` is available there too.
:::

See Also
- Virtual columns description
- `background_distributed_schedule_pool_size` setting
- `shardNum()` and `shardCount()` functions | {"source_file": "distributed.md"} | [
0.06931410729885101,
-0.08072610199451447,
-0.09303190559148788,
-0.00427060853689909,
-0.016586946323513985,
-0.06217537447810173,
-0.07948958873748779,
-0.038230251520872116,
0.016259098425507545,
0.07931999117136002,
0.022317146882414818,
-0.02421342022716999,
0.06032680347561836,
-0.05... |
aac878e2-1e72-4d1b-b85d-4ab507c493b8 | description: 'The File table engine keeps the data in a file in one of the supported file formats (TabSeparated, Native, etc.).'
sidebar_label: 'File'
sidebar_position: 40
slug: /engines/table-engines/special/file
title: 'File table engine'
doc_type: 'reference'

# File table engine

The File table engine keeps the data in a file in one of the supported file formats (`TabSeparated`, `Native`, etc.).
Usage scenarios:
- Data export from ClickHouse to file.
- Convert data from one format to another.
- Updating data in ClickHouse via editing a file on a disk.
:::note
This engine is not currently available in ClickHouse Cloud; please use the S3 table function instead.
:::

## Usage in ClickHouse Server {#usage-in-clickhouse-server}

```sql
File(Format)
```

The `Format` parameter specifies one of the available file formats. To perform `SELECT` queries, the format must be supported for input, and to perform `INSERT` queries, for output. The available formats are listed in the Formats section.
ClickHouse does not allow specifying a filesystem path for `File`. It will use the folder defined by the `path` setting in the server configuration.
When creating a table using `File(Format)`, it creates an empty subdirectory in that folder. When data is written to that table, it's put into a `data.Format` file in that subdirectory.
You may manually create this subfolder and file in the server filesystem and then `ATTACH` it to table information with a matching name, so you can query data from that file.
:::note
Be careful with this functionality, because ClickHouse does not keep track of external changes to such files. The result of simultaneous writes via ClickHouse and outside of ClickHouse is undefined.
:::
## Example {#example}

1. Set up the `file_engine_table` table:

```sql
CREATE TABLE file_engine_table (name String, value UInt32) ENGINE=File(TabSeparated)
```

By default ClickHouse will create the folder `/var/lib/clickhouse/data/default/file_engine_table`.

2. Manually create `/var/lib/clickhouse/data/default/file_engine_table/data.TabSeparated` containing:

```bash
$ cat data.TabSeparated
one 1
two 2
```

3. Query the data:

```sql
SELECT * FROM file_engine_table
```

```text
┌─name─┬─value─┐
│ one  │     1 │
│ two  │     2 │
└──────┴───────┘
```

## Usage in ClickHouse-local {#usage-in-clickhouse-local}

In `clickhouse-local`, the File engine accepts a file path in addition to `Format`. Default input/output streams can be specified using numeric or human-readable names like `0` or `stdin`, `1` or `stdout`. It is possible to read and write compressed files based on an additional engine parameter or file extension (`gz`, `br` or `xz`).
Example:

```bash
$ echo -e "1,2\n3,4" | clickhouse-local -q "CREATE TABLE table (a Int64, b Int64) ENGINE = File(CSV, stdin); SELECT a, b FROM table; DROP TABLE table"
```
## Details of Implementation {#details-of-implementation}

Multiple `SELECT` queries can be performed concurrently, but `INSERT` queries will wait for each other. | {"source_file": "file.md"} | [
0.01421048492193222,
-0.056513044983148575,
-0.08953996002674103,
0.03955169767141342,
0.025675080716609955,
0.0003773650387302041,
0.021885521709918976,
-0.005208734422922134,
-0.008189870975911617,
0.06070173159241676,
0.06643862277269363,
0.04788297787308693,
0.050112225115299225,
-0.11... |
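Choosing a (de)compressor from the file extension, as `clickhouse-local` does for `gz`, `br`, and `xz`, can be sketched with Python's standard library (the `open_maybe_compressed` helper is hypothetical; Brotli is omitted because it is not in the stdlib):

```python
import bz2
import gzip
import lzma
import os
import tempfile

# Map file extensions to stdlib openers that share the open()-like signature.
OPENERS = {".gz": gzip.open, ".xz": lzma.open, ".bz2": bz2.open}

def open_maybe_compressed(path: str, mode: str = "rb"):
    """Pick a (de)compressor from the file extension, plain open otherwise."""
    for ext, opener in OPENERS.items():
        if path.endswith(ext):
            return opener(path, mode)
    return open(path, mode)

# Round-trip some CSV data through a gzip-compressed file:
tmp = os.path.join(tempfile.mkdtemp(), "data.csv.gz")
with open_maybe_compressed(tmp, "wb") as f:
    f.write(b"1,2\n3,4\n")
with open_maybe_compressed(tmp, "rb") as f:
    print(f.read())  # b'1,2\n3,4\n'
```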
802f556b-d0d9-4e24-8e48-402893f5f3d5 | ## Details of Implementation {#details-of-implementation}

Multiple `SELECT` queries can be performed concurrently, but `INSERT` queries will wait for each other.
Creating a new file by an `INSERT` query is supported.
If the file exists, `INSERT` will append new values to it.
Not supported:
- `ALTER`
- `SELECT ... SAMPLE`
- Indices
- Replication

## PARTITION BY {#partition-by}

`PARTITION BY`: Optional. It is possible to create separate files by partitioning the data on a partition key. In most cases, you don't need a partition key, and if it is needed you generally don't need a partition key more granular than by month. Partitioning does not speed up queries (in contrast to the ORDER BY expression). You should never use too granular partitioning. Don't partition your data by client identifiers or names (instead, make the client identifier or name the first column in the ORDER BY expression).
For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column with a date of the type `Date`. The partition names here have the `"YYYYMM"` format.

## Virtual columns {#virtual-columns}

- `_path`: Path to the file. Type: `LowCardinality(String)`.
- `_file`: Name of the file. Type: `LowCardinality(String)`.
- `_size`: Size of the file in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
- `_time`: Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.

## Settings {#settings}

- `engine_file_empty_if_not_exists` - allows selecting empty data from a file that doesn't exist. Disabled by default.
- `engine_file_truncate_on_insert` - allows truncating the file before inserting into it. Disabled by default.
- `engine_file_allow_create_multiple_files` - allows creating a new file on each insert if the format has a suffix. Disabled by default.
- `engine_file_skip_empty_files` - allows skipping empty files while reading. Disabled by default.
- `storage_file_read_method` - method of reading data from the storage file, one of: `read`, `pread`, `mmap`. The mmap method does not apply to clickhouse-server (it's intended for clickhouse-local). Default value: `pread` for clickhouse-server, `mmap` for clickhouse-local. | {"source_file": "file.md"} | [
-0.04708114266395569,
-0.02238526940345764,
-0.021106142550706863,
0.024576598778367043,
-0.03932873532176018,
-0.05870061740279198,
-0.012969091534614563,
0.02423892728984356,
0.0010609461460262537,
0.07291100919246674,
0.013746397569775581,
0.041017141193151474,
0.05630580335855484,
-0.0... |
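The `toYYYYMM(date_column)` partition name is simply `year * 100 + month`. A quick sketch, with `to_yyyymm` as a hypothetical stand-in for the SQL function:

```python
from datetime import date

def to_yyyymm(d: date) -> int:
    """Compute the partition id the way toYYYYMM(date_column) does:
    year * 100 + month, giving partition names in "YYYYMM" format."""
    return d.year * 100 + d.month

print(to_yyyymm(date(2024, 3, 5)))    # 202403
print(to_yyyymm(date(2024, 12, 31)))  # 202412
```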
ce2d0743-523d-4e37-8834-d8bfa289f2b7 | description: 'This engine allows processing of application log files as a stream of records.'
sidebar_label: 'FileLog'
sidebar_position: 160
slug: /engines/table-engines/special/filelog
title: 'FileLog table engine'
doc_type: 'reference'

# FileLog table engine {#filelog-engine}

This engine allows processing of application log files as a stream of records.
`FileLog` lets you:
- Subscribe to log files.
- Process new records as they are appended to subscribed log files.

## Creating a table {#creating-a-table}

```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
    ...
) ENGINE = FileLog('path_to_logs', 'format_name') SETTINGS
    [poll_timeout_ms = 0,]
    [poll_max_batch_size = 0,]
    [max_block_size = 0,]
    [max_threads = 0,]
    [poll_directory_watch_events_backoff_init = 500,]
    [poll_directory_watch_events_backoff_max = 32000,]
    [poll_directory_watch_events_backoff_factor = 2,]
    [handle_error_mode = 'default']
```

Engine arguments:
- `path_to_logs`: Path to the log files to subscribe to. It can be a path to a directory with log files or to a single log file. Note that ClickHouse allows only paths inside the `user_files` directory.
- `format_name`: Record format. Note that FileLog processes each line in a file as a separate record, and not all data formats are suitable for it.

Optional parameters:
- `poll_timeout_ms`: Timeout for a single poll from the log file. Default: `stream_poll_timeout_ms`.
- `poll_max_batch_size`: Maximum number of records to be polled in a single poll. Default: `max_block_size`.
- `max_block_size`: The maximum batch size (in records) for a poll. Default: `max_insert_block_size`.
- `max_threads`: Maximum number of threads to parse files. Default is 0, which means the number will be max(1, physical_cpu_cores / 4).
- `poll_directory_watch_events_backoff_init`: The initial sleep value for the directory watch thread. Default: `500`.
- `poll_directory_watch_events_backoff_max`: The maximum sleep value for the directory watch thread. Default: `32000`.
- `poll_directory_watch_events_backoff_factor`: The speed of backoff, exponential by default. Default: `2`.
- `handle_error_mode`: How to handle errors for the FileLog engine. Possible values: default (the exception will be thrown if we fail to parse a message), stream (the exception message and the raw message will be saved in the virtual columns `_error` and `_raw_message`).

## Description {#description}

The delivered records are tracked automatically, so each record in a log file is only counted once.
`SELECT` is not particularly useful for reading records (except for debugging), because each record can be read only once. It is more practical to create real-time threads using materialized views. To do this:
1. Use the engine to create a FileLog table and consider it a data stream.
2. Create a table with the desired structure. | {"source_file": "filelog.md"} | [
-0.0006832393701188266,
-0.05290199816226959,
-0.07928112894296646,
0.045490484684705734,
-0.0016463604988530278,
-0.031737733632326126,
0.030464697629213333,
0.07819051295518875,
-0.03048526681959629,
0.09447646886110306,
-0.004695143550634384,
0.0021393727511167526,
0.06154191121459007,
... |
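The three `poll_directory_watch_events_backoff_*` parameters describe a standard capped exponential backoff: the watch thread starts at the init value, multiplies by the factor while nothing happens, and never exceeds the max. A sketch under that reading (the generator is illustrative, not engine code; the defaults match the settings above):

```python
def backoff_sleeps(init_ms: int = 500, factor: int = 2, max_ms: int = 32000):
    """Yield directory-watch sleep values: start at init_ms, multiply by
    factor while the directory stays idle, and cap at max_ms."""
    sleep = init_ms
    while True:
        yield sleep
        sleep = min(sleep * factor, max_ms)

gen = backoff_sleeps()
print([next(gen) for _ in range(8)])
# [500, 1000, 2000, 4000, 8000, 16000, 32000, 32000]
```

On a new directory event, the engine would presumably reset the sleep back to the init value, so quiet directories cost little CPU while busy ones are polled quickly again.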
bcc9a293-debc-4518-a3f6-09a1d93edc3b | 1. Use the engine to create a FileLog table and consider it a data stream.
2. Create a table with the desired structure.
3. Create a materialized view that converts data from the engine and puts it into a previously created table.

When the `MATERIALIZED VIEW` joins the engine, it starts collecting data in the background. This allows you to continually receive records from log files and convert them to the required format using `SELECT`.
One FileLog table can have as many materialized views as you like; they do not read data from the table directly, but receive new records (in blocks). This way you can write to several tables with different levels of detail (with grouping - aggregation - and without).
Example:

```sql
CREATE TABLE logs (
    timestamp UInt64,
    level String,
    message String
) ENGINE = FileLog('user_files/my_app/app.log', 'JSONEachRow');

CREATE TABLE daily (
    day Date,
    level String,
    total UInt64
) ENGINE = SummingMergeTree(day, (day, level), 8192);

CREATE MATERIALIZED VIEW consumer TO daily
AS SELECT toDate(toDateTime(timestamp)) AS day, level, count() AS total
FROM logs GROUP BY day, level;

SELECT level, sum(total) FROM daily GROUP BY level;
```

To stop receiving stream data or to change the conversion logic, detach the materialized view:

```sql
DETACH TABLE consumer;
ATTACH TABLE consumer;
```

If you want to change the target table by using `ALTER`, we recommend disabling the materialized view to avoid discrepancies between the target table and the data from the view.

## Virtual columns {#virtual-columns}

- `_filename`: Name of the log file. Data type: `LowCardinality(String)`.
- `_offset`: Offset in the log file. Data type: `UInt64`.

Additional virtual columns when `handle_error_mode='stream'`:
- `_raw_record`: Raw record that couldn't be parsed successfully. Data type: `Nullable(String)`.
- `_error`: Exception message that happened during failed parsing. Data type: `Nullable(String)`.

Note: the `_raw_record` and `_error` virtual columns are filled only in case of an exception during parsing; they are always `NULL` when the message was parsed successfully. | {"source_file": "filelog.md"} | [
-0.016865821555256844,
-0.004887678194791079,
-0.035123176872730255,
0.06699693948030472,
-0.07655765116214752,
-0.0782509446144104,
0.04324693977832794,
0.03633296862244606,
0.035595159977674484,
0.02901579439640045,
-0.014664003625512123,
-0.05872905254364014,
0.03338904306292534,
0.0395... |
c1be76ec-e815-46d4-a454-c98879a7af48 | description: 'A data set that is always in RAM. It is intended for use on the right
side of the
IN
operator.'
sidebar_label: 'Set'
sidebar_position: 60
slug: /engines/table-engines/special/set
title: 'Set table engine'
doc_type: 'reference'
Set table engine
:::note
In ClickHouse Cloud, if your service was created with a version earlier than 25.4, you will need to set the compatibility to at least 25.4 using `SET compatibility=25.4`.
:::
A data set that is always in RAM. It is intended for use on the right side of the `IN` operator (see the section "IN operators").
You can use `INSERT` to insert data in the table. New elements will be added to the data set, while duplicates will be ignored.
But you can't perform `SELECT` from the table. The only way to retrieve data is by using it in the right half of the `IN` operator.
Data is always located in RAM. For `INSERT`, the blocks of inserted data are also written to the directory of tables on the disk. When starting the server, this data is loaded to RAM. In other words, after restarting, the data remains in place.
For a rough server restart, the block of data on the disk might be lost or damaged. In the latter case, you may need to manually delete the file with damaged data.
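A minimal sketch of the intended usage (table and column names here are illustrative):

```sql
CREATE TABLE userid_set (userid UInt64) ENGINE = Set;
INSERT INTO userid_set VALUES (133), (143);

-- Only rows whose number is present in the set are returned:
SELECT number FROM numbers(150) WHERE number IN userid_set;
```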
Limitations and settings {#join-limitations-and-settings}
When creating a table, the following settings are applied:
Persistent {#persistent}
Disables persistency for the Set and
Join
table engines.
Reduces the I/O overhead. Suitable for scenarios that pursue performance and do not require persistence.
Possible values:
1 — Enabled.
0 — Disabled.
Default value:
1
. | {"source_file": "set.md"} | [
-0.00550485122948885,
0.009762474335730076,
-0.0859350711107254,
0.045150741934776306,
-0.07532081007957458,
0.026006091386079788,
0.05688326433300972,
0.02214961126446724,
0.006756791844964027,
0.036505185067653656,
0.09255478531122208,
0.05123582109808922,
0.08084962517023087,
-0.1467391... |
03bfcb0b-0e29-44ec-8710-9e0e3af0dfc2 | description: 'The
Dictionary
engine displays the dictionary data as a ClickHouse
table.'
sidebar_label: 'Dictionary'
sidebar_position: 20
slug: /engines/table-engines/special/dictionary
title: 'Dictionary table engine'
doc_type: 'reference'
Dictionary table engine
The `Dictionary` engine displays the dictionary data as a ClickHouse table.
Example {#example}
As an example, consider a dictionary of
products
with the following configuration:
```xml
<dictionaries>
    <dictionary>
        <name>products</name>
        <source>
            <odbc>
                <table>products</table>
                <connection_string>DSN=some-db-server</connection_string>
            </odbc>
        </source>
        <lifetime>
            <min>300</min>
            <max>360</max>
        </lifetime>
        <layout>
            <flat/>
        </layout>
        <structure>
            <id>
                <name>product_id</name>
            </id>
            <attribute>
                <name>title</name>
                <type>String</type>
                <null_value></null_value>
            </attribute>
        </structure>
    </dictionary>
</dictionaries>
```
Query the dictionary data:
```sql
SELECT
    name,
    type,
    key,
    attribute.names,
    attribute.types,
    bytes_allocated,
    element_count,
    source
FROM system.dictionaries
WHERE name = 'products'
```
```text
┌─name─────┬─type─┬─key────┬─attribute.names─┬─attribute.types─┬─bytes_allocated─┬─element_count─┬─source──────────┐
│ products │ Flat │ UInt64 │ ['title']       │ ['String']      │        23065376 │        175032 │ ODBC: .products │
└──────────┴──────┴────────┴─────────────────┴─────────────────┴─────────────────┴───────────────┴─────────────────┘
```
You can use the `dictGet*` function to get the dictionary data in this format.
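For instance, to fetch a single attribute for a known key (the key value here is taken from the sample output further below):

```sql
SELECT dictGet('products', 'title', toUInt64(152689));
```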
This view isn't helpful when you need to get raw data, or when performing a
JOIN
operation. For these cases, you can use the
Dictionary
engine, which displays the dictionary data in a table.
Syntax:
```sql
CREATE TABLE %table_name% (%fields%) ENGINE = Dictionary(%dictionary_name%)
```
Usage example:
```sql
CREATE TABLE products (product_id UInt64, title String) ENGINE = Dictionary(products);
```
```text
Ok
```
Take a look at what's in the table.
```sql
SELECT * FROM products LIMIT 1;
```
```text
┌────product_id─┬─title─────┐
│        152689 │ Some item │
└───────────────┴───────────┘
```
See Also
Dictionary function | {"source_file": "dictionary.md"} | [
-0.0192415788769722,
-0.003044507233425975,
-0.11876042187213898,
-0.0017947215819731355,
-0.045558132231235504,
-0.06806527823209763,
0.05239482223987579,
-0.004924791865050793,
-0.02918287180364132,
-0.06828590482473373,
0.09410300105810165,
0.019691379740834236,
0.07931883633136749,
-0.... |
6b4d6f11-5bc6-4f4d-b4ec-bda8a3102398 | description: 'The Alias table engine creates a transparent proxy to another table. All operations are forwarded to the target table while the alias itself stores no data.'
sidebar_label: 'Alias'
sidebar_position: 5
slug: /engines/table-engines/special/alias
title: 'Alias table engine'
doc_type: 'reference'
Alias table engine
The `Alias` engine creates a proxy to another table. All read and write operations are forwarded to the target table, while the alias itself stores no data and only maintains a reference to the target table.
Creating a Table {#creating-a-table}
sql
CREATE TABLE [db_name.]alias_name
ENGINE = Alias(target_table)
Or with explicit database name:
sql
CREATE TABLE [db_name.]alias_name
ENGINE = Alias(target_db, target_table)
:::note
The `Alias` table does not support explicit column definitions. Columns are automatically inherited from the target table. This ensures that the alias always matches the target table's schema.
:::
Engine Parameters {#engine-parameters}
`target_db` (optional) — Name of the database containing the target table.
`target_table` — Name of the target table.
Supported Operations {#supported-operations}
The `Alias` table engine supports all major operations.
Operations on Target Table {#operations-on-target}
These operations are proxied to the target table:
| Operation | Support | Description |
|-----------|---------|-------------|
| `SELECT` | ✅ | Read data from target table |
| `INSERT` | ✅ | Write data to target table |
| `INSERT SELECT` | ✅ | Batch insert into target table |
| `ALTER TABLE ADD COLUMN` | ✅ | Add columns to target table |
| `ALTER TABLE MODIFY SETTING` | ✅ | Modify target table settings |
| `ALTER TABLE PARTITION` | ✅ | Partition operations (DETACH/ATTACH/DROP) on target |
| `ALTER TABLE UPDATE` | ✅ | Update rows in target table (mutation) |
| `ALTER TABLE DELETE` | ✅ | Delete rows from target table (mutation) |
| `OPTIMIZE TABLE` | ✅ | Optimize target table (merge parts) |
| `TRUNCATE TABLE` | ✅ | Truncate target table |
Operations on Alias Itself {#operations-on-alias}
These operations only affect the alias,
not
the target table:
| Operation | Support | Description |
|-----------|---------|-------------|
| `DROP TABLE` | ✅ | Drop the alias only, target table remains unchanged |
| `RENAME TABLE` | ✅ | Rename the alias only, target table remains unchanged |
Usage Examples {#usage-examples}
Basic Alias Creation {#basic-alias-creation}
Create a simple alias in the same database:
```sql
-- Create source table
CREATE TABLE source_data (
id UInt32,
name String,
value Float64
) ENGINE = MergeTree
ORDER BY id;
-- Insert some data
INSERT INTO source_data VALUES (1, 'one', 10.1), (2, 'two', 20.2);
-- Create alias
CREATE TABLE data_alias ENGINE = Alias('source_data');
-- Query through alias
SELECT * FROM data_alias;
```
```text
┌─id─┬─name─┬─value─┐
│  1 │ one  │  10.1 │
│  2 │ two  │  20.2 │
└────┴──────┴───────┘
``` | {"source_file": "alias.md"} | [
-0.01970871537923813,
-0.06441419571638107,
-0.06778759509325027,
0.04240557551383972,
-0.05626443028450012,
-0.03808058425784111,
0.054392918944358826,
0.06276004761457443,
-0.018701959401369095,
0.05093080922961235,
0.026288729161024094,
-0.03411583602428436,
0.07464034855365753,
-0.0759... |
fe12abea-7154-4442-aa5e-f20def1b4dd3 | -- Query through alias
SELECT * FROM data_alias;
```
```text
┌─id─┬─name─┬─value─┐
│  1 │ one  │  10.1 │
│  2 │ two  │  20.2 │
└────┴──────┴───────┘
```
Cross-Database Alias {#cross-database-alias}
Create an alias pointing to a table in a different database:
```sql
-- Create databases
CREATE DATABASE db1;
CREATE DATABASE db2;
-- Create source table in db1
CREATE TABLE db1.events (
timestamp DateTime,
event_type String,
user_id UInt32
) ENGINE = MergeTree
ORDER BY timestamp;
-- Create alias in db2 pointing to db1.events
CREATE TABLE db2.events_alias ENGINE = Alias('db1', 'events');
-- Or using database.table format
CREATE TABLE db2.events_alias2 ENGINE = Alias('db1.events');
-- Both aliases work identically
INSERT INTO db2.events_alias VALUES (now(), 'click', 100);
SELECT * FROM db2.events_alias2;
```
Write Operations Through Alias {#write-operations}
All write operations are forwarded to the target table:
```sql
CREATE TABLE metrics (
ts DateTime,
metric_name String,
value Float64
) ENGINE = MergeTree
ORDER BY ts;
CREATE TABLE metrics_alias ENGINE = Alias('metrics');
-- Insert through alias
INSERT INTO metrics_alias VALUES
(now(), 'cpu_usage', 45.2),
(now(), 'memory_usage', 78.5);
-- Insert with SELECT
INSERT INTO metrics_alias
SELECT now(), 'disk_usage', number * 10
FROM system.numbers
LIMIT 5;
-- Verify data is in the target table
SELECT count() FROM metrics; -- Returns 7
SELECT count() FROM metrics_alias; -- Returns 7
```
Schema Modification {#schema-modification}
Alter operations modify the target table schema:
```sql
CREATE TABLE users (
id UInt32,
name String
) ENGINE = MergeTree
ORDER BY id;
CREATE TABLE users_alias ENGINE = Alias('users');
-- Add column through alias
ALTER TABLE users_alias ADD COLUMN email String DEFAULT '';
-- Column is added to target table
DESCRIBE users;
```
```text
┌─name──┬─type───┬─default_type─┬─default_expression─┐
│ id    │ UInt32 │              │                    │
│ name  │ String │              │                    │
│ email │ String │ DEFAULT      │ ''                 │
└───────┴────────┴──────────────┴────────────────────┘
```
Data Mutations {#data-mutations}
UPDATE and DELETE operations are supported:
```sql
CREATE TABLE products (
id UInt32,
name String,
price Float64,
status String DEFAULT 'active'
) ENGINE = MergeTree
ORDER BY id;
CREATE TABLE products_alias ENGINE = Alias('products');
INSERT INTO products_alias VALUES
(1, 'item_one', 100.0, 'active'),
(2, 'item_two', 200.0, 'active'),
(3, 'item_three', 300.0, 'inactive');
-- Update through alias
ALTER TABLE products_alias UPDATE price = price * 1.1 WHERE status = 'active';
-- Delete through alias
ALTER TABLE products_alias DELETE WHERE status = 'inactive';
-- Changes are applied to target table
SELECT * FROM products ORDER BY id;
``` | {"source_file": "alias.md"} | [
-0.016829591244459152,
-0.07270321995019913,
0.007119422312825918,
0.008185142651200294,
-0.06569764763116837,
-0.043866682797670364,
0.0627373456954956,
0.10729042440652847,
0.014695391990244389,
0.038625963032245636,
0.03514082357287407,
-0.07862425595521927,
0.06468366831541061,
-0.0611... |
7bef58e5-8081-41b7-b252-50c5fb476d3c | -- Delete through alias
ALTER TABLE products_alias DELETE WHERE status = 'inactive';
-- Changes are applied to target table
SELECT * FROM products ORDER BY id;
```
```text
┌─id─┬─name─────┬─price─┬─status─┐
│  1 │ item_one │ 110.0 │ active │
│  2 │ item_two │ 220.0 │ active │
└────┴──────────┴───────┴────────┘
```
Partition Operations {#partition-operations}
For partitioned tables, partition operations are forwarded:
```sql
CREATE TABLE logs (
date Date,
level String,
message String
) ENGINE = MergeTree
PARTITION BY toYYYYMM(date)
ORDER BY date;
CREATE TABLE logs_alias ENGINE = Alias('logs');
INSERT INTO logs_alias VALUES
('2024-01-15', 'INFO', 'message1'),
('2024-02-15', 'ERROR', 'message2'),
('2024-03-15', 'INFO', 'message3');
-- Detach partition through alias
ALTER TABLE logs_alias DETACH PARTITION '202402';
SELECT count() FROM logs_alias; -- Returns 2 (partition 202402 detached)
-- Attach partition back
ALTER TABLE logs_alias ATTACH PARTITION '202402';
SELECT count() FROM logs_alias; -- Returns 3
```
Table Optimization {#table-optimization}
Optimize operations merge parts in the target table:
```sql
CREATE TABLE events (
id UInt32,
data String
) ENGINE = MergeTree
ORDER BY id;
CREATE TABLE events_alias ENGINE = Alias('events');
-- Multiple inserts create multiple parts
INSERT INTO events_alias VALUES (1, 'data1');
INSERT INTO events_alias VALUES (2, 'data2');
INSERT INTO events_alias VALUES (3, 'data3');
-- Check parts count
SELECT count() FROM system.parts
WHERE database = currentDatabase()
AND table = 'events'
AND active;
-- Optimize through alias
OPTIMIZE TABLE events_alias FINAL;
-- Parts are merged in target table
SELECT count() FROM system.parts
WHERE database = currentDatabase()
AND table = 'events'
AND active; -- Returns 1
```
Alias Management {#alias-management}
Aliases can be renamed or dropped independently:
```sql
CREATE TABLE important_data (
id UInt32,
value String
) ENGINE = MergeTree
ORDER BY id;
INSERT INTO important_data VALUES (1, 'critical'), (2, 'important');
CREATE TABLE old_alias ENGINE = Alias('important_data');
-- Rename alias (target table unchanged)
RENAME TABLE old_alias TO new_alias;
-- Create another alias to same table
CREATE TABLE another_alias ENGINE = Alias('important_data');
-- Drop one alias (target table and other aliases unchanged)
DROP TABLE new_alias;
SELECT * FROM another_alias; -- Still works
SELECT count() FROM important_data; -- Data intact, returns 2
``` | {"source_file": "alias.md"} | [
0.04690396785736084,
-0.038761917501688004,
0.07301842421293259,
0.011225869879126549,
-0.025061799213290215,
-0.010684050619602203,
0.11979303508996964,
0.09379882365465164,
0.038476038724184036,
0.07085506618022919,
0.10610610991716385,
-0.0026625252794474363,
0.07802041620016098,
-0.031... |
5db8ed1b-ea40-45cc-b80f-118bafa6437e | description: 'The GenerateRandom table engine produces random data for given table
schema.'
sidebar_label: 'GenerateRandom'
sidebar_position: 140
slug: /engines/table-engines/special/generate
title: 'GenerateRandom table engine'
doc_type: 'reference'
GenerateRandom table engine
The GenerateRandom table engine produces random data for given table schema.
Usage examples:
Use in test to populate reproducible large table.
Generate random input for fuzzing tests.
Usage in ClickHouse Server {#usage-in-clickhouse-server}
```sql
ENGINE = GenerateRandom([random_seed [,max_string_length [,max_array_length]]])
```
The `max_array_length` and `max_string_length` parameters specify the maximum length of all array or map columns and strings correspondingly in generated data.
The GenerateRandom table engine supports only `SELECT` queries.
It supports all DataTypes that can be stored in a table except `AggregateFunction`.
Example {#example}
1. Set up the `generate_engine_table` table:
```sql
CREATE TABLE generate_engine_table (name String, value UInt32) ENGINE = GenerateRandom(1, 5, 3)
```
2. Query the data:
```sql
SELECT * FROM generate_engine_table LIMIT 3
```
```text
┌─name─┬──────value─┐
│ c4xJ │ 1412771199 │
│ r    │ 1791099446 │
│ 7#$  │  124312908 │
└──────┴────────────┘
```
Details of Implementation {#details-of-implementation}
Not supported:
ALTER
SELECT ... SAMPLE
INSERT
Indices
Replication | {"source_file": "generate.md"} | [
0.006810818798840046,
-0.03355877473950386,
-0.0829281210899353,
0.0357290655374527,
-0.04942306503653526,
-0.04868103936314583,
0.019499091431498528,
-0.014173039235174656,
-0.06395594775676727,
0.007476263213902712,
0.021968426182866096,
-0.05992384999990463,
0.10615382343530655,
-0.1359... |
792685f4-7134-46ec-a890-db32ab8ea54d | description: 'The Memory engine stores data in RAM, in uncompressed form. Data is
stored in exactly the same form as it is received when read. In other words, reading
from this table is completely free.'
sidebar_label: 'Memory'
sidebar_position: 110
slug: /engines/table-engines/special/memory
title: 'Memory table engine'
doc_type: 'reference'
Memory table engine
:::note
When using the Memory table engine on ClickHouse Cloud, data is not replicated across all nodes (by design). To guarantee that all queries are routed to the same node and that the Memory table engine works as expected, you can do one of the following:
- Execute all operations in the same session
- Use a client that uses TCP or the native interface (which enables support for sticky connections) such as
clickhouse-client
:::
The Memory engine stores data in RAM, in uncompressed form. Data is stored in exactly the same form as it is received when read. In other words, reading from this table is completely free.
Concurrent data access is synchronized. Locks are short: read and write operations do not block each other.
Indexes are not supported. Reading is parallelized.
Maximal productivity (over 10 GB/sec) is reached on simple queries, because there is no reading from the disk, decompressing, or deserializing data. (We should note that in many cases, the productivity of the MergeTree engine is almost as high.)
When restarting a server, data disappears from the table and the table becomes empty.
Normally, using this table engine is not justified. However, it can be used for tests, and for tasks where maximum speed is required on a relatively small number of rows (up to approximately 100,000,000).
The Memory engine is used by the system for temporary tables with external query data (see the section "External data for processing a query"), and for implementing `GLOBAL IN` (see the section "IN operators").
Upper and lower bounds can be specified to limit Memory engine table size, effectively allowing it to act as a circular buffer (see Engine parameters).
Engine parameters {#engine-parameters}
`min_bytes_to_keep` — Minimum bytes to keep when the memory table is size-capped. Default value: `0`. Requires `max_bytes_to_keep`.
`max_bytes_to_keep` — Maximum bytes to keep within the memory table, where the oldest rows are deleted on each insertion (i.e. a circular buffer). The limit can be exceeded if the oldest batch of rows to remove falls under the `min_bytes_to_keep` limit when adding a large block. Default value: `0`.
`min_rows_to_keep` — Minimum rows to keep when the memory table is size-capped. Default value: `0`. Requires `max_rows_to_keep`.
`max_rows_to_keep` — Maximum rows to keep within the memory table, where the oldest rows are deleted on each insertion (i.e. a circular buffer). The limit can be exceeded if the oldest batch of rows to remove falls under the `min_rows_to_keep` limit when adding a large block. Default value:
0 | {"source_file": "memory.md"} | [
0.015589912422001362,
-0.04175784811377525,
-0.10952117294073105,
0.09367134422063828,
-0.07204211503267288,
0.02148481085896492,
0.02119591273367405,
-0.030719105154275894,
0.03250451013445854,
0.036828793585300446,
0.026163699105381966,
0.06474421918392181,
0.09479112923145294,
-0.122592... |
6fa42321-ac76-4217-b85f-a7f0140f1df7 | Default value:
0
compress
- Whether to compress data in memory.
Default value:
false
Usage {#usage}
Initialize settings:
```sql
CREATE TABLE memory (i UInt32) ENGINE = Memory SETTINGS min_rows_to_keep = 100, max_rows_to_keep = 1000;
```
Modify settings:
```sql
ALTER TABLE memory MODIFY SETTING min_rows_to_keep = 100, max_rows_to_keep = 1000;
```
Note: Both `bytes` and `rows` capping parameters can be set at the same time; however, the lower bounds of `max` and `min` will be adhered to.
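The circular-buffer rule can be modeled in Python. This is an illustrative sketch of the eviction behavior described above (drop the oldest block while the total exceeds the maximum, unless dropping it would leave less than the minimum), not ClickHouse's actual implementation:

```python
from collections import deque

def evict(blocks, min_bytes, max_bytes):
    """Drop oldest blocks while total > max_bytes, but never drop a block
    if doing so would leave fewer than min_bytes bytes.
    Returns the total size after eviction; mutates `blocks` in place."""
    total = sum(blocks)
    while total > max_bytes:
        oldest = blocks[0]
        if total - oldest < min_bytes:
            break  # dropping the oldest block would violate the minimum
        blocks.popleft()
        total -= oldest
    return total

# Mirrors the bytes example below: 8192 + 1024 already inserted,
# a new 8192-byte block pushes the total past max_bytes_to_keep = 16384.
blocks = deque([8192, 1024, 8192])
assert evict(blocks, 4096, 16384) == 9216   # oldest 8192-byte block dropped
# A single oversized block stays, even though it exceeds the maximum:
blocks.append(65536)
assert evict(blocks, 4096, 16384) == 65536
```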
Examples {#examples}
```sql
CREATE TABLE memory (i UInt32) ENGINE = Memory SETTINGS min_bytes_to_keep = 4096, max_bytes_to_keep = 16384;
/* 1. testing oldest block doesn't get deleted due to min-threshold - 3000 rows */
INSERT INTO memory SELECT * FROM numbers(0, 1600); -- 8'192 bytes
/* 2. adding block that doesn't get deleted */
INSERT INTO memory SELECT * FROM numbers(1000, 100); -- 1'024 bytes
/* 3. testing oldest block gets deleted - 9216 bytes - 1100 */
INSERT INTO memory SELECT * FROM numbers(9000, 1000); -- 8'192 bytes
/* 4. checking a very large block overrides all */
INSERT INTO memory SELECT * FROM numbers(9000, 10000); -- 65'536 bytes
SELECT total_bytes, total_rows FROM system.tables WHERE name = 'memory' AND database = currentDatabase();
```
```text
┌─total_bytes─┬─total_rows─┐
│       65536 │      10000 │
└─────────────┴────────────┘
```
also, for rows:
```sql
CREATE TABLE memory (i UInt32) ENGINE = Memory SETTINGS min_rows_to_keep = 4000, max_rows_to_keep = 10000;
/* 1. testing oldest block doesn't get deleted due to min-threshold - 3000 rows */
INSERT INTO memory SELECT * FROM numbers(0, 1600); -- 1'600 rows
/* 2. adding block that doesn't get deleted */
INSERT INTO memory SELECT * FROM numbers(1000, 100); -- 100 rows
/* 3. testing oldest block gets deleted - 9216 bytes - 1100 */
INSERT INTO memory SELECT * FROM numbers(9000, 1000); -- 1'000 rows
/* 4. checking a very large block overrides all */
INSERT INTO memory SELECT * FROM numbers(9000, 10000); -- 10'000 rows
SELECT total_bytes, total_rows FROM system.tables WHERE name = 'memory' AND database = currentDatabase();
```
```text
┌─total_bytes─┬─total_rows─┐
│       65536 │      10000 │
└─────────────┴────────────┘
``` | {"source_file": "memory.md"} | [
-0.023348459973931313,
0.027465973049402237,
-0.15783633291721344,
0.036461200565099716,
-0.10439103096723557,
-0.014805303886532784,
0.009284576401114464,
0.07876604050397873,
-0.04866935312747955,
0.0022147279232740402,
0.0028629719745367765,
-0.0038708376232534647,
0.03527838736772537,
... |
315baccd-11d3-4b16-9be1-5d423445c26f | description: 'Documentation for Special Table Engines'
sidebar_label: 'Special'
sidebar_position: 50
slug: /engines/table-engines/special/
title: 'Special table engines'
doc_type: 'reference'
Special table engines
There are three main categories of table engines:
MergeTree engine family
for main production use.
Log engine family
for small temporary data.
Table engines for integrations
.
The remaining engines are unique in their purpose and are not grouped into families yet, thus they are placed in this "special" category. | {"source_file": "index.md"} | [
-0.027884142473340034,
-0.04283348098397255,
-0.035493072122335434,
0.015524900518357754,
-0.007715553045272827,
-0.02234928123652935,
-0.07661997526884079,
0.015191151760518551,
-0.037464432418346405,
-0.02366236411035061,
0.005535021424293518,
-0.04681789502501488,
-0.014956850558519363,
... |
b8cf0d9a-b442-46b1-8257-7acc2cafc0ce | description: 'The
Merge
engine (not to be confused with
MergeTree
) does not store
data itself, but allows reading from any number of other tables simultaneously.'
sidebar_label: 'Merge'
sidebar_position: 30
slug: /engines/table-engines/special/merge
title: 'Merge table engine'
doc_type: 'reference'
Merge table engine
The `Merge` engine (not to be confused with `MergeTree`) does not store data itself, but allows reading from any number of other tables simultaneously.
Reading is automatically parallelized. Writing to a table is not supported. When reading, the indexes of tables that are actually being read are used, if they exist.
Creating a table {#creating-a-table}
```sql
CREATE TABLE ... Engine=Merge(db_name, tables_regexp)
```
Engine parameters {#engine-parameters}
db_name {#db_name}
`db_name` — Possible values:
- database name,
- constant expression that returns a string with a database name, for example, `currentDatabase()`,
- `REGEXP(expression)`, where `expression` is a regular expression to match the DB names.
tables_regexp {#tables_regexp}
`tables_regexp` — A regular expression to match the table names in the specified DB or DBs.
Regular expressions — re2 (supports a subset of PCRE), case-sensitive.
See the notes about escaping symbols in regular expressions in the "match" section.
Usage {#usage}
When selecting tables to read, the `Merge` table itself is not selected, even if it matches the regex. This is to avoid loops.
It is possible to create two `Merge` tables that will endlessly try to read each others' data, but this is not a good idea.
The typical way to use the `Merge` engine is for working with a large number of `TinyLog` tables as if with a single table.
Examples {#examples}
Example 1
Consider two databases `ABC_corporate_site` and `ABC_store`. The `all_visitors` table will contain IDs from the tables `visitors` in both databases.
```sql
CREATE TABLE all_visitors (id UInt32) ENGINE=Merge(REGEXP('ABC_*'), 'visitors');
```
Example 2
Let's say you have an old table `WatchLog_old` and decided to change partitioning without moving data to a new table `WatchLog_new`, and you need to see data from both tables.
```sql
CREATE TABLE WatchLog_old(
date Date,
UserId Int64,
EventType String,
Cnt UInt64
)
ENGINE=MergeTree
ORDER BY (date, UserId, EventType);
INSERT INTO WatchLog_old VALUES ('2018-01-01', 1, 'hit', 3);
CREATE TABLE WatchLog_new(
date Date,
UserId Int64,
EventType String,
Cnt UInt64
)
ENGINE=MergeTree
PARTITION BY date
ORDER BY (UserId, EventType)
SETTINGS index_granularity=8192;
INSERT INTO WatchLog_new VALUES ('2018-01-02', 2, 'hit', 3);
CREATE TABLE WatchLog AS WatchLog_old ENGINE=Merge(currentDatabase(), '^WatchLog');
SELECT * FROM WatchLog;
``` | {"source_file": "merge.md"} | [
0.023571323603391647,
-0.08562055975198746,
-0.04995483160018921,
0.06116591766476631,
-0.049069758504629135,
-0.04984329640865326,
0.0019447157392278314,
0.04329313710331917,
-0.04437771439552307,
0.01805945113301277,
0.030779603868722916,
0.044209714978933334,
0.05988484248518944,
-0.095... |
9a47c95d-269c-4f8a-805a-6af53cb7035e | INSERT INTO WatchLog_new VALUES ('2018-01-02', 2, 'hit', 3);
CREATE TABLE WatchLog AS WatchLog_old ENGINE=Merge(currentDatabase(), '^WatchLog');
SELECT * FROM WatchLog;
```
```text
┌───────date─┬─UserId─┬─EventType─┬─Cnt─┐
│ 2018-01-01 │      1 │ hit       │   3 │
└────────────┴────────┴───────────┴─────┘
┌───────date─┬─UserId─┬─EventType─┬─Cnt─┐
│ 2018-01-02 │      2 │ hit       │   3 │
└────────────┴────────┴───────────┴─────┘
```
Virtual columns {#virtual-columns}
`_table` — The name of the table from which data was read. Type: `String`.
If you filter on `_table` (for example, `WHERE _table='xyz'`), only tables which satisfy the filter condition are read.
`_database` — Contains the name of the database from which data was read. Type: `String`.
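Continuing the `WatchLog` example above, a sketch of filtering by `_table` (only the matching underlying table is read):

```sql
SELECT _table, date, UserId, Cnt
FROM WatchLog
WHERE _table = 'WatchLog_new';
```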
See Also
Virtual columns
merge
table function | {"source_file": "merge.md"} | [
0.08936502784490585,
-0.05025189742445946,
-0.0070213633589446545,
0.0410061739385128,
-0.00763375498354435,
0.01595490053296089,
0.05920768156647682,
-0.02533639222383499,
-0.03998347371816635,
0.06298794597387314,
0.04602476954460144,
-0.003725140355527401,
0.011658008210361004,
-0.07312... |
c9f66181-421f-40c5-a494-d23c714a9090 | description: 'ClickHouse allows sending a server the data that is needed for processing
a query, together with a
SELECT
query. This data is put in a temporary table and
can be used in the query (for example, in
IN
operators).'
sidebar_label: 'External data for query processing'
sidebar_position: 130
slug: /engines/table-engines/special/external-data
title: 'External data for query processing'
doc_type: 'reference'
External data for query processing
ClickHouse allows sending a server the data that is needed for processing a query, together with a
SELECT
query. This data is put in a temporary table (see the section "Temporary tables") and can be used in the query (for example, in
IN
operators).
For example, if you have a text file with important user identifiers, you can upload it to the server along with a query that uses filtration by this list.
If you need to run more than one query with a large volume of external data, do not use this feature. It is better to upload the data to the DB ahead of time.
External data can be uploaded using the command-line client (in non-interactive mode), or using the HTTP interface.
In the command-line client, you can specify a parameters section in the format:
```bash
--external --file=... [--name=...] [--format=...] [--types=...|--structure=...]
```
You may have multiple sections like this, for the number of tables being transmitted.
`--external` — Marks the beginning of a clause.
`--file` — Path to the file with the table dump, or `-`, which refers to stdin. Only a single table can be retrieved from stdin.
The following parameters are optional:
`--name` — Name of the table. If omitted, `_data` is used.
`--format` — Data format in the file. If omitted, `TabSeparated` is used.
One of the following parameters is required:
`--types` — A list of comma-separated column types. For example: `UInt64,String`. The columns will be named `_1`, `_2`, ...
`--structure` — The table structure in the format `UserID UInt64`, `URL String`. Defines the column names and types.
The files specified in 'file' will be parsed by the format specified in 'format', using the data types specified in 'types' or 'structure'. The table will be uploaded to the server and accessible there as a temporary table with the name in 'name'.
Examples:
```bash
$ echo -ne "1\n2\n3\n" | clickhouse-client --query="SELECT count() FROM test.visits WHERE TraficSourceID IN _data" --external --file=- --types=Int8
849897
$ cat /etc/passwd | sed 's/:/\t/g' | clickhouse-client --query="SELECT shell, count() AS c FROM passwd GROUP BY shell ORDER BY c DESC" --external --file=- --name=passwd --structure='login String, unused String, uid UInt16, gid UInt16, comment String, home String, shell String'
/bin/sh 20
/bin/false 5
/bin/bash 4
/usr/sbin/nologin 1
/bin/sync 1
``` | {"source_file": "external-data.md"} | [
-0.027072984725236893,
-0.028756454586982727,
-0.09477996826171875,
0.07516290992498398,
-0.026441393420100212,
-0.06928978860378265,
0.03829360753297806,
0.05242370069026947,
-0.010422954335808754,
0.003915808629244566,
0.04261639714241028,
0.057747166603803635,
0.06224644184112549,
-0.10... |
e4926200-3578-43f2-857e-0c623977de2e | When using the HTTP interface, external data is passed in the multipart/form-data format. Each table is transmitted as a separate file. The table name is taken from the file name. The
query_string
is passed the parameters
name_format
,
name_types
, and
name_structure
, where
name
is the name of the table that these parameters correspond to. The meaning of the parameters is the same as when using the command-line client.
Example:
```bash
$ cat /etc/passwd | sed 's/:/\t/g' > passwd.tsv
$ curl -F 'passwd=@passwd.tsv;' 'http://localhost:8123/?query=SELECT+shell,+count()+AS+c+FROM+passwd+GROUP+BY+shell+ORDER+BY+c+DESC&passwd_structure=login+String,+unused+String,+uid+UInt16,+gid+UInt16,+comment+String,+home+String,+shell+String'
/bin/sh 20
/bin/false 5
/bin/bash 4
/usr/sbin/nologin 1
/bin/sync 1
```
For distributed query processing, the temporary tables are sent to all the remote servers. | {"source_file": "external-data.md"} | [
-0.02136920392513275,
0.038836002349853516,
-0.08533275127410889,
0.011399690061807632,
-0.09892316907644272,
-0.05108719319105148,
0.020278336480259895,
0.067558653652668,
0.05325980484485626,
0.026798749342560768,
0.03713854029774666,
-0.0356377474963665,
0.09718246012926102,
-0.10420385... |
a230817e-25bb-439e-ba2d-4b75ed4d839b | description: 'Optional prepared data structure for usage in JOIN operations.'
sidebar_label: 'Join'
sidebar_position: 70
slug: /engines/table-engines/special/join
title: 'Join table engine'
doc_type: 'reference'
Join table engine
Optional prepared data structure for usage in
JOIN
operations.
:::note
In ClickHouse Cloud, if your service was created with a version earlier than 25.4, you will need to set the compatibility to at least 25.4 using `SET compatibility=25.4`.
:::
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
) ENGINE = Join(join_strictness, join_type, k1[, k2, ...])
See the detailed description of the
CREATE TABLE
query.
Engine parameters {#engine-parameters}
join_strictness {#join_strictness}
`join_strictness` — JOIN strictness.
join_type {#join_type}
`join_type` — JOIN type.
Key columns {#key-columns}
`k1[, k2, ...]` — Key columns from the `USING` clause that the `JOIN` operation is made with.
Enter `join_strictness` and `join_type` parameters without quotes, for example, `Join(ANY, LEFT, col1)`. They must match the `JOIN` operation that the table will be used for. If the parameters do not match, ClickHouse does not throw an exception and may return incorrect data.
Specifics and recommendations {#specifics-and-recommendations}
Data storage {#data-storage}
`Join` table data is always located in the RAM. When inserting rows into a table, ClickHouse writes data blocks to the directory on the disk so that they can be restored when the server restarts.
If the server restarts incorrectly, the data block on the disk might get lost or damaged. In this case, you may need to manually delete the file with damaged data.
Selecting and Inserting Data {#selecting-and-inserting-data}
You can use
INSERT
queries to add data to the
Join
-engine tables. If the table was created with the
ANY
strictness, data for duplicate keys are ignored. With the
ALL
strictness, all rows are added.
Main use-cases for
Join
-engine tables are following:
Place the table to the right side in a
JOIN
clause.
Call the
joinGet
function, which lets you extract data from the table the same way as from a dictionary.
Deleting data {#deleting-data}
ALTER DELETE
queries for
Join
-engine tables are implemented as
mutations
.
DELETE
mutation reads filtered data and overwrites data of memory and disk.
## Limitations and settings {#join-limitations-and-settings}

When creating a table, the following settings are applied:

- `join_use_nulls` {#join_use_nulls}
- `max_rows_in_join` {#max_rows_in_join}
- `max_bytes_in_join` {#max_bytes_in_join}
- `join_overflow_mode` {#join_overflow_mode}
- `join_any_take_last_row` {#join_any_take_last_row}
- `join_use_nulls` {#join_use_nulls-1}
### Persistent {#persistent}

Disables persistency for the `Join` and `Set` table engines.

Reduces the I/O overhead. Suitable for scenarios that pursue performance and do not require persistence.

Possible values:

- 1 — Enabled.
- 0 — Disabled.

Default value: `1`.

`Join`-engine tables can't be used in `GLOBAL JOIN` operations.

The `Join` engine allows you to specify the `join_use_nulls` setting in the `CREATE TABLE` statement. The `SELECT` query should use the same `join_use_nulls` value.
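For example, a minimal sketch of keeping the setting aligned between the table definition and the query that uses it (table and column names here are illustrative, not from the original document):

```sql
-- Join table created with join_use_nulls enabled
CREATE TABLE nullable_join(`id` UInt32, `val` UInt32)
ENGINE = Join(ANY, LEFT, id)
SETTINGS join_use_nulls = 1;

-- The SELECT must run with the same join_use_nulls value, so that
-- non-matching rows are filled with NULL instead of type defaults
SELECT *
FROM source_table ANY LEFT JOIN nullable_join USING (id)
SETTINGS join_use_nulls = 1;
```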
## Usage examples {#example}

Creating the left-side table:

```sql
CREATE TABLE id_val(`id` UInt32, `val` UInt32) ENGINE = TinyLog;
```

```sql
INSERT INTO id_val VALUES (1,11)(2,12)(3,13);
```

Creating the right-side `Join` table:

```sql
CREATE TABLE id_val_join(`id` UInt32, `val` UInt8) ENGINE = Join(ANY, LEFT, id);
```

```sql
INSERT INTO id_val_join VALUES (1,21)(1,22)(3,23);
```

Joining the tables:

```sql
SELECT * FROM id_val ANY LEFT JOIN id_val_join USING (id);
```

```text
┌─id─┬─val─┬─id_val_join.val─┐
│  1 │  11 │              21 │
│  2 │  12 │               0 │
│  3 │  13 │              23 │
└────┴─────┴─────────────────┘
```

As an alternative, you can retrieve data from the `Join` table, specifying the join key value:

```sql
SELECT joinGet('id_val_join', 'val', toUInt32(1));
```

```text
┌─joinGet('id_val_join', 'val', toUInt32(1))─┐
│                                         21 │
└────────────────────────────────────────────┘
```

Deleting a row from the `Join` table:

```sql
ALTER TABLE id_val_join DELETE WHERE id = 3;
```

```text
┌─id─┬─val─┐
│  1 │  21 │
└────┴─────┘
```
---
description: 'This engine allows you to use Keeper/ZooKeeper cluster as consistent key-value store with linearizable writes and sequentially consistent reads.'
sidebar_label: 'KeeperMap'
sidebar_position: 150
slug: /engines/table-engines/special/keeper-map
title: 'KeeperMap table engine'
doc_type: 'reference'
---
# KeeperMap table engine

This engine allows you to use a Keeper/ZooKeeper cluster as a consistent key-value store with linearizable writes and sequentially consistent reads.

To enable the KeeperMap storage engine, you need to define a ZooKeeper path where the tables will be stored using the `<keeper_map_path_prefix>` config. For example:

```xml
<clickhouse>
    <keeper_map_path_prefix>/keeper_map_tables</keeper_map_path_prefix>
</clickhouse>
```

where the path can be any other valid ZooKeeper path.
## Creating a table {#creating-a-table}

```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
    ...
) ENGINE = KeeperMap(root_path, [keys_limit]) PRIMARY KEY(primary_key_name)
```

Engine parameters:

- `root_path` - ZooKeeper path where the `table_name` will be stored. This path should not contain the prefix defined by the `<keeper_map_path_prefix>` config, because the prefix is automatically appended to the `root_path`. Additionally, the format `auxiliary_zookeeper_cluster_name:/some/path` is also supported, where `auxiliary_zookeeper_cluster` is a ZooKeeper cluster defined inside the `<auxiliary_zookeepers>` config. By default, the ZooKeeper cluster defined inside the `<zookeeper>` config is used.
- `keys_limit` - number of keys allowed inside the table. This limit is a soft limit, and in some edge cases more keys may end up in the table.
- `primary_key_name` - any column name in the column list. The primary key must be specified and supports only one column. The primary key is serialized in binary as a node name inside ZooKeeper.
- Columns other than the primary key are serialized to binary in the corresponding order and stored as the value of the resulting node defined by the serialized key.
- Queries with key `equals` or `in` filtering are optimized to a multi-key lookup from Keeper; otherwise, all values are fetched.
Example:

```sql
CREATE TABLE keeper_map_table
(
    `key` String,
    `v1` UInt32,
    `v2` String,
    `v3` Float32
)
ENGINE = KeeperMap('/keeper_map_table', 4)
PRIMARY KEY key
```

with

```xml
<clickhouse>
    <keeper_map_path_prefix>/keeper_map_tables</keeper_map_path_prefix>
</clickhouse>
```

Each value, which is the binary serialization of `(v1, v2, v3)`, will be stored inside `/keeper_map_tables/keeper_map_table/data/serialized_key` in Keeper. Additionally, the number of keys has a soft limit of 4.

If multiple tables are created on the same ZooKeeper path, the values are persisted until there exists at least one table using it.
As a result, it is possible to use an `ON CLUSTER` clause when creating the table, sharing the data across multiple ClickHouse instances. Of course, it's possible to manually run `CREATE TABLE` with the same path on unrelated ClickHouse instances to get the same data-sharing effect.
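Because queries that filter the key with `equals` or `in` are optimized to multi-key lookups, point reads are cheap. A sketch against the `keeper_map_table` example above:

```sql
-- Optimized: resolved as a direct multi-key lookup in Keeper
SELECT * FROM keeper_map_table WHERE key IN ('some key', 'another key');

-- Not optimized: the predicate is not on the primary key,
-- so all values are fetched and filtered in ClickHouse
SELECT * FROM keeper_map_table WHERE v1 > 10;
```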
## Supported operations {#supported-operations}

### Inserts {#inserts}

When new rows are inserted into `KeeperMap`, if the key does not exist, a new entry for the key is created. If the key exists and the setting `keeper_map_strict_mode` is set to `true`, an exception is thrown; otherwise, the value for the key is overwritten.

Example:

```sql
INSERT INTO keeper_map_table VALUES ('some key', 1, 'value', 3.2);
```

### Deletes {#deletes}

Rows can be deleted using a `DELETE` query or `TRUNCATE`. If the key exists and the setting `keeper_map_strict_mode` is set to `true`, fetching and deleting data will succeed only if it can be executed atomically.

```sql
DELETE FROM keeper_map_table WHERE key LIKE 'some%' AND v1 > 1;
```

```sql
ALTER TABLE keeper_map_table DELETE WHERE key LIKE 'some%' AND v1 > 1;
```

```sql
TRUNCATE TABLE keeper_map_table;
```

### Updates {#updates}

Values can be updated using an `ALTER TABLE` query. The primary key cannot be updated. If the setting `keeper_map_strict_mode` is set to `true`, fetching and updating data will succeed only if it's executed atomically.

```sql
ALTER TABLE keeper_map_table UPDATE v1 = v1 * 10 + 2 WHERE key LIKE 'some%' AND v3 > 3.1;
```

## Related content {#related-content}

- Blog: Building a Real-time Analytics Apps with ClickHouse and Hex
---
description: 'When writing to a Null table, data is ignored. When reading from a Null table, the response is empty.'
sidebar_label: 'Null'
sidebar_position: 50
slug: /engines/table-engines/special/null
title: 'Null table engine'
doc_type: 'reference'
---

# Null table engine

When writing data to a `Null` table, the data is ignored. When reading from a `Null` table, the response is empty.

The `Null` table engine is useful for data transformations where you no longer need the original data after it has been transformed. For this purpose, you can create a materialized view on a `Null` table. The data written to the table will be consumed by the view, but the original raw data will be discarded.
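A minimal sketch of this pattern (table and column names are hypothetical): raw rows are inserted into a `Null` table, a materialized view aggregates them into a `MergeTree`-family table, and the raw rows themselves are never stored.

```sql
-- Raw input is discarded after the view consumes it
CREATE TABLE raw_events (ts DateTime, user_id UInt64) ENGINE = Null;

-- Persistent target table
CREATE TABLE events_per_day (day Date, total UInt64)
ENGINE = SummingMergeTree ORDER BY day;

-- The view receives each inserted block from raw_events
CREATE MATERIALIZED VIEW events_mv TO events_per_day
AS SELECT toDate(ts) AS day, count() AS total
FROM raw_events GROUP BY day;

INSERT INTO raw_events VALUES (now(), 1), (now(), 2);

-- raw_events is empty on read; events_per_day holds the aggregate
SELECT * FROM raw_events;
SELECT sum(total) FROM events_per_day;
```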
---
description: 'The Kafka table engine works with Apache Kafka and lets you publish or subscribe to data flows, organize fault-tolerant storage, and process streams as they become available.'
sidebar_label: 'Kafka'
sidebar_position: 110
slug: /engines/table-engines/integrations/kafka
title: 'Kafka table engine'
keywords: ['Kafka', 'table engine']
doc_type: 'guide'
---

import ExperimentalBadge from '@theme/badges/ExperimentalBadge';

# Kafka table engine

:::tip
If you're on ClickHouse Cloud, we recommend using ClickPipes instead. ClickPipes natively supports private network connections, scaling ingestion and cluster resources independently, and comprehensive monitoring for streaming Kafka data into ClickHouse.
:::

The Kafka table engine lets you:

- Publish or subscribe to data flows.
- Organize fault-tolerant storage.
- Process streams as they become available.
## Creating a table {#creating-a-table}

```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [ALIAS expr1],
    name2 [type2] [ALIAS expr2],
    ...
) ENGINE = Kafka()
SETTINGS
    kafka_broker_list = 'host:port',
    kafka_topic_list = 'topic1,topic2,...',
    kafka_group_name = 'group_name',
    kafka_format = 'data_format'[,]
    [kafka_security_protocol = '',]
    [kafka_sasl_mechanism = '',]
    [kafka_sasl_username = '',]
    [kafka_sasl_password = '',]
    [kafka_schema = '',]
    [kafka_num_consumers = N,]
    [kafka_max_block_size = 0,]
    [kafka_skip_broken_messages = N,]
    [kafka_commit_every_batch = 0,]
    [kafka_client_id = '',]
    [kafka_poll_timeout_ms = 0,]
    [kafka_poll_max_batch_size = 0,]
    [kafka_flush_interval_ms = 0,]
    [kafka_thread_per_consumer = 0,]
    [kafka_handle_error_mode = 'default',]
    [kafka_commit_on_select = false,]
    [kafka_max_rows_per_message = 1,]
    [kafka_compression_codec = '',]
    [kafka_compression_level = -1];
```
Required parameters:

- `kafka_broker_list` — A comma-separated list of brokers (for example, `localhost:9092`).
- `kafka_topic_list` — A list of Kafka topics.
- `kafka_group_name` — A group of Kafka consumers. Reading margins are tracked for each group separately. If you do not want messages to be duplicated in the cluster, use the same group name everywhere.
- `kafka_format` — Message format. Uses the same notation as the SQL `FORMAT` function, such as `JSONEachRow`. For more information, see the Formats section.

Optional parameters:

- `kafka_security_protocol` - Protocol used to communicate with brokers. Possible values: `plaintext`, `ssl`, `sasl_plaintext`, `sasl_ssl`.
- `kafka_sasl_mechanism` - SASL mechanism to use for authentication. Possible values: `GSSAPI`, `PLAIN`, `SCRAM-SHA-256`, `SCRAM-SHA-512`, `OAUTHBEARER`.
- `kafka_sasl_username` - SASL username for use with the `PLAIN` and `SASL-SCRAM-..` mechanisms.
- `kafka_sasl_password` - SASL password for use with the `PLAIN` and `SASL-SCRAM-..` mechanisms.
- `kafka_schema` — Parameter that must be used if the format requires a schema definition. For example, `Cap'n Proto` requires the path to the schema file and the name of the root `schema.capnp:Message` object.
- `kafka_schema_registry_skip_bytes` — The number of bytes to skip from the beginning of each message when using a schema registry with envelope headers (e.g., AWS Glue Schema Registry, which includes a 19-byte envelope). Range: `[0, 255]`. Default: `0`.
- `kafka_num_consumers` — The number of consumers per table. Specify more consumers if the throughput of one consumer is insufficient. The total number of consumers should not exceed the number of partitions in the topic, since only one consumer can be assigned per partition, and must not be greater than the number of physical cores on the server where ClickHouse is deployed. Default: `1`.
- `kafka_max_block_size` — The maximum batch size (in messages) for poll. Default: `max_insert_block_size`.
- `kafka_skip_broken_messages` — Kafka message parser tolerance to schema-incompatible messages per block. If `kafka_skip_broken_messages = N`, the engine skips `N` Kafka messages that cannot be parsed (a message equals a row of data). Default: `0`.
- `kafka_commit_every_batch` — Commit every consumed and handled batch instead of a single commit after writing a whole block. Default: `0`.
- `kafka_client_id` — Client identifier. Empty by default.
- `kafka_poll_timeout_ms` — Timeout for a single poll from Kafka. Default: `stream_poll_timeout_ms`.
- `kafka_poll_max_batch_size` — Maximum amount of messages to be polled in a single Kafka poll. Default: `max_block_size`.
- `kafka_flush_interval_ms` — Timeout for flushing data from Kafka. Default: `stream_flush_interval_ms`.
- `kafka_thread_per_consumer` — Provide an independent thread for each consumer. When enabled, every consumer flushes its data independently, in parallel (otherwise, rows from several consumers are squashed to form one block). Default: `0`.
- `kafka_handle_error_mode` — How to handle errors for the Kafka engine. Possible values: `default` (an exception is thrown if parsing a message fails), `stream` (the exception message and raw message are saved in the virtual columns `_error` and `_raw_message`), `dead_letter_queue` (error-related data is saved in `system.dead_letter_queue`).
- `kafka_commit_on_select` — Commit messages when a select query is made. Default: `false`.
- `kafka_max_rows_per_message` — The maximum number of rows written in one Kafka message for row-based formats. Default: `1`.
- `kafka_compression_codec` — Compression codec used for producing messages. Supported: empty string, `none`, `gzip`, `snappy`, `lz4`, `zstd`. In case of an empty string, the compression codec is not set by the table, so values from the config files or the default value from `librdkafka` are used. Default: empty string.
- `kafka_compression_level` — Compression level parameter for the algorithm selected by `kafka_compression_codec`. Higher values result in better compression at the cost of more CPU usage. The usable range is algorithm-dependent: `[0-9]` for `gzip`; `[0-12]` for `lz4`; only `0` for `snappy`; `[0-12]` for `zstd`; `-1` = codec-dependent default compression level. Default: `-1`.
Examples:
```sql
CREATE TABLE queue (
timestamp UInt64,
level String,
message String
) ENGINE = Kafka('localhost:9092', 'topic', 'group1', 'JSONEachRow');
SELECT * FROM queue LIMIT 5;
CREATE TABLE queue2 (
timestamp UInt64,
level String,
message String
) ENGINE = Kafka SETTINGS kafka_broker_list = 'localhost:9092',
kafka_topic_list = 'topic',
kafka_group_name = 'group1',
kafka_format = 'JSONEachRow',
kafka_num_consumers = 4;
CREATE TABLE queue3 (
timestamp UInt64,
level String,
message String
) ENGINE = Kafka('localhost:9092', 'topic', 'group1')
SETTINGS kafka_format = 'JSONEachRow',
kafka_num_consumers = 4;
```
**Deprecated method for creating a table**

:::note
Do not use this method in new projects. If possible, switch old projects to the method described above.
:::

```sql
Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format
      [, kafka_row_delimiter, kafka_schema, kafka_num_consumers, kafka_max_block_size, kafka_skip_broken_messages, kafka_commit_every_batch, kafka_client_id, kafka_poll_timeout_ms, kafka_poll_max_batch_size, kafka_flush_interval_ms, kafka_thread_per_consumer, kafka_handle_error_mode, kafka_commit_on_select, kafka_max_rows_per_message]);
```

:::info
The Kafka table engine doesn't support columns with a default value. If you need columns with a default value, you can add them at the materialized view level (see below).
:::

## Description {#description}

The delivered messages are tracked automatically, so each message in a group is only counted once. If you want to get the data twice, then create a copy of the table with another group name.

Groups are flexible and synced on the cluster. For instance, if you have 10 topics and 5 copies of a table in a cluster, then each copy gets 2 topics. If the number of copies changes, the topics are redistributed across the copies automatically. Read more about this at http://kafka.apache.org/intro.
-0.0518663227558136,
-0.005862755235284567,
-0.16765134036540985,
-0.02460717409849167,
-0.057042717933654785,
-0.028869984671473503,
-0.06647732108831406,
0.04763892665505409,
0.023259490728378296,
0.014874155633151531,
-0.02110343612730503,
-0.016889551654458046,
0.002348161768168211,
-0... |
It is recommended that each Kafka topic have its own dedicated consumer group, ensuring exclusive pairing between the topic and the group, especially in environments where topics may be created and deleted dynamically (e.g., in testing or staging).

`SELECT` is not particularly useful for reading messages (except for debugging), because each message can be read only once. It is more practical to create real-time threads using materialized views. To do this:

1. Use the engine to create a Kafka consumer and consider it a data stream.
2. Create a table with the desired structure.
3. Create a materialized view that converts data from the engine and puts it into a previously created table.

When the `MATERIALIZED VIEW` joins the engine, it starts collecting data in the background. This allows you to continually receive messages from Kafka and convert them to the required format using `SELECT`. One Kafka table can have as many materialized views as you like. They do not read data from the Kafka table directly, but receive new records (in blocks); this way you can write to several tables with different levels of detail (with grouping/aggregation and without).

Example:

```sql
CREATE TABLE queue (
    timestamp UInt64,
    level String,
    message String
) ENGINE = Kafka('localhost:9092', 'topic', 'group1', 'JSONEachRow');

CREATE TABLE daily (
    day Date,
    level String,
    total UInt64
) ENGINE = SummingMergeTree(day, (day, level), 8192);

CREATE MATERIALIZED VIEW consumer TO daily
    AS SELECT toDate(toDateTime(timestamp)) AS day, level, count() AS total
    FROM queue GROUP BY day, level;

SELECT level, sum(total) FROM daily GROUP BY level;
```

To improve performance, received messages are grouped into blocks of size `max_insert_block_size`. If the block wasn't formed within `stream_flush_interval_ms` milliseconds, the data will be flushed to the table regardless of the completeness of the block.

To stop receiving topic data or to change the conversion logic, detach the materialized view:

```sql
DETACH TABLE consumer;
ATTACH TABLE consumer;
```

If you want to change the target table by using `ALTER`, we recommend disabling the materialized view to avoid discrepancies between the target table and the data from the view.

## Configuration {#configuration}

Similar to GraphiteMergeTree, the Kafka engine supports extended configuration using the ClickHouse config file. There are two configuration keys that you can use: global (below `<kafka>`) and topic-level (below `<kafka><kafka_topic>`). The global configuration is applied first, and then the topic-level configuration is applied (if it exists).
```xml
<kafka>
    <!-- Global configuration options -->
    <debug>cgrp</debug>
    <statistics_interval_ms>3000</statistics_interval_ms>

    <kafka_topic>
        <name>logs</name>
        <statistics_interval_ms>4000</statistics_interval_ms>
    </kafka_topic>

    <!-- Settings for consumer -->
    <consumer>
        <auto_offset_reset>smallest</auto_offset_reset>
        <kafka_topic>
            <name>logs</name>
            <fetch_min_bytes>100000</fetch_min_bytes>
        </kafka_topic>
        <kafka_topic>
            <name>stats</name>
            <fetch_min_bytes>50000</fetch_min_bytes>
        </kafka_topic>
    </consumer>

    <!-- Settings for producer -->
    <producer>
        <kafka_topic>
            <name>logs</name>
            <retry_backoff_ms>250</retry_backoff_ms>
        </kafka_topic>
        <kafka_topic>
            <name>stats</name>
            <retry_backoff_ms>400</retry_backoff_ms>
        </kafka_topic>
    </producer>
</kafka>
```
For a list of possible configuration options, see the librdkafka configuration reference. Use the underscore (`_`) instead of a dot in the ClickHouse configuration. For example, `check.crcs=true` becomes `<check_crcs>true</check_crcs>`.

### Kerberos support {#kafka-kerberos-support}

To deal with Kerberos-aware Kafka, add a `security_protocol` child element with the `sasl_plaintext` value. It is enough if a Kerberos ticket-granting ticket is obtained and cached by OS facilities. ClickHouse is able to maintain Kerberos credentials using a keytab file. Consider the `sasl_kerberos_service_name`, `sasl_kerberos_keytab` and `sasl_kerberos_principal` child elements.

Example:

```xml
<!-- Kerberos-aware Kafka -->
<kafka>
    <security_protocol>SASL_PLAINTEXT</security_protocol>
    <sasl_kerberos_keytab>/home/kafkauser/kafkauser.keytab</sasl_kerberos_keytab>
    <sasl_kerberos_principal>kafkauser/kafkahost@EXAMPLE.COM</sasl_kerberos_principal>
</kafka>
```
## Virtual columns {#virtual-columns}

- `_topic` — Kafka topic. Data type: `LowCardinality(String)`.
- `_key` — Key of the message. Data type: `String`.
- `_offset` — Offset of the message. Data type: `UInt64`.
- `_timestamp` — Timestamp of the message. Data type: `Nullable(DateTime)`.
- `_timestamp_ms` — Timestamp in milliseconds of the message. Data type: `Nullable(DateTime64(3))`.
- `_partition` — Partition of the Kafka topic. Data type: `UInt64`.
- `_headers.name` — Array of the message's header keys. Data type: `Array(String)`.
- `_headers.value` — Array of the message's header values. Data type: `Array(String)`.

Additional virtual columns when `kafka_handle_error_mode='stream'`:

- `_raw_message` - Raw message that couldn't be parsed successfully. Data type: `String`.
- `_error` - Exception message that occurred during failed parsing. Data type: `String`.

Note: the `_raw_message` and `_error` virtual columns are filled only in case of an exception during parsing; they are always empty when a message was parsed successfully.
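For example, a debugging query over the error-stream virtual columns might look like the following sketch (assuming a Kafka table named `queue` created with `kafka_handle_error_mode='stream'`):

```sql
-- Inspect messages that failed to parse, with their position in the topic
SELECT
    _topic,
    _partition,
    _offset,
    _raw_message,
    _error
FROM queue
WHERE _error != '';
```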
## Data formats support {#data-formats-support}

The Kafka engine supports all formats supported in ClickHouse. The number of rows in one Kafka message depends on whether the format is row-based or block-based:

- For row-based formats, the number of rows in one Kafka message can be controlled by setting `kafka_max_rows_per_message`.
- For block-based formats, we cannot divide a block into smaller parts, but the number of rows in one block can be controlled by the general setting `max_block_size`.
## Engine to store committed offsets in ClickHouse Keeper {#engine-to-store-committed-offsets-in-clickhouse-keeper}

If `allow_experimental_kafka_offsets_storage_in_keeper` is enabled, then two more settings can be specified for the Kafka table engine:

- `kafka_keeper_path` specifies the path to the table in ClickHouse Keeper
- `kafka_replica_name` specifies the replica name in ClickHouse Keeper

Either both of the settings must be specified or neither of them. When both are specified, a new, experimental Kafka engine is used. The new engine doesn't depend on storing the committed offsets in Kafka; it stores them in ClickHouse Keeper. It still tries to commit the offsets to Kafka, but it only depends on those offsets when the table is created. In any other circumstance (the table is restarted, or recovered after some error), the offsets stored in ClickHouse Keeper are used to continue consuming messages from. Apart from the committed offset, it also stores how many messages were consumed in the last batch, so if the insert fails, the same amount of messages will be consumed, thus enabling deduplication if necessary.

Example:

```sql
CREATE TABLE experimental_kafka (key UInt64, value UInt64)
ENGINE = Kafka('localhost:19092', 'my-topic', 'my-consumer', 'JSONEachRow')
SETTINGS
    kafka_keeper_path = '/clickhouse/{database}/{uuid}',
    kafka_replica_name = '{replica}'
SETTINGS allow_experimental_kafka_offsets_storage_in_keeper=1;
```

### Known limitations {#known-limitations}

As the new engine is experimental, it is not production ready yet. There are a few known limitations of the implementation:

- The biggest limitation is that the engine doesn't support direct reading. Reading from the engine using materialized views and writing to the engine work, but direct reading doesn't. As a result, all direct `SELECT` queries will fail.
- Rapidly dropping and recreating the table or specifying the same ClickHouse Keeper path to different engines might cause issues. As a best practice, use `{uuid}` in `kafka_keeper_path` to avoid clashing paths.
- To make repeatable reads possible, messages cannot be consumed from multiple partitions on a single thread. On the other hand, the Kafka consumers have to be polled regularly to keep them alive. As a result of these two objectives, we decided to only allow creating multiple consumers if `kafka_thread_per_consumer` is enabled; otherwise it is too complicated to avoid issues regarding polling consumers regularly.
- Consumers created by the new storage engine do not show up in the `system.kafka_consumers` table.

See Also

- Virtual columns
- `background_message_broker_schedule_pool_size`
- `system.kafka_consumers`
---
description: 'This engine provides a read-only integration with existing Apache Iceberg tables in Amazon S3, Azure, HDFS and locally stored tables.'
sidebar_label: 'Iceberg'
sidebar_position: 90
slug: /engines/table-engines/integrations/iceberg
title: 'Iceberg table engine'
doc_type: 'reference'
---
# Iceberg table engine {#iceberg-table-engine}

:::warning
We recommend using the Iceberg table function for working with Iceberg data in ClickHouse. The Iceberg table function currently provides sufficient functionality, offering a partial read-only interface for Iceberg tables.

The Iceberg table engine is available but may have limitations. ClickHouse wasn't originally designed to support tables with externally changing schemas, which can affect the functionality of the Iceberg table engine. As a result, some features that work with regular tables may be unavailable or may not function correctly, especially when using the old analyzer.

For optimal compatibility, we suggest using the Iceberg table function while we continue to improve support for the Iceberg table engine.
:::

This engine provides a read-only integration with existing Apache Iceberg tables in Amazon S3, Azure, HDFS and locally stored tables.

## Create table {#create-table}

Note that the Iceberg table must already exist in the storage; this command does not take DDL parameters to create a new table.

```sql
CREATE TABLE iceberg_table_s3
    ENGINE = IcebergS3(url, [, NOSIGN | access_key_id, secret_access_key, [session_token]], format, [,compression])

CREATE TABLE iceberg_table_azure
    ENGINE = IcebergAzure(connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression])

CREATE TABLE iceberg_table_hdfs
    ENGINE = IcebergHDFS(path_to_table, [,format] [,compression_method])

CREATE TABLE iceberg_table_local
    ENGINE = IcebergLocal(path_to_table, [,format] [,compression_method])
```

## Engine arguments {#engine-arguments}

The description of the arguments coincides with the description of the arguments in the `S3`, `AzureBlobStorage`, `HDFS` and `File` engines, correspondingly. `format` stands for the format of the data files in the Iceberg table.

Engine parameters can be specified using Named Collections.

### Example {#example}

```sql
CREATE TABLE iceberg_table ENGINE=IcebergS3('http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test')
```

Using named collections:

```xml
<clickhouse>
    <named_collections>
        <iceberg_conf>
            <url>http://test.s3.amazonaws.com/clickhouse-bucket/</url>
            <access_key_id>test</access_key_id>
            <secret_access_key>test</secret_access_key>
        </iceberg_conf>
    </named_collections>
</clickhouse>
```

```sql
CREATE TABLE iceberg_table ENGINE=IcebergS3(iceberg_conf, filename = 'test_table')
```
## Aliases {#aliases}

Table engine `Iceberg` is an alias to `IcebergS3` now.

## Schema evolution {#schema-evolution}
At the moment, with the help of CH, you can read iceberg tables, the schema of which has changed over time. We currently support reading tables where columns have been added and removed, and their order has changed. You can also change a column where a value is required to one where NULL is allowed. Additionally, we support permitted type casting for simple types, namely: Β
* int -> long
* float -> double
* decimal(P, S) -> decimal(P', S) where P' > P.
Currently, it is not possible to change nested structures or the types of elements within arrays and maps.
To read a table where the schema has changed after its creation with dynamic schema inference, set `allow_dynamic_metadata_for_data_lakes = true` when creating the table.
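As an illustration, the setting is applied at table creation time. This is a minimal sketch; the bucket path and credentials below are placeholders, not real endpoints:

```sql
-- Hypothetical table; the S3 path and key/secret values are placeholders.
CREATE TABLE iceberg_evolving
ENGINE = IcebergS3('http://test.s3.amazonaws.com/clickhouse-bucket/evolving_table', 'test', 'test')
SETTINGS allow_dynamic_metadata_for_data_lakes = true;

-- Subsequent reads follow the schema recorded in the latest Iceberg metadata.
SELECT * FROM iceberg_evolving;
```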
## Partition pruning {#partition-pruning}
ClickHouse supports partition pruning during SELECT queries for Iceberg tables, which helps optimize query performance by skipping irrelevant data files. To enable partition pruning, set `use_iceberg_partition_pruning = 1`. For more information about Iceberg partition pruning, see https://iceberg.apache.org/spec/#partitioning.
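A hedged sketch of enabling pruning for a single query; the table and column names are placeholders assumed to exist:

```sql
-- With pruning enabled, data files whose partition values cannot match
-- the WHERE clause are skipped entirely.
SELECT count()
FROM iceberg_table
WHERE event_date = '2024-01-01'
SETTINGS use_iceberg_partition_pruning = 1;
```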
## Time travel {#time-travel}
ClickHouse supports time travel for Iceberg tables, allowing you to query historical data with a specific timestamp or snapshot ID.
## Processing of tables with deleted rows {#deleted-rows}
Currently, only Iceberg tables with position deletes are supported.
The following deletion methods are **not supported**:
- Equality deletes
- Deletion vectors (introduced in v3)
## Basic usage {#basic-usage}
```sql
SELECT * FROM example_table ORDER BY 1
SETTINGS iceberg_timestamp_ms = 1714636800000
```
```sql
SELECT * FROM example_table ORDER BY 1
SETTINGS iceberg_snapshot_id = 3547395809148285433
```
Note: You cannot specify both `iceberg_timestamp_ms` and `iceberg_snapshot_id` parameters in the same query.
## Important considerations {#important-considerations}
Snapshots are typically created when:
- New data is written to the table
- Some kind of data compaction is performed

Schema changes typically don't create snapshots. This leads to important behaviors when using time travel with tables that have undergone schema evolution.
## Example scenarios {#example-scenarios}
All scenarios are written in Spark because ClickHouse doesn't support writing to Iceberg tables yet.
### Scenario 1: Schema changes without new snapshots {#scenario-1}
Consider this sequence of operations:
```sql
-- Create a table with two columns
CREATE TABLE IF NOT EXISTS spark_catalog.db.time_travel_example (
order_number int,
product_code string
)
USING iceberg
OPTIONS ('format-version'='2')
-- Insert data into the table
INSERT INTO spark_catalog.db.time_travel_example VALUES
(1, 'Mars')
ts1 = now() // A piece of pseudo code | {"source_file": "iceberg.md"} | [
-- Alter table to add a new column
ALTER TABLE spark_catalog.db.time_travel_example ADD COLUMN (price double)
ts2 = now()
-- Insert data into the table
INSERT INTO spark_catalog.db.time_travel_example VALUES (2, 'Venus', 100)
ts3 = now()
-- Query the table at each timestamp
SELECT * FROM spark_catalog.db.time_travel_example TIMESTAMP AS OF ts1;
+------------+------------+
|order_number|product_code|
+------------+------------+
| 1| Mars|
+------------+------------+
SELECT * FROM spark_catalog.db.time_travel_example TIMESTAMP AS OF ts2;
+------------+------------+
|order_number|product_code|
+------------+------------+
| 1| Mars|
+------------+------------+
SELECT * FROM spark_catalog.db.time_travel_example TIMESTAMP AS OF ts3;
+------------+------------+-----+
|order_number|product_code|price|
+------------+------------+-----+
| 1| Mars| NULL|
| 2| Venus|100.0|
+------------+------------+-----+
```
Query results at different timestamps:
- At ts1 & ts2: Only the original two columns appear
- At ts3: All three columns appear, with NULL for the price of the first row
### Scenario 2: Historical vs. current schema differences {#scenario-2}
A time travel query at a current moment might show a different schema than the current table:
```sql
-- Create a table
CREATE TABLE IF NOT EXISTS spark_catalog.db.time_travel_example_2 (
order_number int,
product_code string
)
USING iceberg
OPTIONS ('format-version'='2')
-- Insert initial data into the table
INSERT INTO spark_catalog.db.time_travel_example_2 VALUES (2, 'Venus');
-- Alter table to add a new column
ALTER TABLE spark_catalog.db.time_travel_example_2 ADD COLUMN (price double);
ts = now();
-- Query the table at a current moment but using timestamp syntax
SELECT * FROM spark_catalog.db.time_travel_example_2 TIMESTAMP AS OF ts;
+------------+------------+
|order_number|product_code|
+------------+------------+
| 2| Venus|
+------------+------------+
-- Query the table at a current moment
SELECT * FROM spark_catalog.db.time_travel_example_2;
+------------+------------+-----+
|order_number|product_code|price|
+------------+------------+-----+
| 2| Venus| NULL|
+------------+------------+-----+
```
This happens because `ALTER TABLE` doesn't create a new snapshot; for the current table, Spark takes the value of `schema_id` from the latest metadata file, not from a snapshot.
### Scenario 3: Historical vs. current schema differences {#scenario-3}
Another consequence is that while doing time travel you can't get the state of a table before any data was written to it:
```sql
-- Create a table
CREATE TABLE IF NOT EXISTS spark_catalog.db.time_travel_example_3 (
order_number int,
product_code string
)
USING iceberg
OPTIONS ('format-version'='2');
ts = now();
-- Query the table at a specific timestamp
SELECT * FROM spark_catalog.db.time_travel_example_3 TIMESTAMP AS OF ts; -- Finishes with error: Cannot find a snapshot older than ts.
```
In ClickHouse the behavior is consistent with Spark. You can mentally replace Spark SELECT queries with ClickHouse SELECT queries and it will work the same way.
## Metadata file resolution {#metadata-file-resolution}
When using the `Iceberg` table engine in ClickHouse, the system needs to locate the correct metadata.json file that describes the Iceberg table structure. Here's how this resolution process works:
### Candidates search {#candidate-search}
1. **Direct Path Specification**: If you set `iceberg_metadata_file_path`, the system will use this exact path by combining it with the Iceberg table directory path. When this setting is provided, all other resolution settings are ignored.
2. **Table UUID Matching**: If `iceberg_metadata_table_uuid` is specified, the system will:
   - Look only at `.metadata.json` files in the `metadata` directory
   - Filter for files containing a `table-uuid` field matching your specified UUID (case-insensitive)
3. **Default Search**: If neither of the above settings is provided, all `.metadata.json` files in the `metadata` directory become candidates.
### Selecting the most recent file {#most-recent-file}
After identifying candidate files using the above rules, the system determines which one is the most recent:
- If `iceberg_recent_metadata_file_by_last_updated_ms_field` is enabled: the file with the largest `last-updated-ms` value is selected.
- Otherwise: the file with the highest version number is selected (the version appears as `V` in filenames formatted as `V.metadata.json` or `V-uuid.metadata.json`).
**Note**: All mentioned settings are engine-level settings and must be specified during table creation as shown below:
```sql
CREATE TABLE example_table ENGINE = Iceberg(
    's3://bucket/path/to/iceberg_table'
) SETTINGS iceberg_metadata_table_uuid = '6f6f6407-c6a5-465f-a808-ea8900e35a38';
```
**Note**: While Iceberg Catalogs typically handle metadata resolution, the `Iceberg` table engine in ClickHouse directly interprets files stored in S3 as Iceberg tables, which is why understanding these resolution rules is important.
## Data cache {#data-cache}
`Iceberg` table engine and table function support data caching, the same as the `S3`, `AzureBlobStorage`, and `HDFS` storages. See here.
## Metadata cache {#metadata-cache}
`Iceberg` table engine and table function support a metadata cache storing the information of manifest files, the manifest list, and metadata json. The cache is stored in memory. This feature is controlled by the setting `use_iceberg_metadata_files_cache`, which is enabled by default.
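Assuming it behaves as a query-level setting, the cache could be bypassed for a single query when metadata files are known to have just changed on storage. A sketch; `iceberg_table` is a placeholder:

```sql
-- Disable the in-memory metadata cache for this query only,
-- forcing metadata JSON and manifests to be re-read from storage.
SELECT count()
FROM iceberg_table
SETTINGS use_iceberg_metadata_files_cache = 0;
```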
## See also {#see-also}
- iceberg table function
---
description: 'This engine allows integrating ClickHouse with RabbitMQ.'
sidebar_label: 'RabbitMQ'
sidebar_position: 170
slug: /engines/table-engines/integrations/rabbitmq
title: 'RabbitMQ table engine'
doc_type: 'guide'
---
# RabbitMQ table engine
This engine allows integrating ClickHouse with RabbitMQ.
RabbitMQ lets you:
- Publish or subscribe to data flows.
- Process streams as they become available.
## Creating a table {#creating-a-table}
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1],
    name2 [type2],
    ...
) ENGINE = RabbitMQ SETTINGS
    rabbitmq_host_port = 'host:port' [or rabbitmq_address = 'amqp(s)://guest:guest@localhost/vhost'],
    rabbitmq_exchange_name = 'exchange_name',
    rabbitmq_format = 'data_format'[,]
    [rabbitmq_exchange_type = 'exchange_type',]
    [rabbitmq_routing_key_list = 'key1,key2,...',]
    [rabbitmq_secure = 0,]
    [rabbitmq_schema = '',]
    [rabbitmq_num_consumers = N,]
    [rabbitmq_num_queues = N,]
    [rabbitmq_queue_base = 'queue',]
    [rabbitmq_deadletter_exchange = 'dl-exchange',]
    [rabbitmq_persistent = 0,]
    [rabbitmq_skip_broken_messages = N,]
    [rabbitmq_max_block_size = N,]
    [rabbitmq_flush_interval_ms = N,]
    [rabbitmq_queue_settings_list = 'x-dead-letter-exchange=my-dlx,x-max-length=10,x-overflow=reject-publish',]
    [rabbitmq_queue_consume = false,]
    [rabbitmq_address = '',]
    [rabbitmq_vhost = '/',]
    [rabbitmq_username = '',]
    [rabbitmq_password = '',]
    [rabbitmq_commit_on_select = false,]
    [rabbitmq_max_rows_per_message = 1,]
    [rabbitmq_handle_error_mode = 'default']
```
Required parameters:
- `rabbitmq_host_port` – host:port (for example, `localhost:5672`).
- `rabbitmq_exchange_name` – RabbitMQ exchange name.
- `rabbitmq_format` – Message format. Uses the same notation as the SQL `FORMAT` function, such as `JSONEachRow`. For more information, see the Formats section.
Optional parameters:
- `rabbitmq_exchange_type` – The type of RabbitMQ exchange: `direct`, `fanout`, `topic`, `headers`, `consistent_hash`. Default: `fanout`.
- `rabbitmq_routing_key_list` – A comma-separated list of routing keys.
- `rabbitmq_schema` – Parameter that must be used if the format requires a schema definition. For example, Cap'n Proto requires the path to the schema file and the name of the root `schema.capnp:Message` object.
- `rabbitmq_num_consumers` – The number of consumers per table. Specify more consumers if the throughput of one consumer is insufficient. Default: `1`.
- `rabbitmq_num_queues` – Total number of queues. Increasing this number can significantly improve performance. Default: `1`.
- `rabbitmq_queue_base` – Specify a hint for queue names. Use cases of this setting are described below.
- `rabbitmq_deadletter_exchange` – Specify a name for a dead letter exchange. You can create another table with this exchange name and collect messages in cases when they are republished to the dead letter exchange. By default the dead letter exchange is not specified.
- `rabbitmq_persistent` – If set to 1 (true), in insert queries the delivery mode will be set to 2 (marks messages as 'persistent'). Default: `0`.
- `rabbitmq_skip_broken_messages` – RabbitMQ message parser tolerance to schema-incompatible messages per block. If `rabbitmq_skip_broken_messages = N` then the engine skips `N` RabbitMQ messages that cannot be parsed (a message equals a row of data). Default: `0`.
- `rabbitmq_max_block_size` – Number of rows collected before flushing data from RabbitMQ. Default: `max_insert_block_size`.
- `rabbitmq_flush_interval_ms` – Timeout for flushing data from RabbitMQ. Default: `stream_flush_interval_ms`.
- `rabbitmq_queue_settings_list` – Allows setting RabbitMQ settings when creating a queue. Available settings: `x-max-length`, `x-max-length-bytes`, `x-message-ttl`, `x-expires`, `x-priority`, `x-max-priority`, `x-overflow`, `x-dead-letter-exchange`, `x-queue-type`. The `durable` setting is enabled automatically for the queue.
- `rabbitmq_address` – Address for the connection. Use either this setting or `rabbitmq_host_port`.
- `rabbitmq_vhost` – RabbitMQ vhost. Default: `'/'`.
- `rabbitmq_queue_consume` – Use user-defined queues and do not make any RabbitMQ setup: declaring exchanges, queues, bindings. Default: `false`.
- `rabbitmq_username` – RabbitMQ username.
- `rabbitmq_password` – RabbitMQ password.
- `reject_unhandled_messages` – Reject messages (send a RabbitMQ negative acknowledgement) in case of errors. This setting is automatically enabled if there is an `x-dead-letter-exchange` defined in `rabbitmq_queue_settings_list`.
- `rabbitmq_commit_on_select` – Commit messages when a select query is made. Default: `false`.
- `rabbitmq_max_rows_per_message` – The maximum number of rows written in one RabbitMQ message for row-based formats. Default: `1`.
- `rabbitmq_empty_queue_backoff_start` – A start backoff point to reschedule reads if the RabbitMQ queue is empty.
- `rabbitmq_empty_queue_backoff_end` – An end backoff point to reschedule reads if the RabbitMQ queue is empty.
- `rabbitmq_handle_error_mode` – How to handle errors for the RabbitMQ engine. Possible values: `default` (an exception will be thrown if we fail to parse a message), `stream` (the exception message and raw message will be saved in virtual columns `_error` and `_raw_message`), `dead_letter_queue` (error-related data will be saved in `system.dead_letter_queue`).
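For instance, with `rabbitmq_handle_error_mode = 'stream'`, unparsable messages can be inspected through the `_error` and `_raw_message` virtual columns. A sketch; the table and exchange names are placeholders:

```sql
-- Hypothetical table that keeps broken messages instead of throwing.
CREATE TABLE queue_with_errors (key UInt64, value UInt64)
ENGINE = RabbitMQ
SETTINGS rabbitmq_host_port = 'localhost:5672',
         rabbitmq_exchange_name = 'exchange_err',
         rabbitmq_format = 'JSONEachRow',
         rabbitmq_handle_error_mode = 'stream';

-- Rows that failed to parse carry a non-NULL _error and the original payload.
SELECT _raw_message, _error
FROM queue_with_errors
WHERE _error IS NOT NULL;
```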
SSL connection:
Use either `rabbitmq_secure = 1` or `amqps` in the connection address: `rabbitmq_address = 'amqps://guest:guest@localhost/vhost'`.
The default behaviour of the used library is not to check if the created TLS connection is sufficiently secure. Whether the certificate is expired, self-signed, missing or invalid: the connection is simply permitted. More strict checking of certificates can possibly be implemented in the future.
Format settings can also be added along with RabbitMQ-related settings.
Example:
```sql
CREATE TABLE queue (
    key UInt64,
    value UInt64,
    date DateTime
) ENGINE = RabbitMQ SETTINGS rabbitmq_host_port = 'localhost:5672',
                             rabbitmq_exchange_name = 'exchange1',
                             rabbitmq_format = 'JSONEachRow',
                             rabbitmq_num_consumers = 5,
                             date_time_input_format = 'best_effort';
```
The RabbitMQ server configuration should be added using the ClickHouse config file.
Required configuration:
```xml
<rabbitmq>
    <username>root</username>
    <password>clickhouse</password>
</rabbitmq>
```
Additional configuration:
```xml
<rabbitmq>
    <vhost>clickhouse</vhost>
</rabbitmq>
```
## Description {#description}
`SELECT` is not particularly useful for reading messages (except for debugging), because each message can be read only once. It is more practical to create real-time threads using materialized views. To do this:
1. Use the engine to create a RabbitMQ consumer and consider it a data stream.
2. Create a table with the desired structure.
3. Create a materialized view that converts data from the engine and puts it into a previously created table.

When the `MATERIALIZED VIEW` joins the engine, it starts collecting data in the background. This allows you to continually receive messages from RabbitMQ and convert them to the required format using `SELECT`.
One RabbitMQ table can have as many materialized views as you like.
Data can be channeled based on `rabbitmq_exchange_type` and the specified `rabbitmq_routing_key_list`.
There can be no more than one exchange per table. One exchange can be shared between multiple tables: it enables routing into multiple tables at the same time.
Exchange type options:
- `direct` – Routing is based on the exact matching of keys. Example table key list: `key1,key2,key3,key4,key5`; a message key can equal any of them.
- `fanout` – Routing to all tables (where the exchange name is the same) regardless of the keys.
- `topic` – Routing is based on patterns with dot-separated keys. Examples: `*.logs`, `records.*.*.2020`, `*.2018,*.2019,*.2020`.
- `headers` – Routing is based on `key=value` matches with a setting `x-match=all` or `x-match=any`. Example table key list: `x-match=all,format=logs,type=report,year=2020`.
- `consistent_hash` – Data is evenly distributed between all bound tables (where the exchange name is the same). Note that this exchange type must be enabled with a RabbitMQ plugin: `rabbitmq-plugins enable rabbitmq_consistent_hash_exchange`.
Setting `rabbitmq_queue_base` may be used for the following cases:
- To let different tables share queues, so that multiple consumers can be registered for the same queues, which gives better performance. If using the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` settings, the exact match of queues is achieved when these parameters are the same.
- To be able to restore reading from certain durable queues when not all messages were successfully consumed. To resume consumption from one specific queue, set its name in the `rabbitmq_queue_base` setting and do not specify `rabbitmq_num_consumers` and `rabbitmq_num_queues` (they default to 1). To resume consumption from all queues which were declared for a specific table, just specify the same settings: `rabbitmq_queue_base`, `rabbitmq_num_consumers`, `rabbitmq_num_queues`. By default, queue names will be unique to tables.
- To reuse queues, as they are declared durable and not auto-deleted. (They can be deleted via any of the RabbitMQ CLI tools.)
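For instance, resuming consumption from a single existing durable queue could look like the following sketch; the queue and exchange names are placeholders:

```sql
-- Re-attach to the existing durable queue named via rabbitmq_queue_base.
-- rabbitmq_num_consumers / rabbitmq_num_queues are left at their default of 1.
CREATE TABLE resume_queue (key UInt64, value UInt64)
ENGINE = RabbitMQ
SETTINGS rabbitmq_host_port = 'localhost:5672',
         rabbitmq_exchange_name = 'exchange1',
         rabbitmq_format = 'JSONEachRow',
         rabbitmq_queue_base = 'queue_base_example';
```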
To improve performance, received messages are grouped into blocks the size of `max_insert_block_size`. If the block wasn't formed within `stream_flush_interval_ms` milliseconds, the data will be flushed to the table regardless of the completeness of the block.
If the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` settings are specified along with `rabbitmq_exchange_type`, then:
- The `rabbitmq-consistent-hash-exchange` plugin must be enabled.
- The `message_id` property of the published messages must be specified (unique for each message/batch).
For insert queries there is message metadata, which is added for each published message: a `messageID` and a `republished` flag (true, if published more than once). They can be accessed via message headers.
Do not use the same table for inserts and materialized views.
Example:
```sql
CREATE TABLE queue (
key UInt64,
value UInt64
) ENGINE = RabbitMQ SETTINGS rabbitmq_host_port = 'localhost:5672',
rabbitmq_exchange_name = 'exchange1',
rabbitmq_exchange_type = 'headers',
rabbitmq_routing_key_list = 'format=logs,type=report,year=2020',
rabbitmq_format = 'JSONEachRow',
rabbitmq_num_consumers = 5;
CREATE TABLE daily (key UInt64, value UInt64)
ENGINE = MergeTree() ORDER BY key;
CREATE MATERIALIZED VIEW consumer TO daily
AS SELECT key, value FROM queue;
SELECT key, value FROM daily ORDER BY key;
```
## Virtual columns {#virtual-columns}
- `_exchange_name` – RabbitMQ exchange name. Data type: `String`.
- `_channel_id` – ChannelID on which the consumer who received the message was declared. Data type: `String`.
- `_delivery_tag` – DeliveryTag of the received message. Scoped per channel. Data type: `UInt64`.
- `_redelivered` – `redelivered` flag of the message. Data type: `UInt8`.
- `_message_id` – messageID of the received message; non-empty if it was set when the message was published. Data type: `String`.
- `_timestamp` – timestamp of the received message; non-empty if it was set when the message was published. Data type: `UInt64`.
Additional virtual columns when `rabbitmq_handle_error_mode='stream'`:
- `_raw_message` – Raw message that couldn't be parsed successfully. Data type: `Nullable(String)`.
- `_error` – Exception message that occurred during failed parsing. Data type: `Nullable(String)`.
Note: `_raw_message` and `_error` virtual columns are filled only in case of an exception during parsing; they are always `NULL` when the message was parsed successfully.
## Caveats {#caveats}
Even though you may specify default column expressions (such as `DEFAULT`, `MATERIALIZED`, `ALIAS`) in the table definition, these will be ignored. Instead, the columns will be filled with their respective default values for their types.
## Data formats support {#data-formats-support}
The RabbitMQ engine supports all formats supported in ClickHouse.
The number of rows in one RabbitMQ message depends on whether the format is row-based or block-based:
- For row-based formats the number of rows in one RabbitMQ message can be controlled by setting `rabbitmq_max_rows_per_message`.
- For block-based formats we cannot divide a block into smaller parts, but the number of rows in one block can be controlled by the general setting `max_block_size`.
---
description: 'This engine allows integrating ClickHouse with RocksDB'
sidebar_label: 'EmbeddedRocksDB'
sidebar_position: 50
slug: /engines/table-engines/integrations/embedded-rocksdb
title: 'EmbeddedRocksDB table engine'
doc_type: 'reference'
---
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
# EmbeddedRocksDB table engine
This engine allows integrating ClickHouse with RocksDB.
## Creating a table {#creating-a-table}
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
    ...
) ENGINE = EmbeddedRocksDB([ttl, rocksdb_dir, read_only]) PRIMARY KEY(primary_key_name)
[ SETTINGS name=value, ... ]
```
Engine parameters:
- `ttl` – time to live for values. TTL is accepted in seconds. If TTL is 0, a regular RocksDB instance is used (without TTL).
- `rocksdb_dir` – path to the directory of an existing RocksDB or the destination path of the created RocksDB. Opens the table with the specified `rocksdb_dir`.
- `read_only` – when `read_only` is set to true, read-only mode is used. For storage with TTL, compaction will not be triggered (neither manual nor automatic), so no expired entries are removed.
- `primary_key_name` – any column name in the column list.
- The primary key must be specified; it supports only one column in the primary key. The primary key will be serialized in binary as a RocksDB key.
- Columns other than the primary key will be serialized in binary as the RocksDB value in corresponding order.
- Queries with key `equals` or `in` filtering will be optimized to multi-key lookups from RocksDB.
Engine settings:
- `optimize_for_bulk_insert` – The table is optimized for bulk insertions (the insert pipeline will create SST files and import them into the RocksDB database instead of writing to memtables); default value: `1`.
- `bulk_insert_block_size` – Minimum size of SST files (in terms of rows) created by bulk insertion; default value: `1048449`.
Example:
```sql
CREATE TABLE test
(
    `key` String,
    `v1` UInt32,
    `v2` String,
    `v3` Float32
)
ENGINE = EmbeddedRocksDB
PRIMARY KEY key
```
## Metrics {#metrics}
There is also a `system.rocksdb` table that exposes RocksDB statistics:
```sql
SELECT
    name,
    value
FROM system.rocksdb
```
```text
┌─name──────────────────────┬─value─┐
│ no.file.opens             │     1 │
│ number.block.decompressed │     1 │
└───────────────────────────┴───────┘
```
## Configuration {#configuration}
You can also change any rocksdb options using config:
```xml
<rocksdb>
    <options>
        <max_background_jobs>8</max_background_jobs>
    </options>
    <column_family_options>
        <num_levels>2</num_levels>
    </column_family_options>
    <tables>
        <table>
            <name>TABLE</name>
            <options>
                <max_background_jobs>8</max_background_jobs>
            </options>
            <column_family_options>
                <num_levels>2</num_levels>
            </column_family_options>
        </table>
    </tables>
</rocksdb>
```
By default the trivial approximate count optimization is turned off, which might affect the performance of `count()` queries. To enable this optimization, set `optimize_trivial_approximate_count_query = 1`. Also, this setting affects `system.tables` for the EmbeddedRocksDB engine; turn the setting on to see approximate values for `total_rows` and `total_bytes`.
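A sketch of both effects, assuming the `test` table from the example above exists:

```sql
-- Returns an approximate row count from RocksDB estimates instead of a full scan.
SELECT count() FROM test
SETTINGS optimize_trivial_approximate_count_query = 1;

-- With the same setting enabled, system.tables exposes approximate sizes
-- for EmbeddedRocksDB tables.
SELECT total_rows, total_bytes
FROM system.tables
WHERE name = 'test'
SETTINGS optimize_trivial_approximate_count_query = 1;
```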
## Supported operations {#supported-operations}
### Inserts {#inserts}
When new rows are inserted into `EmbeddedRocksDB`, if the key already exists, the value will be updated; otherwise a new key is created.
Example:
```sql
INSERT INTO test VALUES ('some key', 1, 'value', 3.2);
```
### Deletes {#deletes}
Rows can be deleted using a `DELETE` query or `TRUNCATE`.
```sql
DELETE FROM test WHERE key LIKE 'some%' AND v1 > 1;
```
```sql
ALTER TABLE test DELETE WHERE key LIKE 'some%' AND v1 > 1;
```
```sql
TRUNCATE TABLE test;
```
### Updates {#updates}
Values can be updated using the `ALTER TABLE` query. The primary key cannot be updated.
```sql
ALTER TABLE test UPDATE v1 = v1 * 10 + 2 WHERE key LIKE 'some%' AND v3 > 3.1;
```
### Joins {#joins}
A special `direct` join with EmbeddedRocksDB tables is supported. This direct join avoids forming a hash table in memory and accesses the data directly from the EmbeddedRocksDB. With large joins you may see much lower memory usage with direct joins because the hash table is not created.
To enable direct joins:
```sql
SET join_algorithm = 'direct, hash'
```
:::tip
When `join_algorithm` is set to `direct, hash`, direct joins will be used when possible, and hash otherwise.
:::
### Example {#example}
#### Create and populate an EmbeddedRocksDB table {#create-and-populate-an-embeddedrocksdb-table}
```sql
CREATE TABLE rdb
(
    `key` UInt32,
    `value` Array(UInt32),
    `value2` String
)
ENGINE = EmbeddedRocksDB
PRIMARY KEY key
```
```sql
INSERT INTO rdb
SELECT
    toUInt32(sipHash64(number) % 10) AS key,
    [key, key+1] AS value,
    ('val2' || toString(key)) AS value2
FROM numbers_mt(10);
```
#### Create and populate a table to join with table `rdb` {#create-and-populate-a-table-to-join-with-table-rdb}
```sql
CREATE TABLE t2
(
    `k` UInt16
)
ENGINE = TinyLog
```
```sql
INSERT INTO t2 SELECT number AS k
FROM numbers_mt(10)
```
#### Set the join algorithm to `direct` {#set-the-join-algorithm-to-direct}
```sql
SET join_algorithm = 'direct'
```
#### An INNER JOIN {#an-inner-join}
```sql
SELECT *
FROM
(
    SELECT k AS key
    FROM t2
) AS t2
INNER JOIN rdb ON rdb.key = t2.key
ORDER BY key ASC
```
```text
┌─key─┬─rdb.key─┬─value──┬─value2─┐
│   0 │       0 │ [0,1]  │ val20  │
│   2 │       2 │ [2,3]  │ val22  │
│   3 │       3 │ [3,4]  │ val23  │
│   6 │       6 │ [6,7]  │ val26  │
│   7 │       7 │ [7,8]  │ val27  │
│   8 │       8 │ [8,9]  │ val28  │
│   9 │       9 │ [9,10] │ val29  │
└─────┴─────────┴────────┴────────┘
```
#### More information on Joins {#more-information-on-joins}
- `join_algorithm` setting
- JOIN clause
---
description: 'The Hive engine allows you to perform SELECT queries on HDFS Hive table.'
sidebar_label: 'Hive'
sidebar_position: 84
slug: /engines/table-engines/integrations/hive
title: 'Hive table engine'
doc_type: 'guide'
---
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
# Hive table engine
The Hive engine allows you to perform `SELECT` queries on an HDFS Hive table. Currently it supports the input formats below:
- Text: only supports simple scalar column types except `binary`
- ORC: supports simple scalar column types except `char`; only supports complex types like `array`
- Parquet: supports all simple scalar column types; only supports complex types like `array`
## Creating a table {#creating-a-table}
```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [ALIAS expr1],
    name2 [type2] [ALIAS expr2],
    ...
) ENGINE = Hive('thrift://host:port', 'database', 'table');
PARTITION BY expr
```
See a detailed description of the CREATE TABLE query.
The table structure can differ from the original Hive table structure:
- Column names should be the same as in the original Hive table, but you can use just some of these columns and in any order; you can also use some alias columns calculated from other columns.
- Column types should be the same as those in the original Hive table.
- The partition by expression should be consistent with the original Hive table, and the columns in the partition by expression should be in the table structure.
Engine Parameters
thrift://host:port
β Hive Metastore address
database
β Remote database name.
table
β Remote table name.
Usage example {#usage-example}
How to use local cache for HDFS filesystem {#how-to-use-local-cache-for-hdfs-filesystem}
We strongly advise you to enable a local cache for remote filesystems. Benchmarks show that it is almost 2x faster with the cache enabled.
Before using cache, add it to
config.xml
xml
<local_cache_for_remote_fs>
<enable>true</enable>
<root_dir>local_cache</root_dir>
<limit_size>559096952</limit_size>
<bytes_read_before_flush>1048576</bytes_read_before_flush>
</local_cache_for_remote_fs>
enable: ClickHouse will maintain a local cache for the remote filesystem (HDFS) after startup if true.
root_dir: Required. The root directory to store local cache files for the remote filesystem.
limit_size: Required. The maximum size (in bytes) of local cache files.
bytes_read_before_flush: Controls the number of bytes read before flushing to the local filesystem when downloading a file from the remote filesystem. The default value is 1MB.
Query Hive table with ORC input format {#query-hive-table-with-orc-input-format}
Create Table in Hive {#create-table-in-hive} | {"source_file": "hive.md"} | [
0.019166424870491028,
-0.03637366369366646,
-0.021350635215640068,
0.018664715811610222,
0.0145682692527771,
0.027181535959243774,
0.041115500032901764,
-0.0036225500516593456,
-0.07478778064250946,
0.04901769012212753,
-0.02720429003238678,
-0.0201386921107769,
0.04165956750512123,
-0.023... |
6e62f596-9628-45a6-bfeb-788e6b066cc4 | Query Hive table with ORC input format {#query-hive-table-with-orc-input-format}
Create Table in Hive {#create-table-in-hive}
```text
hive > CREATE TABLE `test`.`test_orc`(
  `f_tinyint` tinyint,
  `f_smallint` smallint,
  `f_int` int,
  `f_integer` int,
  `f_bigint` bigint,
  `f_float` float,
  `f_double` double,
  `f_decimal` decimal(10,0),
  `f_timestamp` timestamp,
  `f_date` date,
  `f_string` string,
  `f_varchar` varchar(100),
  `f_bool` boolean,
  `f_binary` binary,
  `f_array_int` array<int>,
  `f_array_string` array<string>,
  `f_array_float` array<float>,
  `f_array_array_int` array<array<int>>,
  `f_array_array_string` array<array<string>>,
  `f_array_array_float` array<array<float>>)
PARTITIONED BY (
  `day` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://testcluster/data/hive/test.db/test_orc'
OK
Time taken: 0.51 seconds
hive > insert into test.test_orc partition(day='2021-09-18') select 1, 2, 3, 4, 5, 6.11, 7.22, 8.333, current_timestamp(), current_date(), 'hello world', 'hello world', 'hello world', true, 'hello world', array(1, 2, 3), array('hello world', 'hello world'), array(float(1.1), float(1.2)), array(array(1, 2), array(3, 4)), array(array('a', 'b'), array('c', 'd')), array(array(float(1.11), float(2.22)), array(float(3.33), float(4.44)));
OK
Time taken: 36.025 seconds
hive > select * from test.test_orc;
OK
1 2 3 4 5 6.11 7.22 8 2021-11-05 12:38:16.314 2021-11-05 hello world hello world hello world true hello world [1,2,3] ["hello world","hello world"] [1.1,1.2] [[1,2],[3,4]] [["a","b"],["c","d"]] [[1.11,2.22],[3.33,4.44]] 2021-09-18
Time taken: 0.295 seconds, Fetched: 1 row(s)
```
Create Table in ClickHouse {#create-table-in-clickhouse}
Table in ClickHouse, retrieving data from the Hive table created above:
```sql
CREATE TABLE test.test_orc
(
    `f_tinyint` Int8,
    `f_smallint` Int16,
    `f_int` Int32,
    `f_integer` Int32,
    `f_bigint` Int64,
    `f_float` Float32,
    `f_double` Float64,
    `f_decimal` Float64,
    `f_timestamp` DateTime,
    `f_date` Date,
    `f_string` String,
    `f_varchar` String,
    `f_bool` Bool,
    `f_binary` String,
    `f_array_int` Array(Int32),
    `f_array_string` Array(String),
    `f_array_float` Array(Float32),
    `f_array_array_int` Array(Array(Int32)),
    `f_array_array_string` Array(Array(String)),
    `f_array_array_float` Array(Array(Float32)),
    `day` String
)
ENGINE = Hive('thrift://202.168.117.26:9083', 'test', 'test_orc')
PARTITION BY day
```
sql
SELECT * FROM test.test_orc settings input_format_orc_allow_missing_columns = 1\G
```text
SELECT *
FROM test.test_orc
SETTINGS input_format_orc_allow_missing_columns = 1
Query id: c3eaffdc-78ab-43cd-96a4-4acc5b480658 | {"source_file": "hive.md"} | [
0.057519011199474335,
0.059455662965774536,
-0.027073558419942856,
-0.020310362800955772,
-0.01197786908596754,
0.005711216013878584,
0.012540258467197418,
0.06052672490477562,
-0.09051814675331116,
0.07397448271512985,
0.02171000838279724,
-0.10091159492731094,
-0.006192693952471018,
-0.0... |
3e41883a-fae9-41e4-b673-ef567d281629 | ```text
SELECT *
FROM test.test_orc
SETTINGS input_format_orc_allow_missing_columns = 1
Query id: c3eaffdc-78ab-43cd-96a4-4acc5b480658
Row 1:
ββββββ
f_tinyint: 1
f_smallint: 2
f_int: 3
f_integer: 4
f_bigint: 5
f_float: 6.11
f_double: 7.22
f_decimal: 8
f_timestamp: 2021-12-04 04:00:44
f_date: 2021-12-03
f_string: hello world
f_varchar: hello world
f_bool: true
f_binary: hello world
f_array_int: [1,2,3]
f_array_string: ['hello world','hello world']
f_array_float: [1.1,1.2]
f_array_array_int: [[1,2],[3,4]]
f_array_array_string: [['a','b'],['c','d']]
f_array_array_float: [[1.11,2.22],[3.33,4.44]]
day: 2021-09-18
1 rows in set. Elapsed: 0.078 sec.
```
Query Hive table with Parquet input format {#query-hive-table-with-parquet-input-format}
Create Table in Hive {#create-table-in-hive-1}
```text
hive > CREATE TABLE `test`.`test_parquet`(
  `f_tinyint` tinyint,
  `f_smallint` smallint,
  `f_int` int,
  `f_integer` int,
  `f_bigint` bigint,
  `f_float` float,
  `f_double` double,
  `f_decimal` decimal(10,0),
  `f_timestamp` timestamp,
  `f_date` date,
  `f_string` string,
  `f_varchar` varchar(100),
  `f_char` char(100),
  `f_bool` boolean,
  `f_binary` binary,
  `f_array_int` array<int>,
  `f_array_string` array<string>,
  `f_array_float` array<float>,
  `f_array_array_int` array<array<int>>,
  `f_array_array_string` array<array<string>>,
  `f_array_array_float` array<array<float>>)
PARTITIONED BY (
  `day` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'hdfs://testcluster/data/hive/test.db/test_parquet'
OK
Time taken: 0.51 seconds
hive > insert into test.test_parquet partition(day='2021-09-18') select 1, 2, 3, 4, 5, 6.11, 7.22, 8.333, current_timestamp(), current_date(), 'hello world', 'hello world', 'hello world', true, 'hello world', array(1, 2, 3), array('hello world', 'hello world'), array(float(1.1), float(1.2)), array(array(1, 2), array(3, 4)), array(array('a', 'b'), array('c', 'd')), array(array(float(1.11), float(2.22)), array(float(3.33), float(4.44)));
OK
Time taken: 36.025 seconds
hive > select * from test.test_parquet;
OK
1 2 3 4 5 6.11 7.22 8 2021-12-14 17:54:56.743 2021-12-14 hello world hello world hello world true hello world [1,2,3] ["hello world","hello world"] [1.1,1.2] [[1,2],[3,4]] [["a","b"],["c","d"]] [[1.11,2.22],[3.33,4.44]] 2021-09-18
Time taken: 0.766 seconds, Fetched: 1 row(s)
```
Create Table in ClickHouse {#create-table-in-clickhouse-1}
Table in ClickHouse, retrieving data from the Hive table created above: | {"source_file": "hive.md"} | [
0.06955298036336899,
0.08582732826471329,
-0.04937668889760971,
0.02348739095032215,
0.0021217588800936937,
-0.016156714409589767,
0.03526856750249863,
-0.012804231606423855,
-0.04497649893164635,
0.052704304456710815,
0.06459324061870575,
-0.10102616995573044,
0.032756257802248,
-0.080858... |
3ce89722-0ecc-4e84-8907-a7aec0fc2de0 | Create Table in ClickHouse {#create-table-in-clickhouse-1}
Table in ClickHouse, retrieving data from the Hive table created above:
sql
CREATE TABLE test.test_parquet
(
`f_tinyint` Int8,
`f_smallint` Int16,
`f_int` Int32,
`f_integer` Int32,
`f_bigint` Int64,
`f_float` Float32,
`f_double` Float64,
`f_decimal` Float64,
`f_timestamp` DateTime,
`f_date` Date,
`f_string` String,
`f_varchar` String,
`f_char` String,
`f_bool` Bool,
`f_binary` String,
`f_array_int` Array(Int32),
`f_array_string` Array(String),
`f_array_float` Array(Float32),
`f_array_array_int` Array(Array(Int32)),
`f_array_array_string` Array(Array(String)),
`f_array_array_float` Array(Array(Float32)),
`day` String
)
ENGINE = Hive('thrift://localhost:9083', 'test', 'test_parquet')
PARTITION BY day
sql
SELECT * FROM test.test_parquet settings input_format_parquet_allow_missing_columns = 1\G
```text
SELECT *
FROM test_parquet
SETTINGS input_format_parquet_allow_missing_columns = 1
Query id: 4e35cf02-c7b2-430d-9b81-16f438e5fca9
Row 1:
ββββββ
f_tinyint: 1
f_smallint: 2
f_int: 3
f_integer: 4
f_bigint: 5
f_float: 6.11
f_double: 7.22
f_decimal: 8
f_timestamp: 2021-12-14 17:54:56
f_date: 2021-12-14
f_string: hello world
f_varchar: hello world
f_char: hello world
f_bool: true
f_binary: hello world
f_array_int: [1,2,3]
f_array_string: ['hello world','hello world']
f_array_float: [1.1,1.2]
f_array_array_int: [[1,2],[3,4]]
f_array_array_string: [['a','b'],['c','d']]
f_array_array_float: [[1.11,2.22],[3.33,4.44]]
day: 2021-09-18
1 rows in set. Elapsed: 0.357 sec.
```
Query Hive table with Text input format {#query-hive-table-with-text-input-format}
Create Table in Hive {#create-table-in-hive-2}
```text
hive > CREATE TABLE `test`.`test_text`(
  `f_tinyint` tinyint,
  `f_smallint` smallint,
  `f_int` int,
  `f_integer` int,
  `f_bigint` bigint,
  `f_float` float,
  `f_double` double,
  `f_decimal` decimal(10,0),
  `f_timestamp` timestamp,
  `f_date` date,
  `f_string` string,
  `f_varchar` varchar(100),
  `f_char` char(100),
  `f_bool` boolean,
  `f_binary` binary,
  `f_array_int` array<int>,
  `f_array_string` array<string>,
  `f_array_float` array<float>,
  `f_array_array_int` array<array<int>>,
  `f_array_array_string` array<array<string>>,
  `f_array_array_float` array<array<float>>)
PARTITIONED BY (
  `day` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://testcluster/data/hive/test.db/test_text'
Time taken: 0.1 seconds, Fetched: 34 row(s) | {"source_file": "hive.md"} | [
0.08999571949243546,
0.000006071690677345032,
-0.07897442579269409,
-0.03056935966014862,
-0.08511800318956375,
-0.06666673719882965,
0.04770674183964729,
0.02067173272371292,
-0.08323806524276733,
0.0760020837187767,
0.06742025166749954,
-0.09163402020931244,
0.005843425169587135,
-0.0374... |
7c8cc8ce-9eef-4e96-a398-f9e7758b325b | hive > insert into test.test_text partition(day='2021-09-18') select 1, 2, 3, 4, 5, 6.11, 7.22, 8.333, current_timestamp(), current_date(), 'hello world', 'hello world', 'hello world', true, 'hello world', array(1, 2, 3), array('hello world', 'hello world'), array(float(1.1), float(1.2)), array(array(1, 2), array(3, 4)), array(array('a', 'b'), array('c', 'd')), array(array(float(1.11), float(2.22)), array(float(3.33), float(4.44)));
OK
Time taken: 36.025 seconds
hive > select * from test.test_text;
OK
1 2 3 4 5 6.11 7.22 8 2021-12-14 18:11:17.239 2021-12-14 hello world hello world hello world true hello world [1,2,3] ["hello world","hello world"] [1.1,1.2] [[1,2],[3,4]] [["a","b"],["c","d"]] [[1.11,2.22],[3.33,4.44]] 2021-09-18
Time taken: 0.624 seconds, Fetched: 1 row(s)
```
Create Table in ClickHouse {#create-table-in-clickhouse-2}
Table in ClickHouse, retrieving data from the Hive table created above:
sql
CREATE TABLE test.test_text
(
`f_tinyint` Int8,
`f_smallint` Int16,
`f_int` Int32,
`f_integer` Int32,
`f_bigint` Int64,
`f_float` Float32,
`f_double` Float64,
`f_decimal` Float64,
`f_timestamp` DateTime,
`f_date` Date,
`f_string` String,
`f_varchar` String,
`f_char` String,
`f_bool` Bool,
`day` String
)
ENGINE = Hive('thrift://localhost:9083', 'test', 'test_text')
PARTITION BY day
sql
SELECT * FROM test.test_text settings input_format_skip_unknown_fields = 1, input_format_with_names_use_header = 1, date_time_input_format = 'best_effort'\G
```text
SELECT *
FROM test.test_text
SETTINGS input_format_skip_unknown_fields = 1, input_format_with_names_use_header = 1, date_time_input_format = 'best_effort'
Query id: 55b79d35-56de-45b9-8be6-57282fbf1f44
Row 1:
ββββββ
f_tinyint: 1
f_smallint: 2
f_int: 3
f_integer: 4
f_bigint: 5
f_float: 6.11
f_double: 7.22
f_decimal: 8
f_timestamp: 2021-12-14 18:11:17
f_date: 2021-12-14
f_string: hello world
f_varchar: hello world
f_char: hello world
f_bool: true
day: 2021-09-18
``` | {"source_file": "hive.md"} | [
0.032321181148290634,
0.06280746310949326,
0.06026632711291313,
0.012190350331366062,
-0.03456432372331619,
-0.046207670122385025,
0.039502616971731186,
-0.04981791600584984,
-0.05574370548129082,
0.06738711148500443,
0.03577449545264244,
-0.07481476664543152,
-0.023821566253900528,
-0.012... |
1df68c8b-8a59-4fc1-8a14-0373764eeda1 | description: 'This engine provides a read-only integration with existing Apache Hudi
tables in Amazon S3.'
sidebar_label: 'Hudi'
sidebar_position: 86
slug: /engines/table-engines/integrations/hudi
title: 'Hudi table engine'
doc_type: 'reference'
Hudi table engine
This engine provides a read-only integration with existing Apache
Hudi
tables in Amazon S3.
Create table {#create-table}
Note that the Hudi table must already exist in S3; this command does not take DDL parameters to create a new table.
sql
CREATE TABLE hudi_table
ENGINE = Hudi(url [, aws_access_key_id, aws_secret_access_key])
Engine parameters
url
β Bucket url with the path to an existing Hudi table.
aws_access_key_id
,
aws_secret_access_key
- Long-term credentials for the
AWS
account user. You can use these to authenticate your requests. These parameters are optional. If credentials are not specified, they are taken from the configuration file.
Engine parameters can be specified using
Named Collections
.
Example
sql
CREATE TABLE hudi_table ENGINE=Hudi('http://mars-doc-test.s3.amazonaws.com/clickhouse-bucket-3/test_table/', 'ABC123', 'Abc+123')
Using named collections:
xml
<clickhouse>
<named_collections>
<hudi_conf>
<url>http://mars-doc-test.s3.amazonaws.com/clickhouse-bucket-3/</url>
<access_key_id>ABC123</access_key_id>
<secret_access_key>Abc+123</secret_access_key>
</hudi_conf>
</named_collections>
</clickhouse>
sql
CREATE TABLE hudi_table ENGINE=Hudi(hudi_conf, filename = 'test_table')
See also {#see-also}
hudi table function | {"source_file": "hudi.md"} | [
-0.06333829462528229,
-0.06561040133237839,
-0.1452704221010208,
-0.009094292297959328,
-0.009280444122850895,
-0.03777729347348213,
-0.001420147716999054,
-0.042245734483003616,
-0.025365134701132774,
-0.008319062180817127,
0.03410622477531433,
-0.016861051321029663,
0.11592880636453629,
... |
4f60d726-316e-4e59-b86a-7d623aa0a4e2 | description: 'This engine allows integrating ClickHouse with Redis.'
sidebar_label: 'Redis'
sidebar_position: 175
slug: /engines/table-engines/integrations/redis
title: 'Redis table engine'
doc_type: 'guide'
Redis table engine
This engine allows integrating ClickHouse with
Redis
. Since Redis uses a key-value model, we strongly recommend querying it only in a point-wise way, such as
where k=xx
or
where k in (xx, xx)
.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name
(
name1 [type1],
name2 [type2],
...
) ENGINE = Redis({host:port[, db_index[, password[, pool_size]]] | named_collection[, option=value [,..]] })
PRIMARY KEY(primary_key_name);
Engine Parameters
host:port
β Redis server address. You can omit the port; the default Redis port 6379 will be used.
db_index
β Redis db index range from 0 to 15, default is 0.
password
β User password, default is blank string.
pool_size
β Redis max connection pool size, default is 16.
primary_key_name
- any column name in the column list.
:::note Serialization
PRIMARY KEY
supports only one column. The primary key will be serialized in binary as a Redis key.
Columns other than the primary key will be serialized in binary as Redis value in corresponding order.
:::
Arguments also can be passed using
named collections
. In this case
host
and
port
should be specified separately. This approach is recommended for production environments. At the moment, all parameters passed to Redis using named collections are required.
:::note Filtering
Queries with key equals or in filtering will be optimized to a multi-key lookup in Redis. Queries without a filter on the key trigger a full table scan, which is a heavy operation.
:::
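As a sketch of this behavior, using the redis_table defined in the usage example below:

```sql
-- Optimized: rewritten into a multi-key lookup against Redis.
SELECT * FROM redis_table WHERE key IN ('1', '2');

-- Not optimized: no filter on the primary key, so a full table scan happens.
SELECT * FROM redis_table WHERE v1 = 2;
```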
Usage example {#usage-example}
Create a table in ClickHouse using
Redis
engine with plain arguments:
sql
CREATE TABLE redis_table
(
`key` String,
`v1` UInt32,
`v2` String,
`v3` Float32
)
ENGINE = Redis('redis1:6379') PRIMARY KEY(key);
Or using
named collections
:
xml
<named_collections>
<redis_creds>
<host>localhost</host>
<port>6379</port>
<password>****</password>
<pool_size>16</pool_size>
<db_index>0</db_index>
</redis_creds>
</named_collections>
sql
CREATE TABLE redis_table
(
`key` String,
`v1` UInt32,
`v2` String,
`v3` Float32
)
ENGINE = Redis(redis_creds) PRIMARY KEY(key);
Insert:
sql
INSERT INTO redis_table VALUES('1', 1, '1', 1.0), ('2', 2, '2', 2.0);
Query:
sql
SELECT COUNT(*) FROM redis_table;
text
ββcount()ββ
β 2 β
βββββββββββ
sql
SELECT * FROM redis_table WHERE key='1';
text
ββkeyββ¬βv1ββ¬βv2ββ¬βv3ββ
β 1 β 1 β 1 β 1 β
βββββββ΄βββββ΄βββββ΄βββββ
sql
SELECT * FROM redis_table WHERE v1=2;
text
ββkeyββ¬βv1ββ¬βv2ββ¬βv3ββ
β 2 β 2 β 2 β 2 β
βββββββ΄βββββ΄βββββ΄βββββ
Update:
Note that the primary key cannot be updated.
sql
ALTER TABLE redis_table UPDATE v1=2 WHERE key='1';
Delete: | {"source_file": "redis.md"} | [
0.006042524240911007,
-0.06261417269706726,
-0.13048972189426422,
0.0185210220515728,
-0.05590519309043884,
-0.054251495748758316,
0.026808882132172585,
0.01692586950957775,
-0.0544106587767601,
-0.037679776549339294,
0.01043890044093132,
-0.05687101185321808,
0.09503137320280075,
-0.06407... |
3e26c359-e71b-45ab-8aa1-301428d08afe | text
ββkeyββ¬βv1ββ¬βv2ββ¬βv3ββ
β 2 β 2 β 2 β 2 β
βββββββ΄βββββ΄βββββ΄βββββ
Update:
Note that the primary key cannot be updated.
sql
ALTER TABLE redis_table UPDATE v1=2 WHERE key='1';
Delete:
sql
ALTER TABLE redis_table DELETE WHERE key='1';
Truncate:
Flushes the Redis db asynchronously. TRUNCATE also supports SYNC mode.
sql
TRUNCATE TABLE redis_table SYNC;
Join:
Join with other tables.
sql
SELECT * FROM redis_table JOIN merge_tree_table ON merge_tree_table.key=redis_table.key;
Limitations {#limitations}
Redis engine also supports scanning queries, such as
where k > xx
, but it has some limitations:
1. A scanning query may, in rare cases, produce duplicated keys while Redis is rehashing. See details in
Redis Scan
.
2. During the scanning, keys could be created and deleted, so the resulting dataset can not represent a valid point in time. | {"source_file": "redis.md"} | [
0.023858819156885147,
-0.019854463636875153,
-0.04826034605503082,
0.027402516454458237,
-0.020701272413134575,
-0.10652228444814682,
0.05894928053021431,
-0.030379991978406906,
-0.01838185079395771,
0.03607942536473274,
0.04908062517642975,
0.0901179239153862,
0.10061060637235641,
-0.1381... |
8b5c85ed-11ae-4dac-90f1-4c38da2cca04 | description: 'The engine allows querying remote datasets via Apache Arrow Flight.'
sidebar_label: 'ArrowFlight'
sidebar_position: 186
slug: /engines/table-engines/integrations/arrowflight
title: 'ArrowFlight table engine'
doc_type: 'reference'
ArrowFlight table engine
The ArrowFlight table engine enables ClickHouse to query remote datasets via the
Apache Arrow Flight
protocol.
This integration allows ClickHouse to fetch data from external Flight-enabled servers in a columnar Arrow format with high performance.
Creating a Table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name (name1 [type1], name2 [type2], ...)
ENGINE = ArrowFlight('host:port', 'dataset_name' [, 'username', 'password']);
Engine Parameters
host:port
β Address of the remote Arrow Flight server.
dataset_name
β Identifier of the dataset on the Flight server.
username
- Username to use with basic HTTP style authentication.
password
- Password to use with basic HTTP style authentication.
If
username
and
password
are not specified, authentication is not used
(this works only if the Arrow Flight server allows it).
Usage Example {#usage-example}
This example shows how to create a table that reads data from a remote Arrow Flight server:
sql
CREATE TABLE remote_flight_data
(
id UInt32,
name String,
value Float64
) ENGINE = ArrowFlight('127.0.0.1:9005', 'sample_dataset');
Query the remote data as if it were a local table:
sql
SELECT * FROM remote_flight_data ORDER BY id;
text
ββidββ¬βnameβββββ¬βvalueββ
β 1 β foo β 42.1 β
β 2 β bar β 13.3 β
β 3 β baz β 77.0 β
ββββββ΄ββββββββββ΄ββββββββ
Notes {#notes}
The schema defined in ClickHouse must match the schema returned by the Flight server.
This engine is suitable for federated queries, data virtualization, and decoupling storage from compute.
See Also {#see-also}
Apache Arrow Flight SQL
Arrow format integration in ClickHouse | {"source_file": "arrowflight.md"} | [
0.06453850120306015,
-0.09964253008365631,
-0.0871177539229393,
0.0339350588619709,
-0.10970313847064972,
-0.06348173320293427,
0.03280153498053551,
-0.021662618964910507,
-0.033685021102428436,
-0.00004656500823330134,
0.04431862756609917,
0.023830102756619453,
0.042296379804611206,
-0.09... |
8fc7a5ba-d6ec-4fd9-9da7-d0fe6c2dbdc7 | description: 'Documentation for MySQL Table Engine'
sidebar_label: 'MySQL'
sidebar_position: 138
slug: /engines/table-engines/integrations/mysql
title: 'MySQL table engine'
doc_type: 'reference'
MySQL table engine
The MySQL engine allows you to perform
SELECT
and
INSERT
queries on data that is stored on a remote MySQL server.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
...
) ENGINE = MySQL({host:port, database, table, user, password[, replace_query, on_duplicate_clause] | named_collection[, option=value [,..]]})
SETTINGS
[ connection_pool_size=16, ]
[ connection_max_tries=3, ]
[ connection_wait_timeout=5, ]
[ connection_auto_close=true, ]
[ connect_timeout=10, ]
[ read_write_timeout=300 ]
;
See a detailed description of the
CREATE TABLE
query.
The table structure can differ from the original MySQL table structure:
Column names should be the same as in the original MySQL table, but you can use just some of these columns and in any order.
Column types may differ from those in the original MySQL table. ClickHouse tries to
cast
values to the ClickHouse data types.
The
external_table_functions_use_nulls
setting defines how to handle Nullable columns. Default value: 1. If 0, the table function does not make Nullable columns and inserts default values instead of nulls. This is also applicable for NULL values inside arrays.
Engine Parameters
host:port
β MySQL server address.
database
β Remote database name.
table
β Remote table name.
user
β MySQL user.
password
β User password.
replace_query
β Flag that converts
INSERT INTO
queries to
REPLACE INTO
. If
replace_query=1
, the query is substituted.
on_duplicate_clause
β The
ON DUPLICATE KEY on_duplicate_clause
expression that is added to the
INSERT
query.
Example:
INSERT INTO t (c1,c2) VALUES ('a', 2) ON DUPLICATE KEY UPDATE c2 = c2 + 1
, where
on_duplicate_clause
is
UPDATE c2 = c2 + 1
. See the
MySQL documentation
to find which
on_duplicate_clause
you can use with the
ON DUPLICATE KEY
clause.
To specify
on_duplicate_clause
you need to pass
0
to the
replace_query
parameter. If you simultaneously pass
replace_query = 1
and
on_duplicate_clause
, ClickHouse generates an exception.
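For illustration (hypothetical connection details and table names), an engine definition that appends an ON DUPLICATE KEY clause to INSERT queries passes replace_query = 0 followed by the clause:

```sql
-- Hypothetical sketch: replace_query = 0 together with an on_duplicate_clause,
-- so inserts become INSERT ... ON DUPLICATE KEY UPDATE c2 = c2 + 1 on MySQL.
CREATE TABLE mysql_upsert
(
    `c1` String,
    `c2` Int32
)
ENGINE = MySQL('localhost:3306', 'test', 't', 'user', 'password', 0, 'UPDATE c2 = c2 + 1')
```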
Arguments also can be passed using
named collections
. In this case
host
and
port
should be specified separately. This approach is recommended for production environments.
Simple
WHERE
clauses such as
=, !=, >, >=, <, <=
are executed on the MySQL server.
The rest of the conditions and the
LIMIT
sampling constraint are executed in ClickHouse only after the query to MySQL finishes.
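A sketch of this behavior, using the mysql_table columns from the usage example below:

```sql
SELECT *
FROM mysql_table
WHERE int_id > 10        -- simple comparison: executed on the MySQL server
LIMIT 5                  -- applied in ClickHouse after the MySQL query finishes
```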
Supports multiple replicas that must be listed by
|
. For example: | {"source_file": "mysql.md"} | [
0.019331153482198715,
-0.026508482173085213,
-0.053198859095573425,
0.07167928665876389,
-0.0967659130692482,
-0.038189664483070374,
0.029173972085118294,
0.06365185976028442,
-0.029976235702633858,
0.005942643154412508,
0.048490382730960846,
-0.04825110360980034,
0.1586645245552063,
-0.08... |
76d38d65-47d4-4ab5-9928-bba07b984b41 | Supports multiple replicas that must be listed by
|
. For example:
sql
CREATE TABLE test_replicas (id UInt32, name String, age UInt32, money UInt32) ENGINE = MySQL(`mysql{2|3|4}:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse');
Usage example {#usage-example}
Create table in MySQL:
```text
mysql> CREATE TABLE `test`.`test` (
    ->   `int_id` INT NOT NULL AUTO_INCREMENT,
    ->   `int_nullable` INT NULL DEFAULT NULL,
    ->   `float` FLOAT NOT NULL,
    ->   `float_nullable` FLOAT NULL DEFAULT NULL,
    ->   PRIMARY KEY (`int_id`));
Query OK, 0 rows affected (0,09 sec)

mysql> insert into test (`int_id`, `float`) VALUES (1,2);
Query OK, 1 row affected (0,00 sec)

mysql> select * from test;
+--------+--------------+-------+----------------+
| int_id | int_nullable | float | float_nullable |
+--------+--------------+-------+----------------+
|      1 |         NULL |     2 |           NULL |
+--------+--------------+-------+----------------+
1 row in set (0,00 sec)
```
Create table in ClickHouse using plain arguments:
sql
CREATE TABLE mysql_table
(
`float_nullable` Nullable(Float32),
`int_id` Int32
)
ENGINE = MySQL('localhost:3306', 'test', 'test', 'bayonet', '123')
Or using
named collections
:
sql
CREATE NAMED COLLECTION creds AS
host = 'localhost',
port = 3306,
database = 'test',
user = 'bayonet',
password = '123';
CREATE TABLE mysql_table
(
`float_nullable` Nullable(Float32),
`int_id` Int32
)
ENGINE = MySQL(creds, table='test')
Retrieving data from MySQL table:
sql
SELECT * FROM mysql_table
text
ββfloat_nullableββ¬βint_idββ
β α΄Ία΅α΄Έα΄Έ β 1 β
ββββββββββββββββββ΄βββββββββ
Settings {#mysql-settings}
Default settings are not very efficient, since they do not even reuse connections. These settings allow you to increase the number of queries run by the server per second.
connection_auto_close
{#connection-auto-close}
Allows automatically closing the connection after query execution, i.e. disables connection reuse.
Possible values:
1 β Auto-close connection is allowed, so the connection reuse is disabled
0 β Auto-close connection is not allowed, so the connection reuse is enabled
Default value:
1
.
connection_max_tries
{#connection-max-tries}
Sets the number of retries for pool with failover.
Possible values:
Positive integer.
0 β There are no retries for pool with failover.
Default value:
3
.
connection_pool_size
{#connection-pool-size}
Size of the connection pool (if all connections are in use, the query will wait until a connection is freed).
Possible values:
Positive integer.
Default value:
16
.
connection_wait_timeout
{#connection-wait-timeout}
Timeout (in seconds) for waiting for a free connection (when there are already connection_pool_size active connections). 0 - do not wait.
Possible values:
Positive integer.
Default value:
5
.
connect_timeout
{#connect-timeout}
Connect timeout (in seconds).
Possible values: | {"source_file": "mysql.md"} | [
-0.018722759559750557,
-0.02111307717859745,
-0.10850254446268082,
0.030695542693138123,
-0.1011243611574173,
-0.03550386056303978,
0.042798642069101334,
0.010944823734462261,
-0.026802359148859978,
0.021803446114063263,
0.0631132423877716,
-0.11652713268995285,
0.16106997430324554,
-0.017... |
b301dccc-d325-4214-9441-f281446efaf2 | Possible values:
Positive integer.
Default value:
5
.
connect_timeout
{#connect-timeout}
Connect timeout (in seconds).
Possible values:
Positive integer.
Default value:
10
.
read_write_timeout
{#read-write-timeout}
Read/write timeout (in seconds).
Possible values:
Positive integer.
Default value:
300
.
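The settings above can be combined in a single table definition; a sketch with hypothetical connection details:

```sql
-- Hypothetical sketch: tuning the connection settings of a MySQL-engine table.
CREATE TABLE mysql_tuned
(
    `int_id` Int32,
    `float` Float32
)
ENGINE = MySQL('localhost:3306', 'test', 'test', 'user', 'password')
SETTINGS
    connection_pool_size = 32,
    connection_max_tries = 5,
    connect_timeout = 10,
    read_write_timeout = 300
```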
See also {#see-also}
The mysql table function
Using MySQL as a dictionary source | {"source_file": "mysql.md"} | [
0.076413594186306,
0.03052552603185177,
-0.09968286007642746,
0.05348809435963631,
-0.08583402633666992,
-0.008581418544054031,
0.01128891296684742,
0.031988587230443954,
0.021113252267241478,
0.0025660707615315914,
0.00902276486158371,
0.01675431616604328,
0.07040303200483322,
-0.02871384... |
a8689233-af7b-4b91-bdb9-3961ae911e0c | description: 'Creates a ClickHouse table with an initial data dump of a PostgreSQL
table and starts the replication process.'
sidebar_label: 'MaterializedPostgreSQL'
sidebar_position: 130
slug: /engines/table-engines/integrations/materialized-postgresql
title: 'MaterializedPostgreSQL table engine'
doc_type: 'guide'
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
MaterializedPostgreSQL table engine
:::note
ClickHouse Cloud users are recommended to use
ClickPipes
for PostgreSQL replication to ClickHouse. This natively supports high-performance Change Data Capture (CDC) for PostgreSQL.
:::
Creates a ClickHouse table with an initial data dump of a PostgreSQL table and starts the replication process, i.e. it executes a background job to apply new changes as they happen on the PostgreSQL table in the remote PostgreSQL database.
:::note
This table engine is experimental. To use it, set
allow_experimental_materialized_postgresql_table
to 1 in your configuration files or by using the
SET
command:
sql
SET allow_experimental_materialized_postgresql_table=1
:::
If more than one table is required, it is highly recommended to use the
MaterializedPostgreSQL
database engine instead of the table engine and use the
materialized_postgresql_tables_list
setting, which specifies the tables to be replicated (it will also be possible to add a database
schema
). This is much better in terms of CPU usage, and results in fewer connections and fewer replication slots inside the remote PostgreSQL database.
Creating a table {#creating-a-table}
sql
CREATE TABLE postgresql_db.postgresql_replica (key UInt64, value UInt64)
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgresql_table', 'postgres_user', 'postgres_password')
PRIMARY KEY key;
Engine Parameters
host:port
β PostgreSQL server address.
database
β Remote database name.
table
β Remote table name.
user
β PostgreSQL user.
password
β User password.
Requirements {#requirements}
The
wal_level
setting must have a value
logical
and
max_replication_slots
parameter must have a value at least
2
in the PostgreSQL config file.
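The requirement above corresponds to these postgresql.conf lines (the values shown are the stated minimums; raise max_replication_slots as needed for additional replication consumers):

```text
wal_level = logical
max_replication_slots = 2
```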
A table with
MaterializedPostgreSQL
engine must have a primary key β the same as a replica identity index (by default: primary key) of a PostgreSQL table (see
details on replica identity index
).
Only database
Atomic
is allowed.
The
MaterializedPostgreSQL
table engine only works for PostgreSQL versions >= 11 as the implementation requires the
pg_replication_slot_advance
PostgreSQL function.
Virtual columns {#virtual-columns}
_version
β Transaction counter. Type:
UInt64
.
_sign
β Deletion mark. Type:
Int8
. Possible values:
1
β Row is not deleted,
-1
β Row is deleted.
These columns do not need to be added when a table is created. They are always accessible in
SELECT
query. | {"source_file": "materialized-postgresql.md"} | [
-0.04957379400730133,
-0.05355549603700638,
-0.04921657592058182,
-0.006500659044831991,
-0.051164720207452774,
-0.04861995577812195,
-0.04615524783730507,
-0.0740693137049675,
0.014104638248682022,
0.06241441145539284,
-0.007176539860665798,
0.03252300247550011,
0.01495471689850092,
-0.12... |
a79f2036-7df1-43b0-9521-df3f3565dc81 | 1
β Row is not deleted,
-1
β Row is deleted.
These columns do not need to be added when a table is created. They are always accessible in
SELECT
query.
_version
column equals
LSN
position in
WAL
, so it might be used to check how up-to-date replication is.
```sql
CREATE TABLE postgresql_db.postgresql_replica (key UInt64, value UInt64)
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgresql_replica', 'postgres_user', 'postgres_password')
PRIMARY KEY key;
SELECT key, value, _version FROM postgresql_db.postgresql_replica;
```
:::note
Replication of TOAST values is not supported. The default value for the data type will be used.
:::
---
description: 'Table engine that allows importing data from a YTsaurus cluster.'
sidebar_label: 'YTsaurus'
sidebar_position: 185
slug: /engines/table-engines/integrations/ytsaurus
title: 'YTsaurus table engine'
keywords: ['YTsaurus', 'table engine']
doc_type: 'reference'
---
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
YTsaurus table engine

The YTsaurus table engine allows you to import data from a YTsaurus cluster.

Creating a table {#creating-a-table}

```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name
(
    name1 [type1],
    name2 [type2], ...
) ENGINE = YTsaurus('http_proxy_url', 'cypress_path', 'oauth_token')
```
:::info
This is an experimental feature that may change in backwards-incompatible ways in future releases.
Enable usage of the YTsaurus table engine using the setting `allow_experimental_ytsaurus_table_engine`.
You can do so using: `SET allow_experimental_ytsaurus_table_engine = 1`.
:::
Engine parameters

- `http_proxy_url` — URL to the YTsaurus HTTP proxy.
- `cypress_path` — Cypress path to the data source.
- `oauth_token` — OAuth token.
Usage example {#usage-example}

Shows a query creating the YTsaurus table:

```sql title="Query"
SHOW CREATE TABLE yt_saurus;
```

```sql title="Response"
CREATE TABLE yt_saurus
(
    `a` UInt32,
    `b` String
)
ENGINE = YTsaurus('http://localhost:8000', '//tmp/table', 'password')
```

To return the data from the table, run:

```sql title="Query"
SELECT * FROM yt_saurus;
```

```text title="Response"
┌──a─┬──b─┐
│ 10 │ 20 │
└────┴────┘
```

Data types {#data-types}

Primitive data types {#primitive-data-types}
| YTsaurus data type | ClickHouse data type |
| ------------------ | -------------------- |
| `int8`             | `Int8`               |
| `int16`            | `Int16`              |
| `int32`            | `Int32`              |
| `int64`            | `Int64`              |
| `uint8`            | `UInt8`              |
| `uint16`           | `UInt16`             |
| `uint32`           | `UInt32`             |
| `uint64`           | `UInt64`             |
| `float`            | `Float32`            |
| `double`           | `Float64`            |
| `boolean`          | `Bool`               |
| `string`           | `String`             |
| `utf8`             | `String`             |
| `json`             | `JSON`               |
| `yson(type_v3)`    | `JSON`               |
| `uuid`             | `UUID`               |
| `date32`           | `Date` (not supported yet) |
| `datetime64`       | `Int64`              |
| `timestamp64`      | `Int64`              |
| `interval64`       | `Int64`              |
| `date`             | `Date` (not supported yet) |
| `datetime`         | `DateTime`           |
| `timestamp`        | `DateTime64(6)`      |
| `interval`         | `UInt64`             |
| `any`              | `String`             |
| `null`             | `Nothing`            |
| `void`             | `Nothing`            |
| `T` with `required = False` | `Nullable(T)` |
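As a sketch of how this mapping plays out in practice, a YTsaurus table with the hypothetical columns `id: int64` and `name: optional<utf8>` would be declared in ClickHouse as:

```sql
CREATE TABLE yt_mapped
(
    `id` Int64,               -- int64 maps to Int64
    `name` Nullable(String)   -- optional<utf8> maps to Nullable(String)
)
ENGINE = YTsaurus('http://localhost:8000', '//tmp/mapped_table', 'oauth_token')
```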
Composite types {#composite-data-types}

| YTsaurus data type | ClickHouse data type |
| ------------------ | -------------------- |
| `decimal`          | `Decimal`            |
| `optional`         | `Nullable`           |
| `list`             | `Array`              |
| `struct`           | `NamedTuple`         |
| `tuple`            | `Tuple`              |
| `variant`          | `Variant`            |
| `dict`             | `Array(Tuple(...))`  |
| `tagged`           | `T`                  |

See Also

- `ytsaurus` table function
- ytsaurus data schema
- ytsaurus data types
---
description: 'This engine provides integration with the Amazon S3 ecosystem. Similar
  to the HDFS engine, but provides S3-specific features.'
sidebar_label: 'S3'
sidebar_position: 180
slug: /engines/table-engines/integrations/s3
title: 'S3 table engine'
doc_type: 'reference'
---
S3 table engine

This engine provides integration with the Amazon S3 ecosystem. This engine is similar to the HDFS engine, but provides S3-specific features.

Example {#example}
Example {#example}
```sql
CREATE TABLE s3_engine_table (name String, value UInt32)
ENGINE=S3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/test-data.csv.gz', 'CSV', 'gzip')
SETTINGS input_format_with_names_use_header = 0;
INSERT INTO s3_engine_table VALUES ('one', 1), ('two', 2), ('three', 3);
SELECT * FROM s3_engine_table LIMIT 2;
```
```text
┌─name─┬─value─┐
│ one  │     1 │
│ two  │     2 │
└──────┴───────┘
```
Create a table {#creating-a-table}

```sql
CREATE TABLE s3_engine_table (name String, value UInt32)
    ENGINE = S3(path [, NOSIGN | aws_access_key_id, aws_secret_access_key,] format, [compression], [partition_strategy], [partition_columns_in_data_file])
    [PARTITION BY expr]
    [SETTINGS ...]
```
Engine parameters {#parameters}

- `path` — Bucket URL with a path to the file. Supports the following wildcards in readonly mode: `*`, `**`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see below.
- `NOSIGN` — If this keyword is provided in place of credentials, all the requests will not be signed.
- `format` — The format of the file.
- `aws_access_key_id`, `aws_secret_access_key` — Long-term credentials for the AWS account user. You can use these to authenticate your requests. The parameter is optional. If credentials are not specified, they are used from the configuration file. For more information see Using S3 for Data Storage.
- `compression` — Compression type. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. The parameter is optional. By default, it will auto-detect compression by file extension.
- `partition_strategy` — Options: `WILDCARD` or `HIVE`. `WILDCARD` requires a `{_partition_id}` in the path, which is replaced with the partition key. `HIVE` does not allow wildcards, assumes the path is the table root, and generates Hive-style partitioned directories with Snowflake IDs as filenames and the file format as the extension. Defaults to `WILDCARD`.
- `partition_columns_in_data_file` — Only used with the `HIVE` partition strategy. Tells ClickHouse whether to expect partition columns to be written in the data file. Defaults to `false`.
- `storage_class_name` — Options: `STANDARD` or `INTELLIGENT_TIERING`; allows specifying AWS S3 Intelligent Tiering.

Data cache {#data-cache}
The `S3` table engine supports data caching on local disk.
See filesystem cache configuration options and usage in this section.
Caching is keyed on the path and the ETag of the storage object, so ClickHouse will not read a stale cache version.

To enable caching, use the settings `filesystem_cache_name = '<name>'` and `enable_filesystem_cache = 1`.
```sql
SELECT *
FROM s3('http://minio:10000/clickhouse//test_3.csv', 'minioadmin', 'minioadminpassword', 'CSV')
SETTINGS filesystem_cache_name = 'cache_for_s3', enable_filesystem_cache = 1;
```
There are two ways to define a cache in the configuration file.

1. Add the following section to the ClickHouse configuration file:

```xml
<clickhouse>
    <filesystem_caches>
        <cache_for_s3>
            <path>path to cache directory</path>
            <max_size>10Gi</max_size>
        </cache_for_s3>
    </filesystem_caches>
</clickhouse>
```

2. Reuse the cache configuration (and therefore cache storage) from the ClickHouse `storage_configuration` section, described here.
PARTITION BY {#partition-by}

`PARTITION BY` — Optional. In most cases you don't need a partition key, and if it is needed you generally don't need a partition key more granular than by month. Partitioning does not speed up queries (in contrast to the ORDER BY expression). You should never use too granular partitioning. Don't partition your data by client identifiers or names (instead, make client identifier or name the first column in the ORDER BY expression).

For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column with a date of the type `Date`. The partition names here have the `"YYYYMM"` format.
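A minimal sketch of monthly partitioning with the default `WILDCARD` strategy (bucket name and column names are assumptions): on write, the `{_partition_id}` token in the path is replaced with the `toYYYYMM` value of each row's partition.

```sql
CREATE TABLE s3_monthly (d Date, value UInt32)
ENGINE = S3(
    -- {_partition_id} becomes e.g. 202401 for January 2024 rows
    'https://my-bucket.s3.amazonaws.com/data/part_{_partition_id}.csv',
    'CSV')
PARTITION BY toYYYYMM(d);
```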
Partition strategy {#partition-strategy}

`WILDCARD` (default): Replaces the `{_partition_id}` wildcard in the file path with the actual partition key. Reading is not supported.

`HIVE` implements Hive-style partitioning for reads and writes. Reading is implemented using a recursive glob pattern; it is equivalent to `SELECT * FROM s3('table_root/**.parquet')`.
Writing generates files using the following format: `<prefix>/<key1=val1/key2=val2...>/<snowflakeid>.<toLower(file_format)>`.

Note: When using the `HIVE` partition strategy, the `use_hive_partitioning` setting has no effect.

Example of the `HIVE` partition strategy:
```sql
arthur :) CREATE TABLE t_03363_parquet (year UInt16, country String, counter UInt8)
ENGINE = S3(s3_conn, filename = 't_03363_parquet', format = Parquet, partition_strategy='hive')
PARTITION BY (year, country);
arthur :) INSERT INTO t_03363_parquet VALUES
(2022, 'USA', 1),
(2022, 'Canada', 2),
(2023, 'USA', 3),
(2023, 'Mexico', 4),
(2024, 'France', 5),
(2024, 'Germany', 6),
(2024, 'Germany', 7),
(1999, 'Brazil', 8),
(2100, 'Japan', 9),
(2024, 'CN', 10),
(2025, '', 11);
arthur :) select _path, * from t_03363_parquet;
┌─_path──────────────────────────────────────────────────────────────────────┬─year─┬─country─┬─counter─┐
│ test/t_03363_parquet/year=2100/country=Japan/7329604473272971264.parquet   │ 2100 │ Japan   │       9 │
│ test/t_03363_parquet/year=2024/country=France/7329604473323302912.parquet  │ 2024 │ France  │       5 │
│ test/t_03363_parquet/year=2022/country=Canada/7329604473314914304.parquet  │ 2022 │ Canada  │       2 │
│ test/t_03363_parquet/year=1999/country=Brazil/7329604473289748480.parquet  │ 1999 │ Brazil  │       8 │
│ test/t_03363_parquet/year=2023/country=Mexico/7329604473293942784.parquet  │ 2023 │ Mexico  │       4 │
│ test/t_03363_parquet/year=2023/country=USA/7329604473319108608.parquet     │ 2023 │ USA     │       3 │
│ test/t_03363_parquet/year=2025/country=/7329604473327497216.parquet        │ 2025 │         │      11 │
│ test/t_03363_parquet/year=2024/country=CN/7329604473310720000.parquet      │ 2024 │ CN      │      10 │
│ test/t_03363_parquet/year=2022/country=USA/7329604473298137088.parquet     │ 2022 │ USA     │       1 │
│ test/t_03363_parquet/year=2024/country=Germany/7329604473306525696.parquet │ 2024 │ Germany │       6 │
│ test/t_03363_parquet/year=2024/country=Germany/7329604473306525696.parquet │ 2024 │ Germany │       7 │
└────────────────────────────────────────────────────────────────────────────┴──────┴─────────┴─────────┘
```
Querying partitioned data {#querying-partitioned-data}

This example uses the docker compose recipe, which integrates ClickHouse and MinIO. You should be able to reproduce the same queries using S3 by replacing the endpoint and authentication values.

Notice that the S3 endpoint in the `ENGINE` configuration uses the token `{_partition_id}` as part of the S3 object (filename), and that the SELECT queries select against those resulting object names (e.g., `test_3.csv`).

:::note
As shown in the example, querying from S3 tables that are partitioned is
not directly supported at this time, but can be accomplished by querying the individual partitions
using the S3 table function.

The primary use-case for writing partitioned data in S3 is to enable transferring that data into another ClickHouse system (for example, moving from on-prem systems to ClickHouse Cloud). Because ClickHouse datasets are often very large, and network reliability is sometimes imperfect, it makes sense to transfer datasets in subsets, hence partitioned writes.
:::
Create the table {#create-the-table}

```sql
CREATE TABLE p
(
    `column1` UInt32,
    `column2` UInt32,
    `column3` UInt32
)
ENGINE = S3(
-- highlight-next-line
    'http://minio:10000/clickhouse//test_{_partition_id}.csv',
    'minioadmin',
    'minioadminpassword',
    'CSV')
PARTITION BY column3
```

Insert data {#insert-data}

```sql
INSERT INTO p VALUES (1, 2, 3), (3, 2, 1), (78, 43, 45)
```
Select from partition 3 {#select-from-partition-3}

:::tip
This query uses the s3 table function
:::

```sql
SELECT *
FROM s3('http://minio:10000/clickhouse//test_3.csv', 'minioadmin', 'minioadminpassword', 'CSV')
```

```text
┌─c1─┬─c2─┬─c3─┐
│  1 │  2 │  3 │
└────┴────┴────┘
```

Select from partition 1 {#select-from-partition-1}

```sql
SELECT *
FROM s3('http://minio:10000/clickhouse//test_1.csv', 'minioadmin', 'minioadminpassword', 'CSV')
```

```text
┌─c1─┬─c2─┬─c3─┐
│  3 │  2 │  1 │
└────┴────┴────┘
```

Select from partition 45 {#select-from-partition-45}

```sql
SELECT *
FROM s3('http://minio:10000/clickhouse//test_45.csv', 'minioadmin', 'minioadminpassword', 'CSV')
```

```text
┌─c1─┬─c2─┬─c3─┐
│ 78 │ 43 │ 45 │
└────┴────┴────┘
```

Limitation {#limitation}

You may naturally try to `SELECT * FROM p`, but as noted above, this query will fail; use the preceding queries instead.

```sql
SELECT * FROM p
```

```text
Received exception from server (version 23.4.1):
Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: Reading from a partitioned S3 storage is not implemented yet. (NOT_IMPLEMENTED)
```
Insert data {#inserting-data}

Note that rows can only be inserted into new files. There are no merge cycles or file split operations. Once a file is written, subsequent inserts will fail. To avoid this you can use the `s3_truncate_on_insert` and `s3_create_new_file_on_insert` settings. See more details here.
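For example, to overwrite the existing object instead of failing on a repeated insert, the setting can be applied per query (a sketch reusing the `s3_engine_table` from the example at the top of this page):

```sql
-- Without this setting, a second INSERT into the same file fails.
INSERT INTO s3_engine_table
SETTINGS s3_truncate_on_insert = 1
VALUES ('four', 4);
```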
Virtual columns {#virtual-columns}

- `_path` — Path to the file. Type: `LowCardinality(String)`.
- `_file` — Name of the file. Type: `LowCardinality(String)`.
- `_size` — Size of the file in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.
- `_etag` — ETag of the file. Type: `LowCardinality(String)`. If the etag is unknown, the value is `NULL`.
- `_tags` — Tags of the file. Type: `Map(String, String)`. If no tags exist, the value is an empty map `{}`.

For more information about virtual columns see here.
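Virtual columns can be selected alongside regular columns; for example, using the `s3_engine_table` defined earlier on this page:

```sql
-- Show which object each row came from, with its size in bytes
SELECT _path, _file, _size, name, value
FROM s3_engine_table
LIMIT 2;
```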
Implementation details {#implementation-details}

- Reads and writes can be parallel
- Not supported:
  - `ALTER` and `SELECT...SAMPLE` operations.
  - Indexes.
  - Zero-copy replication, which is possible, but not supported.

:::note Zero-copy replication is not ready for production
Zero-copy replication is disabled by default in ClickHouse version 22.8 and higher. This feature is not recommended for production use.
:::
Wildcards in path {#wildcards-in-path}

The `path` argument can specify multiple files using bash-like wildcards. To be processed, a file must exist and match the whole path pattern. The listing of files is determined during `SELECT` (not at `CREATE` moment).

- `*` — Substitutes any number of any characters except `/` including empty string.
- `**` — Substitutes any number of any characters including `/` including empty string.
- `?` — Substitutes any single character.
- `{some_string,another_string,yet_another_one}` — Substitutes any of strings `'some_string', 'another_string', 'yet_another_one'`.
- `{N..M}` — Substitutes any number in range from N to M including both borders. N and M can have leading zeroes, e.g. `000..078`.

Constructions with `{}` are similar to the remote table function.

:::note
If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::
Example with wildcards 1

Create a table with files named `file-000.csv`, `file-001.csv`, ... , `file-999.csv`:

```sql
CREATE TABLE big_table (name String, value UInt32)
    ENGINE = S3('https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/my_folder/file-{000..999}.csv', 'CSV');
```
Example with wildcards 2

Suppose we have several files in CSV format with the following URIs on S3:

- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/some_folder/some_file_1.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/some_folder/some_file_2.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/some_folder/some_file_3.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/another_folder/some_file_1.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/another_folder/some_file_2.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/another_folder/some_file_3.csv'

There are several ways to make a table consisting of all six files:

1. Specify the range of file postfixes:

```sql
CREATE TABLE table_with_range (name String, value UInt32)
    ENGINE = S3('https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/{some,another}_folder/some_file_{1..3}', 'CSV');
```

2. Take all files with the `some_file_` prefix (there should be no extra files with such prefix in both folders):

```sql
CREATE TABLE table_with_question_mark (name String, value UInt32)
    ENGINE = S3('https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/{some,another}_folder/some_file_?', 'CSV');
```

3. Take all the files in both folders (all files should satisfy the format and schema described in the query):

```sql
CREATE TABLE table_with_asterisk (name String, value UInt32)
    ENGINE = S3('https://clickhouse-public-datasets.s3.amazonaws.com/my-bucket/{some,another}_folder/*', 'CSV');
```
Storage settings {#storage-settings}

- `s3_truncate_on_insert` - allows truncating the file before inserting into it. Disabled by default.
- `s3_create_new_file_on_insert` - allows creating a new file on each insert if the format has a suffix. Disabled by default.
- `s3_skip_empty_files` - allows skipping empty files while reading. Enabled by default.

S3-related settings {#settings}
The following settings can be set before query execution or placed into the configuration file.

- `s3_max_single_part_upload_size` — The maximum size of an object to upload using single part upload to S3. Default value is `32Mb`.
- `s3_min_upload_part_size` — The minimum size of a part to upload during multipart upload to S3 Multipart upload. Default value is `16Mb`.
- `s3_max_redirects` — Max number of S3 redirect hops allowed. Default value is `10`.
- `s3_single_read_retries` — The maximum number of attempts during a single read. Default value is `4`.
- `s3_max_put_rps` — Maximum PUT requests-per-second rate before throttling. Default value is `0` (unlimited).
- `s3_max_put_burst` — Max number of requests that can be issued simultaneously before hitting the requests-per-second limit. By default (`0` value) equals `s3_max_put_rps`.
- `s3_max_get_rps` — Maximum GET requests-per-second rate before throttling. Default value is `0` (unlimited).
- `s3_max_get_burst` — Max number of requests that can be issued simultaneously before hitting the requests-per-second limit. By default (`0` value) equals `s3_max_get_rps`.
- `s3_upload_part_size_multiply_factor` - Multiply `s3_min_upload_part_size` by this factor each time `s3_multiply_parts_count_threshold` parts were uploaded from a single write to S3. Default value is `2`.
- `s3_upload_part_size_multiply_parts_count_threshold` - Each time this number of parts was uploaded to S3, `s3_min_upload_part_size` is multiplied by `s3_upload_part_size_multiply_factor`. Default value is `500`.
- `s3_max_inflight_parts_for_one_file` - Limits the number of put requests that can be run concurrently for one object. Its number should be limited. The value `0` means unlimited. Default value is `20`. Each in-flight part has a buffer of size `s3_min_upload_part_size` for the first `s3_upload_part_size_multiply_factor` parts, and more when the file is big enough, see `upload_part_size_multiply_factor`. With default settings, one uploaded file consumes no more than `320Mb` (20 in-flight parts of `16Mb` each) for a file smaller than `8G`. The consumption is greater for a larger file.
Security consideration: if a malicious user can specify arbitrary S3 URLs, `s3_max_redirects` must be set to zero to avoid SSRF attacks; or alternatively, `remote_host_filter` must be specified in the server configuration.
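One way to apply the redirect limit server-wide is through a settings profile; a minimal sketch (profile layout assumed, adjust to your configuration):

```xml
<clickhouse>
    <profiles>
        <default>
            <!-- Disable S3 redirects to mitigate SSRF when URLs are user-supplied -->
            <s3_max_redirects>0</s3_max_redirects>
        </default>
    </profiles>
</clickhouse>
```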
Endpoint-based settings {#endpoint-settings}

The following settings can be specified in the configuration file for a given endpoint (which will be matched by exact prefix of a URL):

- `endpoint` — Specifies prefix of an endpoint. Mandatory.
- `access_key_id` and `secret_access_key` — Specifies credentials to use with the given endpoint. Optional.
- `use_environment_credentials` — If set to `true`, S3 client will try to obtain credentials from environment variables and Amazon EC2 metadata for the given endpoint. Optional, default value is `false`.
- `region` — Specifies S3 region name. Optional.
- `use_insecure_imds_request` — If set to `true`, S3 client will use an insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Optional, default value is `false`.
- `expiration_window_seconds` — Grace period for checking if expiration-based credentials have expired. Optional, default value is `120`.
- `no_sign_request` - Ignore all the credentials so requests are not signed. Useful for accessing public buckets.
- `header` — Adds the specified HTTP header to a request to the given endpoint. Optional, can be specified multiple times.
- `access_header` - Adds the specified HTTP header to a request to the given endpoint, in cases where there are no other credentials from another source.
- `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set. Optional.
- `server_side_encryption_kms_key_id` - If specified, required headers for accessing S3 objects with SSE-KMS encryption will be set. If an empty string is specified, the AWS managed S3 key will be used. Optional.
- `server_side_encryption_kms_encryption_context` - If specified alongside `server_side_encryption_kms_key_id`, the given encryption context header for SSE-KMS will be set. Optional.
- `server_side_encryption_kms_bucket_key_enabled` - If specified alongside `server_side_encryption_kms_key_id`, the header to enable S3 bucket keys for SSE-KMS will be set. Optional, can be `true` or `false`, defaults to nothing (matches the bucket-level setting).
- `max_single_read_retries` — The maximum number of attempts during a single read. Default value is `4`. Optional.
- `max_put_rps`, `max_put_burst`, `max_get_rps` and `max_get_burst` - Throttling settings (see description above) to use for a specific endpoint instead of per query. Optional.

Example:
```xml
<s3>
    <endpoint-name>
        <endpoint>https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/</endpoint>
        <!-- <access_key_id>ACCESS_KEY_ID</access_key_id> -->
        <!-- <secret_access_key>SECRET_ACCESS_KEY</secret_access_key> -->
        <!-- <region>us-west-1</region> -->
        <!-- <use_environment_credentials>false</use_environment_credentials> -->
        <!-- <use_insecure_imds_request>false</use_insecure_imds_request> -->
        <!-- <expiration_window_seconds>120</expiration_window_seconds> -->
        <!-- <no_sign_request>false</no_sign_request> -->
        <!-- <header>Authorization: Bearer SOME-TOKEN</header> -->
        <!-- <server_side_encryption_customer_key_base64>BASE64-ENCODED-KEY</server_side_encryption_customer_key_base64> -->
        <!-- <server_side_encryption_kms_key_id>KMS_KEY_ID</server_side_encryption_kms_key_id> -->
        <!-- <server_side_encryption_kms_encryption_context>KMS_ENCRYPTION_CONTEXT</server_side_encryption_kms_encryption_context> -->
        <!-- <server_side_encryption_kms_bucket_key_enabled>true</server_side_encryption_kms_bucket_key_enabled> -->
        <!-- <max_single_read_retries>4</max_single_read_retries> -->
    </endpoint-name>
</s3>
```
Working with archives {#working-with-archives}

Suppose that we have several archive files with the following URIs on S3:

- 'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-10.csv.zip'
- 'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-11.csv.zip'
- 'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-12.csv.zip'

Extracting data from these archives is possible using `::`. Globs can be used both in the URL part as well as in the part after `::` (responsible for the name of a file inside the archive).

```sql
SELECT *
FROM s3(
   'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-1{0..2}.csv.zip :: *.csv'
);
```

:::note
ClickHouse supports three archive formats: ZIP, TAR, and 7Z.
While ZIP and TAR archives can be accessed from any supported storage location, 7Z archives can only be read from the local filesystem where ClickHouse is installed.
:::
Accessing public buckets {#accessing-public-buckets}

ClickHouse tries to fetch credentials from many different types of sources.
Sometimes, this can cause problems when accessing buckets that are public, causing the client to return a `403` error code.
This issue can be avoided by using the `NOSIGN` keyword, forcing the client to ignore all the credentials and not sign the requests.

```sql
CREATE TABLE big_table (name String, value UInt32)
    ENGINE = S3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/aapl_stock.csv', NOSIGN, 'CSVWithNames');
```

Optimizing performance {#optimizing-performance}
For details on optimizing the performance of the s3 function see our detailed guide.

See also {#see-also}

- s3 table function
- Integrating S3 with ClickHouse
---
description: 'This engine provides integration with the Apache Hadoop ecosystem by
  allowing to manage data on HDFS via ClickHouse. This engine is similar to the File
  and URL engines, but provides Hadoop-specific features.'
sidebar_label: 'HDFS'
sidebar_position: 80
slug: /engines/table-engines/integrations/hdfs
title: 'HDFS table engine'
doc_type: 'reference'
---
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
HDFS table engine

This engine provides integration with the Apache Hadoop ecosystem by allowing to manage data on HDFS via ClickHouse. This engine is similar to the File and URL engines, but provides Hadoop-specific features.

This feature is not supported by ClickHouse engineers, and it is known to have a sketchy quality. In case of any problems, fix them yourself and submit a pull request.
Usage {#usage}

```sql
ENGINE = HDFS(URI, format)
[PARTITION BY expr]
```

Engine Parameters

- `URI` — whole file URI in HDFS. The path part of `URI` may contain globs. In this case the table would be readonly.
- `format` — specifies one of the available file formats. To perform `SELECT` queries, the format must be supported for input, and to perform `INSERT` queries — for output. The available formats are listed in the Formats section.
PARTITION BY {#partition-by}

`PARTITION BY` — Optional. In most cases you don't need a partition key, and if it is needed you generally don't need a partition key more granular than by month. Partitioning does not speed up queries (in contrast to the ORDER BY expression). You should never use too granular partitioning. Don't partition your data by client identifiers or names (instead, make client identifier or name the first column in the ORDER BY expression).

For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column with a date of the type `Date`. The partition names here have the `"YYYYMM"` format.
Example:

1. Set up the `hdfs_engine_table` table:

```sql
CREATE TABLE hdfs_engine_table (name String, value UInt32) ENGINE=HDFS('hdfs://hdfs1:9000/other_storage', 'TSV')
```

2. Fill the file:

```sql
INSERT INTO hdfs_engine_table VALUES ('one', 1), ('two', 2), ('three', 3)
```

3. Query the data:

```sql
SELECT * FROM hdfs_engine_table LIMIT 2
```

```text
┌─name─┬─value─┐
│ one  │     1 │
│ two  │     2 │
└──────┴───────┘
```
Implementation details {#implementation-details}

- Reads and writes can be parallel.
- Not supported:
  - `ALTER` and `SELECT...SAMPLE` operations.
  - Indexes.
  - Zero-copy replication, which is possible, but not recommended.

:::note Zero-copy replication is not ready for production
Zero-copy replication is disabled by default in ClickHouse version 22.8 and higher. This feature is not recommended for production use.
:::
Globs in path
Multiple path components can have globs. To be processed, a file must exist and match the whole path pattern. The listing of files is determined during SELECT (not at CREATE time).
* — Substitutes any number of any characters except / including empty string.
? — Substitutes any single character.
{some_string,another_string,yet_another_one} — Substitutes any of the strings 'some_string', 'another_string', 'yet_another_one'.
{N..M} — Substitutes any number in the range from N to M including both borders.
Constructions with
{}
are similar to the
remote
table function.
Example
Suppose we have several files in TSV format with the following URIs on HDFS:
'hdfs://hdfs1:9000/some_dir/some_file_1'
'hdfs://hdfs1:9000/some_dir/some_file_2'
'hdfs://hdfs1:9000/some_dir/some_file_3'
'hdfs://hdfs1:9000/another_dir/some_file_1'
'hdfs://hdfs1:9000/another_dir/some_file_2'
'hdfs://hdfs1:9000/another_dir/some_file_3'
There are several ways to make a table consisting of all six files:
sql
CREATE TABLE table_with_range (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/some_file_{1..3}', 'TSV')
Another way:
sql
CREATE TABLE table_with_question_mark (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/some_file_?', 'TSV')
The table consists of all the files in both directories (all files should satisfy the format and schema described in the query):
sql
CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV')
:::note
If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use
?
.
:::
Example
Create table with files named
file000
,
file001
, ... ,
file999
:
sql
CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV')
Configuration {#configuration}
Similar to GraphiteMergeTree, the HDFS engine supports extended configuration using the ClickHouse config file. There are two configuration keys that you can use: global (
hdfs
) and user-level (
hdfs_*
). The global configuration is applied first, and then the user-level configuration is applied (if it exists).
```xml
<hdfs>
    <hadoop_kerberos_keytab>/tmp/keytab/clickhouse.keytab</hadoop_kerberos_keytab>
    <hadoop_kerberos_principal>clickuser@TEST.CLICKHOUSE.TECH</hadoop_kerberos_principal>
    <hadoop_security_authentication>kerberos</hadoop_security_authentication>
</hdfs>
<hdfs_root>
    <hadoop_kerberos_principal>root@TEST.CLICKHOUSE.TECH</hadoop_kerberos_principal>
</hdfs_root>
```
Configuration options {#configuration-options}
Supported by libhdfs3 {#supported-by-libhdfs3}
| parameter | default value |
| - | - |
| rpc_client_connect_tcpnodelay | true |
| dfs_client_read_shortcircuit | true |
| output_replace-datanode-on-failure | true |
| input_notretry-another-node | false |
| input_localread_mappedfile | true |
| dfs_client_use_legacy_blockreader_local | false |
| rpc_client_ping_interval | 10 * 1000 |
| rpc_client_connect_timeout | 600 * 1000 |
| rpc_client_read_timeout | 3600 * 1000 |
| rpc_client_write_timeout | 3600 * 1000 |
| rpc_client_socket_linger_timeout | -1 |
| rpc_client_connect_retry | 10 |
| rpc_client_timeout | 3600 * 1000 |
| dfs_default_replica | 3 |
| input_connect_timeout | 600 * 1000 |
| input_read_timeout | 3600 * 1000 |
| input_write_timeout | 3600 * 1000 |
| input_localread_default_buffersize | 1 * 1024 * 1024 |
| dfs_prefetchsize | 10 |
| input_read_getblockinfo_retry | 3 |
| input_localread_blockinfo_cachesize | 1000 |
| input_read_max_retry | 60 |
| output_default_chunksize | 512 |
| output_default_packetsize | 64 * 1024 |
| output_default_write_retry | 10 |
| output_connect_timeout | 600 * 1000 |
| output_read_timeout | 3600 * 1000 |
| output_write_timeout | 3600 * 1000 |
| output_close_timeout | 3600 * 1000 |
| output_packetpool_size | 1024 |
| output_heartbeat_interval | 10 * 1000 |
| dfs_client_failover_max_attempts | 15 |
| dfs_client_read_shortcircuit_streams_cache_size | 256 |
| dfs_client_socketcache_expiryMsec | 3000 |
| dfs_client_socketcache_capacity | 16 |
| dfs_default_blocksize | 64 * 1024 * 1024 |
| dfs_default_uri | "hdfs://localhost:9000" |
| hadoop_security_authentication | "simple" |
| hadoop_security_kerberos_ticket_cache_path | "" |
| dfs_client_log_severity | "INFO" |
| dfs_domain_socket_path | "" |
HDFS Configuration Reference
might explain some parameters.
ClickHouse extras {#clickhouse-extras}
| parameter | default value |
| - | - |
|hadoop_kerberos_keytab | "" |
|hadoop_kerberos_principal | "" |
|libhdfs3_conf | "" |
Limitations {#limitations}
hadoop_security_kerberos_ticket_cache_path
and
libhdfs3_conf
can be global only, not user specific
Kerberos support {#kerberos-support}
If the
hadoop_security_authentication
parameter has the value
kerberos
, ClickHouse authenticates via Kerberos.
Parameters are
here
and
hadoop_security_kerberos_ticket_cache_path
may be of help.
Note that due to libhdfs3 limitations only the old-fashioned approach is supported; datanode communications are not secured by SASL (HADOOP_SECURE_DN_USER is a reliable indicator of this security approach). Use tests/integration/test_storage_kerberized_hdfs/hdfs_configs/bootstrap.sh for reference.
If
hadoop_kerberos_keytab
,
hadoop_kerberos_principal
or
hadoop_security_kerberos_ticket_cache_path
are specified, Kerberos authentication will be used.
hadoop_kerberos_keytab
and
hadoop_kerberos_principal
are mandatory in this case.
HDFS Namenode HA support {#namenode-ha}
libhdfs3 supports HDFS namenode HA.
Copy
hdfs-site.xml
from an HDFS node to
/etc/clickhouse-server/
.
Add the following piece to the ClickHouse config file:
xml
<hdfs>
<libhdfs3_conf>/etc/clickhouse-server/hdfs-site.xml</libhdfs3_conf>
</hdfs>
Then use
dfs.nameservices
tag value of
hdfs-site.xml
as the namenode address in the HDFS URI. For example, replace
hdfs://appadmin@192.168.101.11:8020/abc/
with
hdfs://appadmin@my_nameservice/abc/
.
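As a sketch (the nameservice name, user, and path are illustrative), the table definition then references the nameservice instead of a single namenode address:

```sql
CREATE TABLE hdfs_ha_table (name String, value UInt32)
ENGINE = HDFS('hdfs://appadmin@my_nameservice/abc/data.tsv', 'TSV')
```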
Virtual columns {#virtual-columns}
_path — Path to the file. Type: LowCardinality(String).
_file — Name of the file. Type: LowCardinality(String).
_size — Size of the file in bytes. Type: Nullable(UInt64). If the size is unknown, the value is NULL.
_time — Last modified time of the file. Type: Nullable(DateTime). If the time is unknown, the value is NULL.
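For example, the virtual columns can be selected alongside regular columns, using the hdfs_engine_table created earlier on this page:

```sql
SELECT _path, _file, _size, name, value
FROM hdfs_engine_table
```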
Storage settings {#storage-settings}
hdfs_truncate_on_insert - allows truncating the file before inserting into it. Disabled by default.
hdfs_create_new_file_on_insert - allows creating a new file on each insert if the format has a suffix. Disabled by default.
hdfs_skip_empty_files - allows skipping empty files while reading. Disabled by default.
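These settings can also be applied per statement; a sketch that overwrites the file contents on insert, using the hdfs_engine_table from the example above:

```sql
INSERT INTO hdfs_engine_table
SETTINGS hdfs_truncate_on_insert = 1
VALUES ('four', 4)
```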
See Also
Virtual columns
description: 'The ExternalDistributed engine allows performing SELECT queries on data stored on remote MySQL or PostgreSQL servers. Accepts MySQL or PostgreSQL engines as an argument so sharding is possible.'
sidebar_label: 'ExternalDistributed'
sidebar_position: 55
slug: /engines/table-engines/integrations/ExternalDistributed
title: 'ExternalDistributed table engine'
doc_type: 'reference'
ExternalDistributed table engine
The ExternalDistributed engine allows performing SELECT queries on data stored on remote MySQL or PostgreSQL servers. Accepts MySQL or PostgreSQL engines as an argument so sharding is possible.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
...
) ENGINE = ExternalDistributed('engine', 'host:port', 'database', 'table', 'user', 'password');
See a detailed description of the
CREATE TABLE
query.
The table structure can differ from the original table structure:
Column names should be the same as in the original table, but you can use just some of these columns and in any order.
Column types may differ from those in the original table. ClickHouse tries to
cast
values to the ClickHouse data types.
Engine Parameters
engine — The table engine MySQL or PostgreSQL.
host:port — MySQL or PostgreSQL server address.
database — Remote database name.
table — Remote table name.
user — User name.
password — User password.
Implementation details {#implementation-details}
Supports multiple replicas that must be listed by
|
and shards must be listed by
,
. For example:
sql
CREATE TABLE test_shards (id UInt32, name String, age UInt32, money UInt32) ENGINE = ExternalDistributed('MySQL', `mysql{1|2}:3306,mysql{3|4}:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse');
When specifying replicas, one of the available replicas is selected for each of the shards when reading. If the connection fails, the next replica is selected, and so on for all the replicas. If the connection attempt fails for all the replicas, the attempt is repeated the same way several times.
You can specify any number of shards and any number of replicas for each shard.
See Also
MySQL table engine
PostgreSQL table engine
Distributed table engine
description: 'This engine provides a read-only integration with existing Delta Lake
tables in Amazon S3.'
sidebar_label: 'DeltaLake'
sidebar_position: 40
slug: /engines/table-engines/integrations/deltalake
title: 'DeltaLake table engine'
doc_type: 'reference'
DeltaLake table engine
This engine provides a read-only integration with existing
Delta Lake
tables in Amazon S3.
Create table {#create-table}
Note that the Delta Lake table must already exist in S3; this command does not take DDL parameters to create a new table.
sql
CREATE TABLE deltalake
ENGINE = DeltaLake(url, [aws_access_key_id, aws_secret_access_key,])
Engine parameters
url — Bucket url with path to the existing Delta Lake table.
aws_access_key_id, aws_secret_access_key — Long-term credentials for the AWS account user. You can use these to authenticate your requests. These parameters are optional. If credentials are not specified, they are taken from the configuration file.
Engine parameters can be specified using
Named Collections
.
Example
sql
CREATE TABLE deltalake ENGINE=DeltaLake('http://mars-doc-test.s3.amazonaws.com/clickhouse-bucket-3/test_table/', 'ABC123', 'Abc+123')
Using named collections:
xml
<clickhouse>
<named_collections>
<deltalake_conf>
<url>http://mars-doc-test.s3.amazonaws.com/clickhouse-bucket-3/</url>
            <access_key_id>ABC123</access_key_id>
<secret_access_key>Abc+123</secret_access_key>
</deltalake_conf>
</named_collections>
</clickhouse>
sql
CREATE TABLE deltalake ENGINE=DeltaLake(deltalake_conf, filename = 'test_table')
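Once created, the table can be queried like any other read-only table; for example:

```sql
SELECT * FROM deltalake LIMIT 10
```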
Data cache {#data-cache}
The DeltaLake table engine and table function support data caching, the same as the S3, AzureBlobStorage, and HDFS storages. See here.
See also {#see-also}
deltaLake table function
description: 'Documentation for Table Engines for Integrations'
sidebar_label: 'Integrations'
sidebar_position: 40
slug: /engines/table-engines/integrations/
title: 'Table Engines for Integrations'
doc_type: 'reference'
Table engines for integrations
ClickHouse provides various means for integrating with external systems, including table engines. Like with all other table engines, the configuration is done using
CREATE TABLE
or
ALTER TABLE
queries. Then from a user perspective, the configured integration looks like a normal table, but queries to it are proxied to the external system. This transparent querying is one of the key advantages of this approach over alternative integration methods, like dictionaries or table functions, which require the use of custom query methods on each use.
description: 'The PostgreSQL engine allows
SELECT
and
INSERT
queries on data stored
on a remote PostgreSQL server.'
sidebar_label: 'PostgreSQL'
sidebar_position: 160
slug: /engines/table-engines/integrations/postgresql
title: 'PostgreSQL table Engine'
doc_type: 'guide'
PostgreSQL table engine
The PostgreSQL engine allows
SELECT
and
INSERT
queries on data stored on a remote PostgreSQL server.
:::note
Currently, only PostgreSQL versions 12 and up are supported.
:::
:::tip
ClickHouse Cloud users are recommended to use
ClickPipes
for streaming Postgres data into ClickHouse. This natively supports high-performance insertion while ensuring the separation of concerns with the ability to scale ingestion and cluster resources independently.
:::
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 type1 [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
name2 type2 [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
...
) ENGINE = PostgreSQL({host:port, database, table, user, password[, schema, [, on_conflict]] | named_collection[, option=value [,..]]})
See a detailed description of the
CREATE TABLE
query.
The table structure can differ from the original PostgreSQL table structure:
Column names should be the same as in the original PostgreSQL table, but you can use just some of these columns and in any order.
Column types may differ from those in the original PostgreSQL table. ClickHouse tries to
cast
values to the ClickHouse data types.
The
external_table_functions_use_nulls
setting defines how to handle Nullable columns. Default value: 1. If 0, the table function does not make Nullable columns and inserts default values instead of nulls. This is also applicable for NULL values inside arrays.
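As a sketch, this setting can also be overridden per query (the table name is illustrative):

```sql
-- With 0, Nullable columns are not produced and defaults replace NULLs
SELECT * FROM postgresql_table
SETTINGS external_table_functions_use_nulls = 0
```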
Engine Parameters
host:port — PostgreSQL server address.
database — Remote database name.
table — Remote table name.
user — PostgreSQL user.
password — User password.
schema — Non-default table schema. Optional.
on_conflict — Conflict resolution strategy. Example: ON CONFLICT DO NOTHING. Optional. Note: adding this option will make insertion less efficient.
Named collections
(available since version 21.11) are recommended for production environments. Here is an example:
xml
<named_collections>
<postgres_creds>
<host>localhost</host>
<port>5432</port>
<user>postgres</user>
<password>****</password>
<schema>schema1</schema>
</postgres_creds>
</named_collections>
Some parameters can be overridden by key value arguments:
sql
SELECT * FROM postgresql(postgres_creds, table='table1');
Implementation details {#implementation-details}
SELECT queries on the PostgreSQL side run as COPY (SELECT ...) TO STDOUT inside a read-only PostgreSQL transaction, with a commit after each SELECT query.
Simple WHERE clauses such as =, !=, >, >=, <, <=, and IN are executed on the PostgreSQL server.
All joins, aggregations, sorting, IN [ array ] conditions and the LIMIT sampling constraint are executed in ClickHouse only after the query to PostgreSQL finishes.
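For example (a sketch against a hypothetical table backed by the PostgreSQL engine), the WHERE comparison below runs on the PostgreSQL server, while the ORDER BY and LIMIT are applied in ClickHouse after the PostgreSQL query finishes:

```sql
SELECT *
FROM postgresql_table
WHERE int_id >= 10   -- pushed down to PostgreSQL
ORDER BY int_id      -- executed in ClickHouse
LIMIT 5              -- executed in ClickHouse
```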
INSERT queries on the PostgreSQL side run as COPY "table_name" (field1, field2, ... fieldN) FROM STDIN inside a PostgreSQL transaction, with an auto-commit after each INSERT statement.
PostgreSQL
Array
types are converted into ClickHouse arrays.
:::note
Be careful: in PostgreSQL, array data created with a type_name[] column may contain multi-dimensional arrays of different dimensions in different rows of the same column. In ClickHouse, multidimensional arrays in the same column must have the same number of dimensions in all rows.
:::
Supports multiple replicas that must be listed by
|
. For example:
sql
CREATE TABLE test_replicas (id UInt32, name String) ENGINE = PostgreSQL(`postgres{2|3|4}:5432`, 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword');
Replica priority for the PostgreSQL dictionary source is supported. The bigger the number in the map, the lower the priority. The highest priority is 0.
In the example below replica
example01-1
has the highest priority:
xml
<postgresql>
<port>5432</port>
<user>clickhouse</user>
<password>qwerty</password>
<replica>
<host>example01-1</host>
<priority>1</priority>
</replica>
<replica>
<host>example01-2</host>
<priority>2</priority>
</replica>
<db>db_name</db>
<table>table_name</table>
<where>id=10</where>
<invalidate_query>SQL_QUERY</invalidate_query>
</postgresql>
Usage example {#usage-example}
Table in PostgreSQL {#table-in-postgresql}
```text
postgres=# CREATE TABLE "public"."test" (
"int_id" SERIAL,
"int_nullable" INT NULL DEFAULT NULL,
"float" FLOAT NOT NULL,
"str" VARCHAR(100) NOT NULL DEFAULT '',
"float_nullable" FLOAT NULL DEFAULT NULL,
PRIMARY KEY (int_id));
CREATE TABLE
postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2);
INSERT 0 1
postgresql> SELECT * FROM test;
int_id | int_nullable | float | str | float_nullable
--------+--------------+-------+------+----------------
1 | | 2 | test |
(1 row)
```
Creating Table in ClickHouse, and connecting to PostgreSQL table created above {#creating-table-in-clickhouse-and-connecting-to--postgresql-table-created-above}
This example uses the
PostgreSQL table engine
to connect the ClickHouse table to the PostgreSQL table and use both SELECT and INSERT statements to the PostgreSQL database:
sql
CREATE TABLE default.postgresql_table
(
`float_nullable` Nullable(Float32),
`str` String,
`int_id` Int32
)
ENGINE = PostgreSQL('localhost:5432', 'public', 'test', 'postgres_user', 'postgres_password');
Inserting initial data from PostgreSQL table into ClickHouse table, using a SELECT query {#inserting-initial-data-from-postgresql-table-into-clickhouse-table-using-a-select-query}
The
postgresql table function
copies the data from PostgreSQL to ClickHouse. This is often done to improve query performance by querying or performing analytics in ClickHouse rather than in PostgreSQL, and it can also be used for migrating data from PostgreSQL to ClickHouse. Since we will be copying the data from PostgreSQL to ClickHouse, we will use a MergeTree table engine in ClickHouse and call it postgresql_copy:
sql
CREATE TABLE default.postgresql_copy
(
`float_nullable` Nullable(Float32),
`str` String,
`int_id` Int32
)
ENGINE = MergeTree
ORDER BY (int_id);
sql
INSERT INTO default.postgresql_copy
SELECT * FROM postgresql('localhost:5432', 'public', 'test', 'postgres_user', 'postgres_password');
Inserting incremental data from PostgreSQL table into ClickHouse table {#inserting-incremental-data-from-postgresql-table-into-clickhouse-table}
If you then perform ongoing synchronization between the PostgreSQL table and the ClickHouse table after the initial insert, you can use a WHERE clause in ClickHouse to insert only the data added to PostgreSQL, based on a timestamp or unique sequence ID.
This requires keeping track of the max ID or timestamp previously added, such as the following:
sql
SELECT max(`int_id`) AS maxIntID FROM default.postgresql_copy;
Then insert values from the PostgreSQL table greater than the max:
sql
INSERT INTO default.postgresql_copy
SELECT * FROM postgresql('localhost:5432', 'public', 'test', 'postgres_user', 'postgres_password')
WHERE int_id > maxIntID;
Selecting data from the resulting ClickHouse table {#selecting-data-from-the-resulting-clickhouse-table}
sql
SELECT * FROM postgresql_copy WHERE str IN ('test');
text
┌─float_nullable─┬─str──┬─int_id─┐
│           ᴺᵁᴸᴸ │ test │      1 │
└────────────────┴──────┴────────┘
Using non-default schema {#using-non-default-schema}
```text
postgres=# CREATE SCHEMA "nice.schema";
postgres=# CREATE TABLE "nice.schema"."nice.table" (a integer);
postgres=# INSERT INTO "nice.schema"."nice.table" SELECT i FROM generate_series(0, 99) as t(i)
```
sql
CREATE TABLE pg_table_schema_with_dots (a UInt32)
ENGINE PostgreSQL('localhost:5432', 'clickhouse', 'nice.table', 'postgresql_user', 'password', 'nice.schema');
See Also
The
postgresql
table function
Using PostgreSQL as a dictionary source
Related content {#related-content}
Blog:
ClickHouse and PostgreSQL - a match made in data heaven - part 1
Blog:
ClickHouse and PostgreSQL - a Match Made in Data Heaven - part 2
description: 'This engine provides an integration with Azure Blob Storage ecosystem.'
sidebar_label: 'Azure Blob Storage'
sidebar_position: 10
slug: /engines/table-engines/integrations/azureBlobStorage
title: 'AzureBlobStorage table engine'
doc_type: 'reference'
AzureBlobStorage table engine
This engine provides an integration with
Azure Blob Storage
ecosystem.
Create table {#create-table}
sql
CREATE TABLE azure_blob_storage_table (name String, value UInt32)
ENGINE = AzureBlobStorage(connection_string|storage_account_url, container_name, blobpath, [account_name, account_key, format, compression, partition_strategy, partition_columns_in_data_file, extra_credentials(client_id=, tenant_id=)])
[PARTITION BY expr]
[SETTINGS ...]
Engine parameters {#engine-parameters}
endpoint — AzureBlobStorage endpoint URL with container & prefix. Optionally can contain account_name if the authentication method used needs it. (http://azurite1:{port}/[account_name]{container_name}/{data_prefix}) or these parameters can be provided separately using storage_account_url, account_name & container. For specifying prefix, endpoint should be used.
endpoint_contains_account_name — This flag is used to specify whether the endpoint contains account_name, as it is only needed for certain authentication methods. (Default: true)
connection_string|storage_account_url — connection_string includes account name & key (Create connection string), or you could provide the storage account url here and account name & account key as separate parameters (see parameters account_name & account_key)
container_name — Container name
blobpath — file path. Supports the following wildcards in readonly mode: *, **, ?, {abc,def} and {N..M} where N, M — numbers, 'abc', 'def' — strings.
account_name — if storage_account_url is used, then account name can be specified here
account_key — if storage_account_url is used, then account key can be specified here
format — The format of the file.
compression — Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. By default, it will autodetect compression by file extension (same as setting to auto).
partition_strategy — Options: WILDCARD or HIVE. WILDCARD requires a {_partition_id} in the path, which is replaced with the partition key. HIVE does not allow wildcards, assumes the path is the table root, and generates Hive-style partitioned directories with Snowflake IDs as filenames and the file format as the extension. Defaults to WILDCARD.
partition_columns_in_data_file — Only used with the HIVE partition strategy. Tells ClickHouse whether to expect partition columns to be written in the data file. Defaults to false.
extra_credentials — Use client_id and tenant_id for authentication. If extra_credentials are provided, they are given priority over account_name and account_key.
Example
Users can use the Azurite emulator for local Azure Storage development. Further details
here
. If using a local instance of Azurite, users may need to substitute
http://localhost:10000
for
http://azurite1:10000
in the commands below, where we assume Azurite is available at host
azurite1
.
```sql
CREATE TABLE test_table (key UInt64, data String)
ENGINE = AzureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;', 'testcontainer', 'test_table', 'CSV');
INSERT INTO test_table VALUES (1, 'a'), (2, 'b'), (3, 'c');
SELECT * FROM test_table;
```
text
┌─key─┬─data─┐
│   1 │ a    │
│   2 │ b    │
│   3 │ c    │
└─────┴──────┘
Virtual columns {#virtual-columns}
_path — Path to the file. Type: LowCardinality(String).
_file — Name of the file. Type: LowCardinality(String).
_size — Size of the file in bytes. Type: Nullable(UInt64). If the size is unknown, the value is NULL.
_time — Last modified time of the file. Type: Nullable(DateTime). If the time is unknown, the value is NULL.
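For example, using the test_table from the example above, the virtual columns can be selected like regular columns:

```sql
SELECT _path, _file, _size, key, data
FROM test_table
```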
Authentication {#authentication}
Currently there are 3 ways to authenticate:
- Managed Identity — Can be used by providing an endpoint, connection_string or storage_account_url.
- SAS Token — Can be used by providing an endpoint, connection_string or storage_account_url. It is identified by the presence of '?' in the url. See azureBlobStorage for examples.
- Workload Identity — Can be used by providing an endpoint or storage_account_url. If the use_workload_identity parameter is set in config, workload identity is used for authentication.
Data cache {#data-cache}
Azure
table engine supports data caching on local disk.
See filesystem cache configuration options and usage in this
section
.
Caching is keyed on the path and ETag of the storage object, so ClickHouse will not read a stale cache version.
To enable caching, use the settings filesystem_cache_name = '<name>' and enable_filesystem_cache = 1.
sql
SELECT *
FROM azureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;', 'testcontainer', 'test_table', 'CSV')
SETTINGS filesystem_cache_name = 'cache_for_azure', enable_filesystem_cache = 1;
Add the following section to the ClickHouse configuration file:
xml
<clickhouse>
<filesystem_caches>
<cache_for_azure>
<path>path to cache directory</path>
<max_size>10Gi</max_size>
</cache_for_azure>
</filesystem_caches>
</clickhouse>
Alternatively, you can reuse a cache configuration (and therefore cache storage) from the storage_configuration section of the ClickHouse config, as described here.
PARTITION BY {#partition-by}
PARTITION BY
— Optional. In most cases you don't need a partition key, and if it is needed you generally don't need a partition key more granular than by month. Partitioning does not speed up queries (in contrast to the ORDER BY expression). You should never use too granular partitioning. Don't partition your data by client identifiers or names (instead, make client identifier or name the first column in the ORDER BY expression).
For partitioning by month, use the
toYYYYMM(date_column)
expression, where
date_column
is a column with a date of the type
Date
. The partition names here have the
"YYYYMM"
format.
Partition strategy {#partition-strategy}
WILDCARD (default): Replaces the {_partition_id} wildcard in the file path with the actual partition key. Reading is not supported.
HIVE: Implements hive style partitioning for reads & writes. Reading is implemented using a recursive glob pattern. Writing generates files using the following format: <prefix>/<key1=val1/key2=val2...>/<snowflakeid>.<toLower(file_format)>.
Note: When using the HIVE partition strategy, the use_hive_partitioning setting has no effect.
Example of
HIVE
partition strategy:
```sql
arthur :) create table azure_table (year UInt16, country String, counter UInt8) ENGINE=AzureBlobStorage(account_name='devstoreaccount1', account_key='Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==', storage_account_url = 'http://localhost:30000/devstoreaccount1', container='cont', blob_path='hive_partitioned', format='Parquet', compression='auto', partition_strategy='hive') PARTITION BY (year, country);
arthur :) insert into azure_table values (2020, 'Russia', 1), (2021, 'Brazil', 2);
arthur :) select _path, * from azure_table;
ββ_pathβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βyearββ¬βcountryββ¬βcounterββ
1. β cont/hive_partitioned/year=2020/country=Russia/7351305360873664512.parquet β 2020 β Russia β 1 β
2. β cont/hive_partitioned/year=2021/country=Brazil/7351305360894636032.parquet β 2021 β Brazil β 2 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββ΄ββββββββββ΄ββββββββββ
```
See also {#see-also}
Azure Blob Storage Table Function | {"source_file": "azureBlobStorage.md"} | [
0.044745877385139465,
0.007921949028968811,
-0.015385175123810768,
0.018391644582152367,
-0.054788749665021896,
-0.01224426832050085,
0.03350180760025978,
-0.01758740469813347,
0.0035006809048354626,
0.10200908780097961,
0.012828103266656399,
0.012628228403627872,
0.051539622247219086,
0.0... |
7eaae94b-9fd4-47eb-81c4-77fb9d91a34b | description: 'Allows ClickHouse to connect to external databases via ODBC.'
sidebar_label: 'ODBC'
sidebar_position: 150
slug: /engines/table-engines/integrations/odbc
title: 'ODBC table engine'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
ODBC table engine
Allows ClickHouse to connect to external databases via
ODBC
.
To safely implement ODBC connections, ClickHouse uses a separate program
clickhouse-odbc-bridge
. If the ODBC driver is loaded directly from
clickhouse-server
, driver problems can crash the ClickHouse server. ClickHouse automatically starts
clickhouse-odbc-bridge
when it is required. The ODBC bridge program is installed from the same package as the
clickhouse-server
.
This engine supports the
Nullable
data type.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1],
name2 [type2],
...
)
ENGINE = ODBC(datasource, external_database, external_table)
See a detailed description of the
CREATE TABLE
query.
The table structure can differ from the source table structure:
Column names should be the same as in the source table, but you can use just some of these columns and in any order.
Column types may differ from those in the source table. ClickHouse tries to
cast
values to the ClickHouse data types.
The
external_table_functions_use_nulls
setting defines how to handle Nullable columns. Default value: 1. If 0, the table function does not make Nullable columns and inserts default values instead of nulls. This is also applicable for NULL values inside arrays.
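For example, the setting could be overridden at the session level (a sketch of the assumed usage):

```sql
-- With 0, NULLs from the external source are replaced by column defaults
SET external_table_functions_use_nulls = 0;
```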
Engine Parameters
datasource
β Name of the section with connection settings in the
odbc.ini
file.
external_database
β Name of a database in an external DBMS.
external_table
β Name of a table in the
external_database
.
These parameters can also be passed using
named collections
.
Usage example {#usage-example}
Retrieving data from the local MySQL installation via ODBC
This example is checked for Ubuntu Linux 18.04 and MySQL server 5.7.
Ensure that unixODBC and MySQL Connector are installed.
By default (if installed from packages), ClickHouse starts as user
clickhouse
. Thus, you need to create and configure this user in the MySQL server.
bash
$ sudo mysql
sql
mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'localhost' WITH GRANT OPTION;
Then configure the connection in
/etc/odbc.ini
.
bash
$ cat /etc/odbc.ini
[mysqlconn]
DRIVER = /usr/local/lib/libmyodbc5w.so
SERVER = 127.0.0.1
PORT = 3306
DATABASE = test
USER = clickhouse
PASSWORD = clickhouse
You can check the connection using the
isql
utility from the unixODBC installation.
bash
$ isql -v mysqlconn
+-------------------------+
| Connected! |
| |
...
Table in MySQL: | {"source_file": "odbc.md"} | [
-0.01121565606445074,
-0.11621849238872528,
-0.08751655369997025,
0.06100517138838768,
-0.03319670632481575,
-0.06018303707242012,
-0.010090666823089123,
-0.015072680078446865,
-0.02185194566845894,
-0.044349443167448044,
-0.02244369313120842,
0.006900763604789972,
0.05953897908329964,
-0.... |
d1a679f7-b13c-4ec2-81f3-ef35ff9d647f | bash
$ isql -v mysqlconn
+-------------------------+
| Connected! |
| |
...
Table in MySQL:
```text
mysql> CREATE DATABASE test;
Query OK, 1 row affected (0,01 sec)
mysql> CREATE TABLE `test`.`test` (
    ->   `int_id` INT NOT NULL AUTO_INCREMENT,
    ->   `int_nullable` INT NULL DEFAULT NULL,
    ->   `float` FLOAT NOT NULL,
    ->   `float_nullable` FLOAT NULL DEFAULT NULL,
    ->   PRIMARY KEY (`int_id`));
Query OK, 0 rows affected (0,09 sec)
mysql> insert into `test`.`test` (`int_id`, `float`) VALUES (1,2);
Query OK, 1 row affected (0,00 sec)
mysql> select * from test.test;
+--------+--------------+-------+----------------+
| int_id | int_nullable | float | float_nullable |
+--------+--------------+-------+----------------+
|      1 |         NULL |     2 |           NULL |
+--------+--------------+-------+----------------+
1 row in set (0,00 sec)
```
Table in ClickHouse, retrieving data from the MySQL table:
sql
CREATE TABLE odbc_t
(
`int_id` Int32,
`float_nullable` Nullable(Float32)
)
ENGINE = ODBC('DSN=mysqlconn', 'test', 'test')
sql
SELECT * FROM odbc_t
text
ββint_idββ¬βfloat_nullableββ
β 1 β α΄Ία΅α΄Έα΄Έ β
ββββββββββ΄βββββββββββββββββ
See also {#see-also}
ODBC dictionaries
ODBC table function | {"source_file": "odbc.md"} | [
0.053114697337150574,
0.03791213780641556,
-0.04556938260793686,
0.05309806764125824,
-0.07274830341339111,
-0.01778733730316162,
0.08046035468578339,
0.06446196883916855,
0.014428730122745037,
0.036241527646780014,
0.08486424386501312,
-0.09409451484680176,
0.10577961802482605,
-0.0409997... |
a42dfd86-f57f-45aa-9585-cdacf8dbadd2 | description: 'Allows ClickHouse to connect to external databases via JDBC.'
sidebar_label: 'JDBC'
sidebar_position: 100
slug: /engines/table-engines/integrations/jdbc
title: 'JDBC table engine'
doc_type: 'reference'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
JDBC table engine
:::note
clickhouse-jdbc-bridge contains experimental codes and is no longer supported. It may contain reliability issues and security vulnerabilities. Use it at your own risk.
ClickHouse recommends using the built-in table functions, which provide a better alternative for ad-hoc querying scenarios (Postgres, MySQL, MongoDB, etc.).
:::
Allows ClickHouse to connect to external databases via
JDBC
.
To implement the JDBC connection, ClickHouse uses the separate program
clickhouse-jdbc-bridge
that should run as a daemon.
This engine supports the
Nullable
data type.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name
(
columns list...
)
ENGINE = JDBC(datasource, external_database, external_table)
Engine Parameters
datasource
β URI or name of an external DBMS.
URI Format:
jdbc:<driver_name>://<host_name>:<port>/?user=<username>&password=<password>
.
Example for MySQL:
jdbc:mysql://localhost:3306/?user=root&password=root
.
external_database
β Name of a database in an external DBMS, or, instead, an explicitly defined table schema (see examples).
external_table
β Name of the table in an external database or a select query like
select * from table1 where column1=1
.
These parameters can also be passed using
named collections
.
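For instance, passing a query as external_table could be sketched like this (the empty second argument standing in for the schema is an assumption; the filter is illustrative):

```sql
-- Sketch: external_table given as a SELECT query instead of a table name,
-- pushing the filter down to the external DBMS
CREATE TABLE jdbc_filtered
(
    `int_id` Int32,
    `float` Float32
)
ENGINE = JDBC('jdbc:mysql://localhost:3306/?user=root&password=root', '', 'select * from test.test where int_id = 1')
```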
Usage example {#usage-example}
Creating a table in the MySQL server by connecting directly with its console client:
```text
mysql> CREATE TABLE `test`.`test` (
    ->   `int_id` INT NOT NULL AUTO_INCREMENT,
    ->   `int_nullable` INT NULL DEFAULT NULL,
    ->   `float` FLOAT NOT NULL,
    ->   `float_nullable` FLOAT NULL DEFAULT NULL,
    ->   PRIMARY KEY (`int_id`));
Query OK, 0 rows affected (0,09 sec)
mysql> insert into test (`int_id`, `float`) VALUES (1,2);
Query OK, 1 row affected (0,00 sec)
mysql> select * from test;
+--------+--------------+-------+----------------+
| int_id | int_nullable | float | float_nullable |
+--------+--------------+-------+----------------+
|      1 |         NULL |     2 |           NULL |
+--------+--------------+-------+----------------+
1 row in set (0,00 sec)
```
Creating a table in ClickHouse server and selecting data from it:
sql
CREATE TABLE jdbc_table
(
`int_id` Int32,
`int_nullable` Nullable(Int32),
`float` Float32,
`float_nullable` Nullable(Float32)
)
ENGINE JDBC('jdbc:mysql://localhost:3306/?user=root&password=root', 'test', 'test')
sql
SELECT *
FROM jdbc_table
text
ββint_idββ¬βint_nullableββ¬βfloatββ¬βfloat_nullableββ
β 1 β α΄Ία΅α΄Έα΄Έ β 2 β α΄Ία΅α΄Έα΄Έ β
ββββββββββ΄βββββββββββββββ΄ββββββββ΄βββββββββββββββββ | {"source_file": "jdbc.md"} | [
-0.011210539378225803,
-0.06764465570449829,
-0.10718517005443573,
0.03631041944026947,
-0.11828053742647171,
-0.007100324612110853,
0.028947647660970688,
0.012392166070640087,
-0.06138858199119568,
-0.024075889959931374,
-0.03469725325703621,
-0.011920401826500893,
0.07116466015577316,
-0... |
3e16d0d2-cb0d-479b-af81-5c1504d59b42 | sql
SELECT *
FROM jdbc_table
text
ββint_idββ¬βint_nullableββ¬βfloatββ¬βfloat_nullableββ
β 1 β α΄Ία΅α΄Έα΄Έ β 2 β α΄Ία΅α΄Έα΄Έ β
ββββββββββ΄βββββββββββββββ΄ββββββββ΄βββββββββββββββββ
sql
INSERT INTO jdbc_table(`int_id`, `float`)
SELECT toInt32(number), toFloat32(number * 1.0)
FROM system.numbers
See also {#see-also}
JDBC table function
. | {"source_file": "jdbc.md"} | [
0.033968228846788406,
-0.022906113415956497,
-0.09786845743656158,
0.01315756794065237,
-0.07707367092370987,
-0.005167101044207811,
0.10978186875581741,
0.07073705643415451,
-0.05651380121707916,
-0.050944674760103226,
-0.036930300295352936,
-0.08068065345287323,
0.03724435716867447,
-0.0... |
ad18621a-da77-4551-b02a-46218368518e | description: 'This engine allows integrating ClickHouse with NATS to publish or subscribe
to message subjects, and process new messages as they become available.'
sidebar_label: 'NATS'
sidebar_position: 140
slug: /engines/table-engines/integrations/nats
title: 'NATS table engine'
doc_type: 'guide'
NATS table engine {#redisstreams-engine}
This engine allows integrating ClickHouse with
NATS
.
NATS
lets you:
Publish or subscribe to message subjects.
Process new messages as they become available.
Creating a table {#creating-a-table}
sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
...
) ENGINE = NATS SETTINGS
nats_url = 'host:port',
nats_subjects = 'subject1,subject2,...',
nats_format = 'data_format'[,]
[nats_schema = '',]
[nats_num_consumers = N,]
[nats_queue_group = 'group_name',]
[nats_secure = false,]
[nats_max_reconnect = N,]
[nats_reconnect_wait = N,]
[nats_server_list = 'host1:port1,host2:port2,...',]
[nats_skip_broken_messages = N,]
[nats_max_block_size = N,]
[nats_flush_interval_ms = N,]
[nats_username = 'user',]
[nats_password = 'password',]
[nats_token = 'clickhouse',]
[nats_credential_file = '/var/nats_credentials',]
[nats_startup_connect_tries = '5']
[nats_max_rows_per_message = 1,]
[nats_handle_error_mode = 'default']
Required parameters:
nats_url
β host:port (for example,
localhost:5672
).
nats_subjects
β List of subjects for the NATS table to subscribe/publish to. Supports wildcard subjects like
foo.*.bar
or
baz.>
nats_format
β Message format. Uses the same notation as the SQL
FORMAT
function, such as
JSONEachRow
. For more information, see the
Formats
section.
Optional parameters:
nats_schema
β Parameter that must be used if the format requires a schema definition. For example,
Cap'n Proto
requires the path to the schema file and the name of the root
schema.capnp:Message
object.
nats_stream
β The name of an existing stream in NATS JetStream.
nats_consumer
β The name of an existing durable pull consumer in NATS JetStream.
nats_num_consumers
β The number of consumers per table. Default:
1
. Specify more consumers if the throughput of one consumer is insufficient (applies to NATS core only).
nats_queue_group
β Name for queue group of NATS subscribers. Default is the table name.
nats_max_reconnect
β Deprecated and has no effect, reconnect is performed permanently with nats_reconnect_wait timeout.
nats_reconnect_wait
β Amount of time in milliseconds to sleep between each reconnect attempt. Default:
5000
.
nats_server_list
- Server list for connection. Can be specified to connect to NATS cluster. | {"source_file": "nats.md"} | [
0.06803347170352936,
-0.07246478646993637,
-0.08040907233953476,
0.09083380550146103,
-0.00020783647778443992,
-0.04779043421149254,
0.004560594446957111,
-0.028298910707235336,
-0.10324736684560776,
0.044248051941394806,
0.011012519709765911,
-0.08913794904947281,
0.067692331969738,
-0.00... |
d6d8834f-d5ce-437a-a5f9-3157acc97378 | nats_server_list
- Server list for connection. Can be specified to connect to NATS cluster.
nats_skip_broken_messages
- NATS message parser tolerance to schema-incompatible messages per block. Default:
0
. If
nats_skip_broken_messages = N
then the engine skips
N
NATS messages that cannot be parsed (a message equals a row of data).
nats_max_block_size
- Number of rows collected by poll(s) for flushing data from NATS. Default:
max_insert_block_size
.
nats_flush_interval_ms
- Timeout for flushing data read from NATS. Default:
stream_flush_interval_ms
.
nats_username
- NATS username.
nats_password
- NATS password.
nats_token
- NATS auth token.
nats_credential_file
- Path to a NATS credentials file.
nats_startup_connect_tries
- Number of connect tries at startup. Default:
5
.
nats_max_rows_per_message
β The maximum number of rows written in one NATS message for row-based formats (default:
1
).
nats_handle_error_mode
β How to handle errors for NATS engine. Possible values: default (the exception will be thrown if we fail to parse a message), stream (the exception message and raw message will be saved in virtual columns
_error
and
_raw_message
).
SSL connection:
For secure connection use
nats_secure = 1
.
By default, the underlying library does not check whether the created TLS connection is sufficiently secure. Whether the certificate is expired, self-signed, missing, or invalid, the connection is simply permitted. Stricter certificate checking may be implemented in the future.
Writing to NATS table:
If the table reads from only one subject, any insert will publish to that same subject.
However, if the table reads from multiple subjects, you need to specify which subject to publish to.
That is why, when inserting into a table with multiple subjects, the setting
stream_like_engine_insert_queue
is required.
You can select one of the subjects the table reads from and publish your data there. For example:
```sql
CREATE TABLE queue (
key UInt64,
value UInt64
) ENGINE = NATS
SETTINGS nats_url = 'localhost:4444',
nats_subjects = 'subject1,subject2',
nats_format = 'JSONEachRow';
INSERT INTO queue
SETTINGS stream_like_engine_insert_queue = 'subject2'
VALUES (1, 1);
```
Format settings can also be added along with the NATS-related settings.
Example:
sql
CREATE TABLE queue (
key UInt64,
value UInt64,
date DateTime
) ENGINE = NATS
SETTINGS nats_url = 'localhost:4444',
nats_subjects = 'subject1',
nats_format = 'JSONEachRow',
date_time_input_format = 'best_effort';
The NATS server configuration can be added using the ClickHouse config file.
More specifically, you can add a username, password, and token for the NATS engine:
xml
<nats>
<user>click</user>
<password>house</password>
<token>clickhouse</token>
</nats>
Description {#description} | {"source_file": "nats.md"} | [
0.029453737661242485,
-0.017923882231116295,
-0.07796461880207062,
0.06000902131199837,
0.012802531942725182,
-0.012931608594954014,
-0.049832217395305634,
-0.003793607233092189,
-0.11910293996334076,
0.026247631758451462,
-0.0018010176718235016,
-0.024868199601769447,
-0.025402428582310677,... |
68fff2ec-bbc1-410f-b5b2-d50c3a49210f | xml
<nats>
<user>click</user>
<password>house</password>
<token>clickhouse</token>
</nats>
Description {#description}
SELECT
is not particularly useful for reading messages (except for debugging), because each message can be read only once. It is more practical to create real-time threads using
materialized views
. To do this:
Use the engine to create a NATS consumer and consider it a data stream.
Create a table with the desired structure.
Create a materialized view that converts data from the engine and puts it into a previously created table.
When the
MATERIALIZED VIEW
joins the engine, it starts collecting data in the background. This allows you to continually receive messages from NATS and convert them to the required format using
SELECT
.
One NATS table can have as many materialized views as you like. They do not read data from the table directly, but receive new records (in blocks), so you can write to several tables with different levels of detail (with grouping/aggregation and without).
Example:
```sql
CREATE TABLE queue (
key UInt64,
value UInt64
) ENGINE = NATS
SETTINGS nats_url = 'localhost:4444',
nats_subjects = 'subject1',
nats_format = 'JSONEachRow',
date_time_input_format = 'best_effort';
CREATE TABLE daily (key UInt64, value UInt64)
ENGINE = MergeTree() ORDER BY key;
CREATE MATERIALIZED VIEW consumer TO daily
AS SELECT key, value FROM queue;
SELECT key, value FROM daily ORDER BY key;
```
To stop receiving stream data or to change the conversion logic, detach the materialized view:
sql
DETACH TABLE consumer;
ATTACH TABLE consumer;
If you want to change the target table by using
ALTER
, we recommend disabling the materialized view to avoid discrepancies between the target table and the data from the view.
Virtual columns {#virtual-columns}
_subject
- NATS message subject. Data type:
String
.
Additional virtual columns when
nats_handle_error_mode='stream'
:
_raw_message
- Raw message that couldn't be parsed successfully. Data type:
Nullable(String)
.
_error
- Exception message happened during failed parsing. Data type:
Nullable(String)
.
Note:
_raw_message
and
_error
virtual columns are filled only in case of an exception during parsing; they are always
NULL
when the message was parsed successfully.
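For instance, parse failures could be inspected with a query along these lines (assuming a NATS table named queue created with nats_handle_error_mode='stream'):

```sql
-- Rows with a non-NULL _error failed parsing; _raw_message holds the payload
SELECT _subject, _error, _raw_message
FROM queue
WHERE _error IS NOT NULL;
```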
Data formats support {#data-formats-support}
NATS engine supports all
formats
supported in ClickHouse.
The number of rows in one NATS message depends on whether the format is row-based or block-based:
For row-based formats the number of rows in one NATS message can be controlled by setting
nats_max_rows_per_message
.
For block-based formats we cannot divide block into smaller parts, but the number of rows in one block can be controlled by general setting
max_block_size
.
Using JetStream {#using-jetstream} | {"source_file": "nats.md"} | [
0.009756086394190788,
-0.023912791162729263,
-0.04989335685968399,
0.06028858572244644,
-0.059873614460229874,
-0.09258529543876648,
0.029948348179459572,
-0.03745582699775696,
-0.018706317991018295,
0.021727340295910835,
-0.022244272753596306,
-0.09224750101566315,
0.029294710606336594,
-... |
799d6038-ee3e-4cbd-946d-4bed4deac6a4 | For block-based formats we cannot divide block into smaller parts, but the number of rows in one block can be controlled by general setting
max_block_size
.
Using JetStream {#using-jetstream}
Before using NATS engine with NATS JetStream, you must create a NATS stream and a durable pull consumer. For this, you can use, for example, the nats utility from the
NATS CLI
package:
creating stream
```bash
$ nats stream add
? Stream Name stream_name
? Subjects stream_subject
? Storage file
? Replication 1
? Retention Policy Limits
? Discard Policy Old
? Stream Messages Limit -1
? Per Subject Messages Limit -1
? Total Stream Size -1
? Message TTL -1
? Max Message Size -1
? Duplicate tracking time window 2m0s
? Allow message Roll-ups No
? Allow message deletion Yes
? Allow purging subjects or the entire stream Yes
Stream stream_name was created
Information for Stream stream_name created 2025-10-03 14:12:51
Subjects: stream_subject
Replicas: 1
Storage: File
Options:
Retention: Limits
Acknowledgments: true
Discard Policy: Old
Duplicate Window: 2m0s
Direct Get: true
Allows Msg Delete: true
Allows Purge: true
Allows Per-Message TTL: false
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: unlimited
Maximum Message Size: unlimited
Maximum Consumers: unlimited
State:
Messages: 0
Bytes: 0 B
First Sequence: 0
Last Sequence: 0
Active Consumers: 0
```
creating durable pull consumer
```bash
$ nats consumer add
? Select a Stream stream_name
? Consumer name consumer_name
? Delivery target (empty for Pull Consumers)
? Start policy (all, new, last, subject, 1h, msg sequence) all
? Acknowledgment policy explicit
? Replay policy instant
? Filter Stream by subjects (blank for all)
? Maximum Allowed Deliveries -1
? Maximum Acknowledgments Pending 0
? Deliver headers only without bodies No
? Add a Retry Backoff Policy No
Information for Consumer stream_name > consumer_name created 2025-10-03T14:13:51+03:00
Configuration:
Name: consumer_name
Pull Mode: true
Deliver Policy: All
Ack Policy: Explicit
Ack Wait: 30.00s
Replay Policy: Instant
Max Ack Pending: 1,000
Max Waiting Pulls: 512
State:
Last Delivered Message: Consumer sequence: 0 Stream sequence: 0
Acknowledgment Floor: Consumer sequence: 0 Stream sequence: 0
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Waiting Pulls: 0 of maximum 512
``` | {"source_file": "nats.md"} | [
-0.02950384095311165,
-0.0018374084029346704,
-0.05560627952218056,
-0.04028281942009926,
0.0461248941719532,
-0.008640081621706486,
-0.03456742316484451,
-0.003343199845403433,
-0.012243988923728466,
0.00878225825726986,
-0.06868388503789902,
-0.0021299412474036217,
-0.07863930612802505,
... |
c75f4e22-2c7c-4ada-8fbe-26ff8b490c30 | After creating the stream and the durable pull consumer, you can create a table with the NATS engine. To do this, set nats_stream, nats_consumer_name, and nats_subjects:
sql
CREATE TABLE nats_jet_stream (
key UInt64,
value UInt64
) ENGINE NATS
SETTINGS nats_url = 'localhost:4222',
nats_stream = 'stream_name',
nats_consumer_name = 'consumer_name',
nats_subjects = 'stream_subject',
nats_format = 'JSONEachRow'; | {"source_file": "nats.md"} | [
-0.008418579585850239,
-0.055260274559259415,
-0.12241693586111069,
-0.004647708032280207,
-0.055399734526872635,
0.025974037125706673,
-0.04522489383816719,
0.03228241205215454,
-0.14174769818782806,
-0.048128847032785416,
-0.01835670694708824,
-0.13543252646923065,
-0.0768098533153534,
0... |