id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
1e577607-029c-4359-9ec3-32b59715613e | | TableSchema.TypeName.DATE | Schema.TypeName#DATETIME | ✔ | |
| TableSchema.TypeName.DATETIME | Schema.TypeName#DATETIME | ✔ | |
| TableSchema.TypeName.ARRAY | Schema.TypeName#ARRAY | ✔ | |
| TableSchema.TypeName.ENUM8 | Schema.TypeName#STRING | ✔ | |
| TableSchema.TypeName.ENUM16 | Schema.TypeName#STRING | ✔ | |
| TableSchema.TypeName.BOOL | Schema.TypeName#BOOLEAN | ✔ | |
| TableSchema.TypeName.TUPLE | Schema.TypeName#ROW | ✔ | |
| TableSchema.TypeName.FIXEDSTRING | FixedBytes | ✔ | FixedBytes is a LogicalType representing a fixed-length byte array located at org.apache.beam.sdk.schemas.logicaltypes |
| | Schema.TypeName#DECIMAL | ✔ | |
| | Schema.TypeName#MAP | ✔ | | | {"source_file": "apache-beam.md"} | [
0.021992767229676247,
-0.042072270065546036,
-0.0590108223259449,
0.03379325568675995,
-0.01022745668888092,
-0.039416320621967316,
0.04808218404650688,
-0.008868043310940266,
-0.03737840801477432,
0.04915380850434303,
0.0011412587482482195,
-0.02446327544748783,
-0.03950037062168121,
-0.0... |
95580004-8409-4b32-aac5-accf29bcc7f1 | ClickHouseIO.Write parameters {#clickhouseiowrite-parameters}
You can adjust the
ClickHouseIO.Write
configuration with the following setter functions:
| Parameter Setter Function | Argument Type | Default Value | Description |
|-----------------------------|-----------------------------|-------------------------------|-----------------------------------------------------------------|
| withMaxInsertBlockSize | (long maxInsertBlockSize) | 1000000 | Maximum size of a block of rows to insert. |
| withMaxRetries | (int maxRetries) | 5 | Maximum number of retries for failed inserts. |
| withMaxCumulativeBackoff | (Duration maxBackoff) | Duration.standardDays(1000) | Maximum cumulative backoff duration for retries. |
| withInitialBackoff | (Duration initialBackoff) | Duration.standardSeconds(5) | Initial backoff duration before the first retry. |
| withInsertDistributedSync | (Boolean sync) | true | If true, synchronizes insert operations for distributed tables. |
| withInsertQuorum | (Long quorum) | null | The number of replicas required to confirm an insert operation. |
| withInsertDeduplicate | (Boolean deduplicate) | true | If true, deduplication is enabled for insert operations. |
| withTableSchema | (TableSchema schema) | null | Schema of the target ClickHouse table. |
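Taken together, the retry settings define a backoff schedule: retries start at the initial backoff and grow until either the retry count or the cumulative backoff budget runs out. A minimal Python sketch of that interaction (the doubling factor and function name are assumptions for illustration; Beam's actual backoff also adds jitter):

```python
def retry_delays(initial_s=5.0, max_retries=5, max_cumulative_s=1000 * 24 * 3600, factor=2.0):
    """Illustrative schedule for withInitialBackoff / withMaxRetries /
    withMaxCumulativeBackoff. Stops when either the retry count or the
    cumulative backoff budget would be exceeded."""
    delays, total, delay = [], 0.0, initial_s
    for _ in range(max_retries):
        if total + delay > max_cumulative_s:
            break  # cumulative backoff budget exhausted
        delays.append(delay)
        total += delay
        delay *= factor  # exponential growth (factor is an assumption)
    return delays
```

With the defaults above this yields delays of 5, 10, 20, 40 and 80 seconds; lowering the cumulative budget cuts the schedule short.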
Limitations {#limitations}
Please consider the following limitations when using the connector:
* As of today, only the Sink operation is supported; the connector doesn't support the Source operation.
* ClickHouse performs deduplication when inserting into a ReplicatedMergeTree or a Distributed table built on top of a ReplicatedMergeTree. Without replication, inserting into a regular MergeTree can result in duplicates if an insert fails and is then successfully retried. However, each block is inserted atomically, and the block size can be configured using ClickHouseIO.Write.withMaxInsertBlockSize(long). Deduplication is achieved by using checksums of the inserted blocks. For more information about deduplication, please visit Deduplication and Deduplicate insertion config.
* The connector doesn't perform any DDL statements; therefore, the target table must exist prior to insertion.
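The interplay of block size and deduplication described above can be modeled in a few lines: rows are split into blocks of at most the configured size, each block is inserted atomically, and replicated tables recognize a retried block by its checksum. This is an illustrative Python sketch, not the actual ClickHouse implementation (which checksums the binary block data, not a `repr`):

```python
import hashlib

def split_into_blocks(rows, max_block_size):
    """Rows are inserted in blocks of at most max_block_size
    (cf. ClickHouseIO.Write.withMaxInsertBlockSize); each block is atomic."""
    for i in range(0, len(rows), max_block_size):
        yield rows[i:i + max_block_size]

def block_checksum(block):
    """Replicated tables deduplicate retried inserts by block checksum.
    Hashing repr(block) here is a simplification for illustration."""
    return hashlib.sha256(repr(block).encode()).hexdigest()
```

A retried block produces the same checksum as the original attempt, which is what lets a ReplicatedMergeTree drop the duplicate.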
Related content {#related-content}
ClickHouseIO class documentation.
Github repository of examples: clickhouse-beam-connector. | {"source_file": "apache-beam.md"} | [
0.009008467197418213,
-0.04448600113391876,
-0.09580596536397934,
0.00906286109238863,
-0.1192404106259346,
-0.008272339589893818,
-0.06266602128744125,
0.028793878853321075,
-0.06569357961416245,
0.04141838476061821,
0.0007950008730404079,
-0.04465995356440544,
0.03491121903061867,
-0.084... |
bf2d4d3b-24c8-4ae7-ad9d-a18d52823e7b | sidebar_label: 'Airbyte'
sidebar_position: 11
keywords: ['clickhouse', 'Airbyte', 'connect', 'integrate', 'etl', 'data integration']
slug: /integrations/airbyte
description: 'Stream data into ClickHouse using Airbyte data pipelines'
title: 'Connect Airbyte to ClickHouse'
doc_type: 'guide'
integration:
- support_level: 'community'
- category: 'data_ingestion'
- website: 'https://airbyte.com/'
import Image from '@theme/IdealImage';
import airbyte01 from '@site/static/images/integrations/data-ingestion/etl-tools/airbyte_01.png';
import airbyte02 from '@site/static/images/integrations/data-ingestion/etl-tools/airbyte_02.png';
import airbyte03 from '@site/static/images/integrations/data-ingestion/etl-tools/airbyte_03.png';
import airbyte04 from '@site/static/images/integrations/data-ingestion/etl-tools/airbyte_04.png';
import airbyte05 from '@site/static/images/integrations/data-ingestion/etl-tools/airbyte_05.png';
import airbyte06 from '@site/static/images/integrations/data-ingestion/etl-tools/airbyte_06.png';
import airbyte07 from '@site/static/images/integrations/data-ingestion/etl-tools/airbyte_07.png';
import airbyte08 from '@site/static/images/integrations/data-ingestion/etl-tools/airbyte_08.png';
import airbyte09 from '@site/static/images/integrations/data-ingestion/etl-tools/airbyte_09.png';
import PartnerBadge from '@theme/badges/PartnerBadge';
Connect Airbyte to ClickHouse
:::note
Please note that the Airbyte source and destination for ClickHouse are currently in Alpha status and not suitable for moving large datasets (> 10 million rows)
:::
Airbyte is an open-source data integration platform. It allows the creation of ELT data pipelines and is shipped with more than 140 out-of-the-box connectors. This step-by-step tutorial shows how to connect Airbyte to ClickHouse as a destination and load a sample dataset.
Download and run Airbyte {#1-download-and-run-airbyte}
Airbyte runs on Docker and uses docker-compose. Make sure to download and install the latest versions of Docker.
Deploy Airbyte by cloning the official Github repository and running docker-compose up in your favorite terminal:
```bash
git clone https://github.com/airbytehq/airbyte.git --depth=1
cd airbyte
./run-ab-platform.sh
```
Once you see the Airbyte banner in your terminal, you can connect to localhost:8000.
:::note
Alternatively, you can sign up and use <a href="https://docs.airbyte.com/deploying-airbyte/on-cloud" target="_blank">Airbyte Cloud</a>
:::
Add ClickHouse as a destination {#2-add-clickhouse-as-a-destination}
In this section, we will show how to add a ClickHouse instance as a destination.
Start your ClickHouse server (Airbyte is compatible with ClickHouse version 21.8.10.19 or above) or log in to your ClickHouse Cloud account:
```bash
clickhouse-server start
```
Within Airbyte, select the "Destinations" page and add a new destination: | {"source_file": "airbyte-and-clickhouse.md"} | [
-0.013254692777991295,
0.006025887094438076,
-0.021821537986397743,
0.0268352460116148,
0.06593573093414307,
-0.11573758721351624,
0.06444766372442245,
0.024701813235878944,
-0.09064522385597229,
-0.03809539973735809,
0.06264705210924149,
-0.043577972799539566,
0.02807989902794361,
-0.0570... |
f29f76ab-dd22-4a3c-a252-cfc7a8c25448 | ```bash
clickhouse-server start
```
Within Airbyte, select the "Destinations" page and add a new destination:
Select ClickHouse from the "Destination type" drop-down list, and fill out the "Set up the destination" form by providing your ClickHouse hostname and ports, database name, username and password, and select whether it's an SSL connection (equivalent to the --secure flag in clickhouse-client):
Congratulations! You have now added ClickHouse as a destination in Airbyte.
:::note
In order to use ClickHouse as a destination, the user you'll use needs permissions to create databases and tables and to insert rows. We recommend creating a dedicated user for Airbyte (e.g. my_airbyte_user) with the following permissions:
```sql
CREATE USER my_airbyte_user IDENTIFIED BY 'your_password_here';
GRANT CREATE ON * TO my_airbyte_user;
```
:::
Add a dataset as a source {#3-add-a-dataset-as-a-source}
The example dataset we will use is the New York City Taxi Data (on Github). For this tutorial, we will use a subset of this dataset which corresponds to the month of Jan 2022.
Within Airbyte, select the "Sources" page and add a new source of type file.
Fill out the "Set up the source" form by naming the source and providing the URL of the NYC Taxi Jan 2022 file (see below). Make sure to pick parquet as file format, HTTPS Public Web as Storage Provider and nyc_taxi_2022 as Dataset Name.
```text
https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2022-01.parquet
```
Congratulations! You have now added a source file in Airbyte.
Create a connection and load the dataset into ClickHouse {#4-create-a-connection-and-load-the-dataset-into-clickhouse}
Within Airbyte, select the "Connections" page and add a new connection.
Select "Use existing source" and pick the New York City Taxi Data, then select "Use existing destination" and pick your ClickHouse instance.
Fill out the "Set up the connection" form by choosing a Replication Frequency (we will use manual for this tutorial) and select nyc_taxi_2022 as the stream you want to sync. Make sure you pick Normalized Tabular Data as the Normalization.
Now that the connection is created, click on "Sync now" to trigger the data loading (since we picked Manual as the Replication Frequency).
Your data will start loading; you can expand the view to see Airbyte logs and progress. Once the operation finishes, you'll see a Completed successfully message in the logs:
Connect to your ClickHouse instance using your preferred SQL Client and check the resulting table:
```sql
SELECT *
FROM nyc_taxi_2022
LIMIT 10
```
The response should look like:
```response
Query id: 4f79c106-fe49-4145-8eba-15e1cb36d325 | {"source_file": "airbyte-and-clickhouse.md"} | [
0.054360777139663696,
-0.12845411896705627,
-0.14184993505477905,
0.029895834624767303,
-0.07963622361421585,
-0.07379651069641113,
0.08897415548563004,
-0.07052730023860931,
-0.09586327522993088,
-0.016574060544371605,
-0.022709239274263382,
-0.03074869140982628,
0.05318865925073624,
-0.0... |
d2701531-a0b2-4dbb-9a81-40d6fa4f6242 | ┌─extra─┬─mta_tax─┬─VendorID─┬─RatecodeID─┬─tip_amount─┬─airport_fee─┬─fare_amount─┬─DOLocationID─┬─PULocationID─┬─payment_type─┬─tolls_amount─┬─total_amount─┬─trip_distance─┬─passenger_count─┬─store_and_fwd_flag─┬─congestion_surcharge─┬─tpep_pickup_datetime─┬─improvement_surcharge─┬─tpep_dropoff_datetime─┬─_airbyte_ab_id───────────────────────┬─_airbyte_emitted_at─────┬─_airbyte_normalized_at─┬─_airbyte_nyc_taxi_2022_hashid────┐
│ 0 │ 0.5 │ 2 │ 1 │ 2.03 │ 0 │ 17 │ 41 │ 162 │ 1 │ 0 │ 22.33 │ 4.25 │ 3 │ N │ 2.5 │ 2022-01-24T16:02:27 │ 0.3 │ 2022-01-24T16:22:23 │ 000022a5-3f14-4217-9938-5657f9041c8a │ 2022-07-19 04:35:31.000 │ 2022-07-19 04:39:20 │ 91F83E2A3AF3CA79E27BD5019FA7EC94 │
│ 3 │ 0.5 │ 1 │ 1 │ 1.75 │ 0 │ 5 │ 186 │ 246 │ 1 │ 0 │ 10.55 │ 0.9 │ 1 │ N │ 2.5 │ 2022-01-22T23:23:05 │ 0.3 │ 2022-01-22T23:27:03 │ 000036b6-1c6a-493b-b585-4713e433b9cd │ 2022-07-19 04:34:53.000 │ 2022-07-19 04:39:20 │ 5522F328014A7234E23F9FC5FA78FA66 │
│ 0 │ 0.5 │ 2 │ 1 │ 7.62 │ 1.25 │ 27 │ 238 │ 70 │ 1 │ 6.55 │ 45.72 │ 9.16 │ 1 │ N │ 2.5 │ 2022-01-22T19:20:37 │ 0.3 │ 2022-01-22T19:40:51 │ 00003c6d-78ad-4288-a79d-00a62d3ca3c5 │ 2022-07-19 04:34:46.000 │ 2022-07-19 04:39:20 │ 449743975782E613109CEE448AFA0AB3 │
│ 0.5 │ 0.5 │ 2 │ 1 │ 0 │ 0 │ 9.5 │ 234 │ 249 │ 1 │ 0 │ 13.3 │ 1.5 │ 1 │ N │ 2.5 │ 2022-01-22T20:13:39 │ 0.3 │ 2022-01-22T20:26:40 │ 000042f6-6f61-498b-85b9-989eaf8b264b │ 2022-07-19 04:34:47.000 │ 2022-07-19 04:39:20 │ 01771AF57922D1279096E5FFE1BD104A │
│ 0 │ 0 │ 2 │ 5 │ 5 │ 0 │ 60 │ 265 │ 90 │ 1 │ 0 │ 65.3 │ 5.59 │ 1 │ N │ 0 │ 2022-01-25T09:28:36 │ 0.3 │ 2022-01-25T09:47:16 │ 00004c25-53a4-4cd4-b012-a34dbc128aeb │ 2022-07-19 04:35:46.000 │ 2022-07-19 04:39:20 │ CDA4831B683D10A7770EB492CC772029 │ | {"source_file": "airbyte-and-clickhouse.md"} | [
-0.017292914912104607,
0.0032443348318338394,
-0.0068903896026313305,
0.0433492586016655,
0.03701551631093025,
-0.08792717754840851,
0.09641469269990921,
-0.03495354577898979,
-0.06171351298689842,
0.0609406940639019,
0.08121223747730255,
-0.05203484371304512,
0.009997065179049969,
-0.0118... |
e2cb8b33-79ee-4793-aec5-40517c0fbcfe | │ 0 │ 0.5 │ 2 │ 1 │ 0 │ 0 │ 11.5 │ 68 │ 170 │ 2 │ 0 │ 14.8 │ 2.2 │ 1 │ N │ 2.5 │ 2022-01-25T13:19:26 │ 0.3 │ 2022-01-25T13:36:19 │ 00005c75-c3c8-440c-a8e8-b1bd2b7b7425 │ 2022-07-19 04:35:52.000 │ 2022-07-19 04:39:20 │ 24D75D8AADD488840D78EA658EBDFB41 │
│ 2.5 │ 0.5 │ 1 │ 1 │ 0.88 │ 0 │ 5.5 │ 79 │ 137 │ 1 │ 0 │ 9.68 │ 1.1 │ 1 │ N │ 2.5 │ 2022-01-22T15:45:09 │ 0.3 │ 2022-01-22T15:50:16 │ 0000acc3-e64f-4b58-8e15-dc47ff1685f3 │ 2022-07-19 04:34:37.000 │ 2022-07-19 04:39:20 │ 2BB5B8E849A438E08F7FCF789E7D7E65 │
│ 1.75 │ 0.5 │ 1 │ 1 │ 7.5 │ 1.25 │ 27.5 │ 17 │ 138 │ 1 │ 0 │ 37.55 │ 9 │ 1 │ N │ 0 │ 2022-01-30T21:58:19 │ 0.3 │ 2022-01-30T22:19:30 │ 0000b339-b44b-40b0-99f8-ebbf2092cc5b │ 2022-07-19 04:38:10.000 │ 2022-07-19 04:39:20 │ DCCE79199EF9217CD769EFD5271302FE │
│ 0.5 │ 0.5 │ 2 │ 1 │ 0 │ 0 │ 13 │ 79 │ 140 │ 2 │ 0 │ 16.8 │ 3.19 │ 1 │ N │ 2.5 │ 2022-01-26T20:43:14 │ 0.3 │ 2022-01-26T20:58:08 │ 0000caa8-d46a-4682-bd25-38b2b0b9300b │ 2022-07-19 04:36:36.000 │ 2022-07-19 04:39:20 │ F502BE51809AF36582561B2D037B4DDC │
│ 0 │ 0.5 │ 2 │ 1 │ 1.76 │ 0 │ 5.5 │ 141 │ 237 │ 1 │ 0 │ 10.56 │ 0.72 │ 2 │ N │ 2.5 │ 2022-01-27T15:19:54 │ 0.3 │ 2022-01-27T15:26:23 │ 0000cd63-c71f-4eb9-9c27-09f402fddc76 │ 2022-07-19 04:36:55.000 │ 2022-07-19 04:39:20 │ 8612CDB63E13D70C1D8B34351A7CA00D │
└───────┴─────────┴──────────┴────────────┴────────────┴─────────────┴─────────────┴──────────────┴──────────────┴──────────────┴──────────────┴──────────────┴───────────────┴─────────────────┴────────────────────┴──────────────────────┴──────────────────────┴───────────────────────┴───────────────────────┴──────────────────────────────────────┴─────────────────────────┴────────────────────────┴──────────────────────────────────┘
``` | {"source_file": "airbyte-and-clickhouse.md"} | [
-0.04508303850889206,
0.009560414589941502,
-0.02614755928516388,
-0.02733747847378254,
-0.00700609665364027,
-0.05994327738881111,
0.03448384627699852,
-0.05861856788396835,
-0.03581872209906578,
0.087860107421875,
0.03122199885547161,
-0.04709478095173836,
0.0752841979265213,
0.000036325... |
00967701-fa41-4a12-be3e-52a87ed38638 | ```sql
SELECT count(*)
FROM nyc_taxi_2022
```
The response is:
```response
Query id: a9172d39-50f7-421e-8330-296de0baa67e
┌─count()─┐
│ 2392428 │
└─────────┘
```
Notice that Airbyte automatically inferred the data types and added 4 columns to the destination table. These columns are used by Airbyte to manage the replication logic and log the operations. More details are available in the Airbyte official documentation.
```sql
`_airbyte_ab_id` String,
`_airbyte_emitted_at` DateTime64(3, 'GMT'),
`_airbyte_normalized_at` DateTime,
`_airbyte_nyc_taxi_2022_hashid` String
```
Now that the dataset is loaded on your ClickHouse instance, you can create a new table and use more suitable ClickHouse data types (<a href="https://clickhouse.com/docs/getting-started/example-datasets/nyc-taxi/" target="_blank">more details</a>).
Congratulations - you have successfully loaded the NYC taxi data into ClickHouse using Airbyte! | {"source_file": "airbyte-and-clickhouse.md"} | [
0.023784376680850983,
-0.08771965652704239,
-0.04769934341311455,
0.11928034573793411,
-0.018250925466418266,
-0.07557345926761627,
0.09355068951845169,
-0.052583616226911545,
-0.035565584897994995,
0.028191106393933296,
0.057735465466976166,
-0.05817709490656853,
0.040481407195329666,
-0.... |
afd75133-6bd4-421a-92d5-e14265d50866 | slug: /integrations/data-formats
sidebar_label: 'Overview'
sidebar_position: 1
keywords: ['clickhouse', 'CSV', 'TSV', 'Parquet', 'clickhouse-client', 'clickhouse-local']
title: 'Importing from various data formats to ClickHouse'
description: 'Page describing how to import various data formats into ClickHouse'
show_related_blogs: true
doc_type: 'guide'
Importing from various data formats to ClickHouse
In this section of the docs, you can find examples for loading from various file types.
Binary {#binary}
Export and load binary formats such as ClickHouse Native, MessagePack, Protocol Buffers and Cap'n Proto.
CSV and TSV {#csv-and-tsv}
Import and export the CSV family, including TSV, with custom headers and separators.
JSON {#json}
Load and export JSON in various formats including as objects and line delimited NDJSON.
Parquet data {#parquet-data}
Handle common Apache formats such as Parquet and Arrow.
SQL data {#sql-data}
Need a SQL dump to import into MySQL or Postgresql? Look no further.
If you are looking to connect a BI tool like Grafana, Tableau and others, check out the Visualize category of the docs. | {"source_file": "intro.md"} | [
-0.03774150460958481,
-0.03350628912448883,
-0.07501841336488724,
-0.04965231567621231,
-0.047127943485975266,
-0.035178571939468384,
0.015644248574972153,
0.022652991116046906,
-0.08788297325372696,
-0.00041595875518396497,
-0.0006429451168514788,
0.01230128575116396,
0.02089829556643963,
... |
0fcfa01c-318c-473e-ae2d-8d79a98f1e0d | sidebar_label: 'SQL Dumps'
slug: /integrations/data-formats/sql
title: 'Inserting and dumping SQL data in ClickHouse'
description: 'Page describing how to transfer data between other databases and ClickHouse using SQL dumps.'
doc_type: 'guide'
keywords: ['sql format', 'data export', 'data import', 'backup', 'sql dumps']
Inserting and dumping SQL data in ClickHouse
ClickHouse can be easily integrated into OLTP database infrastructures in many ways. One way is to transfer data between other databases and ClickHouse using SQL dumps.
Creating SQL dumps {#creating-sql-dumps}
Data can be dumped in SQL format using SQLInsert. ClickHouse will write data in INSERT INTO <table name> VALUES(... form and use the output_format_sql_insert_table_name setting as the table name:
```sql
SET output_format_sql_insert_table_name = 'some_table';
SELECT * FROM some_data
INTO OUTFILE 'dump.sql'
FORMAT SQLInsert
```
Column names can be omitted by disabling the output_format_sql_insert_include_column_names option:
```sql
SET output_format_sql_insert_include_column_names = 0
```
Now we can feed the dump.sql file to another OLTP database:
```bash
mysql some_db < dump.sql
```
We assume that the some_table table exists in the some_db MySQL database.
Some DBMSs might have limits on how many values can be processed within a single batch. By default, ClickHouse creates batches of 65k values, but that can be changed with the output_format_sql_insert_max_batch_size option:
```sql
SET output_format_sql_insert_max_batch_size = 1000;
```
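The batching behaviour can be sketched in Python: the dump becomes a sequence of INSERT statements, each carrying at most the configured number of value tuples. This is a simplified illustration (the function name is assumed, and Python's `repr` quoting stands in for real SQL escaping):

```python
def sql_insert_batches(table, rows, max_batch_size=65536):
    """Mimic the SQLInsert output format: one INSERT per batch of at
    most max_batch_size rows (cf. output_format_sql_insert_max_batch_size).
    Quoting via repr() is a simplification for illustration."""
    for i in range(0, len(rows), max_batch_size):
        values = ", ".join(str(tuple(r)) for r in rows[i:i + max_batch_size])
        yield f"INSERT INTO {table} VALUES {values};"
```

For example, three rows with a batch size of 2 produce two INSERT statements.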
Exporting a set of values {#exporting-a-set-of-values}
ClickHouse has a Values format, which is similar to SQLInsert but omits the INSERT INTO table VALUES part and returns only a set of values:
```sql
SELECT * FROM some_data LIMIT 3 FORMAT Values
```
```response
('Bangor_City_Forest','2015-07-01',34),('Alireza_Afzal','2017-02-01',24),('Akhaura-Laksam-Chittagong_Line','2015-09-01',30)
```
Inserting data from SQL dumps {#inserting-data-from-sql-dumps}
To read SQL dumps, the MySQLDump format is used:
```sql
SELECT *
FROM file('dump.sql', MySQLDump)
LIMIT 5
```
```response
┌─path───────────────────────────┬──────month─┬─hits─┐
│ Bangor_City_Forest             │ 2015-07-01 │   34 │
│ Alireza_Afzal                  │ 2017-02-01 │   24 │
│ Akhaura-Laksam-Chittagong_Line │ 2015-09-01 │   30 │
│ 1973_National_500              │ 2017-10-01 │   80 │
│ Attachment                     │ 2017-09-01 │ 1356 │
└────────────────────────────────┴────────────┴──────┘
```
By default, ClickHouse will skip unknown columns (controlled by the input_format_skip_unknown_fields option) and process data for the first found table in a dump (in case multiple tables were dumped to a single file). DDL statements will be skipped. To load data from a MySQL dump into a table (mysql.sql file):
```sql
INSERT INTO some_data
FROM INFILE 'mysql.sql' FORMAT MySQLDump
```
We can also create a table automatically from the MySQL dump file: | {"source_file": "sql.md"} | [
0.014340736903250217,
-0.07397477328777313,
-0.07068812102079391,
0.08964390307664871,
-0.01715693436563015,
-0.035807251930236816,
0.04827483743429184,
0.04104314371943474,
-0.10511256009340286,
-0.02470375783741474,
0.03442779928445816,
0.007929390296339989,
0.08855775743722916,
-0.11890... |
762046ba-d2ba-4173-8386-449e3f7e9665 | ```sql
INSERT INTO some_data
FROM INFILE 'mysql.sql' FORMAT MySQLDump
```
We can also create a table automatically from the MySQL dump file:
```sql
CREATE TABLE table_from_mysql
ENGINE = MergeTree
ORDER BY tuple() AS
SELECT *
FROM file('mysql.sql', MySQLDump)
```
Here we've created a table named table_from_mysql based on a structure that ClickHouse automatically inferred. ClickHouse either detects types based on data or uses DDL when available:
```sql
DESCRIBE TABLE table_from_mysql;
```
```response
┌─name──┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ path  │ Nullable(String) │              │                    │         │                  │                │
│ month │ Nullable(Date32) │              │                    │         │                  │                │
│ hits  │ Nullable(UInt32) │              │                    │         │                  │                │
└───────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
Other formats {#other-formats}
ClickHouse supports many formats, both text and binary, to cover various scenarios and platforms. Explore more formats and ways to work with them in the following articles:
CSV and TSV formats
Parquet
JSON formats
Regex and templates
Native and binary formats
SQL formats
And also check clickhouse-local - a portable full-featured tool to work on local/remote files without the need for a ClickHouse server. | {"source_file": "sql.md"} | [
-0.03648098185658455,
-0.077276811003685,
-0.03327043354511261,
0.01817287877202034,
-0.032506201416254044,
-0.036408744752407074,
0.034262605011463165,
0.004559433087706566,
-0.07181311398744583,
0.05956628546118736,
0.004450605716556311,
-0.008546323515474796,
0.07847250252962112,
-0.056... |
9de9ea36-3a2e-40fe-857c-43ae52655491 | sidebar_label: 'Avro, Arrow and ORC'
sidebar_position: 5
slug: /integrations/data-formats/arrow-avro-orc
title: 'Working with Avro, Arrow, and ORC data in ClickHouse'
description: 'Page describing how to work with Avro, Arrow and ORC data in ClickHouse'
keywords: ['Apache Avro', 'Apache Arrow', 'ORC format', 'columnar formats', 'big data formats']
doc_type: 'guide'
Working with Avro, Arrow, and ORC data in ClickHouse
Apache has released multiple data formats actively used in analytics environments, including the popular Avro, Arrow, and ORC. ClickHouse supports importing and exporting data in any of them.
Importing and exporting in Avro format {#importing-and-exporting-in-avro-format}
ClickHouse supports reading and writing Apache Avro data files, which are widely used in Hadoop systems.
To import from an Avro file, we should use the Avro format in the INSERT statement:
```sql
INSERT INTO sometable
FROM INFILE 'data.avro'
FORMAT Avro
```
With the file() function, we can also explore Avro files before actually importing data:
```sql
SELECT path, hits
FROM file('data.avro', Avro)
ORDER BY hits DESC
LIMIT 5;
```
```response
┌─path────────────┬──hits─┐
│ Amy_Poehler     │ 62732 │
│ Adam_Goldberg   │ 42338 │
│ Aaron_Spelling  │ 25128 │
│ Absence_seizure │ 18152 │
│ Ammon_Bundy     │ 11890 │
└─────────────────┴───────┘
```
To export to an Avro file:
```sql
SELECT * FROM sometable
INTO OUTFILE 'export.avro'
FORMAT Avro;
```
Avro and ClickHouse data types {#avro-and-clickhouse-data-types}
Consider data types matching when importing or exporting Avro files. Use explicit type casting to convert when loading data from Avro files:
```sql
SELECT
    date,
    toDate(date)
FROM file('data.avro', Avro)
LIMIT 3;
```
```response
┌──date─┬─toDate(date)─┐
│ 16556 │   2015-05-01 │
│ 16556 │   2015-05-01 │
│ 16556 │   2015-05-01 │
└───────┴──────────────┘
```
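The raw values above are days since the Unix epoch, which is how Avro (and the ClickHouse Date type) commonly encode dates. A quick Python check of the conversion that toDate performs here:

```python
from datetime import date, timedelta

def epoch_days_to_date(days):
    """Convert a days-since-epoch integer (as stored in the Avro file)
    to a calendar date; 16556 is the value shown in the query above."""
    return date(1970, 1, 1) + timedelta(days=days)
```

For example, `epoch_days_to_date(16556)` gives 2015-05-01, matching the toDate(date) column.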
Avro messages in Kafka {#avro-messages-in-kafka}
When Kafka messages use the Avro format, ClickHouse can read such streams using the AvroConfluent format and the Kafka engine:
```sql
CREATE TABLE some_topic_stream
(
    field1 UInt32,
    field2 String
)
ENGINE = Kafka() SETTINGS
kafka_broker_list = 'localhost',
kafka_topic_list = 'some_topic',
kafka_group_name = 'some_group',
kafka_format = 'AvroConfluent';
```
Working with Arrow format {#working-with-arrow-format}
Another columnar format is Apache Arrow, also supported by ClickHouse for import and export. To import data from an Arrow file, we use the Arrow format:
```sql
INSERT INTO sometable
FROM INFILE 'data.arrow'
FORMAT Arrow
```
Exporting to an Arrow file works the same way:
```sql
SELECT * FROM sometable
INTO OUTFILE 'export.arrow'
FORMAT Arrow
```
Also, check data types matching to know if any should be converted manually.
Arrow data streaming {#arrow-data-streaming}
The ArrowStream format can be used to work with Arrow streaming (used for in-memory processing). ClickHouse can read and write Arrow streams. | {"source_file": "arrow-avro-orc.md"} | [
-0.01619654893875122,
-0.0678994208574295,
-0.09055545926094055,
-0.010374892503023148,
0.023320136591792107,
0.004238110966980457,
-0.004457520321011543,
0.008188405074179173,
-0.069797083735466,
0.03877712041139603,
0.06583519279956818,
0.02121783420443535,
0.01533287949860096,
-0.046337... |
c5169bbb-62cb-40ed-86b1-49fcbbc820de | Arrow data streaming {#arrow-data-streaming}
The ArrowStream format can be used to work with Arrow streaming (used for in-memory processing). ClickHouse can read and write Arrow streams.
To demonstrate how ClickHouse can stream Arrow data, let's pipe it to the following Python script (it reads an input stream in Arrow streaming format and outputs the result as a Pandas table):
```python
import sys, pyarrow as pa
with pa.ipc.open_stream(sys.stdin.buffer) as reader:
print(reader.read_pandas())
```
Now we can stream data from ClickHouse by piping its output to the script:
```bash
clickhouse-client -q "SELECT path, hits FROM some_data LIMIT 3 FORMAT ArrowStream" | python3 arrow.py
```
```response
                           path  hits
0       b'Akiba_Hebrew_Academy'   241
1            b'Aegithina_tiphia'    34
2  b'1971-72_Utah_Stars_season'     1
```
ClickHouse can read Arrow streams as well using the same ArrowStream format:
```bash
arrow-stream | clickhouse-client -q "INSERT INTO sometable FORMAT ArrowStream"
```
We've used arrow-stream as a possible source of Arrow streaming data.
Importing and exporting ORC data {#importing-and-exporting-orc-data}
The Apache ORC format is a columnar storage format typically used for Hadoop. ClickHouse supports importing as well as exporting ORC data using the ORC format:
```sql
SELECT *
FROM sometable
INTO OUTFILE 'data.orc'
FORMAT ORC;
INSERT INTO sometable
FROM INFILE 'data.orc'
FORMAT ORC;
```
Also, check data types matching as well as additional settings to tune export and import.
Further reading {#further-reading}
ClickHouse supports many formats, both text and binary, to cover various scenarios and platforms. Explore more formats and ways to work with them in the following articles:
CSV and TSV formats
JSON formats
Regex and templates
Native and binary formats
SQL formats
And also check clickhouse-local - a portable full-featured tool to work on local/remote files without the need for a ClickHouse server. | {"source_file": "arrow-avro-orc.md"} | [
0.03404749929904938,
-0.12241850793361664,
-0.07136117666959763,
0.010525785386562347,
-0.06962320953607559,
-0.032193563878536224,
0.035505298525094986,
-0.03871378302574158,
-0.035239093005657196,
-0.03009362891316414,
-0.011835516430437565,
0.052214983850717545,
0.007020088844001293,
-0... |
e624699b-428a-44c9-9ccb-96e6c892c6de | sidebar_label: 'Regexp and templates'
sidebar_position: 3
slug: /integrations/data-formats/templates-regexp
title: 'Importing and exporting custom text data using Templates and Regex in ClickHouse'
description: 'Page describing how to import and export custom text using templates and regex in ClickHouse'
doc_type: 'guide'
keywords: ['data formats', 'templates', 'regex', 'custom formats', 'parsing']
Importing and exporting custom text data using Templates and Regex in ClickHouse
We often have to deal with data in custom text formats. That could be a non-standard format, invalid JSON, or a broken CSV. Using standard parsers like CSV or JSON won't work in all such cases. But ClickHouse has us covered here with powerful Template and Regex formats.
Importing based on a template {#importing-based-on-a-template}
Suppose we want to import data from the following log file:
```bash
head error.log
```
```response
2023/01/15 14:51:17 [error] client: 7.2.8.1, server: example.com "GET /apple-touch-icon-120x120.png HTTP/1.1"
2023/01/16 06:02:09 [error] client: 8.4.2.7, server: example.com "GET /apple-touch-icon-120x120.png HTTP/1.1"
2023/01/15 13:46:13 [error] client: 6.9.3.7, server: example.com "GET /apple-touch-icon.png HTTP/1.1"
2023/01/16 05:34:55 [error] client: 9.9.7.6, server: example.com "GET /h5/static/cert/icon_yanzhengma.png HTTP/1.1"
```
We can use the Template format to import this data. We have to define a template string with value placeholders for each row of input data:
```response
<time> [error] client: <ip>, server: <host> "<request>"
```
Let's create a table to import our data into:
```sql
CREATE TABLE error_log
(
    `time` DateTime,
    `ip` String,
    `host` String,
    `request` String
)
ENGINE = MergeTree
ORDER BY (host, request, time)
```
To import data using the given template, we have to save our template string in a file (row.template in our case):
```response
${time:Escaped} [error] client: ${ip:CSV}, server: ${host:CSV} ${request:JSON}
```
We define a column name and an escaping rule in the ${name:escaping} format. Multiple options are available here, like CSV, JSON, Escaped, or Quoted, which implement the respective escaping rules.
Now we can use this file as an argument to the format_template_row setting while importing data (note that the template and data files should not have an extra \n symbol at the end of the file):
```sql
INSERT INTO error_log FROM INFILE 'error.log'
SETTINGS format_template_row = 'row.template'
FORMAT Template
```
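A template like row.template is conceptually close to a regular expression with named groups. As an illustrative Python equivalent of the import above (the regex and group names are assumptions for the example, not what ClickHouse runs internally):

```python
import re

# Named-group regex mirroring: <time> [error] client: <ip>, server: <host> "<request>"
LINE_RE = re.compile(
    r'^(?P<time>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) \[error\] '
    r'client: (?P<ip>[\d.]+), server: (?P<host>\S+) "(?P<request>[^"]+)"$'
)

def parse_line(line):
    """Return the extracted fields as a dict, or None if the line
    doesn't match the expected layout."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None
```

Each group corresponds to one column of the error_log table.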
And we can make sure our data was loaded into the table:
```sql
SELECT
    request,
    count(*)
FROM error_log
GROUP BY request
```
| {"source_file": "templates-regex.md"} | [
-0.031917959451675415,
0.06573007255792618,
0.03024609014391899,
-0.03276590257883072,
0.02833740971982479,
0.0194591972976923,
-0.019267044961452484,
0.02385948784649372,
-0.030869601294398308,
-0.014117726124823093,
0.008991205133497715,
0.0006411896902136505,
0.023135706782341003,
0.025... |
429678f7-0a1f-4b33-95dc-175d78740880 | And we can make sure our data was loaded into the table:
```sql
SELECT
    request,
    count(*)
FROM error_log
GROUP BY request
```
```response
┌─request──────────────────────────────────────────┬─count()─┐
│ GET /img/close.png HTTP/1.1                      │     176 │
│ GET /h5/static/cert/icon_yanzhengma.png HTTP/1.1 │     172 │
│ GET /phone/images/icon_01.png HTTP/1.1           │     139 │
│ GET /apple-touch-icon-precomposed.png HTTP/1.1   │     161 │
│ GET /apple-touch-icon.png HTTP/1.1               │     162 │
│ GET /apple-touch-icon-120x120.png HTTP/1.1       │     190 │
└──────────────────────────────────────────────────┴─────────┘
```
Skipping whitespaces {#skipping-whitespaces}
Consider using TemplateIgnoreSpaces, which allows skipping whitespaces between delimiters in a template:
```text
Template:            --> "p1: ${p1:CSV}, p2: ${p2:CSV}"
TemplateIgnoreSpaces --> "p1:${p1:CSV}, p2:${p2:CSV}"
```
Exporting data using templates {#exporting-data-using-templates}
We can also export data to any text format using templates. In this case, we have to create two files:
A result set template, which defines the layout for the whole result set:
```response
== Top 10 IPs ==
${data}
--- ${rows_read:XML} rows read in ${time:XML} ---
```
Here, rows_read and time are system metrics available for each request, while data stands for the generated rows (${data} should always come as the first placeholder in this file), based on a template defined in a row template file:
```response
${ip:Escaped} generated ${total:Escaped} requests
```
Now let's use these templates to export the following query:
```sql
SELECT
ip,
count() AS total
FROM error_log GROUP BY ip ORDER BY total DESC LIMIT 10
FORMAT Template SETTINGS format_template_resultset = 'output.results',
format_template_row = 'output.rows';
== Top 10 IPs ==
9.8.4.6 generated 3 requests
9.5.1.1 generated 3 requests
2.4.8.9 generated 3 requests
4.8.8.2 generated 3 requests
4.5.4.4 generated 3 requests
3.3.6.4 generated 2 requests
8.9.5.9 generated 2 requests
2.5.1.8 generated 2 requests
6.8.3.6 generated 2 requests
6.6.3.5 generated 2 requests
--- 1000 rows read in 0.001380604 ---
```
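To make the two-template mechanics concrete, here is a rough sketch in Python of how the row template and resultset template compose. This is illustrative only: the escaping-rule suffixes (`:Escaped`, `:XML`) are dropped, and the metric values are made up.

```python
# Illustrative only: mimic how Template-format output is assembled from
# a resultset template and a row template (escaping rules omitted).
from string import Template

resultset_tpl = "== Top 10 IPs ==\n\n${data}\n\n--- ${rows_read} rows read in ${time} ---"
row_tpl = "${ip} generated ${total} requests"

rows = [{"ip": "9.8.4.6", "total": 3}, {"ip": "9.5.1.1", "total": 3}]

# Render each row with the row template, then splice the joined rows
# into the ${data} placeholder of the resultset template
data = "\n".join(Template(row_tpl).substitute(r) for r in rows)
output = Template(resultset_tpl).substitute(data=data, rows_read=1000, time="0.001")
print(output)
```

ClickHouse performs the same substitution server-side, additionally applying the serialization rule named after each placeholder.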
Exporting to HTML files {#exporting-to-html-files}
Template-based results can also be exported to files using an
INTO OUTFILE
clause. Let's generate HTML files based on given
resultset
and
row
formats:
sql
SELECT
ip,
count() AS total
FROM error_log GROUP BY ip ORDER BY total DESC LIMIT 10
INTO OUTFILE 'out.html'
FORMAT Template
SETTINGS format_template_resultset = 'html.results',
format_template_row = 'html.row'
Exporting to XML {#exporting-to-xml}
The Template format can be used to generate any text-based file format, including XML. Just provide a suitable template and run the export.
Also consider using an
XML
format to get standard XML results including metadata:
sql
SELECT *
FROM error_log
LIMIT 3
FORMAT XML
```xml
<?xml version='1.0' encoding='UTF-8' ?>
<result>
        <meta>
                <columns>
                        <column>
                                <name>time</name>
                                <type>DateTime</type>
                        </column>
                        ...
                </columns>
        </meta>
        <data>
                <row>
                        <time>2023-01-15 13:00:01</time>
                        <ip>3.5.9.2</ip>
                        <host>example.com</host>
                        <request>GET /apple-touch-icon-120x120.png HTTP/1.1</request>
                </row>
                ...
        </data>
        <rows>3</rows>
        <rows_before_limit_at_least>1000</rows_before_limit_at_least>
        <statistics>
                <elapsed>0.000745001</elapsed>
                <rows_read>1000</rows_read>
                <bytes_read>88184</bytes_read>
        </statistics>
</result>
```
Importing data based on regular expressions {#importing-data-based-on-regular-expressions}
Regexp
format addresses more sophisticated cases when input data needs to be parsed in a more complex way. Let's parse our
error.log
example file, but capture the file name and protocol this time to save them into separate columns. First, let's prepare a new table for that:
sql
CREATE TABLE error_log
(
`time` DateTime,
`ip` String,
`host` String,
`file` String,
`protocol` String
)
ENGINE = MergeTree
ORDER BY (host, file, time)
Now we can import data based on a regular expression:
sql
INSERT INTO error_log FROM INFILE 'error.log'
SETTINGS
format_regexp = '(.+?) \\[error\\] client: (.+), server: (.+?) "GET .+?([^/]+\\.[^ ]+) (.+?)"'
FORMAT Regexp
ClickHouse will insert data from each capture group into the relevant column based on its order. Let's check the data:
sql
SELECT * FROM error_log LIMIT 5
response
βββββββββββββββββtimeββ¬βipβββββββ¬βhostβββββββββ¬βfileββββββββββββββββββββββββββ¬βprotocolββ
β 2023-01-15 13:00:01 β 3.5.9.2 β example.com β apple-touch-icon-120x120.png β HTTP/1.1 β
β 2023-01-15 13:01:40 β 3.7.2.5 β example.com β apple-touch-icon-120x120.png β HTTP/1.1 β
β 2023-01-15 13:16:49 β 9.2.9.2 β example.com β apple-touch-icon-120x120.png β HTTP/1.1 β
β 2023-01-15 13:21:38 β 8.8.5.3 β example.com β apple-touch-icon-120x120.png β HTTP/1.1 β
β 2023-01-15 13:31:27 β 9.5.8.4 β example.com β apple-touch-icon-120x120.png β HTTP/1.1 β
βββββββββββββββββββββββ΄ββββββββββ΄ββββββββββββββ΄βββββββββββββββββββββββββββββββ΄βββββββββββ
By default, ClickHouse will raise an error in case of unmatched rows. If you want to skip unmatched rows instead, enable it using
format_regexp_skip_unmatched
option:
sql
SET format_regexp_skip_unmatched = 1;
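The capture-group-to-column mapping can be checked outside ClickHouse too. Here is the same regular expression applied in Python to a hypothetical log line written in the format parsed above (the sample line is made up for illustration):

```python
import re

# Same pattern as the format_regexp value above, written as a raw string
pattern = r'(.+?) \[error\] client: (.+), server: (.+?) "GET .+?([^/]+\.[^ ]+) (.+?)"'

# Hypothetical line in the error.log format described in this guide
line = ('2023-01-15 13:00:01 [error] client: 3.5.9.2, '
        'server: example.com "GET /apple-touch-icon-120x120.png HTTP/1.1"')

m = re.search(pattern, line)
# Groups map, in order, to the time, ip, host, file and protocol columns
time_val, ip, host, file_name, protocol = m.groups()
print(file_name, protocol)
```

ClickHouse fills the table columns from these capture groups in the same left-to-right order.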
Other formats {#other-formats}
ClickHouse supports many formats, both text and binary, to cover various scenarios and platforms. Explore more formats and ways to work with them in the following articles:
CSV and TSV formats
Parquet
JSON formats
Regex and templates
Native and binary formats
SQL formats
And also check
clickhouse-local
- a portable full-featured tool to work on local/remote files without the need for a ClickHouse server.
sidebar_label: 'Parquet'
sidebar_position: 3
slug: /integrations/data-formats/parquet
title: 'Working with Parquet in ClickHouse'
description: 'Page describing how to work with Parquet in ClickHouse'
doc_type: 'guide'
keywords: ['parquet', 'columnar format', 'data format', 'compression', 'apache parquet']
Working with Parquet in ClickHouse
Parquet is an efficient file format to store data in a column-oriented way.
ClickHouse provides support for both reading and writing Parquet files.
:::tip
When you reference a file path in a query, where ClickHouse attempts to read from will depend on the variant of ClickHouse that you're using.
If you're using
clickhouse-local
it will read from a location relative to where you launched ClickHouse Local.
If you're using ClickHouse Server or ClickHouse Cloud via
clickhouse client
, it will read from a location relative to the
/var/lib/clickhouse/user_files/
directory on the server.
:::
Importing from Parquet {#importing-from-parquet}
Before loading data, we can use
file()
function to explore an
example parquet file
structure:
sql
DESCRIBE TABLE file('data.parquet', Parquet);
We've used
Parquet
as a second argument, so ClickHouse knows the file format. This will print columns with the types:
response
ββnameββ¬βtypeββββββββββββββ¬βdefault_typeββ¬βdefault_expressionββ¬βcommentββ¬βcodec_expressionββ¬βttl_expressionββ
β path β Nullable(String) β β β β β β
β date β Nullable(String) β β β β β β
β hits β Nullable(Int64) β β β β β β
ββββββββ΄βββββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββ
We can also explore files before actually importing data using all power of SQL:
sql
SELECT *
FROM file('data.parquet', Parquet)
LIMIT 3;
response
ββpathβββββββββββββββββββββββ¬βdateββββββββ¬βhitsββ
β Akiba_Hebrew_Academy β 2017-08-01 β 241 β
β Aegithina_tiphia β 2018-02-01 β 34 β
β 1971-72_Utah_Stars_season β 2016-10-01 β 1 β
βββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββ
:::tip
We can skip explicit format setting for
file()
and
INFILE
/
OUTFILE
.
In that case, ClickHouse will automatically detect format based on file extension.
:::
Importing to an existing table {#importing-to-an-existing-table}
Let's create a table into which we'll import Parquet data:
sql
CREATE TABLE sometable
(
`path` String,
`date` Date,
`hits` UInt32
)
ENGINE = MergeTree
ORDER BY (date, path);
Now we can import data using the
FROM INFILE
clause:
```sql
INSERT INTO sometable
FROM INFILE 'data.parquet' FORMAT Parquet;
SELECT *
FROM sometable
LIMIT 5;
response
ββpathβββββββββββββββββββββββββββ¬βββββββdateββ¬βhitsββ
β 1988_in_philosophy β 2015-05-01 β 70 β
β 2004_Green_Bay_Packers_season β 2015-05-01 β 970 β
β 24_hours_of_lemans β 2015-05-01 β 37 β
β 25604_Karlin β 2015-05-01 β 20 β
β ASCII_ART β 2015-05-01 β 9 β
βββββββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββ
```
Note how ClickHouse automatically converted Parquet strings (in the
date
column) to the
Date
type. This is because ClickHouse does a typecast automatically based on the types in the target table.
Inserting a local file to remote server {#inserting-a-local-file-to-remote-server}
If you want to insert a local Parquet file to a remote ClickHouse server, you can do this by piping the contents of the file into
clickhouse-client
, as shown below:
bash
clickhouse client -q "INSERT INTO sometable FORMAT Parquet" < data.parquet
Creating new tables from Parquet files {#creating-new-tables-from-parquet-files}
Since ClickHouse reads the Parquet file schema, we can create tables on the fly:
sql
CREATE TABLE imported_from_parquet
ENGINE = MergeTree
ORDER BY tuple() AS
SELECT *
FROM file('data.parquet', Parquet)
This will automatically create and populate a table from a given parquet file:
sql
DESCRIBE TABLE imported_from_parquet;
response
ββnameββ¬βtypeββββββββββββββ¬βdefault_typeββ¬βdefault_expressionββ¬βcommentββ¬βcodec_expressionββ¬βttl_expressionββ
β path β Nullable(String) β β β β β β
β date β Nullable(String) β β β β β β
β hits β Nullable(Int64) β β β β β β
ββββββββ΄βββββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββ
By default, ClickHouse is strict with column names, types, and values. But sometimes, we can skip nonexistent columns or unsupported values during import. This can be managed with
Parquet settings
.
Exporting to Parquet format {#exporting-to-parquet-format}
:::tip
When using
INTO OUTFILE
with ClickHouse Cloud you will need to run the commands in
clickhouse client
on the machine where the file will be written to.
:::
To export any table or query result to a Parquet file, we can use an
INTO OUTFILE
clause:
sql
SELECT *
FROM sometable
INTO OUTFILE 'export.parquet'
FORMAT Parquet
This will create the
export.parquet
file in a working directory.
ClickHouse and Parquet data types {#clickhouse-and-parquet-data-types}
ClickHouse and Parquet data types are mostly identical but still
differ a bit
. For example, ClickHouse will export
DateTime
type as Parquet's
int64
. If we then import that back to ClickHouse, we're going to see numbers (
time.parquet file
):
sql
SELECT * FROM file('time.parquet', Parquet);
response
ββnββ¬βββββββtimeββ
β 0 β 1673622611 β
β 1 β 1673622610 β
β 2 β 1673622609 β
β 3 β 1673622608 β
β 4 β 1673622607 β
βββββ΄βββββββββββββ
In this case
type conversion
can be used:
sql
SELECT
n,
toDateTime(time) -- int to time
FROM file('time.parquet', Parquet);
response
ββnββ¬ββββtoDateTime(time)ββ
β 0 β 2023-01-13 15:10:11 β
β 1 β 2023-01-13 15:10:10 β
β 2 β 2023-01-13 15:10:09 β
β 3 β 2023-01-13 15:10:08 β
β 4 β 2023-01-13 15:10:07 β
βββββ΄ββββββββββββββββββββββ
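The stored numbers are plain Unix timestamps, so the same conversion can be reproduced outside ClickHouse. A quick check in Python (note that `toDateTime` renders in the server timezone; UTC is assumed here):

```python
from datetime import datetime, timezone

# 1673622611 is the first value shown above; interpret it as a Unix timestamp
ts = datetime.fromtimestamp(1673622611, tz=timezone.utc)
print(ts.strftime("%Y-%m-%d %H:%M:%S"))
```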
Further reading {#further-reading}
ClickHouse supports many formats, both text and binary, to cover various scenarios and platforms. Explore more formats and ways to work with them in the following articles:
CSV and TSV formats
Avro, Arrow and ORC
JSON formats
Regex and templates
Native and binary formats
SQL formats
And also check
clickhouse-local
- a portable full-featured tool to work on local/remote files without the need for a ClickHouse server.
sidebar_label: 'CSV and TSV'
slug: /integrations/data-formats/csv-tsv
title: 'Working with CSV and TSV data in ClickHouse'
description: 'Page describing how to work with CSV and TSV data in ClickHouse'
keywords: ['CSV format', 'TSV format', 'comma separated values', 'tab separated values', 'data import']
doc_type: 'guide'
Working with CSV and TSV data in ClickHouse
ClickHouse supports importing data from and exporting to CSV. Since CSV files can come with different format specifics, including header rows, custom delimiters, and escape symbols, ClickHouse provides formats and settings to address each case efficiently.
Importing data from a CSV file {#importing-data-from-a-csv-file}
Before importing data, let's create a table with a relevant structure:
sql
CREATE TABLE sometable
(
`path` String,
`month` Date,
`hits` UInt32
)
ENGINE = MergeTree
ORDER BY tuple(month, path)
To import data from the
CSV file
to the
sometable
table, we can pipe our file directly to the clickhouse-client:
bash
clickhouse-client -q "INSERT INTO sometable FORMAT CSV" < data_small.csv
Note that we use
FORMAT CSV
to let ClickHouse know we're ingesting CSV formatted data. Alternatively, we can load data from a local file using the
FROM INFILE
clause:
sql
INSERT INTO sometable
FROM INFILE 'data_small.csv'
FORMAT CSV
Here, we use the
FORMAT CSV
clause so ClickHouse understands the file format. We can also load data directly from URLs using
url()
function or from S3 files using
s3()
function.
:::tip
We can skip explicit format setting for
file()
and
INFILE
/
OUTFILE
.
In that case, ClickHouse will automatically detect format based on file extension.
:::
CSV files with headers {#csv-files-with-headers}
Suppose our
CSV file has headers
in it:
bash
head data-small-headers.csv
response
"path","month","hits"
"Akiba_Hebrew_Academy","2017-08-01",241
"Aegithina_tiphia","2018-02-01",34
To import data from this file, we can use
CSVWithNames
format:
bash
clickhouse-client -q "INSERT INTO sometable FORMAT CSVWithNames" < data_small_headers.csv
In this case, ClickHouse skips the first row while importing data from the file.
:::tip
Starting from
version
23.1, ClickHouse will automatically detect headers in CSV files when using the
CSV
format, so it is not necessary to use
CSVWithNames
or
CSVWithNamesAndTypes
.
:::
CSV files with custom delimiters {#csv-files-with-custom-delimiters}
If the CSV file uses a delimiter other than a comma, we can use the
format_csv_delimiter
option to set the relevant symbol:
sql
SET format_csv_delimiter = ';'
Now, when we import from a CSV file,
;
symbol will be used as the delimiter instead of a comma.
Skipping lines in a CSV file {#skipping-lines-in-a-csv-file}
Sometimes, we might want to skip a certain number of lines while importing data from a CSV file. This can be done using the
input_format_csv_skip_first_lines
option:
sql
SET input_format_csv_skip_first_lines = 10
In this case, we're going to skip the first ten lines from the CSV file:
sql
SELECT count(*) FROM file('data-small.csv', CSV)
response
ββcount()ββ
β 990 β
βββββββββββ
The
file
has 1k rows, but ClickHouse loaded only 990 since we've asked to skip the first 10.
:::tip
When using the
file()
function, with ClickHouse Cloud you will need to run the commands in
clickhouse client
on the machine where the file resides. Another option is to use
clickhouse-local
to explore files locally.
:::
Treating NULL values in CSV files {#treating-null-values-in-csv-files}
Null values can be encoded differently depending on the application that generated the file. By default, ClickHouse uses
\N
as a Null value in CSV. But we can change that using the
format_csv_null_representation
option.
Suppose we have the following CSV file:
```bash
cat nulls.csv
Donald,90
Joe,Nothing
Nothing,70
```
If we load data from this file, ClickHouse will treat
Nothing
as a String (which is correct):
sql
SELECT * FROM file('nulls.csv')
response
ββc1βββββββ¬βc2βββββββ
β Donald β 90 β
β Joe β Nothing β
β Nothing β 70 β
βββββββββββ΄ββββββββββ
If we want ClickHouse to treat
Nothing
as
NULL
, we can define that using the following option:
sql
SET format_csv_null_representation = 'Nothing'
Now we have
NULL
where we expect it to be:
sql
SELECT * FROM file('nulls.csv')
response
ββc1ββββββ¬βc2ββββ
β Donald β 90 β
β Joe β α΄Ία΅α΄Έα΄Έ β
β α΄Ία΅α΄Έα΄Έ β 70 β
ββββββββββ΄βββββββ
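For comparison, the same substitution can be done client-side with Python's csv module (the contents of `nulls.csv` are inlined here so the sketch is self-contained):

```python
import csv
import io

# Same contents as the nulls.csv file shown above
raw = "Donald,90\nJoe,Nothing\nNothing,70\n"

# Treat the custom marker 'Nothing' as NULL, analogous to
# setting format_csv_null_representation = 'Nothing'
rows = [[None if field == "Nothing" else field for field in row]
        for row in csv.reader(io.StringIO(raw))]
print(rows)
```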
TSV (tab-separated) files {#tsv-tab-separated-files}
Tab-separated data format is widely used as a data interchange format. To load data from a
TSV file
to ClickHouse, the
TabSeparated
format is used:
bash
clickhouse-client -q "INSERT INTO sometable FORMAT TabSeparated" < data_small.tsv
There's also a
TabSeparatedWithNames
format to allow working with TSV files that have headers. And, like for CSV, we can skip the first X lines using the
input_format_tsv_skip_first_lines
option.
Raw TSV {#raw-tsv}
Sometimes, TSV files are saved without escaping tabs and line breaks. We should use
TabSeparatedRaw
to handle such files.
Exporting to CSV {#exporting-to-csv}
Any format in our previous examples can also be used to export data. To export data from a table (or a query) to a CSV format, we use the same
FORMAT
clause:
sql
SELECT *
FROM sometable
LIMIT 5
FORMAT CSV
response
"Akiba_Hebrew_Academy","2017-08-01",241
"Aegithina_tiphia","2018-02-01",34
"1971-72_Utah_Stars_season","2016-10-01",1
"2015_UEFA_European_Under-21_Championship_qualification_Group_8","2015-12-01",73
"2016_Greater_Western_Sydney_Giants_season","2017-05-01",86
To add a header to the CSV file, we use the
CSVWithNames
format:
sql
SELECT *
FROM sometable
LIMIT 5
FORMAT CSVWithNames
response
"path","month","hits"
"Akiba_Hebrew_Academy","2017-08-01",241
"Aegithina_tiphia","2018-02-01",34
"1971-72_Utah_Stars_season","2016-10-01",1
"2015_UEFA_European_Under-21_Championship_qualification_Group_8","2015-12-01",73
"2016_Greater_Western_Sydney_Giants_season","2017-05-01",86
Saving exported data to a CSV file {#saving-exported-data-to-a-csv-file}
To save exported data to a file, we can use the
INTO...OUTFILE
clause:
sql
SELECT *
FROM sometable
INTO OUTFILE 'out.csv'
FORMAT CSVWithNames
response
36838935 rows in set. Elapsed: 1.304 sec. Processed 36.84 million rows, 1.42 GB (28.24 million rows/s., 1.09 GB/s.)
Note how it took ClickHouse
~1
second to save 36m rows to a CSV file.
Exporting CSV with custom delimiters {#exporting-csv-with-custom-delimiters}
If we want to have other than comma delimiters, we can use the
format_csv_delimiter
settings option for that:
sql
SET format_csv_delimiter = '|'
Now ClickHouse will use
|
as a delimiter for CSV format:
sql
SELECT *
FROM sometable
LIMIT 5
FORMAT CSV
response
"Akiba_Hebrew_Academy"|"2017-08-01"|241
"Aegithina_tiphia"|"2018-02-01"|34
"1971-72_Utah_Stars_season"|"2016-10-01"|1
"2015_UEFA_European_Under-21_Championship_qualification_Group_8"|"2015-12-01"|73
"2016_Greater_Western_Sydney_Giants_season"|"2017-05-01"|86
Exporting CSV for Windows {#exporting-csv-for-windows}
If we want a CSV file to work fine in a Windows environment, we should consider enabling
output_format_csv_crlf_end_of_line
option. This will use
\r\n
as line breaks instead of
\n
:
sql
SET output_format_csv_crlf_end_of_line = 1;
Schema inference for CSV files {#schema-inference-for-csv-files}
We might work with unknown CSV files in many cases, so we have to explore which types to use for the columns. By default, ClickHouse will try to guess column types based on its analysis of a given CSV file. This is known as "Schema Inference". Detected data types can be explored using the
DESCRIBE
statement in pair with the
file()
function:
sql
DESCRIBE file('data-small.csv', CSV)
response
ββnameββ¬βtypeββββββββββββββ¬βdefault_typeββ¬βdefault_expressionββ¬βcommentββ¬βcodec_expressionββ¬βttl_expressionββ
β c1 β Nullable(String) β β β β β β
β c2 β Nullable(Date) β β β β β β
β c3 β Nullable(Int64) β β β β β β
ββββββββ΄βββββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββ
Here, ClickHouse could guess column types for our CSV file efficiently. If we don't want ClickHouse to guess, we can disable this with the following option:
sql
SET input_format_csv_use_best_effort_in_schema_inference = 0
All column types will be treated as a
String
in this case.
Exporting and importing CSV with explicit column types {#exporting-and-importing-csv-with-explicit-column-types}
ClickHouse also allows explicitly setting column types when exporting data using
CSVWithNamesAndTypes
(and other formats in the *WithNames family):
sql
SELECT *
FROM sometable
LIMIT 5
FORMAT CSVWithNamesAndTypes
response
"path","month","hits"
"String","Date","UInt32"
"Akiba_Hebrew_Academy","2017-08-01",241
"Aegithina_tiphia","2018-02-01",34
"1971-72_Utah_Stars_season","2016-10-01",1
"2015_UEFA_European_Under-21_Championship_qualification_Group_8","2015-12-01",73
"2016_Greater_Western_Sydney_Giants_season","2017-05-01",86
This format will include two header rows - one with column names and the other with column types. This will allow ClickHouse (and other apps) to identify column types when loading data from
such files
:
sql
DESCRIBE file('data_csv_types.csv', CSVWithNamesAndTypes)
response
ββnameβββ¬βtypeββββ¬βdefault_typeββ¬βdefault_expressionββ¬βcommentββ¬βcodec_expressionββ¬βttl_expressionββ
β path β String β β β β β β
β month β Date β β β β β β
β hits β UInt32 β β β β β β
βββββββββ΄βββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββ
Now ClickHouse identifies column types based on a (second) header row instead of guessing.
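The two extra header rows are also straightforward for other tools to consume. A sketch in Python that reads the names row and the types row before the data (the file contents are inlined so the sketch is self-contained):

```python
import csv
import io

# First rows of the CSVWithNamesAndTypes output shown above
raw = ('"path","month","hits"\n'
       '"String","Date","UInt32"\n'
       '"Akiba_Hebrew_Academy","2017-08-01",241\n')

reader = csv.reader(io.StringIO(raw))
names = next(reader)   # first header row: column names
types = next(reader)   # second header row: column types
data = list(reader)    # remaining rows are data
print(dict(zip(names, types)))
```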
Custom delimiters, separators, and escaping rules {#custom-delimiters-separators-and-escaping-rules}
In sophisticated cases, text data can be formatted in a highly custom manner but still have a structure. ClickHouse has a special
CustomSeparated
format for such cases, which allows setting custom escaping rules, delimiters, line separators, and starting/ending symbols.
Suppose we have the following data in the file:
text
row('Akiba_Hebrew_Academy';'2017-08-01';241),row('Aegithina_tiphia';'2018-02-01';34),...
We can see that individual rows are wrapped in
row()
, lines are separated with
,
and individual values are delimited with
;
. In this case, we can use the following settings to read data from this file:
sql
SET format_custom_row_before_delimiter = 'row(';
SET format_custom_row_after_delimiter = ')';
SET format_custom_field_delimiter = ';';
SET format_custom_row_between_delimiter = ',';
SET format_custom_escaping_rule = 'Quoted';
Now we can load data from our custom formatted
file
:
sql
SELECT *
FROM file('data_small_custom.txt', CustomSeparated)
LIMIT 3
response
ββc1βββββββββββββββββββββββββ¬βββββββββc2ββ¬ββc3ββ
β Akiba_Hebrew_Academy β 2017-08-01 β 241 β
β Aegithina_tiphia β 2018-02-01 β 34 β
β 1971-72_Utah_Stars_season β 2016-10-01 β 1 β
βββββββββββββββββββββββββββββ΄βββββββββββββ΄ββββββ
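To make the delimiter settings above concrete, here is the same parsing done by hand in Python. This is illustrative only: ClickHouse applies the full `Quoted` escaping rule, which this sketch reduces to stripping single quotes.

```python
import re

# Same shape as the data_small_custom.txt contents above
raw = ("row('Akiba_Hebrew_Academy';'2017-08-01';241),"
       "row('Aegithina_tiphia';'2018-02-01';34)")

# format_custom_row_before/after_delimiter: each row is wrapped in row(...),
# and format_custom_row_between_delimiter separates the wrapped rows with ','
rows = [
    # format_custom_field_delimiter is ';'; strip the Quoted single quotes
    [field.strip("'") for field in body.split(";")]
    for body in re.findall(r"row\((.*?)\)", raw)
]
print(rows)
```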
We can also use
CustomSeparatedWithNames
to get headers exported and imported correctly. Explore
regex and template
formats to deal with even more complex cases.
Working with large CSV files {#working-with-large-csv-files}
CSV files can be large, and ClickHouse works efficiently with files of any size. Large files usually come compressed, and ClickHouse handles this directly, with no need to decompress files before processing. We can use a
COMPRESSION
clause during an insert:
sql
INSERT INTO sometable
FROM INFILE 'data_csv.csv.gz'
COMPRESSION 'gzip' FORMAT CSV
If a
COMPRESSION
clause is omitted, ClickHouse will still try to guess file compression based on its extension. The same approach can be used to export files directly to compressed formats:
sql
SELECT *
FROM for_csv
INTO OUTFILE 'data_csv.csv.gz'
COMPRESSION 'gzip' FORMAT CSV
This will create a compressed
data_csv.csv.gz
file.
Other formats {#other-formats}
ClickHouse supports many formats, both text and binary, to cover various scenarios and platforms. Explore more formats and ways to work with them in the following articles:
CSV and TSV formats
Parquet
JSON formats
Regex and templates
Native and binary formats
SQL formats
And also check
clickhouse-local
- a portable full-featured tool to work on local/remote files without the need for a ClickHouse server.
sidebar_label: 'Binary and Native'
slug: /integrations/data-formats/binary-native
title: 'Using native and binary formats in ClickHouse'
description: 'Page describing how to use native and binary formats in ClickHouse'
keywords: ['binary formats', 'native format', 'rowbinary', 'rawblob', 'messagepack', 'protobuf', 'capn proto', 'data formats', 'performance', 'compression']
doc_type: 'guide'
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
Using native and binary formats in ClickHouse
ClickHouse supports multiple binary formats, which result in better performance and space efficiency. Binary formats are also safe with respect to character encoding, since data is saved in binary form.
We're going to use some_data
table
and
data
for demonstration, feel free to reproduce that on your ClickHouse instance.
Exporting in a Native ClickHouse format {#exporting-in-a-native-clickhouse-format}
The most efficient data format to export and import data between ClickHouse nodes is
Native
format. Exporting is done using
INTO OUTFILE
clause:
sql
SELECT * FROM some_data
INTO OUTFILE 'data.clickhouse' FORMAT Native
This will create
data.clickhouse
file in a native format.
Importing from a Native format {#importing-from-a-native-format}
To import data, we can use
file()
for smaller files or exploration purposes:
sql
DESCRIBE file('data.clickhouse', Native);
response
ββnameβββ¬βtypeββββ¬βdefault_typeββ¬βdefault_expressionββ¬βcommentββ¬βcodec_expressionββ¬βttl_expressionββ
β path β String β β β β β β
β month β Date β β β β β β
β hits β UInt32 β β β β β β
βββββββββ΄βββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββ
:::tip
When using the
file()
function, with ClickHouse Cloud you will need to run the commands in
clickhouse client
on the machine where the file resides. Another option is to use
clickhouse-local
to explore files locally.
:::
In production, we use
FROM INFILE
to import data:
sql
INSERT INTO sometable
FROM INFILE 'data.clickhouse'
FORMAT Native
Native format compression {#native-format-compression}
We can also enable compression while exporting data to Native format (as well as most other formats) using a
COMPRESSION
clause:
sql
SELECT * FROM some_data
INTO OUTFILE 'data.clickhouse'
COMPRESSION 'lz4'
FORMAT Native
We've used LZ4 compression for export. We'll have to specify it while importing data:
sql
INSERT INTO sometable
FROM INFILE 'data.clickhouse'
COMPRESSION 'lz4'
FORMAT Native
Exporting to RowBinary {#exporting-to-rowbinary}
Another binary format supported is
RowBinary
, which allows importing and exporting data in binary-represented rows:
sql
SELECT * FROM some_data
INTO OUTFILE 'data.binary' FORMAT RowBinary
This will generate
data.binary
file in a binary rows format.
Exploring RowBinary files {#exploring-rowbinary-files}
Automatic schema inference is not supported for this format, so to explore the data before loading, we have to define the schema explicitly:
sql
SELECT *
FROM file('data.binary', RowBinary, 'path String, month Date, hits UInt32')
LIMIT 5
response
ββpathββββββββββββββββββββββββββββ¬ββββββmonthββ¬βhitsββ
β Bangor_City_Forest β 2015-07-01 β 34 β
β Alireza_Afzal β 2017-02-01 β 24 β
β Akhaura-Laksam-Chittagong_Line β 2015-09-01 β 30 β
β 1973_National_500 β 2017-10-01 β 80 β
β Attachment β 2017-09-01 β 1356 β
ββββββββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββ
Consider using
RowBinaryWithNames
, which also adds a header row with a columns list.
RowBinaryWithNamesAndTypes
will also add an additional header row with column types.
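To illustrate why the schema must be supplied, here is how a single row of this table could be laid out on disk, packed by hand in Python. This assumes the documented RowBinary encoding (String as a varint-length-prefixed byte string, Date as a little-endian UInt16 day count since the Unix epoch, UInt32 little-endian) and simplifies the varint length prefix to one byte, which is valid for strings under 128 bytes:

```python
import struct
from datetime import date

def pack_row(path: str, month: date, hits: int) -> bytes:
    encoded = path.encode("utf-8")
    days = (month - date(1970, 1, 1)).days           # Date = days since Unix epoch
    return (
        bytes([len(encoded)]) + encoded              # String: varint length + bytes (<128 here)
        + struct.pack("<H", days)                    # Date: UInt16, little-endian
        + struct.pack("<I", hits)                    # hits: UInt32, little-endian
    )

row = pack_row("Bangor_City_Forest", date(2015, 7, 1), 34)
print(len(row))
```

Nothing in these bytes names the columns or types, which is why the reader has to be told the schema up front.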
Importing from RowBinary files {#importing-from-rowbinary-files}
To load data from a RowBinary file, we can use a
FROM INFILE
clause:
sql
INSERT INTO sometable
FROM INFILE 'data.binary'
FORMAT RowBinary
Importing single binary value using RawBLOB {#importing-single-binary-value-using-rawblob}
Suppose we want to read an entire binary file and save it into a field in a table.
This is the case when the
RawBLOB format
can be used. This format can be directly used with a single-column table only:
sql
CREATE TABLE images(data String) ENGINE = Memory
Let's save an image file to the
images
table:
bash
cat image.jpg | clickhouse-client -q "INSERT INTO images FORMAT RawBLOB"
We can check the
data
field length which will be equal to the original file size:
sql
SELECT length(data) FROM images
response
ββlength(data)ββ
β 6121 β
ββββββββββββββββ
Exporting RawBLOB data {#exporting-rawblob-data}
This format can also be used to export data using an
INTO OUTFILE
clause:
sql
SELECT * FROM images LIMIT 1
INTO OUTFILE 'out.jpg'
FORMAT RawBLOB
Note that we had to use
LIMIT 1
because exporting more than a single value will create a corrupted file.
MessagePack {#messagepack}
ClickHouse supports importing and exporting to
MessagePack
using the
MsgPack
format. To export to MessagePack format:
sql
SELECT *
FROM some_data
INTO OUTFILE 'data.msgpk'
FORMAT MsgPack
To import data from a
MessagePack file
:
sql
INSERT INTO sometable
FROM INFILE 'data.msgpk'
FORMAT MsgPack
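To see what such a file actually contains byte by byte, here is a tiny hand-rolled encoder for a small string-to-float map, following the MessagePack specification's fixmap, fixstr, and float64 markers. This is illustration only; for real work use an established MessagePack library.

```python
import struct

def encode_small_map(d):
    # Minimal MessagePack encoder: only handles a map of <=15 short
    # string keys to float values (enough to show the wire format)
    assert len(d) <= 15
    out = bytearray([0x80 | len(d)])          # fixmap marker + size
    for key, value in d.items():
        k = key.encode("utf-8")
        assert len(k) <= 31
        out.append(0xA0 | len(k))             # fixstr marker + length
        out += k
        out.append(0xCB)                      # float64 marker
        out += struct.pack(">d", float(value))  # big-endian IEEE 754 double
    return bytes(out)

blob = encode_small_map({"temp": 28.5, "hum": 0.68})
```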
## Protocol Buffers {#protocol-buffers}

To work with Protocol Buffers, we first need to define a schema file:
```protobuf
syntax = "proto3";
message MessageType {
string path = 1;
date month = 2;
uint32 hits = 3;
};
```
The path to this schema file (`schema.proto` in our case) is set in the `format_schema` settings option for the `Protobuf` format:

```sql
SELECT * FROM some_data
INTO OUTFILE 'proto.bin'
FORMAT Protobuf
SETTINGS format_schema = 'schema:MessageType'
```

This saves data to the `proto.bin` file. ClickHouse also supports importing Protobuf data as well as nested messages. Consider using `ProtobufSingle` to work with a single Protocol Buffer message (length delimiters will be omitted in this case).
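Because the `Protobuf` format (unlike `ProtobufSingle`) length-delimits each message, a stream like `proto.bin` can be split into individual messages without knowing their contents. The sketch below does just that, assuming standard unsigned-varint length prefixes:

```python
def read_varint(buf, pos):
    # Decode an unsigned LEB128 varint starting at pos; return (value, new_pos)
    shift = value = 0
    while True:
        b = buf[pos]
        pos += 1
        value |= (b & 0x7F) << shift
        if not (b & 0x80):
            return value, pos
        shift += 7

def split_messages(buf):
    # Yield the raw bytes of each length-delimited message in the stream
    pos = 0
    while pos < len(buf):
        size, pos = read_varint(buf, pos)
        yield buf[pos:pos + size]
        pos += size

# Two fake "messages" of 3 and 300 bytes, each length-prefixed
stream = bytes([3]) + b"abc" + bytes([0xAC, 0x02]) + b"x" * 300
sizes = [len(m) for m in split_messages(stream)]  # [3, 300]
```

Each yielded chunk would then be parsed with the generated `MessageType` class from `protoc`.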
## Cap'n Proto {#capn-proto}

Another popular binary serialization format supported by ClickHouse is Cap'n Proto. Similarly to the `Protobuf` format, we have to define a schema file (`schema.capnp`) in our example:
```response
@0xec8ff1a10aa10dbe;
struct PathStats {
path @0 :Text;
month @1 :UInt32;
hits @2 :UInt32;
}
```
Now we can import and export using the `CapnProto` format and this schema:

```sql
SELECT
    path,
    CAST(month, 'UInt32') AS month,
    hits
FROM some_data
INTO OUTFILE 'capnp.bin'
FORMAT CapnProto
SETTINGS format_schema = 'schema:PathStats'
```

Note that we had to cast the `Date` column as `UInt32` to match corresponding types.
## Other formats {#other-formats}

ClickHouse introduces support for many formats, both text and binary, to cover various scenarios and platforms. Explore more formats and ways to work with them in the following articles:

- CSV and TSV formats
- Parquet
- JSON formats
- Regex and templates
- Native and binary formats
- SQL formats

And also check `clickhouse-local` - a portable full-featured tool to work on local/remote files without starting the ClickHouse server.

*(source file: binary.md)*
*(source file: jdbc-with-clickhouse.md)*

---
sidebar_label: 'JDBC'
sidebar_position: 2
keywords: ['clickhouse', 'jdbc', 'connect', 'integrate']
slug: /integrations/jdbc/jdbc-with-clickhouse
description: 'The ClickHouse JDBC Bridge allows ClickHouse to access data from any external data source for which a JDBC driver is available'
title: 'Connecting ClickHouse to external data sources with JDBC'
doc_type: 'guide'
---
import Image from '@theme/IdealImage';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Jdbc01 from '@site/static/images/integrations/data-ingestion/dbms/jdbc-01.png';
import Jdbc02 from '@site/static/images/integrations/data-ingestion/dbms/jdbc-02.png';
import Jdbc03 from '@site/static/images/integrations/data-ingestion/dbms/jdbc-03.png';
# Connecting ClickHouse to external data sources with JDBC

:::note
Using JDBC requires the ClickHouse JDBC Bridge, so you will need to use `clickhouse-local` on a local machine to stream the data from your database to ClickHouse Cloud. Visit the "Using clickhouse-local" page in the Migrate section of the docs for details.
:::
Overview: The ClickHouse JDBC Bridge in combination with the `jdbc` table function or the JDBC table engine allows ClickHouse to access data from any external data source for which a JDBC driver is available:

This is handy when there is no native built-in integration engine, table function, or external dictionary for the external data source available, but a JDBC driver for the data source exists.

You can use the ClickHouse JDBC Bridge for both reads and writes, and in parallel for multiple external data sources; e.g. you can run distributed queries on ClickHouse across multiple external and internal data sources in real time.
In this lesson we will show you how easy it is to install, configure, and run the ClickHouse JDBC Bridge in order to connect ClickHouse with an external data source. We will use MySQL as the external data source for this lesson.
Let's get started!
:::note Prerequisites
You have access to a machine that has:
1. a Unix shell and internet access
2. `wget` installed
3. a current version of Java (e.g. OpenJDK version >= 17) installed
4. a current version of MySQL (e.g. MySQL version >= 8) installed and running
5. a current version of ClickHouse installed and running
:::
## Install the ClickHouse JDBC Bridge locally {#install-the-clickhouse-jdbc-bridge-locally}

The easiest way to use the ClickHouse JDBC Bridge is to install and run it on the same host where ClickHouse is also running:

Let's start by connecting to the Unix shell on the machine where ClickHouse is running and creating a local folder where we will later install the ClickHouse JDBC Bridge into (feel free to name the folder anything you like and put it anywhere you like):

```bash
mkdir ~/clickhouse-jdbc-bridge
```
Now we download the current version of the ClickHouse JDBC Bridge into that folder:
```bash
cd ~/clickhouse-jdbc-bridge
wget https://github.com/ClickHouse/clickhouse-jdbc-bridge/releases/download/v2.0.7/clickhouse-jdbc-bridge-2.0.7-shaded.jar
```

In order to be able to connect to MySQL, we create a named data source:

```bash
cd ~/clickhouse-jdbc-bridge
mkdir -p config/datasources
touch config/datasources/mysql8.json
```
You can now copy and paste the following configuration into the file `~/clickhouse-jdbc-bridge/config/datasources/mysql8.json`:

```json
{
  "mysql8": {
    "driverUrls": [
      "https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar"
    ],
    "jdbcUrl": "jdbc:mysql://<host>:<port>",
    "username": "<username>",
    "password": "<password>"
  }
}
```
:::note
In the config file above:
- you are free to use any name you like for the data source; we used `mysql8`
- in the value for `jdbcUrl` you need to replace `<host>` and `<port>` with appropriate values according to your running MySQL instance, e.g. `"jdbc:mysql://localhost:3306"`
- you need to replace `<username>` and `<password>` with your MySQL credentials; if you don't use a password, you can delete the `"password": "<password>"` line in the config file above
- in the value for `driverUrls` we just specified a URL from which the current version of the MySQL JDBC driver can be downloaded. That's all we have to do; the ClickHouse JDBC Bridge will automatically download that JDBC driver (into an OS-specific directory).
:::
Now we are ready to start the ClickHouse JDBC Bridge:

```bash
cd ~/clickhouse-jdbc-bridge
java -jar clickhouse-jdbc-bridge-2.0.7-shaded.jar
```

:::note
We started the ClickHouse JDBC Bridge in foreground mode. In order to stop the Bridge, bring the Unix shell window from above into the foreground and press `CTRL+C`.
:::
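If you would rather keep the Bridge running in the background and across reboots, one option is a systemd service. The unit below is only a sketch: the user, Java path, and working directory are assumptions and must be adapted to your install.

```ini
# /etc/systemd/system/clickhouse-jdbc-bridge.service  (example paths -- adjust)
[Unit]
Description=ClickHouse JDBC Bridge
After=network-online.target

[Service]
# Run as the user that owns the install directory
User=clickhouse
WorkingDirectory=/home/clickhouse/clickhouse-jdbc-bridge
ExecStart=/usr/bin/java -jar clickhouse-jdbc-bridge-2.0.7-shaded.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, it can be started with `systemctl enable --now clickhouse-jdbc-bridge`.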
## Use the JDBC connection from within ClickHouse {#use-the-jdbc-connection-from-within-clickhouse}

ClickHouse can now access MySQL data by using either the `jdbc` table function or the JDBC table engine.

The easiest way to execute the following examples is to copy and paste them into `clickhouse-client` or into the Play UI.

jdbc Table Function:

```sql
SELECT * FROM jdbc('mysql8', 'mydatabase', 'mytable');
```
:::note
As the first parameter for the `jdbc` table function, we are using the name of the named data source that we configured above.
:::

JDBC Table Engine:

```sql
CREATE TABLE mytable (
    ...
)
ENGINE = JDBC('mysql8', 'mydatabase', 'mytable');

SELECT * FROM mytable;
```

:::note
As the first parameter for the JDBC engine clause, we are using the name of the named data source that we configured above.

The schema of the ClickHouse JDBC engine table and the schema of the connected MySQL table must be aligned; e.g. the column names and order must be the same, and the column data types must be compatible.
:::
-0.007747763767838478,
-0.04105686768889427,
-0.02272554486989975,
-0.006413549650460482,
-0.13801467418670654,
-0.02773677371442318,
0.00850714836269617,
0.0314319021999836,
-0.020292766392230988,
0.010639460757374763,
-0.027287229895591736,
-0.06692570447921753,
0.0874636098742485,
-0.01... |
63f0da9c-e955-4ace-a749-17b1bbb432c3 | Install the ClickHouse JDBC Bridge externally {#install-the-clickhouse-jdbc-bridge-externally}
For a distributed ClickHouse cluster (a cluster with more than one ClickHouse host) it makes sense to install and run the ClickHouse JDBC Bridge externally on its own host:
This has the advantage that each ClickHouse host can access the JDBC Bridge. Otherwise the JDBC Bridge would need to be installed locally for each ClickHouse instance that is supposed to access external data sources via the Bridge.
In order to install the ClickHouse JDBC Bridge externally, we do the following steps:
We install, configure and run the ClickHouse JDBC Bridge on a dedicated host by following the steps described in section 1 of this guide.
On each ClickHouse Host we add the following configuration block to the
ClickHouse server configuration
(depending on your chosen configuration format, use either the XML or YAML version):
xml
<jdbc_bridge>
<host>JDBC-Bridge-Host</host>
<port>9019</port>
</jdbc_bridge>
yaml
jdbc_bridge:
host: JDBC-Bridge-Host
port: 9019
:::note
- you need to replace `JDBC-Bridge-Host` with the hostname or IP address of the dedicated ClickHouse JDBC Bridge host
- we specified the default ClickHouse JDBC Bridge port `9019`; if you are using a different port for the JDBC Bridge, you must adapt the configuration above accordingly
:::
*(source file: odbc-with-clickhouse.md)*

---
sidebar_label: 'ODBC'
sidebar_position: 1
title: 'ODBC'
slug: /integrations/data-ingestion/dbms/odbc-with-clickhouse
description: 'Page describing the ODBC integration'
doc_type: 'reference'
hide_title: true
keywords: ['odbc', 'database connection', 'integration', 'external data', 'driver']
---

import Content from '@site/docs/engines/table-engines/integrations/odbc.md';
*(source file: index.md)*

---
sidebar_label: 'EMQX'
sidebar_position: 1
slug: /integrations/emqx
description: 'Introduction to EMQX with ClickHouse'
title: 'Integrating EMQX with ClickHouse'
doc_type: 'guide'
integration:
  - support_level: 'partner'
  - category: 'data_ingestion'
keywords: ['EMQX ClickHouse integration', 'MQTT ClickHouse connector', 'EMQX Cloud ClickHouse', 'IoT data ClickHouse', 'MQTT broker ClickHouse']
---
import emqx_cloud_artitecture from '@site/static/images/integrations/data-ingestion/emqx/emqx-cloud-artitecture.png';
import clickhouse_cloud_1 from '@site/static/images/integrations/data-ingestion/emqx/clickhouse_cloud_1.png';
import clickhouse_cloud_2 from '@site/static/images/integrations/data-ingestion/emqx/clickhouse_cloud_2.png';
import clickhouse_cloud_3 from '@site/static/images/integrations/data-ingestion/emqx/clickhouse_cloud_3.png';
import clickhouse_cloud_4 from '@site/static/images/integrations/data-ingestion/emqx/clickhouse_cloud_4.png';
import clickhouse_cloud_5 from '@site/static/images/integrations/data-ingestion/emqx/clickhouse_cloud_5.png';
import clickhouse_cloud_6 from '@site/static/images/integrations/data-ingestion/emqx/clickhouse_cloud_6.png';
import emqx_cloud_sign_up from '@site/static/images/integrations/data-ingestion/emqx/emqx_cloud_sign_up.png';
import emqx_cloud_create_1 from '@site/static/images/integrations/data-ingestion/emqx/emqx_cloud_create_1.png';
import emqx_cloud_create_2 from '@site/static/images/integrations/data-ingestion/emqx/emqx_cloud_create_2.png';
import emqx_cloud_overview from '@site/static/images/integrations/data-ingestion/emqx/emqx_cloud_overview.png';
import emqx_cloud_auth from '@site/static/images/integrations/data-ingestion/emqx/emqx_cloud_auth.png';
import emqx_cloud_nat_gateway from '@site/static/images/integrations/data-ingestion/emqx/emqx_cloud_nat_gateway.png';
import emqx_cloud_data_integration from '@site/static/images/integrations/data-ingestion/emqx/emqx_cloud_data_integration.png';
import data_integration_clickhouse from '@site/static/images/integrations/data-ingestion/emqx/data_integration_clickhouse.png';
import data_integration_resource from '@site/static/images/integrations/data-ingestion/emqx/data_integration_resource.png';
import data_integration_rule_1 from '@site/static/images/integrations/data-ingestion/emqx/data_integration_rule_1.png';
import data_integration_rule_2 from '@site/static/images/integrations/data-ingestion/emqx/data_integration_rule_2.png';
import data_integration_rule_action from '@site/static/images/integrations/data-ingestion/emqx/data_integration_rule_action.png';
import data_integration_details from '@site/static/images/integrations/data-ingestion/emqx/data_integration_details.png';
import work_flow from '@site/static/images/integrations/data-ingestion/emqx/work-flow.png';
import mqttx_overview from '@site/static/images/integrations/data-ingestion/emqx/mqttx-overview.png';
import mqttx_new from '@site/static/images/integrations/data-ingestion/emqx/mqttx-new.png';
import mqttx_publish from '@site/static/images/integrations/data-ingestion/emqx/mqttx-publish.png';
import rule_monitor from '@site/static/images/integrations/data-ingestion/emqx/rule_monitor.png';
import clickhouse_result from '@site/static/images/integrations/data-ingestion/emqx/clickhouse_result.png';
import Image from '@theme/IdealImage';
# Integrating EMQX with ClickHouse
## Connecting EMQX {#connecting-emqx}

EMQX is an open source MQTT broker with a high-performance real-time message processing engine, powering event streaming for IoT devices at massive scale. As the most scalable MQTT broker, EMQX can help you connect any device, at any scale. Move and process your IoT data anywhere.

EMQX Cloud is an MQTT messaging middleware product for the IoT domain hosted by EMQ. As the world's first fully managed MQTT 5.0 cloud messaging service, EMQX Cloud provides a one-stop O&M colocation and a unique isolated environment for MQTT messaging services. In the era of the Internet of Everything, EMQX Cloud can help you quickly build industry applications for the IoT domain and easily collect, transmit, compute, and persist IoT data.

With the infrastructure provided by cloud providers, EMQX Cloud serves dozens of countries and regions around the world, providing low-cost, secure, and reliable cloud services for 5G and Internet of Everything applications.
## Assumptions {#assumptions}

- You are familiar with the MQTT protocol, which is designed as an extremely lightweight publish/subscribe messaging transport protocol.
- You are using EMQX or EMQX Cloud as a real-time message processing engine, powering event streaming for IoT devices at massive scale.
- You have prepared a ClickHouse Cloud instance to persist device data.
- We are using MQTT X as an MQTT client testing tool to connect to the deployment of EMQX Cloud and publish MQTT data. Other methods of connecting to the MQTT broker will do the job as well.
## Get your ClickHouse Cloud service {#get-your-clickhouse-cloudservice}

During this setup, we deployed the ClickHouse instance on AWS in N. Virginia (us-east-1), while an EMQX Cloud instance was also deployed in the same region.

During the setup process, you will also need to pay attention to the connection settings. In this tutorial, we choose "Anywhere", but if you apply for a specific location, you will need to add the NAT gateway IP address you got from your EMQX Cloud deployment to the whitelist.

Then you need to save your username and password for future use.

After that, you will get a running ClickHouse instance. Click "Connect" to get the instance connection address of ClickHouse Cloud.

Click "Connect to SQL Console" to create a database and table for integration with EMQX Cloud.
You can refer to the following SQL statement, or modify the SQL according to the actual situation.

```sql
CREATE TABLE emqx.temp_hum
(
   client_id String,
   timestamp DateTime,
   topic String,
   temp Float32,
   hum Float32
)
ENGINE = MergeTree()
PRIMARY KEY (client_id, timestamp)
```
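Note that the statement above targets an `emqx` database. If no such database exists yet in your service, create it first (a plain sketch; adjust the name if you use a different one):

```sql
CREATE DATABASE IF NOT EXISTS emqx
```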
## Create an MQTT service on EMQX Cloud {#create-an-mqtt-service-on-emqx-cloud}

Creating a dedicated MQTT broker on EMQX Cloud is as easy as a few clicks.

### Get an account {#get-an-account}
EMQX Cloud provides a 14-day free trial for both standard deployment and professional deployment for every account.

Start at the EMQX Cloud sign up page and click "Start Free" to register an account if you are new to EMQX Cloud.
### Create an MQTT cluster {#create-an-mqtt-cluster}

Once logged in, click on "Cloud console" under the account menu and you will be able to see the green button to create a new deployment.

In this tutorial, we will use the Professional deployment because only the Pro version provides the data integration functionality, which can send MQTT data directly to ClickHouse without a single line of code.

Select the Pro version, choose the `N.Virginia` region, and click `Create Now`. In just a few minutes, you will get a fully managed MQTT broker:

Now click the panel to go to the cluster view. On this dashboard, you will see the overview of your MQTT broker.
### Add client credential {#add-client-credential}

EMQX Cloud does not allow anonymous connections by default, so you need to add a client credential so you can use the MQTT client tool to send data to this broker.

Click 'Authentication & ACL' in the left menu and click 'Authentication' in the submenu. Click the 'Add' button on the right and provide a username and password for the MQTT connection later. Here we will use `emqx` and `xxxxxx` for the username and password.

Click 'Confirm' and now we have a fully managed MQTT broker ready.
### Enable NAT gateway {#enable-nat-gateway}

Before we can start setting up the ClickHouse integration, we need to enable the NAT gateway first. By default, the MQTT broker is deployed in a private VPC, which cannot send data to third-party systems over the public network.

Go back to the Overview page and scroll down to the bottom of the page, where you will see the NAT gateway widget. Click the Subscribe button and follow the instructions. Note that NAT Gateway is a value-added service, but it also offers a 14-day free trial.

Once it has been created, you will find the public IP address in the widget. Please note that if you selected "Connect from a specific location" during ClickHouse Cloud setup, you will need to add this IP address to the whitelist.
## Integrating EMQX Cloud with ClickHouse Cloud {#integration-emqx-cloud-with-clickhouse-cloud}

The EMQX Cloud Data Integrations feature is used to configure the rules for handling and responding to EMQX message flows and device events. The Data Integrations not only provide a clear and flexible "configurable" architecture solution, but also simplify the development process, improve user usability, and reduce the degree of coupling between the business system and EMQX Cloud. It also provides a superior infrastructure for customization of EMQX Cloud's proprietary capabilities.
8df55926-94ba-4670-9bd5-5aecb1d231a1 | EMQX Cloud offers more than 30 native integrations with popular data systems. ClickHouse is one of them.
Create ClickHouse resource {#create-clickhouse-resource}
Click "Data Integrations" on the left menu and click "View All Resources". You will find the ClickHouse in the Data Persistence section or you can search for ClickHouse.
Click the ClickHouse card to create a new resource.
Note: add a note for this resource.
Server address: this is the address of your ClickHouse Cloud service, remember don't forget the port.
Database name:
emqx
we created in the above steps.
User: the username for connecting to your ClickHouse Cloud service.
Key: the password for the connection.
### Create a new rule {#create-a-new-rule}

During the creation of the resource, you will see a popup, and clicking 'New' will lead you to the rule creation page.

EMQX provides a powerful rule engine that can transform and enrich the raw MQTT message before sending it to third-party systems.

Here's the rule used in this tutorial:

```sql
SELECT
   clientid AS client_id,
   (timestamp div 1000) AS timestamp,
   topic AS topic,
   payload.temp AS temp,
   payload.hum AS hum
FROM
   "temp_hum/emqx"
```

It will read the messages from the `temp_hum/emqx` topic and enrich the JSON object by adding client_id, topic, and timestamp info.

So, this is the raw JSON you send to the topic:

```json
{"temp": 28.5, "hum": 0.68}
```

You can use the SQL test to try it out and see the results.

Now click on the "NEXT" button. This step is to tell EMQX Cloud how to insert the refined data into your ClickHouse database.
### Add a response action {#add-a-response-action}

If you have only one resource, you don't need to modify 'Resource' and 'Action Type'.

You only need to set the SQL template. Here's the example used for this tutorial:

```sql
INSERT INTO temp_hum (client_id, timestamp, topic, temp, hum) VALUES ('${client_id}', ${timestamp}, '${topic}', ${temp}, ${hum})
```

This is a template for inserting data into ClickHouse; you can see the variables used here.
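The `${...}` placeholders are expanded server-side by EMQX Cloud's rule engine, not by ClickHouse. Purely as an illustration of that expansion (the field names come from the rule's SELECT above; the sample values are made up), Python's `string.Template` behaves the same way:

```python
from string import Template

sql_template = Template(
    "INSERT INTO temp_hum (client_id, timestamp, topic, temp, hum) "
    "VALUES ('${client_id}', ${timestamp}, '${topic}', ${temp}, ${hum})"
)

# Fields the rule's SELECT would produce for one sample message
row = {
    "client_id": "mqttx_1234",   # hypothetical client id
    "timestamp": 1700000000,
    "topic": "temp_hum/emqx",
    "temp": 23.1,
    "hum": 0.68,
}

result = sql_template.substitute(row)
```

Each incoming message thus becomes one concrete `INSERT` statement against the `temp_hum` table.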
### View rules details {#view-rules-details}

Click "Confirm" and "View Details". Now, everything should be well set. You can see the data integration working from the rule details page.

All the MQTT messages sent to the `temp_hum/emqx` topic will be persisted into your ClickHouse Cloud database.
## Saving Data into ClickHouse {#saving-data-into-clickhouse}

We will simulate temperature and humidity data and report these data to EMQX Cloud via MQTT X, and then use the EMQX Cloud Data Integrations to save the data into ClickHouse Cloud.

### Publish MQTT messages to EMQX Cloud {#publish-mqtt-messages-to-emqx-cloud}

You can use any MQTT client or SDK to publish the message. In this tutorial, we will use MQTT X, a user-friendly MQTT client application provided by EMQ.

Click "New Connection" on MQTTX and fill in the connection form:

- Name: Connection name. Use whatever name you want.
- Host: the MQTT broker connection address. You can get it from the EMQX Cloud overview page.
- Port: the MQTT broker connection port. You can get it from the EMQX Cloud overview page.
- Username/Password: use the credential created above, which should be `emqx` and `xxxxxx` in this tutorial.

Click the "Connect" button in the top right and the connection should be established.

Now you can send messages to the MQTT broker using this tool.

Inputs:
1. Set the payload format to "JSON".
2. Set the topic to `temp_hum/emqx` (the topic we just set in the rule).
3. JSON body:

```json
{"temp": 23.1, "hum": 0.68}
```

Click the send button on the right. You can change the temperature value and send more data to the MQTT broker.
The data sent to EMQX Cloud should be processed by the rule engine and inserted into ClickHouse Cloud automatically.
### View rules monitoring {#view-rules-monitoring}

Check the rule monitoring and verify that the success count has increased by one.
### Check the data persisted {#check-the-data-persisted}

Now it's time to take a look at the data on ClickHouse Cloud. Ideally, the data you sent using MQTTX went to EMQX Cloud and was persisted to the ClickHouse Cloud database with the help of the native data integration.

You can connect to the SQL console on the ClickHouse Cloud panel or use any client tool to fetch data from your ClickHouse. In this tutorial, we used the SQL console. Execute the SQL:

```sql
SELECT * FROM emqx.temp_hum;
```
## Summary {#summary}

You didn't write any code, and now the MQTT data moves from EMQX Cloud to ClickHouse Cloud. With EMQX Cloud and ClickHouse Cloud, you don't need to manage the infrastructure and can just focus on writing your IoT applications, with data stored securely in ClickHouse Cloud.
*(source file: kafka-clickhouse-connect-sink.md)*

---
sidebar_label: 'ClickHouse Kafka Connect Sink'
sidebar_position: 2
slug: /integrations/kafka/clickhouse-kafka-connect-sink
description: 'The official Kafka connector from ClickHouse.'
title: 'ClickHouse Kafka Connect Sink'
doc_type: 'guide'
keywords: ['ClickHouse Kafka Connect Sink', 'Kafka connector ClickHouse', 'official ClickHouse connector', 'ClickHouse Kafka integration']
---
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
# ClickHouse Kafka Connect Sink

:::note
If you need any help, please file an issue in the repository or raise a question in ClickHouse public Slack.
:::

ClickHouse Kafka Connect Sink is the Kafka connector delivering data from a Kafka topic to a ClickHouse table.
## License {#license}

The Kafka Connector Sink is distributed under the Apache 2.0 License.

## Requirements for the environment {#requirements-for-the-environment}

The Kafka Connect framework v2.7 or later should be installed in the environment.

## Version compatibility matrix {#version-compatibility-matrix}

| ClickHouse Kafka Connect version | ClickHouse version | Kafka Connect | Confluent platform |
|----------------------------------|--------------------|---------------|--------------------|
| 1.0.0                            | > 23.3             | > 2.7         | > 6.1              |
## Main features {#main-features}

- Shipped with out-of-the-box exactly-once semantics. It's powered by a new ClickHouse core feature named KeeperMap (used as a state store by the connector) and allows for a minimalistic architecture.
- Support for 3rd-party state stores: currently defaults to in-memory but can use KeeperMap (Redis to be added soon).
- Core integration: built, maintained, and supported by ClickHouse.
- Tested continuously against ClickHouse Cloud.
- Data inserts with a declared schema and schemaless.
- Support for all data types of ClickHouse.
## Installation instructions {#installation-instructions}

### Gather your connection details {#gather-your-connection-details}

### General installation instructions {#general-installation-instructions}

The connector is distributed as a single JAR file containing all the class files necessary to run the plugin.

To install the plugin, follow these steps:

- Download a zip archive containing the Connector JAR file from the Releases page of the ClickHouse Kafka Connect Sink repository.
- Extract the ZIP file content and copy it to the desired location.
- Add the path with the plugin directory to the `plugin.path` configuration in your Connect properties file to allow Confluent Platform to find the plugin.
- Provide a topic name, ClickHouse instance hostname, and password in config.
```yml
connector.class=com.clickhouse.kafka.connect.ClickHouseSinkConnector
tasks.max=1
topics=<topic_name>
ssl=true
jdbcConnectionProperties=?sslmode=STRICT
security.protocol=SSL
hostname=<hostname>
database=<database_name>
password=<password>
ssl.truststore.location=/tmp/kafka.client.truststore.jks
port=8443
value.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
exactlyOnce=true
username=default
schemas.enable=false
```
- Restart the Confluent Platform.
- If you use Confluent Platform, log into the Confluent Control Center UI to verify that the ClickHouse Sink is available in the list of available connectors.
## Configuration options {#configuration-options}

To connect the ClickHouse Sink to the ClickHouse server, you need to provide:

- connection details: hostname (required) and port (optional)
- user credentials: password (required) and username (optional)
- connector class: `com.clickhouse.kafka.connect.ClickHouseSinkConnector` (required)
- topics or topics.regex: the Kafka topics to poll - topic names must match table names (required)
- key and value converters: set based on the type of data on your topic. Required if not already defined in worker config.

The full table of configuration options:
| Property Name | Description | Default Value |
|---------------|-------------|---------------|
| hostname (Required) | The hostname or IP address of the server | N/A |
| port | The ClickHouse port - default is 8443 (for HTTPS in the cloud), but for HTTP (the default for self-hosted) it should be 8123 | 8443 |
| ssl | Enable ssl connection to ClickHouse | true |
| jdbcConnectionProperties | Connection properties when connecting to ClickHouse. Must start with ? and joined by & between param=value | "" |
| username | ClickHouse database username | default |
| password (Required) | ClickHouse database password | N/A |
| database | ClickHouse database name | default |
| connector.class (Required) | Connector class (explicitly set and kept as the default value) | "com.clickhouse.kafka.connect.ClickHouseSinkConnector" |
| tasks.max | The number of Connector Tasks | "1" |
| errors.retry.timeout | ClickHouse JDBC Retry Timeout | "60" |
| exactlyOnce | Exactly Once Enabled | "false" |
| topics (Required) | The Kafka topics to poll - topic names must match table names | "" |
| key.converter (Required - See Description) | Set according to the types of your keys. Required here if you are passing keys (and not defined in worker config). | "org.apache.kafka.connect.storage.StringConverter" |
| value.converter (Required - See Description) | Set based on the type of data on your topic. Supported: JSON, String, Avro or Protobuf formats. Required here if not defined in worker config. | "org.apache.kafka.connect.json.JsonConverter" |
| value.converter.schemas.enable | Connector Value Converter Schema Support | "false" |
| errors.tolerance | Connector Error Tolerance. Supported: none, all | "none" |
| errors.deadletterqueue.topic.name | If set (with errors.tolerance=all), a DLQ will be used for failed batches (see Troubleshooting) | "" |
| errors.deadletterqueue.context.headers.enable | Adds additional headers for the DLQ | "" |
| clickhouseSettings | Comma-separated list of ClickHouse settings (e.g. "insert_quorum=2, etc...") | "" |
| topic2TableMap | Comma-separated list that maps topic names to table names (e.g. "topic1=table1, topic2=table2, etc...") | "" |
| tableRefreshInterval | Time (in seconds) to refresh the table definition cache | 0 |
| keeperOnCluster | Allows configuration of the ON CLUSTER parameter for self-hosted instances (e.g. ON CLUSTER clusterNameInConfigFileDefinition) for the exactly-once connect_state table (see Distributed DDL Queries) | "" |
| bypassRowBinary | Allows disabling use of RowBinary and RowBinaryWithDefaults for schema-based data (Avro, Protobuf, etc.) - should only be used when data will have missing columns, and Nullable/Default are unacceptable | "false" |
| dateTimeFormats | Date time formats for parsing DateTime64 schema fields, separated by ; (e.g. someDateField=yyyy-MM-dd HH:mm:ss.SSSSSSSSS;someOtherDateField=yyyy-MM-dd HH:mm:ss). | "" |
| tolerateStateMismatch | Allows the connector to drop records "earlier" than the current offset stored AFTER_PROCESSING (e.g. if offset 5 is sent, and offset 250 was the last recorded offset) | "false" |
| ignorePartitionsWhenBatching | Will ignore partition when collecting messages for insert (though only if exactlyOnce is false). Performance note: the more connector tasks, the fewer Kafka partitions assigned per task - this can mean diminishing returns. | "false" |
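Several of the options above (clickhouseSettings, topic2TableMap) take a comma-separated list of key=value pairs. A small illustrative parser makes the expected shape concrete; this is a sketch for reasoning about the format, not the connector's own parsing code, and the helper name is ours:

```python
def parse_pair_list(raw: str) -> dict:
    """Parse a comma-separated list of key=value pairs, e.g.
    "topic1=table1, topic2=table2" -> {"topic1": "table1", "topic2": "table2"}.
    Illustrative helper; the connector's internal parsing may differ."""
    result = {}
    for entry in raw.split(","):
        entry = entry.strip()
        if not entry:
            continue
        key, _, value = entry.partition("=")
        result[key.strip()] = value.strip()
    return result

print(parse_pair_list("topic1=table1, topic2=table2"))
print(parse_pair_list("insert_quorum=2, async_insert=1"))
```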
Target tables {#target-tables}
ClickHouse Connect Sink reads messages from Kafka topics and writes them to the appropriate tables. It writes data into existing tables only, so make sure a target table with an appropriate schema has been created in ClickHouse before you start inserting data into it.
Each topic requires a dedicated target table in ClickHouse. The target table name must match the source topic name.
Pre-processing {#pre-processing}
If you need to transform outbound messages before they are sent to ClickHouse Kafka Connect Sink, use Kafka Connect Transformations.
Supported data types {#supported-data-types}
With a schema declared:
| Kafka Connect Type                       | ClickHouse Type          | Supported | Primitive |
|------------------------------------------|--------------------------|-----------|-----------|
| STRING                                   | String                   | ✅        | Yes       |
| STRING                                   | JSON. See below (1)      | ✅        | Yes       |
| INT8                                     | Int8                     | ✅        | Yes       |
| INT16                                    | Int16                    | ✅        | Yes       |
| INT32                                    | Int32                    | ✅        | Yes       |
| INT64                                    | Int64                    | ✅        | Yes       |
| FLOAT32                                  | Float32                  | ✅        | Yes       |
| FLOAT64                                  | Float64                  | ✅        | Yes       |
| BOOLEAN                                  | Boolean                  | ✅        | Yes       |
| ARRAY                                    | Array(T)                 | ✅        | No        |
| MAP                                      | Map(Primitive, T)        | ✅        | No        |
| STRUCT                                   | Variant(T1, T2, ...)     | ✅        | No        |
| STRUCT                                   | Tuple(a T1, b T2, ...)   | ✅        | No        |
| STRUCT                                   | Nested(a T1, b T2, ...)  | ✅        | No        |
| STRUCT                                   | JSON. See below (1), (2) | ✅        | No        |
| BYTES                                    | String                   | ✅        | No        |
| org.apache.kafka.connect.data.Time       | Int64 / DateTime64       | ✅        | No        |
| org.apache.kafka.connect.data.Timestamp  | Int32 / Date32           | ✅        | No        |
| org.apache.kafka.connect.data.Decimal    | Decimal                  | ✅        | No        |
(1) - JSON is supported only when the ClickHouse setting input_format_binary_read_json_as_string=1 is enabled. This works only for the RowBinary format family, and the setting affects all columns in the insert request, so they should all be strings. The connector will convert STRUCT to a JSON string in this case.
(2) - When a struct has unions such as oneof, the converter should be configured NOT to add a prefix/suffix to field names. ProtobufConverter has a generate.index.for.unions=false setting for this.
Without a schema declared:
A record is converted into JSON and sent to ClickHouse as a value in JSONEachRow format.
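The schemaless path is easy to picture: each record becomes one compact JSON object per line, which is exactly what JSONEachRow expects. The sketch below illustrates the wire format only; the connector performs this conversion internally, and the helper name is ours:

```python
import json

def to_json_each_row(records: list[dict]) -> str:
    """Render records in JSONEachRow shape: one compact JSON object per line.
    Illustrative only - not the connector's implementation."""
    return "\n".join(json.dumps(r, separators=(",", ":")) for r in records)

payload = to_json_each_row([
    {"id": 1, "msg": "hello"},
    {"id": 2, "msg": "world"},
])
print(payload)
```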
Configuration recipes {#configuration-recipes}
These are some common configuration recipes to get you started quickly.
Basic configuration {#basic-configuration}
The most basic configuration to get you started - it assumes you're running Kafka Connect in distributed mode and have a ClickHouse server running on localhost:8443 with SSL enabled, with data in schemaless JSON.
```json
{
  "name": "clickhouse-connect",
  "config": {
    "connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
    "tasks.max": "1",
    "consumer.override.max.poll.records": "5000",
    "consumer.override.max.partition.fetch.bytes": "5242880",
    "database": "default",
    "errors.retry.timeout": "60",
    "exactlyOnce": "false",
    "hostname": "localhost",
    "port": "8443",
    "ssl": "true",
    "jdbcConnectionProperties": "?ssl=true&sslmode=strict",
    "username": "default",
    "password": "<PASSWORD>",
    "topics": "<TOPIC_NAME>",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "clickhouseSettings": ""
  }
}
```
Basic configuration with multiple topics {#basic-configuration-with-multiple-topics}
The connector can consume data from multiple topics.
```json
{
  "name": "clickhouse-connect",
  "config": {
    "connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
    ...
    "topics": "SAMPLE_TOPIC, ANOTHER_TOPIC, YET_ANOTHER_TOPIC",
    ...
  }
}
```
Basic configuration with DLQ {#basic-configuration-with-dlq}
```json
{
  "name": "clickhouse-connect",
  "config": {
    "connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
    ...
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "<DLQ_TOPIC>",
    "errors.deadletterqueue.context.headers.enable": "true"
  }
}
```
Using with different data formats {#using-with-different-data-formats}
Avro schema support {#avro-schema-support}
```json
{
  "name": "clickhouse-connect",
  "config": {
    "connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
    ...
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "<SCHEMA_REGISTRY_HOST>:<PORT>",
    "value.converter.schemas.enable": "true"
  }
}
```
Protobuf schema support {#protobuf-schema-support}
```json
{
  "name": "clickhouse-connect",
  "config": {
    "connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
    ...
    "value.converter": "io.confluent.connect.protobuf.ProtobufConverter",
    "value.converter.schema.registry.url": "<SCHEMA_REGISTRY_HOST>:<PORT>",
    "value.converter.schemas.enable": "true"
  }
}
```
Please note: not every environment ships with the Protobuf converter, so if you encounter missing-class issues you may need an alternate release of the jar bundled with its dependencies.
JSON schema support {#json-schema-support}
```json
{
  "name": "clickhouse-connect",
  "config": {
    "connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
    ...
    "value.converter": "org.apache.kafka.connect.json.JsonConverter"
  }
}
```
String support {#string-support}
The connector supports the String Converter in different ClickHouse formats: JSON, CSV, and TSV.
```json
{
  "name": "clickhouse-connect",
  "config": {
    "connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
    ...
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "customInsertFormat": "true",
    "insertFormat": "CSV"
  }
}
```
Logging {#logging}
Logging is automatically provided by the Kafka Connect Platform.
The logging destination and format might be configured via the Kafka Connect configuration file.
If using the Confluent Platform, the logs can be seen by running a CLI command:
```bash
confluent local services connect log
```
For additional details check out the official tutorial.
Monitoring {#monitoring}
ClickHouse Kafka Connect reports runtime metrics via Java Management Extensions (JMX). JMX is enabled in Kafka Connector by default.
ClickHouse-Specific Metrics {#clickhouse-specific-metrics}
The connector exposes custom metrics via the following MBean name:
```java
com.clickhouse:type=ClickHouseKafkaConnector,name=SinkTask{id}
```
| Metric Name           | Type | Description                                                                             |
|-----------------------|------|-----------------------------------------------------------------------------------------|
| receivedRecords       | long | The total number of records received.                                                   |
| recordProcessingTime  | long | Total time in nanoseconds spent grouping and converting records to a unified structure. |
| taskProcessingTime    | long | Total time in nanoseconds spent processing and inserting data into ClickHouse.          |
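Because these counters are cumulative, a useful derived quantity is the average processing time per record. A minimal sketch of that arithmetic (variable names mirror the JMX attributes above; how you sample them is out of scope here):

```python
def avg_ns_per_record(total_time_ns: int, received_records: int) -> float:
    """Average nanoseconds per record, derived from cumulative JMX counters
    (e.g. recordProcessingTime / receivedRecords).
    Returns 0.0 before any records have been received."""
    if received_records == 0:
        return 0.0
    return total_time_ns / received_records

# Example: 5,000,000 ns spent over 1,000 records -> 5000 ns/record on average
print(avg_ns_per_record(5_000_000, 1_000))
```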
Kafka Producer/Consumer Metrics {#kafka-producer-consumer-metrics}
The connector exposes standard Kafka producer and consumer metrics that provide insights into data flow, throughput, and performance.
Topic-Level Metrics:
- records-sent-total: Total number of records sent to the topic
- bytes-sent-total: Total bytes sent to the topic
- record-send-rate: Average rate of records sent per second
- byte-rate: Average bytes sent per second
- compression-rate: Compression ratio achieved
Partition-Level Metrics:
- records-sent-total: Total records sent to the partition
- bytes-sent-total: Total bytes sent to the partition
- records-lag: Current lag in the partition
- records-lead: Current lead in the partition
- replica-fetch-lag: Lag information for replicas
Node-Level Connection Metrics:
- connection-creation-total: Total connections created to the Kafka node
- connection-close-total: Total connections closed
- request-total: Total requests sent to the node
- response-total: Total responses received from the node
- request-rate: Average request rate per second
- response-rate: Average response rate per second
These metrics help monitor:
- Throughput: Track data ingestion rates
- Lag: Identify bottlenecks and processing delays
- Compression: Measure data compression efficiency
- Connection Health: Monitor network connectivity and stability
Kafka Connect Framework Metrics {#kafka-connect-framework-metrics}
The connector integrates with the Kafka Connect framework and exposes metrics for task lifecycle and error tracking.
Task Status Metrics:
- task-count: Total number of tasks in the connector
- running-task-count: Number of tasks currently running
- paused-task-count: Number of tasks currently paused
- failed-task-count: Number of tasks that have failed
- destroyed-task-count: Number of destroyed tasks
- unassigned-task-count: Number of unassigned tasks
Task status values include: running, paused, failed, destroyed, unassigned
Error Metrics:
- deadletterqueue-produce-failures: Number of failed DLQ writes
- deadletterqueue-produce-requests: Total DLQ write attempts
- last-error-timestamp: Timestamp of the last error
- records-skip-total: Total number of records skipped due to errors
- records-retry-total: Total number of records that were retried
- errors-total: Total number of errors encountered
Performance Metrics:
- offset-commit-failures: Number of failed offset commits
- offset-commit-avg-time-ms: Average time for offset commits
- offset-commit-max-time-ms: Maximum time for offset commits
- put-batch-avg-time-ms: Average time to process a batch
- put-batch-max-time-ms: Maximum time to process a batch
- source-record-poll-total: Total records polled
Monitoring Best Practices {#monitoring-best-practices}
- Monitor Consumer Lag: Track records-lag per partition to identify processing bottlenecks
- Track Error Rates: Watch errors-total and records-skip-total to detect data quality issues
- Observe Task Health: Monitor task status metrics to ensure tasks are running properly
- Measure Throughput: Use records-send-rate and byte-rate to track ingestion performance
- Monitor Connection Health: Check node-level connection metrics for network issues
- Track Compression Efficiency: Use compression-rate to optimize data transfer
For detailed JMX metric definitions and Prometheus integration, see the jmx-export-connector.yml configuration file.
Limitations {#limitations}
- Deletes are not supported.
- Batch size is inherited from the Kafka Consumer properties.
- When using KeeperMap for exactly-once and the offset is changed or re-wound, you need to delete the content from KeeperMap for that specific topic. (See the troubleshooting guide below for more details.)
Performance tuning and throughput optimization {#tuning-performance}
This section covers performance tuning strategies for the ClickHouse Kafka Connect Sink. Performance tuning is essential when dealing with high-throughput use cases or when you need to optimize resource utilization and minimize lag.
When is performance tuning needed? {#when-is-performance-tuning-needed}
Performance tuning is typically required in the following scenarios:
- High-throughput workloads: When processing millions of events per second from Kafka topics
- Consumer lag: When your connector can't keep up with the rate of data production, causing increasing lag
- Resource constraints: When you need to optimize CPU, memory, or network usage
- Multiple topics: When consuming from multiple high-volume topics simultaneously
- Small message sizes: When dealing with many small messages that would benefit from server-side batching
Performance tuning is NOT typically needed when:
- You're processing low to moderate volumes (< 10,000 messages/second)
- Consumer lag is stable and acceptable for your use case
- Default connector settings already meet your throughput requirements
- Your ClickHouse cluster can easily handle the incoming load
Understanding the data flow {#understanding-the-data-flow}
Before tuning, it's important to understand how data flows through the connector:
1. Kafka Connect Framework fetches messages from Kafka topics in the background
2. Connector polls for messages from the framework's internal buffer
3. Connector batches messages based on poll size
4. ClickHouse receives the batched insert via HTTP/S
5. ClickHouse processes the insert (synchronously or asynchronously)
Performance can be optimized at each of these stages.
Kafka Connect batch size tuning {#connect-fetch-vs-connector-poll}
The first level of optimization is controlling how much data the connector receives per batch from Kafka.
Fetch settings {#fetch-settings}
Kafka Connect (the framework) fetches messages from Kafka topics in the background, independent of the connector:
- fetch.min.bytes: Minimum amount of data before the framework passes values to the connector (default: 1 byte)
- fetch.max.bytes: Maximum amount of data to fetch in a single request (default: 52428800 / 50 MB)
- fetch.max.wait.ms: Maximum time to wait before returning data if fetch.min.bytes is not met (default: 500 ms)
Poll settings {#poll-settings}
The connector polls for messages from the framework's buffer:
- max.poll.records: Maximum number of records returned in a single poll (default: 500)
- max.partition.fetch.bytes: Maximum amount of data per partition (default: 1048576 / 1 MB)
Recommended settings for high throughput {#recommended-batch-settings}
For optimal performance with ClickHouse, aim for larger batches:
```properties
# Increase the number of records per poll
consumer.max.poll.records=5000
# Increase the partition fetch size (5 MB)
consumer.max.partition.fetch.bytes=5242880
# Optional: Increase minimum fetch size to wait for more data (1 MB)
consumer.fetch.min.bytes=1048576
# Optional: Reduce wait time if latency is critical
consumer.fetch.max.wait.ms=300
```
Important: Kafka Connect fetch settings represent compressed data, while ClickHouse receives uncompressed data. Balance these settings based on your compression ratio.
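A quick way to reason about this balance is to estimate the uncompressed payload ClickHouse will see from a given fetch limit and compression ratio. A back-of-the-envelope sketch (the 3x ratio below is an assumed example, not a measured value):

```python
def uncompressed_batch_bytes(fetch_bytes: int, compression_ratio: float) -> int:
    """Estimate the uncompressed payload ClickHouse receives for a given
    Kafka fetch size. compression_ratio = uncompressed / compressed."""
    return int(fetch_bytes * compression_ratio)

# Example: a 5 MB partition fetch with ~3x compression becomes ~15 MB of rows
fetch = 5 * 1024 * 1024
print(uncompressed_batch_bytes(fetch, 3.0))  # 15728640
```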
Trade-offs:
- Larger batches = Better ClickHouse ingestion performance, fewer parts, lower overhead
- Larger batches = Higher memory usage, potential increased end-to-end latency
- Too large batches = Risk of timeouts, OutOfMemory errors, or exceeding max.poll.interval.ms
More details: Confluent documentation | Kafka documentation
Asynchronous inserts {#asynchronous-inserts}
Asynchronous inserts are a powerful feature when the connector sends relatively small batches or when you want to further optimize ingestion by shifting batching responsibility to ClickHouse.
When to use async inserts {#when-to-use-async-inserts}
Consider enabling async inserts when:
- Many small batches: Your connector sends frequent small batches (< 1000 rows per batch)
- High concurrency: Multiple connector tasks are writing to the same table
- Distributed deployment: Running many connector instances across different hosts
- Part creation overhead: You're experiencing "too many parts" errors
- Mixed workload: Combining real-time ingestion with query workloads
Do NOT use async inserts when:
- You're already sending large batches (> 10,000 rows per batch) with controlled frequency
- You require immediate data visibility (queries must see data instantly)
- Exactly-once semantics with wait_for_async_insert=0 conflicts with your requirements
- Your use case can benefit from client-side batching improvements instead
How async inserts work {#how-async-inserts-work}
With asynchronous inserts enabled, ClickHouse:
1. Receives the insert query from the connector
2. Writes data to an in-memory buffer (instead of immediately to disk)
3. Returns success to the connector (if wait_for_async_insert=0)
4. Flushes the buffer to disk when one of these conditions is met:
   - Buffer reaches async_insert_max_data_size (default: 10 MB)
   - async_insert_busy_timeout_ms milliseconds elapsed since first insert (default: 1000 ms)
   - Maximum number of queries accumulated (async_insert_max_query_number, default: 100)
This significantly reduces the number of parts created and improves overall throughput.
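The flush conditions above reduce to a simple "any one of three" predicate. The sketch below is an illustrative model of that decision, not ClickHouse source code; the defaults mirror the values listed above:

```python
def should_flush(buffer_bytes: int, ms_since_first_insert: int, query_count: int,
                 max_data_size: int = 10 * 1024 * 1024,
                 busy_timeout_ms: int = 1000,
                 max_query_number: int = 100) -> bool:
    """Model of the async-insert flush decision: flush when any one of the
    three conditions (buffer size, elapsed time, query count) is met."""
    return (buffer_bytes >= max_data_size
            or ms_since_first_insert >= busy_timeout_ms
            or query_count >= max_query_number)

print(should_flush(buffer_bytes=512, ms_since_first_insert=50, query_count=3))    # small, young buffer
print(should_flush(buffer_bytes=512, ms_since_first_insert=1500, query_count=3))  # busy timeout reached
```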
Enabling async inserts {#enabling-async-inserts}
Add async insert settings to the clickhouseSettings configuration parameter:
```json
{
  "name": "clickhouse-connect",
  "config": {
    "connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
    ...
    "clickhouseSettings": "async_insert=1,wait_for_async_insert=1"
  }
}
```
Key settings:
- async_insert=1: Enable asynchronous inserts
- wait_for_async_insert=1 (recommended): Connector waits for data to be flushed to ClickHouse storage before acknowledging. Provides delivery guarantees.
- wait_for_async_insert=0: Connector acknowledges immediately after buffering. Better performance but data may be lost on server crash before flush.
Tuning async insert behavior {#tuning-async-inserts}
You can fine-tune the async insert flush behavior:
```json
"clickhouseSettings": "async_insert=1,wait_for_async_insert=1,async_insert_max_data_size=10485760,async_insert_busy_timeout_ms=1000"
```
Common tuning parameters:
- async_insert_max_data_size (default: 10485760 / 10 MB): Maximum buffer size before flush
- async_insert_busy_timeout_ms (default: 1000): Maximum time (ms) before flush
- async_insert_stale_timeout_ms (default: 0): Time (ms) since last insert before flush
- async_insert_max_query_number (default: 100): Maximum queries before flush
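Since clickhouseSettings is a single comma-joined string, assembling it programmatically (for example in a deployment script) avoids typos. A small sketch; the helper name is ours, not part of the connector:

```python
def build_clickhouse_settings(settings: dict) -> str:
    """Join key/value pairs into the comma-separated string expected by
    the connector's clickhouseSettings option."""
    return ",".join(f"{key}={value}" for key, value in settings.items())

print(build_clickhouse_settings({
    "async_insert": 1,
    "wait_for_async_insert": 1,
    "async_insert_busy_timeout_ms": 1000,
}))
```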
Trade-offs:
- Benefits: Fewer parts, better merge performance, lower CPU overhead, improved throughput under high concurrency
- Considerations: Data not immediately queryable, slightly increased end-to-end latency
- Risks: Data loss on server crash if wait_for_async_insert=0, potential memory pressure with large buffers
Async inserts with exactly-once semantics {#async-inserts-with-exactly-once}
When using exactlyOnce=true with async inserts:
```json
{
  "config": {
    "exactlyOnce": "true",
    "clickhouseSettings": "async_insert=1,wait_for_async_insert=1"
  }
}
```
Important: Always use wait_for_async_insert=1 with exactly-once to ensure offset commits happen only after data is persisted.
For more information about async inserts, see the ClickHouse async inserts documentation.
Connector parallelism {#connector-parallelism}
Increase parallelism to improve throughput:
Tasks per connector {#tasks-per-connector}
```json
"tasks.max": "4"
```
Each task processes a subset of topic partitions. More tasks = more parallelism, but:
- Maximum effective tasks = number of topic partitions
- Each task maintains its own connection to ClickHouse
- More tasks = higher overhead and potential resource contention
Recommendation: Start with tasks.max equal to the number of topic partitions, then adjust based on CPU and throughput metrics.
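The effective-parallelism rule above boils down to a min(): tasks beyond the partition count receive no partitions and sit idle. A one-line sketch of that reasoning:

```python
def effective_tasks(tasks_max: int, partitions: int) -> int:
    """Tasks that actually receive partitions: capped by the partition count."""
    return min(tasks_max, partitions)

# 8 configured tasks against a 6-partition topic -> only 6 do useful work
print(effective_tasks(8, 6))   # 6
print(effective_tasks(4, 12))  # 4
```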
Ignoring partitions when batching {#ignoring-partitions}
By default, the connector batches messages per partition. For higher throughput, you can batch across partitions:
```json
"ignorePartitionsWhenBatching": "true"
```
**Warning**: Only use when exactlyOnce=false. This setting can improve throughput by creating larger batches but loses per-partition ordering guarantees.
Multiple high throughput topics {#multiple-high-throughput-topics}
If your connector is configured to subscribe to multiple topics, you're using topic2TableMap to map topics to tables, and you're experiencing a bottleneck at insertion resulting in consumer lag, consider creating one connector per topic instead.
The main reason this happens is that, currently, batches are inserted into every table serially.
Recommendation: For multiple high-volume topics, deploy one connector instance per topic to maximize parallel insert throughput.
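One way to apply this recommendation is to generate one connector config per topic from a shared template. A deployment-side sketch under assumed names (the abbreviated base config and the naming scheme are illustrative, not prescribed by the connector):

```python
import copy

def per_topic_configs(base_config: dict, topics: list[str]) -> list[dict]:
    """Expand one shared connector config into one connector per topic,
    giving each a distinct name and a single-topic subscription."""
    configs = []
    for topic in topics:
        cfg = copy.deepcopy(base_config)
        cfg["name"] = f"clickhouse-connect-{topic}"
        cfg["config"]["topics"] = topic
        configs.append(cfg)
    return configs

base = {"name": "clickhouse-connect", "config": {"tasks.max": "4"}}
for c in per_topic_configs(base, ["orders", "clicks"]):
    print(c["name"], c["config"]["topics"])
```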
ClickHouse table engine considerations {#table-engine-considerations}
Choose the appropriate ClickHouse table engine for your use case:
- MergeTree: Best for most use cases, balances query and insert performance
- ReplicatedMergeTree: Required for high availability, adds replication overhead
- *MergeTree with proper ORDER BY: Optimize for your query patterns
Settings to consider:
```sql
CREATE TABLE my_table (...)
ENGINE = MergeTree()
ORDER BY (timestamp, id)
SETTINGS
    -- Increase max insert threads for parallel part writing
    max_insert_threads = 4,
    -- Allow inserts with quorum for reliability (ReplicatedMergeTree)
    insert_quorum = 2
```
For connector-level insert settings:
```json
"clickhouseSettings": "insert_quorum=2,insert_quorum_timeout=60000"
```
Connection pooling and timeouts {#connection-pooling}
The connector maintains HTTP connections to ClickHouse. Adjust timeouts for high-latency networks:
```json
"clickhouseSettings": "socket_timeout=300000,connection_timeout=30000"
```
- socket_timeout (default: 30000 ms): Maximum time for read operations
- connection_timeout (default: 10000 ms): Maximum time to establish connection
Increase these values if you experience timeout errors with large batches.
Monitoring and troubleshooting performance {#monitoring-performance}
Monitor these key metrics:
- Consumer lag: Use Kafka monitoring tools to track lag per partition
- Connector metrics: Monitor receivedRecords, recordProcessingTime, taskProcessingTime via JMX (see Monitoring)
- ClickHouse metrics:
  - system.asynchronous_inserts: Monitor async insert buffer usage
  - system.parts: Monitor part count to detect merge issues
  - system.merges: Monitor active merges
  - system.events: Track InsertedRows, InsertedBytes, FailedInsertQuery
Common performance issues:
| Symptom | Possible Cause | Solution |
|---------|----------------|----------|
| High consumer lag | Batches too small | Increase max.poll.records, enable async inserts |
| "Too many parts" errors | Small frequent inserts | Enable async inserts, increase batch size |
| Timeout errors | Large batch size, slow network | Reduce batch size, increase socket_timeout, check network |
| High CPU usage | Too many small parts | Enable async inserts, increase merge settings |
| OutOfMemory errors | Batch size too large | Reduce max.poll.records, max.partition.fetch.bytes |
| Uneven task load | Uneven partition distribution | Rebalance partitions or adjust tasks.max |
Best practices summary {#performance-best-practices}
1. Start with defaults, then measure and tune based on actual performance
2. Prefer larger batches: Aim for 10,000-100,000 rows per insert when possible
3. Use async inserts when sending many small batches or under high concurrency
4. Always use wait_for_async_insert=1 with exactly-once semantics
5. Scale horizontally: Increase tasks.max up to the number of partitions
6. One connector per high-volume topic for maximum throughput
7. Monitor continuously: Track consumer lag, part count, and merge activity
8. Test thoroughly: Always test configuration changes under realistic load before production deployment
Example: High-throughput configuration {#example-high-throughput}
Here's a complete example optimized for high throughput:
```json
{
"name": "clickhouse-high-throughput",
"config": {
"connector.class": "com.clickhouse.kafka.connect.ClickHouseSinkConnector",
"tasks.max": "8",
"topics": "high_volume_topic",
"hostname": "my-clickhouse-host.cloud",
"port": "8443",
"database": "default",
"username": "default",
"password": "<PASSWORD>",
"ssl": "true",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "false",
"exactlyOnce": "false",
"ignorePartitionsWhenBatching": "true",
"consumer.max.poll.records": "10000",
"consumer.max.partition.fetch.bytes": "5242880",
"consumer.fetch.min.bytes": "1048576",
"consumer.fetch.max.wait.ms": "500", | {"source_file": "kafka-clickhouse-connect-sink.md"} | [
-0.027907682582736015,
-0.05756351724267006,
-0.09881588816642761,
0.04869786649942398,
-0.047289032489061356,
-0.006826414726674557,
-0.06453840434551239,
0.030239813029766083,
0.010190235450863838,
0.025127513334155083,
-0.0008938823011703789,
-0.021654821932315826,
-0.057393696159124374,
... |
0184d750-0ce4-45e8-a11e-8f2eb75fafc9 | "consumer.max.poll.records": "10000",
"consumer.max.partition.fetch.bytes": "5242880",
"consumer.fetch.min.bytes": "1048576",
"consumer.fetch.max.wait.ms": "500",
"clickhouseSettings": "async_insert=1,wait_for_async_insert=1,async_insert_max_data_size=16777216,async_insert_busy_timeout_ms=1000,socket_timeout=300000"
}
}
```
This configuration:
- Processes up to 10,000 records per poll
- Batches across partitions for larger inserts
- Uses async inserts with 16 MB buffer
- Runs 8 parallel tasks (match your partition count)
- Optimized for throughput over strict ordering
Troubleshooting {#troubleshooting}
"State mismatch for topic
[someTopic]
partition
[0]
" {#state-mismatch-for-topic-sometopic-partition-0}
This happens when the offset stored in KeeperMap is different from the offset stored in Kafka, usually when a topic has been deleted
or the offset has been manually adjusted.
To fix this, you would need to delete the old values stored for that given topic + partition.
NOTE: This adjustment may have exactly-once implications.
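As a sketch of that cleanup — the state table name (`connect_state`) and the `topic-partition` key format used here are assumptions, so verify both against your deployment before deleting anything:

```sql
-- Inspect the offsets the connector has stored in KeeperMap
SELECT * FROM connect_state;

-- Drop the stale entry for the affected topic + partition
-- (assumed key format 'topic-partition'; confirm against the SELECT above)
DELETE FROM connect_state WHERE `key` = 'someTopic-0';
```

After removing the stale entry, restart the affected connector task so it re-initializes its state from Kafka.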
"What errors will the connector retry?" {#what-errors-will-the-connector-retry}
Right now the focus is on identifying errors that are transient and can be retried, including:
ClickHouseException
- This is a generic exception that can be thrown by ClickHouse.
It is usually thrown when the server is overloaded and the following error codes are considered particularly transient:
3 - UNEXPECTED_END_OF_FILE
159 - TIMEOUT_EXCEEDED
164 - READONLY
202 - TOO_MANY_SIMULTANEOUS_QUERIES
203 - NO_FREE_CONNECTION
209 - SOCKET_TIMEOUT
210 - NETWORK_ERROR
242 - TABLE_IS_READ_ONLY
252 - TOO_MANY_PARTS
285 - TOO_FEW_LIVE_REPLICAS
319 - UNKNOWN_STATUS_OF_INSERT
425 - SYSTEM_ERROR
999 - KEEPER_EXCEPTION
1002 - UNKNOWN_EXCEPTION
SocketTimeoutException
- This is thrown when the socket times out.
UnknownHostException
- This is thrown when the host cannot be resolved.
IOException
- This is thrown when there is a problem with the network.
"All my data is blank/zeroes" {#all-my-data-is-blankzeroes}
Likely the fields in your data don't match the fields in the table - this is especially common with CDC (and the Debezium format).
One common solution is to add the flatten transformation to your connector configuration:
properties
transforms=flatten
transforms.flatten.type=org.apache.kafka.connect.transforms.Flatten$Value
transforms.flatten.delimiter=_
This will transform your data from a nested JSON to a flattened JSON (using
_
as a delimiter). Fields in the table would then follow the "field1_field2_field3" format (i.e. "before_id", "after_id", etc.).
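For illustration, assuming a Debezium-style payload such as `{"before": {"id": ...}, "after": {"id": ..., "name": ...}}`, a matching target table would name its columns after the flattened paths (this table is hypothetical, not from the source):

```sql
CREATE TABLE cdc_flattened
(
    before_id  Nullable(UInt64),
    after_id   UInt64,
    after_name String
)
ENGINE = MergeTree
ORDER BY after_id;
```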
"I want to use my Kafka keys in ClickHouse" {#i-want-to-use-my-kafka-keys-in-clickhouse}
Kafka keys are not stored in the value field by default, but you can use the
KeyToValue
transformation to move the key to the value field (under a new
_key
field name): | {"source_file": "kafka-clickhouse-connect-sink.md"} | [
-0.03885859623551369,
-0.03925781324505806,
-0.06651150435209274,
0.06854146718978882,
-0.030109940096735954,
-0.0822598934173584,
-0.020221315324306488,
-0.023566067218780518,
-0.009490353055298328,
0.01597372442483902,
0.00012997252633795142,
-0.03230506181716919,
0.012861057184636593,
-... |
1cc260b0-a217-442d-b31f-e0f347f5105b | Kafka keys are not stored in the value field by default, but you can use the
KeyToValue
transformation to move the key to the value field (under a new
_key
field name):
properties
transforms=keyToValue
transforms.keyToValue.type=com.clickhouse.kafka.connect.transforms.KeyToValue
transforms.keyToValue.field=_key | {"source_file": "kafka-clickhouse-connect-sink.md"} | [
-0.008035591803491116,
0.00583777017891407,
-0.1475045084953308,
-0.019914304837584496,
-0.07226686179637909,
0.02915799245238304,
-0.00008188136416720226,
-0.003792541567236185,
0.08811561018228531,
0.04888975992798805,
-0.02075185440480709,
-0.030269669368863106,
-0.019876249134540558,
-... |
56a9568a-10d0-4bee-84f0-713aab27b2ee | sidebar_label: 'Integrating Kafka with ClickHouse'
sidebar_position: 1
slug: /integrations/kafka
description: 'Introduction to Kafka with ClickHouse'
title: 'Integrating Kafka with ClickHouse'
keywords: ['Apache Kafka', 'event streaming', 'data pipeline', 'message broker', 'real-time data']
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_ingestion'
Integrating Kafka with ClickHouse
Apache Kafka
is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. ClickHouse provides multiple options to
read from
and
write to
Kafka and other Kafka API-compatible brokers (e.g., Redpanda, Amazon MSK).
Available options {#available-options}
Choosing the right option for your use case depends on multiple factors, including your ClickHouse deployment type, data flow direction and operational requirements.
| Option | Deployment type | Fully managed | Kafka to ClickHouse | ClickHouse to Kafka |
|--------|-----------------|:-------------:|:-------------------:|:-------------------:|
| ClickPipes for Kafka | Cloud, BYOC (coming soon!) | ✅ | ✅ | |
| Kafka Connect Sink | Cloud, BYOC, Self-hosted | | ✅ | |
| Kafka table engine | Cloud, BYOC, Self-hosted | | ✅ | ✅ |
For a more detailed comparison between these options, see
Choosing an option
.
ClickPipes for Kafka {#clickpipes-for-kafka}
ClickPipes
is a managed integration platform that makes ingesting data from a diverse set of sources as simple as clicking a few buttons. Because it is fully managed and purpose-built for production workloads, ClickPipes significantly lowers infrastructure and operational costs, removing the need for external data streaming and ETL tools.
:::tip
This is the recommended option if you're a ClickHouse Cloud user. ClickPipes is
fully managed
and purpose-built to deliver the
best performance
in Cloud environments.
:::
Main features {#clickpipes-for-kafka-main-features}
Optimized for ClickHouse Cloud, delivering blazing-fast performance
Horizontal and vertical scalability for high-throughput workloads
Built-in fault tolerance with configurable replicas and automatic retries
Deployment and management via ClickHouse Cloud UI,
Open API
, or
Terraform
Enterprise-grade security with support for cloud-native authorization (IAM) and private connectivity (PrivateLink)
Supports a wide range of
data sources
, including Confluent Cloud, Amazon MSK, Redpanda Cloud, and Azure Event Hubs
Supports most common serialization formats (JSON, Avro, Protobuf coming soon!)
Getting started {#clickpipes-for-kafka-getting-started} | {"source_file": "index.md"} | [
-0.0137980030849576,
-0.08408242464065552,
-0.10160095989704132,
-0.00008130379137583077,
-0.027131380513310432,
-0.04743889719247818,
-0.06176121532917023,
-0.008283066563308239,
0.00412406399846077,
0.014638290740549564,
-0.00905793160200119,
-0.036113619804382324,
-0.044956330209970474,
... |
0340680d-e19d-476f-b085-32882487c540 | Supports most common serialization formats (JSON, Avro, Protobuf coming soon!)
Getting started {#clickpipes-for-kafka-getting-started}
To get started using ClickPipes for Kafka, see the
reference documentation
or navigate to the
Data Sources
tab in the ClickHouse Cloud UI.
Kafka Connect Sink {#kafka-connect-sink}
Kafka Connect is an open-source framework that works as a centralized data hub for simple data integration between Kafka and other data systems. The
ClickHouse Kafka Connect Sink
connector provides a scalable and highly-configurable option to read data from Apache Kafka and other Kafka API-compatible brokers.
:::tip
This is the recommended option if you prefer
high configurability
or are already a Kafka Connect user.
:::
Main features {#kafka-connect-sink-main-features}
Can be configured to support exactly-once semantics
Supports most common serialization formats (JSON, Avro, Protobuf)
Tested continuously against ClickHouse Cloud
Getting started {#kafka-connect-sink-getting-started}
To get started using the ClickHouse Kafka Connect Sink, see the
reference documentation
.
Kafka table engine {#kafka-table-engine}
The
Kafka table engine
can be used to read data from and write data to Apache Kafka and other Kafka API-compatible brokers. This option is bundled with open-source ClickHouse and is available across all deployment types.
:::tip
This is the recommended option if you're self-hosting ClickHouse and need a
low entry barrier
option, or if you need to
write
data to Kafka.
:::
Main features {#kafka-table-engine-main-features}
Can be used for
reading
and
writing
data
Bundled with open-source ClickHouse
Supports most common serialization formats (JSON, Avro, Protobuf)
Getting started {#kafka-table-engine-getting-started}
To get started using the Kafka table engine, see the
reference documentation
.
Choosing an option {#choosing-an-option} | {"source_file": "index.md"} | [
-0.021840648725628853,
-0.05775143951177597,
-0.10920499265193939,
-0.021822672337293625,
-0.09802433103322983,
0.0059814429841935635,
-0.11189534515142441,
0.003004079917445779,
-0.011922442354261875,
0.028819013386964798,
-0.04854065924882889,
-0.0007108764839358628,
-0.04813733324408531,
... |
c199e518-d6d5-4b6f-aa8f-6dc42819a357 | Getting started {#kafka-table-engine-getting-started}
To get started using the Kafka table engine, see the
reference documentation
.
Choosing an option {#choosing-an-option}
| Product | Strengths | Weaknesses |
|---------|-----------|------------|
| ClickPipes for Kafka | • Scalable architecture for high throughput and low latency<br/>• Built-in monitoring and schema management<br/>• Private networking connections (via PrivateLink)<br/>• Supports SSL/TLS authentication and IAM authorization<br/>• Supports programmatic configuration (Terraform, API endpoints) | • Does not support pushing data to Kafka<br/>• At-least-once semantics |
| Kafka Connect Sink | • Exactly-once semantics<br/>• Allows granular control over data transformation, batching and error handling<br/>• Can be deployed in private networks<br/>• Allows real-time replication from databases not yet supported in ClickPipes via Debezium | • Does not support pushing data to Kafka<br/>• Operationally complex to set up and maintain<br/>• Requires Kafka and Kafka Connect expertise |
| Kafka table engine | • Supports pushing data to Kafka<br/>• Operationally simple to set up | • At-least-once semantics<br/>• Limited horizontal scaling for consumers; cannot be scaled independently from the ClickHouse server<br/>• Limited error handling and debugging options<br/>• Requires Kafka expertise |
Other options {#other-options}
Confluent Cloud
- Confluent Platform provides an option to upload and
run ClickHouse Connector Sink on Confluent Cloud
or use
HTTP Sink Connector for Confluent Platform
that integrates Apache Kafka with an API via HTTP or HTTPS.
Vector
- Vector is a vendor-agnostic data pipeline. With the ability to read from Kafka and send events to ClickHouse, this represents a robust integration option.
JDBC Connect Sink
- The Kafka Connect JDBC Sink connector allows you to export data from Kafka topics to any relational database with a JDBC driver.
Custom code
- Custom code using Kafka and ClickHouse
client libraries
may be appropriate in cases where custom processing of events is required. | {"source_file": "index.md"} | [
-0.06571359932422638,
-0.06733481585979462,
-0.09553341567516327,
0.018504856154322624,
-0.08882986009120941,
-0.045631419867277145,
-0.08708678930997849,
0.01583155058324337,
-0.01063255313783884,
0.09185772389173508,
-0.028375700116157532,
-0.07339780032634735,
0.011879055760800838,
-0.0... |
a01f7444-b43a-47fc-9597-3442aa4e5861 | title: 'Integrating ClickHouse with Kafka using Named Collections'
description: 'How to use named collections to connect clickhouse to kafka'
keywords: ['named collection', 'how to', 'kafka']
slug: /integrations/data-ingestion/kafka/kafka-table-engine-named-collections
doc_type: 'guide'
Integrating ClickHouse with Kafka using named collections
Introduction {#introduction}
In this guide, we will explore how to connect ClickHouse to Kafka using named collections. Using the configuration file for named collections offers several advantages:
- Centralized and easier management of configuration settings.
- Changes to settings can be made without altering SQL table definitions.
- Easier review and troubleshooting of configurations by inspecting a single configuration file.
This guide has been tested on Apache Kafka 3.4.1 and ClickHouse 24.5.1.
Assumptions {#assumptions}
This document assumes you have:
1. A working Kafka cluster.
2. A ClickHouse cluster set up and running.
3. Basic knowledge of SQL and familiarity with ClickHouse and Kafka configurations.
Prerequisites {#prerequisites}
Ensure the user creating the named collection has the necessary access permissions:
xml
<access_management>1</access_management>
<named_collection_control>1</named_collection_control>
<show_named_collections>1</show_named_collections>
<show_named_collections_secrets>1</show_named_collections_secrets>
Refer to the
User Management Guide
for more details on enabling access control.
Configuration {#configuration}
Add the following section to your ClickHouse
config.xml
file:
```xml
<cluster_1>
    <!-- ClickHouse Kafka engine parameters -->
    <kafka_broker_list>c1-kafka-1:9094,c1-kafka-2:9094,c1-kafka-3:9094</kafka_broker_list>
    <kafka_topic_list>cluster_1_clickhouse_topic</kafka_topic_list>
    <kafka_group_name>cluster_1_clickhouse_consumer</kafka_group_name>
    <kafka_format>JSONEachRow</kafka_format>
    <kafka_commit_every_batch>0</kafka_commit_every_batch>
    <kafka_num_consumers>1</kafka_num_consumers>
    <kafka_thread_per_consumer>1</kafka_thread_per_consumer>
<!-- Kafka extended configuration -->
<kafka>
<security_protocol>SASL_SSL</security_protocol>
<enable_ssl_certificate_verification>false</enable_ssl_certificate_verification>
<sasl_mechanism>PLAIN</sasl_mechanism>
<sasl_username>kafka-client</sasl_username>
<sasl_password>kafkapassword1</sasl_password>
<debug>all</debug>
<auto_offset_reset>latest</auto_offset_reset>
</kafka>
</cluster_1>
<cluster_2>
<!-- ClickHouse Kafka engine parameters -->
<kafka_broker_list>c2-kafka-1:29094,c2-kafka-2:29094,c2-kafka-3:29094</kafka_broker_list>
<kafka_topic_list>cluster_2_clickhouse_topic</kafka_topic_list>
<kafka_group_name>cluster_2_clickhouse_consumer</kafka_group_name>
<kafka_format>JSONEachRow</kafka_format>
<kafka_commit_every_batch>0</kafka_commit_every_batch>
<kafka_num_consumers>1</kafka_num_consumers>
<kafka_thread_per_consumer>1</kafka_thread_per_consumer> | {"source_file": "kafka-table-engine-named-collections.md"} | [
-0.02725212834775448,
-0.0873861014842987,
-0.11042459309101105,
0.01606658101081848,
-0.11510404199361801,
-0.04079622030258179,
-0.03391949459910393,
-0.03255954012274742,
-0.036912426352500916,
0.020175788551568985,
0.026992036029696465,
-0.08578946441411972,
0.035556573420763016,
-0.03... |
c322f3a5-0506-45a2-b42a-cdad484f98d5 | <!-- Kafka extended configuration -->
<kafka>
<security_protocol>SASL_SSL</security_protocol>
<enable_ssl_certificate_verification>false</enable_ssl_certificate_verification>
<sasl_mechanism>PLAIN</sasl_mechanism>
<sasl_username>kafka-client</sasl_username>
<sasl_password>kafkapassword2</sasl_password>
<debug>all</debug>
<auto_offset_reset>latest</auto_offset_reset>
</kafka>
</cluster_2>
```
Configuration notes {#configuration-notes}
Adjust Kafka addresses and related configurations to match your Kafka cluster setup.
The section before
<kafka>
contains ClickHouse Kafka engine parameters. For a full list of parameters, refer to the
Kafka engine parameters
.
The section within
<kafka>
contains extended Kafka configuration options. For more options, refer to the
librdkafka configuration
.
This example uses the
SASL_SSL
security protocol and
PLAIN
mechanism. Adjust these settings based on your Kafka cluster configuration.
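Before moving on, you can confirm that ClickHouse picked up both collections from `config.xml` — a quick sanity check that requires the `show_named_collections` permission granted above:

```sql
SELECT name
FROM system.named_collections;
```

The result should include `cluster_1` and `cluster_2`.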
Creating tables and databases {#creating-tables-and-databases}
Create the necessary databases and tables on your ClickHouse cluster. If you run ClickHouse as a single node, omit the cluster part of the SQL command and use any other engine instead of
ReplicatedMergeTree
.
Create the database {#create-the-database}
sql
CREATE DATABASE kafka_testing ON CLUSTER LAB_CLICKHOUSE_CLUSTER;
Create Kafka tables {#create-kafka-tables}
Create the first Kafka table for the first Kafka cluster:
sql
CREATE TABLE kafka_testing.first_kafka_table ON CLUSTER LAB_CLICKHOUSE_CLUSTER
(
`id` UInt32,
`first_name` String,
`last_name` String
)
ENGINE = Kafka(cluster_1);
Create the second Kafka table for the second Kafka cluster:
sql
CREATE TABLE kafka_testing.second_kafka_table ON CLUSTER LAB_CLICKHOUSE_CLUSTER
(
`id` UInt32,
`first_name` String,
`last_name` String
)
ENGINE = Kafka(cluster_2);
Create replicated tables {#create-replicated-tables}
Create a table for the first Kafka table:
sql
CREATE TABLE kafka_testing.first_replicated_table ON CLUSTER LAB_CLICKHOUSE_CLUSTER
(
`id` UInt32,
`first_name` String,
`last_name` String
) ENGINE = ReplicatedMergeTree()
ORDER BY id;
Create a table for the second Kafka table:
sql
CREATE TABLE kafka_testing.second_replicated_table ON CLUSTER LAB_CLICKHOUSE_CLUSTER
(
`id` UInt32,
`first_name` String,
`last_name` String
) ENGINE = ReplicatedMergeTree()
ORDER BY id;
Create materialized views {#create-materialized-views}
Create a materialized view to insert data from the first Kafka table into the first replicated table:
sql
CREATE MATERIALIZED VIEW kafka_testing.cluster_1_mv ON CLUSTER LAB_CLICKHOUSE_CLUSTER TO first_replicated_table AS
SELECT
id,
first_name,
last_name
FROM first_kafka_table;
Create a materialized view to insert data from the second Kafka table into the second replicated table: | {"source_file": "kafka-table-engine-named-collections.md"} | [
-0.0708015039563179,
-0.042995721101760864,
-0.12245254963636398,
0.012624693103134632,
-0.03194427490234375,
0.001102047273889184,
-0.08234959095716476,
-0.04017854854464531,
0.010853976011276245,
-0.028059499338269234,
0.008640149608254433,
-0.051453378051519394,
0.026667559519410133,
-0... |
28b09683-3bbf-41d6-8cc8-8366afd1b7e6 | Create a materialized view to insert data from the second Kafka table into the second replicated table:
sql
CREATE MATERIALIZED VIEW kafka_testing.cluster_2_mv ON CLUSTER LAB_CLICKHOUSE_CLUSTER TO second_replicated_table AS
SELECT
id,
first_name,
last_name
FROM second_kafka_table;
Verifying the setup {#verifying-the-setup}
You should now see the relative consumer groups on your Kafka clusters:
-
cluster_1_clickhouse_consumer
on
cluster_1
-
cluster_2_clickhouse_consumer
on
cluster_2
Run the following queries on any of your ClickHouse nodes to see the data in both tables:
sql
SELECT * FROM first_replicated_table LIMIT 10;
sql
SELECT * FROM second_replicated_table LIMIT 10;
Note {#note}
In this guide, the data ingested in both Kafka topics is the same. In your case, they would differ. You can add as many Kafka clusters as you want.
Example output:
sql
ββidββ¬βfirst_nameββ¬βlast_nameββ
β 0 β FirstName0 β LastName0 β
β 1 β FirstName1 β LastName1 β
β 2 β FirstName2 β LastName2 β
ββββββ΄βββββββββββββ΄ββββββββββββ
This completes the setup for integrating ClickHouse with Kafka using named collections. By centralizing Kafka configurations in the ClickHouse
config.xml
file, you can manage and adjust settings more easily, ensuring a streamlined and efficient integration. | {"source_file": "kafka-table-engine-named-collections.md"} | [
-0.01169819850474596,
-0.14169225096702576,
-0.06728846579790115,
0.040696095675230026,
-0.026820961385965347,
-0.047968246042728424,
-0.05959443002939224,
-0.049368150532245636,
-0.05359658598899841,
-0.025492314249277115,
0.07308277487754822,
-0.12702228128910065,
0.0791192352771759,
-0.... |
6ca637df-e316-4476-a8b8-5c690b6b317c | sidebar_label: 'Kafka Connect JDBC Connector'
sidebar_position: 4
slug: /integrations/kafka/kafka-connect-jdbc
description: 'Using JDBC Connector Sink with Kafka Connect and ClickHouse'
title: 'JDBC Connector'
doc_type: 'guide'
keywords: ['kafka', 'kafka connect', 'jdbc', 'integration', 'data pipeline']
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
JDBC connector
:::note
This connector should only be used if your data is simple and consists of primitive data types e.g., int. ClickHouse specific types such as maps are not supported.
:::
For our examples, we utilize the Confluent distribution of Kafka Connect.
Below we describe a simple installation, pulling messages from a single Kafka topic and inserting rows into a ClickHouse table. We recommend Confluent Cloud, which offers a generous free tier for those who do not have a Kafka environment.
Note that a schema is required for the JDBC Connector (you cannot use plain JSON or CSV with the JDBC connector). Whilst the schema can be encoded in each message, it is
strongly advised to use the Confluent schema registry
to avoid the associated overhead. The insertion script provided automatically infers a schema from the messages and inserts it into the registry - this script can thus be reused for other datasets. Kafka's keys are assumed to be Strings. Further details on Kafka schemas can be found
here
.
License {#license}
The JDBC Connector is distributed under the
Confluent Community License
Steps {#steps}
Gather your connection details {#gather-your-connection-details}
1. Install Kafka Connect and Connector {#1-install-kafka-connect-and-connector}
We assume you have downloaded the Confluent package and installed it locally. Follow the installation instructions for installing the connector as documented
here
.
If you use the confluent-hub installation method, your local configuration files will be updated.
For sending data to ClickHouse from Kafka, we use the Sink component of the connector.
2. Download and install the JDBC Driver {#2-download-and-install-the-jdbc-driver}
Download and install the ClickHouse JDBC driver
clickhouse-jdbc-<version>-shaded.jar
from
here
. Install this into Kafka Connect following the details
here
. Other drivers may work but have not been tested.
:::note
Common Issue: the docs suggest copying the jar to
share/java/kafka-connect-jdbc/
. If you experience issues with Connect finding the driver, copy the driver to
share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/
. Or modify
plugin.path
to include the driver - see below.
:::
3. Prepare configuration {#3-prepare-configuration}
Follow
these instructions
for setting up Connect relevant to your installation type, noting the differences between a standalone and distributed cluster. If using Confluent Cloud, the distributed setup is relevant.
-0.024051515385508537,
-0.0039250110276043415,
-0.073161281645298,
-0.00822276808321476,
-0.08412925153970718,
-0.004606687929481268,
-0.028251541778445244,
-0.0063509889878332615,
-0.047299306839704514,
0.012350384145975113,
-0.025232061743736267,
-0.06682415306568146,
0.047575004398822784,... |
3c699116-2332-4925-aaed-afa746e529ee | The following parameters are relevant to using the JDBC connector with ClickHouse. A full parameter list can be found
here
:
_connection.url_
- this should take the form of
jdbc:clickhouse://<clickhouse host>:<clickhouse http port>/<target database>
connection.user
- a user with write access to the target database
table.name.format
- ClickHouse table to insert data. This must exist.
batch.size
- The number of rows to send in a single batch. Ensure this is set to an appropriately large number. Per ClickHouse
recommendations
a value of 1000 should be considered a minimum.
tasks.max
- The JDBC Sink connector supports running one or more tasks. This can be used to increase performance. Along with batch size this represents your primary means of improving performance.
value.converter.schemas.enable
- Set to false if using a schema registry, true if you embed your schemas in the messages.
value.converter
- Set according to your datatype e.g. for JSON,
io.confluent.connect.json.JsonSchemaConverter
.
key.converter
- Set to
org.apache.kafka.connect.storage.StringConverter
. We utilise String keys.
pk.mode
- Not relevant to ClickHouse. Set to none.
auto.create
- Not supported and must be false.
auto.evolve
- We recommend false for this setting although it may be supported in the future.
insert.mode
- Set to "insert". Other modes are not currently supported.
key.converter
- Set according to the types of your keys.
value.converter
- Set based on the type of data on your topic. This data must have a supported schema - JSON, Avro or Protobuf formats.
If using our sample dataset for testing, ensure the following are set:
value.converter.schemas.enable
- Set to false as we utilize a schema registry. Set to true if you are embedding the schema in each message.
key.converter
- Set to "org.apache.kafka.connect.storage.StringConverter". We utilise String keys.
value.converter
- Set "io.confluent.connect.json.JsonSchemaConverter".
value.converter.schema.registry.url
- Set to the schema server url along with the credentials for the schema server via the parameter
value.converter.schema.registry.basic.auth.user.info
.
Example configuration files for the Github sample data can be found
here
, assuming Connect is run in standalone mode and Kafka is hosted in Confluent Cloud.
4. Create the ClickHouse table {#4-create-the-clickhouse-table}
Ensure the table has been created, dropping it if it already exists from previous examples. An example compatible with the reduced Github dataset is shown below. Note the absence of any Array or Map types, which are currently not supported:
0.019200848415493965,
-0.06047997996211052,
-0.07495278865098953,
0.04545322805643082,
-0.1349593997001648,
-0.05304010957479477,
-0.03693576157093048,
0.0286302138119936,
-0.0906619057059288,
-0.014436108060181141,
-0.07452907413244247,
-0.017837602645158768,
0.010603221133351326,
-0.0431... |
0a378552-62df-4dc4-92fc-5780d16a0cce | sql
CREATE TABLE github
(
file_time DateTime,
event_type Enum('CommitCommentEvent' = 1, 'CreateEvent' = 2, 'DeleteEvent' = 3, 'ForkEvent' = 4, 'GollumEvent' = 5, 'IssueCommentEvent' = 6, 'IssuesEvent' = 7, 'MemberEvent' = 8, 'PublicEvent' = 9, 'PullRequestEvent' = 10, 'PullRequestReviewCommentEvent' = 11, 'PushEvent' = 12, 'ReleaseEvent' = 13, 'SponsorshipEvent' = 14, 'WatchEvent' = 15, 'GistEvent' = 16, 'FollowEvent' = 17, 'DownloadEvent' = 18, 'PullRequestReviewEvent' = 19, 'ForkApplyEvent' = 20, 'Event' = 21, 'TeamAddEvent' = 22),
actor_login LowCardinality(String),
repo_name LowCardinality(String),
created_at DateTime,
updated_at DateTime,
action Enum('none' = 0, 'created' = 1, 'added' = 2, 'edited' = 3, 'deleted' = 4, 'opened' = 5, 'closed' = 6, 'reopened' = 7, 'assigned' = 8, 'unassigned' = 9, 'labeled' = 10, 'unlabeled' = 11, 'review_requested' = 12, 'review_request_removed' = 13, 'synchronize' = 14, 'started' = 15, 'published' = 16, 'update' = 17, 'create' = 18, 'fork' = 19, 'merged' = 20),
comment_id UInt64,
path String,
ref LowCardinality(String),
ref_type Enum('none' = 0, 'branch' = 1, 'tag' = 2, 'repository' = 3, 'unknown' = 4),
creator_user_login LowCardinality(String),
number UInt32,
title String,
state Enum('none' = 0, 'open' = 1, 'closed' = 2),
assignee LowCardinality(String),
closed_at DateTime,
merged_at DateTime,
merge_commit_sha String,
merged_by LowCardinality(String),
review_comments UInt32,
member_login LowCardinality(String)
) ENGINE = MergeTree ORDER BY (event_type, repo_name, created_at)
5. Start Kafka Connect {#5-start-kafka-connect}
Start Kafka Connect in either
standalone
or
distributed
mode.
bash
./bin/connect-standalone connect.properties.ini github-jdbc-sink.properties.ini
6. Add data to Kafka {#6-add-data-to-kafka}
Insert messages to Kafka using the
script and config
provided. You will need to modify github.config to include your Kafka credentials. The script is currently configured for use with Confluent Cloud.
bash
python producer.py -c github.config
This script can be used to insert any ndjson file into a Kafka topic. This will attempt to infer a schema for you automatically. The sample config provided will only insert 10k messages -
modify here
if required. This configuration also removes any incompatible Array fields from the dataset during insertion to Kafka.
This is required for the JDBC connector to convert messages to INSERT statements. If you are using your own data, ensure you either insert a schema with every message (setting value.converter.schemas.enable to true) or ensure your client publishes messages referencing a schema to the registry.
Kafka Connect should begin consuming messages and inserting rows into ClickHouse. Note that warnings regarding "[JDBC Compliant Mode] Transaction is not supported." are expected and can be ignored.
0.0731467753648758,
-0.07110010087490082,
-0.06905855238437653,
0.02268429659307003,
-0.039067573845386505,
-0.0020959125831723213,
0.08467341214418411,
0.038164980709552765,
0.0015396819217130542,
0.10248646140098572,
0.09039586782455444,
-0.11955364793539047,
0.043673016130924225,
-0.037... |
8429f25d-9eb5-423b-9474-d036c4c1d8bd | Kafka Connect should begin consuming messages and inserting rows into ClickHouse. Note that warnings regards "[JDBC Compliant Mode] Transaction is not supported." are expected and can be ignored.
A simple read on the target table "Github" should confirm data insertion.
sql
SELECT count() FROM default.github;
response
| count() |
| :--- |
| 10000 |
Recommended further reading {#recommended-further-reading}
Kafka Sink Configuration Parameters
Kafka Connect Deep Dive β JDBC Source Connector
Kafka Connect JDBC Sink deep-dive: Working with Primary Keys
Kafka Connect in Action: JDBC Sink
- for those who prefer to watch over read.
Kafka Connect Deep Dive β Converters and Serialization Explained | {"source_file": "kafka-connect-jdbc.md"} | [
0.0009523826884105802,
-0.10572344064712524,
-0.09385060518980026,
0.042407404631376266,
-0.13893182575702667,
0.00043072018888778985,
-0.026945356279611588,
-0.02716996893286705,
0.012038683518767357,
0.0189314316958189,
-0.004755929112434387,
-0.04534922167658806,
0.0334443561732769,
-0.... |
1e6faf1d-1e15-459c-b2e2-2a6840e906e1 | sidebar_label: 'Kafka Table Engine'
sidebar_position: 5
slug: /integrations/kafka/kafka-table-engine
description: 'Using the Kafka Table Engine'
title: 'Using the Kafka table engine'
doc_type: 'guide'
keywords: ['kafka', 'table engine', 'streaming', 'real-time', 'message queue']
import Image from '@theme/IdealImage';
import kafka_01 from '@site/static/images/integrations/data-ingestion/kafka/kafka_01.png';
import kafka_02 from '@site/static/images/integrations/data-ingestion/kafka/kafka_02.png';
import kafka_03 from '@site/static/images/integrations/data-ingestion/kafka/kafka_03.png';
import kafka_04 from '@site/static/images/integrations/data-ingestion/kafka/kafka_04.png';
Using the Kafka table engine
The Kafka table engine can be used to
read
data from
and
write
data to
Apache Kafka and other Kafka API-compatible brokers (e.g., Redpanda, Amazon MSK).
Kafka to ClickHouse {#kafka-to-clickhouse}
:::note
If you're on ClickHouse Cloud, we recommend using
ClickPipes
instead. ClickPipes natively supports private network connections, scaling ingestion and cluster resources independently, and comprehensive monitoring for streaming Kafka data into ClickHouse.
:::
To use the Kafka table engine, you should be broadly familiar with
ClickHouse materialized views
.
Overview {#overview}
Initially, we focus on the most common use case: using the Kafka table engine to insert data into ClickHouse from Kafka.
The Kafka table engine allows ClickHouse to read from a Kafka topic directly. Whilst useful for viewing messages on a topic, the engine by design only permits one-time retrieval, i.e. when a query is issued to the table, it consumes data from the queue and increases the consumer offset before returning results to the caller. Data cannot, in effect, be re-read without resetting these offsets.
To persist this data from a read of the table engine, we need a means of capturing the data and inserting it into another table. Trigger-based materialized views natively provide this functionality. A materialized view initiates a read on the table engine, receiving batches of documents. The TO clause determines the destination of the data - typically a table of the
MergeTree family
. This process is visualized below:
Steps {#steps}
1. Prepare {#1-prepare}
If you have data populated on a target topic, you can adapt the following for use in your dataset. Alternatively, a sample Github dataset is provided
here
. This dataset is used in the examples below and uses a reduced schema and subset of the rows (specifically, we limit to Github events concerning the
ClickHouse repository
), compared to the full dataset available
here
, for brevity. This is still sufficient for most of the queries
published with the dataset
to work.
2. Configure ClickHouse {#2-configure-clickhouse} | {"source_file": "kafka-table-engine.md"}
eb6e9007-bf35-4809-a64c-7ee0fb697f28 | 2. Configure ClickHouse {#2-configure-clickhouse}
This step is required if you are connecting to a secure Kafka. These settings cannot be passed through the SQL DDL commands and must be configured in the ClickHouse config.xml. We assume you are connecting to a SASL secured instance. This is the simplest method when interacting with Confluent Cloud.
xml
<clickhouse>
<kafka>
<sasl_username>username</sasl_username>
<sasl_password>password</sasl_password>
<security_protocol>sasl_ssl</security_protocol>
<sasl_mechanisms>PLAIN</sasl_mechanisms>
</kafka>
</clickhouse>
Either place the above snippet inside a new file under your conf.d/ directory or merge it into existing configuration files. For settings that can be configured, see
here
.
We're also going to create a database called
KafkaEngine
to use in this tutorial:
sql
CREATE DATABASE KafkaEngine;
Once you've created the database, you'll need to switch over to it:
sql
USE KafkaEngine;
3. Create the destination table {#3-create-the-destination-table}
Prepare your destination table. In the example below we use the reduced GitHub schema for purposes of brevity. Note that although we use a MergeTree table engine, this example could easily be adapted for any member of the
MergeTree family
. | {"source_file": "kafka-table-engine.md"}
cf92c940-d1d1-4b4a-9f77-5042b0e69242 | sql
CREATE TABLE github
(
file_time DateTime,
event_type Enum('CommitCommentEvent' = 1, 'CreateEvent' = 2, 'DeleteEvent' = 3, 'ForkEvent' = 4, 'GollumEvent' = 5, 'IssueCommentEvent' = 6, 'IssuesEvent' = 7, 'MemberEvent' = 8, 'PublicEvent' = 9, 'PullRequestEvent' = 10, 'PullRequestReviewCommentEvent' = 11, 'PushEvent' = 12, 'ReleaseEvent' = 13, 'SponsorshipEvent' = 14, 'WatchEvent' = 15, 'GistEvent' = 16, 'FollowEvent' = 17, 'DownloadEvent' = 18, 'PullRequestReviewEvent' = 19, 'ForkApplyEvent' = 20, 'Event' = 21, 'TeamAddEvent' = 22),
actor_login LowCardinality(String),
repo_name LowCardinality(String),
created_at DateTime,
updated_at DateTime,
action Enum('none' = 0, 'created' = 1, 'added' = 2, 'edited' = 3, 'deleted' = 4, 'opened' = 5, 'closed' = 6, 'reopened' = 7, 'assigned' = 8, 'unassigned' = 9, 'labeled' = 10, 'unlabeled' = 11, 'review_requested' = 12, 'review_request_removed' = 13, 'synchronize' = 14, 'started' = 15, 'published' = 16, 'update' = 17, 'create' = 18, 'fork' = 19, 'merged' = 20),
comment_id UInt64,
path String,
ref LowCardinality(String),
ref_type Enum('none' = 0, 'branch' = 1, 'tag' = 2, 'repository' = 3, 'unknown' = 4),
creator_user_login LowCardinality(String),
number UInt32,
title String,
labels Array(LowCardinality(String)),
state Enum('none' = 0, 'open' = 1, 'closed' = 2),
assignee LowCardinality(String),
assignees Array(LowCardinality(String)),
closed_at DateTime,
merged_at DateTime,
merge_commit_sha String,
requested_reviewers Array(LowCardinality(String)),
merged_by LowCardinality(String),
review_comments UInt32,
member_login LowCardinality(String)
) ENGINE = MergeTree ORDER BY (event_type, repo_name, created_at);
4. Create and populate the topic {#4-create-and-populate-the-topic}
Next, we're going to create a topic. There are several tools that we can use to do this. If we're running Kafka locally on our machine or inside a Docker container,
RPK
works well. We can create a topic called
github
with 5 partitions by running the following command:
bash
rpk topic create -p 5 github --brokers <host>:<port>
If we're running Kafka on the Confluent Cloud, we might prefer to use the
Confluent CLI
:
bash
confluent kafka topic create --if-not-exists github
Now we need to populate this topic with some data, which we'll do using
kcat
. We can run a command similar to the following if we're running Kafka locally with authentication disabled:
bash
cat github_all_columns.ndjson |
kcat -P \
-b <host>:<port> \
-t github
Or the following if our Kafka cluster uses SASL to authenticate:
bash
cat github_all_columns.ndjson |
kcat -P \
-b <host>:<port> \
-t github \
-X security.protocol=sasl_ssl \
-X sasl.mechanisms=PLAIN \
-X sasl.username=<username> \
-X sasl.password=<password> | {"source_file": "kafka-table-engine.md"}
be322e45-d2b6-46e9-a3ea-af7ea2a80f22 | The dataset contains 200,000 rows, so it should be ingested in just a few seconds. If you want to work with a larger dataset, take a look at
the large datasets section
of the
ClickHouse/kafka-samples
GitHub repository.
5. Create the Kafka table engine {#5-create-the-kafka-table-engine}
The example below creates a table engine with the same schema as the merge tree table. This isn't strictly required, as you can have alias or ephemeral columns in the target table. The settings, however, are important - note the use of
JSONEachRow
as the format for consuming JSON from a Kafka topic. The values
github
and
clickhouse
represent the topic name and the consumer group name, respectively. The topic argument can actually be a comma-separated list of values.
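As a sketch, the engine clause below consumes from two topics at once; `github_prs` is a hypothetical second topic used purely for illustration:

```sql
-- Fragment only: the topic argument accepts a comma-separated list.
-- 'github_prs' is a hypothetical topic name, not part of this tutorial.
ENGINE = Kafka('kafka_host:9092', 'github,github_prs', 'clickhouse', 'JSONEachRow')
```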
sql
CREATE TABLE github_queue
(
file_time DateTime,
event_type Enum('CommitCommentEvent' = 1, 'CreateEvent' = 2, 'DeleteEvent' = 3, 'ForkEvent' = 4, 'GollumEvent' = 5, 'IssueCommentEvent' = 6, 'IssuesEvent' = 7, 'MemberEvent' = 8, 'PublicEvent' = 9, 'PullRequestEvent' = 10, 'PullRequestReviewCommentEvent' = 11, 'PushEvent' = 12, 'ReleaseEvent' = 13, 'SponsorshipEvent' = 14, 'WatchEvent' = 15, 'GistEvent' = 16, 'FollowEvent' = 17, 'DownloadEvent' = 18, 'PullRequestReviewEvent' = 19, 'ForkApplyEvent' = 20, 'Event' = 21, 'TeamAddEvent' = 22),
actor_login LowCardinality(String),
repo_name LowCardinality(String),
created_at DateTime,
updated_at DateTime,
action Enum('none' = 0, 'created' = 1, 'added' = 2, 'edited' = 3, 'deleted' = 4, 'opened' = 5, 'closed' = 6, 'reopened' = 7, 'assigned' = 8, 'unassigned' = 9, 'labeled' = 10, 'unlabeled' = 11, 'review_requested' = 12, 'review_request_removed' = 13, 'synchronize' = 14, 'started' = 15, 'published' = 16, 'update' = 17, 'create' = 18, 'fork' = 19, 'merged' = 20),
comment_id UInt64,
path String,
ref LowCardinality(String),
ref_type Enum('none' = 0, 'branch' = 1, 'tag' = 2, 'repository' = 3, 'unknown' = 4),
creator_user_login LowCardinality(String),
number UInt32,
title String,
labels Array(LowCardinality(String)),
state Enum('none' = 0, 'open' = 1, 'closed' = 2),
assignee LowCardinality(String),
assignees Array(LowCardinality(String)),
closed_at DateTime,
merged_at DateTime,
merge_commit_sha String,
requested_reviewers Array(LowCardinality(String)),
merged_by LowCardinality(String),
review_comments UInt32,
member_login LowCardinality(String)
)
ENGINE = Kafka('kafka_host:9092', 'github', 'clickhouse',
'JSONEachRow') SETTINGS kafka_thread_per_consumer = 0, kafka_num_consumers = 1;
We discuss engine settings and performance tuning below. At this point, a simple select on the table
github_queue
should read some rows. Note that this will move the consumer offsets forward, preventing these rows from being re-read without a
reset
. Note the LIMIT clause and the required setting
stream_like_engine_allow_direct_select. | {"source_file": "kafka-table-engine.md"}
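Such a direct read might look like the sketch below; be aware that it consumes messages, advancing the committed offsets:

```sql
-- Advances consumer offsets: rows returned here will not be re-delivered
-- to the materialized view.
SELECT actor_login, repo_name, created_at
FROM github_queue
LIMIT 5
SETTINGS stream_like_engine_allow_direct_select = 1;
```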
d7eb7ee0-1c43-4107-8895-45cd783c1678 | 6. Create the materialized view {#6-create-the-materialized-view}
The materialized view will connect the two previously created tables, reading data from the Kafka table engine and inserting it into the target merge tree table. We could perform a number of data transformations here; we will do a simple read and insert. The use of * assumes column names are identical (case sensitive).
sql
CREATE MATERIALIZED VIEW github_mv TO github AS
SELECT *
FROM github_queue;
At the point of creation, the materialized view connects to the Kafka engine and commences reading: inserting rows into the target table. This process will continue indefinitely, with subsequent message inserts into Kafka being consumed. Feel free to re-run the insertion script to insert further messages to Kafka.
7. Confirm rows have been inserted {#7-confirm-rows-have-been-inserted}
Confirm data exists in the target table:
sql
SELECT count() FROM github;
You should see 200,000 rows:
response
┌─count()─┐
│  200000 │
└─────────┘
Common operations {#common-operations}
Stopping & restarting message consumption {#stopping--restarting-message-consumption}
To stop message consumption, you can detach the Kafka engine table:
sql
DETACH TABLE github_queue;
This will not impact the offsets of the consumer group. To restart consumption and continue from the previous offset, reattach the table:
sql
ATTACH TABLE github_queue;
Adding Kafka metadata {#adding-kafka-metadata}
It can be useful to keep track of the metadata from the original Kafka messages after it's been ingested into ClickHouse. For example, we may want to know how much of a specific topic or partition we have consumed. For this purpose, the Kafka table engine exposes several
virtual columns
. These can be persisted as columns in our target table by modifying our schema and materialized view's select statement.
First, we perform the stop operation described above before adding columns to our target table.
sql
DETACH TABLE github_queue;
Below we add information columns to identify the source topic and the partition from which the row originated.
sql
ALTER TABLE github
ADD COLUMN topic String,
ADD COLUMN partition UInt64;
Next, we need to ensure virtual columns are mapped as required.
Virtual columns are prefixed with
_
.
A complete listing of virtual columns can be found
here
.
To update our table with the virtual columns, we'll need to drop the materialized view, re-attach the Kafka engine table, and re-create the materialized view.
sql
DROP VIEW github_mv;
sql
ATTACH TABLE github_queue;
sql
CREATE MATERIALIZED VIEW github_mv TO github AS
SELECT *, _topic AS topic, _partition AS partition
FROM github_queue;
Newly consumed rows should have the metadata.
sql
SELECT actor_login, event_type, created_at, topic, partition
FROM github
LIMIT 10;
The result looks like: | {"source_file": "kafka-table-engine.md"}
59edb4aa-5506-41f3-beea-27169f6c6457 | The result looks like:
| actor_login | event_type | created_at | topic | partition |
| :--- | :--- | :--- | :--- | :--- |
| IgorMinar | CommitCommentEvent | 2011-02-12 02:22:00 | github | 0 |
| queeup | CommitCommentEvent | 2011-02-12 02:23:23 | github | 0 |
| IgorMinar | CommitCommentEvent | 2011-02-12 02:23:24 | github | 0 |
| IgorMinar | CommitCommentEvent | 2011-02-12 02:24:50 | github | 0 |
| IgorMinar | CommitCommentEvent | 2011-02-12 02:25:20 | github | 0 |
| dapi | CommitCommentEvent | 2011-02-12 06:18:36 | github | 0 |
| sourcerebels | CommitCommentEvent | 2011-02-12 06:34:10 | github | 0 |
| jamierumbelow | CommitCommentEvent | 2011-02-12 12:21:40 | github | 0 |
| jpn | CommitCommentEvent | 2011-02-12 12:24:31 | github | 0 |
| Oxonium | CommitCommentEvent | 2011-02-12 12:31:28 | github | 0 |
Modify Kafka engine settings {#modify-kafka-engine-settings}
We recommend dropping the Kafka engine table and recreating it with the new settings. The materialized view does not need to be modified during this process - message consumption will resume once the Kafka engine table is recreated.
Debugging Issues {#debugging-issues}
Errors such as authentication issues are not reported in responses to Kafka engine DDL. For diagnosing issues, we recommend using the main ClickHouse log file clickhouse-server.err.log. Further trace logging for the underlying Kafka client library
librdkafka
can be enabled through configuration.
xml
<kafka>
<debug>all</debug>
</kafka>
Handling malformed messages {#handling-malformed-messages}
Kafka is often used as a "dumping ground" for data. This leads to topics containing mixed message formats and inconsistent field names. Avoid this and utilize Kafka features such Kafka Streams or ksqlDB to ensure messages are well-formed and consistent before insertion into Kafka. If these options are not possible, ClickHouse has some features that can help.
Treat the message field as a String. Functions can be used in the materialized view statement to perform cleansing and casting if required. This should not represent a production solution but might assist in one-off ingestion.
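A sketch of this approach, assuming a hypothetical engine table `github_raw_queue` that exposes each message as a single String column named `message` (e.g. via a line-oriented format); the extracted field names mirror the GitHub schema:

```sql
-- Hypothetical: github_raw_queue and its 'message' column are assumptions,
-- not objects created earlier in this guide.
CREATE MATERIALIZED VIEW github_raw_mv TO github AS
SELECT
    JSONExtractString(message, 'actor_login') AS actor_login,
    JSONExtractString(message, 'repo_name')   AS repo_name,
    parseDateTimeBestEffort(JSONExtractString(message, 'created_at')) AS created_at
FROM github_raw_queue;
```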
If you're consuming JSON from a topic, using the JSONEachRow format, use the setting
input_format_skip_unknown_fields
. When writing data, by default, ClickHouse throws an exception if input data contains columns that do not exist in the target table. However, if this option is enabled, these excess columns will be ignored. Again this is not a production-level solution and might confuse others. | {"source_file": "kafka-table-engine.md"}
1b2946b5-c4ab-42df-8153-507ba445b4a4 | Consider the setting
kafka_skip_broken_messages
. This requires the user to specify the level of tolerance per block for malformed messages - considered in the context of kafka_max_block_size. Within this tolerance, malformed messages are skipped; if it is exceeded (measured in absolute messages), the usual exception behaviour is restored.
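Both tolerance settings can be attached to the engine table; the fragment below is illustrative only, and the values are not recommendations:

```sql
-- Fragment only: skip up to 100 malformed messages per block and ignore
-- JSON fields that have no matching column.
ENGINE = Kafka('kafka_host:9092', 'github', 'clickhouse', 'JSONEachRow')
SETTINGS kafka_skip_broken_messages = 100,
         input_format_skip_unknown_fields = 1
```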
Delivery Semantics and challenges with duplicates {#delivery-semantics-and-challenges-with-duplicates}
The Kafka table engine has at-least-once semantics. Duplicates are possible in several known rare circumstances. For example, messages could be read from Kafka and successfully inserted into ClickHouse. Before the new offset can be committed, the connection to Kafka is lost. A retry of the block in this situation is required. The block may be
de-duplicated
using a distributed table or ReplicatedMergeTree as the target table. While this reduces the chance of duplicate rows, it relies on identical blocks. Events such as a Kafka rebalancing may invalidate this assumption, causing duplicates in rare circumstances.
Quorum based Inserts {#quorum-based-inserts}
You may need
quorum-based inserts
for cases where higher delivery guarantees are required in ClickHouse. This can't be set on the materialized view or the target table. It can, however, be set for user profiles e.g.
xml
<profiles>
<default>
<insert_quorum>2</insert_quorum>
</default>
</profiles>
ClickHouse to Kafka {#clickhouse-to-kafka}
Although a rarer use case, ClickHouse data can also be persisted in Kafka. For example, we will insert rows manually into a Kafka table engine. This data will be read by the same Kafka engine, whose materialized view will place the data into a MergeTree table. Finally, we demonstrate the use of materialized views to push rows from existing source tables to Kafka.
Steps {#steps-1}
Our initial objective is best illustrated:
We assume you have the tables and views created under steps for
Kafka to ClickHouse
and that the topic has been fully consumed.
1. Inserting rows directly {#1-inserting-rows-directly}
First, confirm the count of the target table.
sql
SELECT count() FROM github;
You should have 200,000 rows:
response
┌─count()─┐
│  200000 │
└─────────┘
Now insert rows from the GitHub target table back into the Kafka table engine github_queue. Note how we utilize JSONEachRow format and LIMIT the select to 100.
sql
INSERT INTO github_queue SELECT * FROM github LIMIT 100 FORMAT JSONEachRow
Recount the rows in the GitHub table to confirm the count has increased by 100. As shown in the above diagram, rows have been inserted into Kafka via the Kafka table engine before being re-read by the same engine and inserted into the GitHub target table by our materialized view!
sql
SELECT count() FROM github;
You should see 100 additional rows:
response
┌─count()─┐
│  200100 │
└─────────┘
2. Using materialized views {#2-using-materialized-views} | {"source_file": "kafka-table-engine.md"}
904549fb-63dc-474c-adb5-8810250b36af | 2. Using materialized views {#2-using-materialized-views}
We can utilize materialized views to push messages to a Kafka engine (and a topic) when documents are inserted into a table. When rows are inserted into the GitHub table, a materialized view is triggered, which causes the rows to be inserted back into a Kafka engine and into a new topic. Again this is best illustrated:
Create a new Kafka topic
github_out
or equivalent. Ensure a Kafka table engine
github_out_queue
points to this topic.
sql
CREATE TABLE github_out_queue
(
file_time DateTime,
event_type Enum('CommitCommentEvent' = 1, 'CreateEvent' = 2, 'DeleteEvent' = 3, 'ForkEvent' = 4, 'GollumEvent' = 5, 'IssueCommentEvent' = 6, 'IssuesEvent' = 7, 'MemberEvent' = 8, 'PublicEvent' = 9, 'PullRequestEvent' = 10, 'PullRequestReviewCommentEvent' = 11, 'PushEvent' = 12, 'ReleaseEvent' = 13, 'SponsorshipEvent' = 14, 'WatchEvent' = 15, 'GistEvent' = 16, 'FollowEvent' = 17, 'DownloadEvent' = 18, 'PullRequestReviewEvent' = 19, 'ForkApplyEvent' = 20, 'Event' = 21, 'TeamAddEvent' = 22),
actor_login LowCardinality(String),
repo_name LowCardinality(String),
created_at DateTime,
updated_at DateTime,
action Enum('none' = 0, 'created' = 1, 'added' = 2, 'edited' = 3, 'deleted' = 4, 'opened' = 5, 'closed' = 6, 'reopened' = 7, 'assigned' = 8, 'unassigned' = 9, 'labeled' = 10, 'unlabeled' = 11, 'review_requested' = 12, 'review_request_removed' = 13, 'synchronize' = 14, 'started' = 15, 'published' = 16, 'update' = 17, 'create' = 18, 'fork' = 19, 'merged' = 20),
comment_id UInt64,
path String,
ref LowCardinality(String),
ref_type Enum('none' = 0, 'branch' = 1, 'tag' = 2, 'repository' = 3, 'unknown' = 4),
creator_user_login LowCardinality(String),
number UInt32,
title String,
labels Array(LowCardinality(String)),
state Enum('none' = 0, 'open' = 1, 'closed' = 2),
assignee LowCardinality(String),
assignees Array(LowCardinality(String)),
closed_at DateTime,
merged_at DateTime,
merge_commit_sha String,
requested_reviewers Array(LowCardinality(String)),
merged_by LowCardinality(String),
review_comments UInt32,
member_login LowCardinality(String)
)
ENGINE = Kafka('host:port', 'github_out', 'clickhouse_out',
'JSONEachRow') SETTINGS kafka_thread_per_consumer = 0, kafka_num_consumers = 1;
Now create a new materialized view
github_out_mv
to point at the GitHub table, inserting rows to the above engine when it triggers. Additions to the GitHub table will, as a result, be pushed to our new Kafka topic. | {"source_file": "kafka-table-engine.md"}
17f688b9-be06-4e3d-ad51-b5d26eb09144 | sql
CREATE MATERIALIZED VIEW github_out_mv TO github_out_queue AS
SELECT file_time, event_type, actor_login, repo_name,
created_at, updated_at, action, comment_id, path,
ref, ref_type, creator_user_login, number, title,
labels, state, assignee, assignees, closed_at, merged_at,
merge_commit_sha, requested_reviewers, merged_by,
review_comments, member_login
FROM github
FORMAT JSONEachRow;
Should you insert into the original github topic, created as part of
Kafka to ClickHouse
, documents will magically appear in the "github_out" topic. Confirm this with native Kafka tooling. For example, below, we insert 10 rows onto the github topic using
kcat
for a Confluent Cloud hosted topic:
bash
head -n 10 github_all_columns.ndjson |
kcat -P \
-b <host>:<port> \
-t github \
-X security.protocol=sasl_ssl \
-X sasl.mechanisms=PLAIN \
-X sasl.username=<username> \
-X sasl.password=<password>
A read on the
github_out
topic should confirm delivery of the messages.
bash
kcat -C \
-b <host>:<port> \
-t github_out \
-X security.protocol=sasl_ssl \
-X sasl.mechanisms=PLAIN \
-X sasl.username=<username> \
-X sasl.password=<password> \
-e -q |
wc -l
Although an elaborate example, this illustrates the power of materialized views when used in conjunction with the Kafka engine.
Clusters and performance {#clusters-and-performance}
Working with ClickHouse Clusters {#working-with-clickhouse-clusters}
Through Kafka consumer groups, multiple ClickHouse instances can potentially read from the same topic. Each consumer will be assigned to a topic partition in a 1:1 mapping. When scaling ClickHouse consumption using the Kafka table engine, consider that the total number of consumers within a cluster cannot exceed the number of partitions on the topic. Therefore ensure partitioning is appropriately configured for the topic in advance.
Multiple ClickHouse instances can all be configured to read from a topic using the same consumer group id - specified during the Kafka table engine creation. Therefore, each instance will read from one or more partitions, inserting segments to their local target table. The target tables can, in turn, be configured to use a ReplicatedMergeTree to handle duplication of the data. This approach allows Kafka reads to be scaled with the ClickHouse cluster, provided there are sufficient Kafka partitions.
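A minimal sketch of this layout, assuming a hypothetical cluster named `my_cluster` and a reduced schema: every replica defines the same engine table, all joining the `clickhouse` consumer group, so Kafka divides the topic partitions between them:

```sql
-- Hypothetical: 'my_cluster' and the reduced schema are assumptions; the
-- shared consumer group ('clickhouse') is what spreads partitions across nodes.
CREATE TABLE github_queue ON CLUSTER 'my_cluster'
(
    actor_login LowCardinality(String),
    repo_name LowCardinality(String),
    created_at DateTime
)
ENGINE = Kafka('kafka_host:9092', 'github', 'clickhouse', 'JSONEachRow');
```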
Tuning performance {#tuning-performance}
Consider the following when looking to increase Kafka Engine table throughput performance: | {"source_file": "kafka-table-engine.md"}
0657f613-e151-49a1-8c25-ee2b8a0aedef | Tuning performance {#tuning-performance}
Consider the following when looking to increase Kafka Engine table throughput performance:
The performance will vary depending on the message size, format, and target table types. 100k rows/sec on a single table engine should be considered obtainable. By default, messages are read in blocks, controlled by the parameter kafka_max_block_size. By default, this is set to the
max_insert_block_size
, defaulting to 1,048,576. Unless messages are extremely large, this should nearly always be increased. Values between 500k and 1M are not uncommon. Test and evaluate the effect on throughput performance.
The number of consumers for a table engine can be increased using kafka_num_consumers. However, by default, inserts will be linearized in a single thread unless kafka_thread_per_consumer is changed from its default value of 0. Set this to 1 to ensure flushes are performed in parallel. Note that creating a Kafka engine table with N consumers (and kafka_thread_per_consumer=1) is logically equivalent to creating N Kafka engine tables, each with a materialized view and kafka_thread_per_consumer=0.
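As a sketch, the settings discussed above might be combined as follows; the values are illustrative, not recommendations:

```sql
-- Fragment only: tune block size and parallelism together, then measure.
ENGINE = Kafka('kafka_host:9092', 'github', 'clickhouse', 'JSONEachRow')
SETTINGS kafka_max_block_size = 1048576,
         kafka_num_consumers = 4,
         kafka_thread_per_consumer = 1
```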
Increasing consumers is not a free operation. Each consumer maintains its own buffers and threads, increasing the overhead on the server. Be conscious of the overhead of consumers and, if possible, scale across your cluster first.
If the throughput of Kafka messages is variable and delays are acceptable, consider increasing the stream_flush_interval_ms to ensure larger blocks are flushed.
background_message_broker_schedule_pool_size
sets the number of threads performing background tasks. These threads are used for Kafka streaming. This setting is applied at the ClickHouse server start and can't be changed in a user session, defaulting to 16. If you see timeouts in the logs, it may be appropriate to increase this.
For communication with Kafka, the librdkafka library is used, which itself creates threads. Large numbers of Kafka tables, or consumers, can thus result in large numbers of context switches. Either distribute this load across the cluster, replicating only the target tables if possible, or consider using a single table engine to read from multiple topics - a list of values is supported. Multiple materialized views can read from a single table engine, each filtering the data from a specific topic.
Any settings changes should be tested. We recommend monitoring Kafka consumer lags to ensure you are properly scaled.
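On recent ClickHouse versions, the `system.kafka_consumers` table can help with such monitoring; availability and exact columns depend on your server version:

```sql
-- Inspect consumer assignments and offsets (recent versions only).
SELECT database, table, consumer_id,
       assignments.topic, assignments.partition_id, assignments.current_offset
FROM system.kafka_consumers;
```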
Additional settings {#additional-settings}
Aside from the settings discussed above, the following may be of interest:
kafka_max_wait_ms
- The wait time in milliseconds for reading messages from Kafka before retry. Set at a user profile level and defaults to 5000.
All settings
from the underlying librdkafka can also be placed in the ClickHouse configuration files inside a
kafka
element - setting names should be XML elements with periods replaced with underscores e.g. | {"source_file": "kafka-table-engine.md"}
f60a113d-0a8c-4a4f-9727-c1615ee7197a | xml
<clickhouse>
<kafka>
<enable_ssl_certificate_verification>false</enable_ssl_certificate_verification>
</kafka>
</clickhouse>
These are expert settings and we'd suggest you refer to the Kafka documentation for an in-depth explanation. | {"source_file": "kafka-table-engine.md"}
2cf23549-b538-4d31-a734-b27247046ea8 | sidebar_label: 'Vector with Kafka'
sidebar_position: 3
slug: /integrations/kafka/kafka-vector
description: 'Using Vector with Kafka and ClickHouse'
title: 'Using Vector with Kafka and ClickHouse'
doc_type: 'guide'
keywords: ['kafka', 'vector', 'log collection', 'observability', 'integration']
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
Using Vector with Kafka and ClickHouse {#using-vector-with-kafka-and-clickhouse}
Vector is a vendor-agnostic data pipeline with the ability to read from Kafka and send events to ClickHouse.
A
getting started
guide for Vector with ClickHouse focuses on the log use case and reading events from a file. We utilize the
Github sample dataset
with events held on a Kafka topic.
Vector utilizes
sources
for retrieving data through a push or pull model.
Sinks
meanwhile provide a destination for events. We, therefore, utilize the Kafka source and ClickHouse sink. Note that whilst Kafka is supported as a sink, a ClickHouse source is not available. Vector is, as a result, not appropriate for users wishing to transfer data from ClickHouse to Kafka.
Vector also supports the
transformation
of data. This is beyond the scope of this guide. The user is referred to the Vector documentation should they need this on their dataset.
Note that the current implementation of the ClickHouse sink utilizes the HTTP interface. The ClickHouse sink does not support the use of a JSON schema at this time. Data must be published to Kafka in either plain JSON format or as Strings.
License {#license}
Vector is distributed under the
MPL-2.0 License
Gather your connection details {#gather-your-connection-details}
Steps {#steps}
Create the Kafka
github
topic and insert the
Github dataset
.
bash
cat /opt/data/github/github_all_columns.ndjson | kcat -b <host>:<port> -X security.protocol=sasl_ssl -X sasl.mechanisms=PLAIN -X sasl.username=<username> -X sasl.password=<password> -t github
This dataset consists of 200,000 rows focused on the
ClickHouse/ClickHouse
repository.
Ensure the target table is created. Below we use the default database.
```sql | {"source_file": "kafka-vector.md"}
7edc4c92-c56a-4a06-a9b4-ca91ab36d1e1 | This dataset consists of 200,000 rows focused on the
ClickHouse/ClickHouse
repository.
Ensure the target table is created. Below we use the default database.
```sql
CREATE TABLE github
(
file_time DateTime,
event_type Enum('CommitCommentEvent' = 1, 'CreateEvent' = 2, 'DeleteEvent' = 3, 'ForkEvent' = 4,
'GollumEvent' = 5, 'IssueCommentEvent' = 6, 'IssuesEvent' = 7, 'MemberEvent' = 8, 'PublicEvent' = 9, 'PullRequestEvent' = 10, 'PullRequestReviewCommentEvent' = 11, 'PushEvent' = 12, 'ReleaseEvent' = 13, 'SponsorshipEvent' = 14, 'WatchEvent' = 15, 'GistEvent' = 16, 'FollowEvent' = 17, 'DownloadEvent' = 18, 'PullRequestReviewEvent' = 19, 'ForkApplyEvent' = 20, 'Event' = 21, 'TeamAddEvent' = 22),
actor_login LowCardinality(String),
repo_name LowCardinality(String),
created_at DateTime,
updated_at DateTime,
action Enum('none' = 0, 'created' = 1, 'added' = 2, 'edited' = 3, 'deleted' = 4, 'opened' = 5, 'closed' = 6, 'reopened' = 7, 'assigned' = 8, 'unassigned' = 9, 'labeled' = 10, 'unlabeled' = 11, 'review_requested' = 12, 'review_request_removed' = 13, 'synchronize' = 14, 'started' = 15, 'published' = 16, 'update' = 17, 'create' = 18, 'fork' = 19, 'merged' = 20),
comment_id UInt64,
path String,
ref LowCardinality(String),
ref_type Enum('none' = 0, 'branch' = 1, 'tag' = 2, 'repository' = 3, 'unknown' = 4),
creator_user_login LowCardinality(String),
number UInt32,
title String,
labels Array(LowCardinality(String)),
state Enum('none' = 0, 'open' = 1, 'closed' = 2),
assignee LowCardinality(String),
assignees Array(LowCardinality(String)),
closed_at DateTime,
merged_at DateTime,
merge_commit_sha String,
requested_reviewers Array(LowCardinality(String)),
merged_by LowCardinality(String),
review_comments UInt32,
member_login LowCardinality(String)
) ENGINE = MergeTree ORDER BY (event_type, repo_name, created_at);
```
Download and install Vector
. Create a
kafka.toml
configuration file and modify the values for your Kafka and ClickHouse instances.
```toml
[sources.github]
type = "kafka"
auto_offset_reset = "smallest"
bootstrap_servers = "<host>:<port>"
group_id = "vector"
topics = [ "github" ]
tls.enabled = true
sasl.enabled = true
sasl.mechanism = "PLAIN"
sasl.username = "<username>"
sasl.password = "<password>"
decoding.codec = "json"
[sinks.clickhouse]
type = "clickhouse"
inputs = ["github"]
endpoint = "http://localhost:8123"
database = "default"
table = "github"
skip_unknown_fields = true
auth.strategy = "basic"
auth.user = "username"
auth.password = "password"
buffer.max_events = 10000
batch.timeout_secs = 1
```
A few important notes on this configuration and behavior of Vector:
This example has been tested against Confluent Cloud. Therefore, the
sasl.*
and
tls.enabled
security options may not be appropriate in self-managed cases. | {"source_file": "kafka-vector.md"} | [
0.08201948553323746,
-0.1036042869091034,
-0.06575030088424683,
0.01545027643442154,
-0.009594635106623173,
-0.03338206559419632,
0.04253548011183739,
-0.021939966827630997,
-0.022126607596874237,
0.09643197804689407,
0.06660950928926468,
-0.10512058436870575,
0.013563423417508602,
-0.0690... |
006d83a6-2311-45fb-90bf-9cb01a465325 | This example has been tested against Confluent Cloud. Therefore, the
sasl.*
and
tls.enabled
security options may not be appropriate in self-managed cases.
A protocol prefix is not required for the configuration parameter
bootstrap_servers
e.g.
pkc-2396y.us-east-1.aws.confluent.cloud:9092
The source parameter
decoding.codec = "json"
ensures the message is passed to the ClickHouse sink as a single JSON object. If handling messages as Strings and using the default
bytes
value, the contents of the message will be appended to a field
message
. In most cases this will require processing in ClickHouse as described in the
Vector getting started
guide.
Vector
adds a number of fields
to the messages. In our example, we ignore these fields in the ClickHouse sink via the configuration parameter
skip_unknown_fields = true
. This ignores fields that are not part of the target table schema. Feel free to adjust your schema to ensure these meta fields such as
offset
are added.
Notice how the sink references the source of events via the parameter
inputs
.
Note the behavior of the ClickHouse sink as described
here
. For optimal throughput, users may wish to tune the
buffer.max_events
,
batch.timeout_secs
and
batch.max_bytes
parameters. Per ClickHouse
recommendations
a value of 1000 should be considered a minimum for the number of events in any single batch. For uniform high-throughput use cases, users may increase the parameter
buffer.max_events
. More variable throughputs may require changes in the parameter
batch.timeout_secs
.
The parameter
auto_offset_reset = "smallest"
forces the Kafka source to consume from the beginning of the topic, thus ensuring we consume the messages published in step (1). Users may require different behavior. See
here
for further details.
Start Vector
bash
vector --config ./kafka.toml
By default, a
health check
is required before insertions into ClickHouse begin. This ensures that connectivity can be established and the schema read. Prepend
VECTOR_LOG=debug
to obtain further logging which can be helpful should you encounter issues.
Confirm the insertion of the data.
sql
SELECT count() AS count FROM github;
| count |
| :--- |
| 200000 | | {"source_file": "kafka-vector.md"} | [
-0.049565453082323074,
-0.0032727366778999567,
-0.06381186842918396,
0.06760367751121521,
0.041761402040719986,
-0.04708918184041977,
0.01969459094107151,
-0.09387905150651932,
0.05262181535363197,
0.0011740477057173848,
0.005801239982247353,
0.008879056200385094,
0.10374701023101807,
-0.0... |
24fd6a90-88a5-4171-8f11-46dd65ca20a1 | sidebar_label: 'Templates'
slug: /integrations/google-dataflow/templates
sidebar_position: 3
description: 'Users can ingest data into ClickHouse using Google Dataflow Templates'
title: 'Google Dataflow Templates'
doc_type: 'guide'
keywords: ['google dataflow', 'gcp', 'data pipeline', 'templates', 'batch processing']
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Google Dataflow templates
Google Dataflow templates provide a convenient way to execute prebuilt, ready-to-use data pipelines without the need to write custom code. These templates are designed to simplify common data processing tasks and are built using
Apache Beam
, leveraging connectors like
ClickHouseIO
for seamless integration with ClickHouse databases. By running these templates on Google Dataflow, you can achieve highly scalable, distributed data processing with minimal effort.
Why use Dataflow templates? {#why-use-dataflow-templates}
Ease of Use
: Templates eliminate the need for coding by offering preconfigured pipelines tailored to specific use cases.
Scalability
: Dataflow ensures your pipeline scales efficiently, handling large volumes of data with distributed processing.
Cost Efficiency
: Pay only for the resources you consume, with the ability to optimize pipeline execution costs.
How to run Dataflow templates {#how-to-run-dataflow-templates}
As of today, the official ClickHouse template is available via the Google Cloud Console, the CLI, or the Dataflow REST API.
For detailed step-by-step instructions, refer to the
Google Dataflow Run Pipeline From a Template Guide
.
List of ClickHouse Templates {#list-of-clickhouse-templates}
BigQuery To ClickHouse
GCS To ClickHouse
(coming soon!)
Pub Sub To ClickHouse
(coming soon!) | {"source_file": "templates.md"} | [
-0.11278669536113739,
0.028172219172120094,
0.08638022094964981,
-0.017432542517781258,
0.0021560273598879576,
0.00018501942395232618,
-0.0030809477902948856,
-0.005036473274230957,
-0.03442319110035896,
-0.036256659775972366,
-0.03602520003914833,
0.017701685428619385,
-0.004298332147300243... |
9e5c3464-74a2-4174-a759-764a6271069e | sidebar_label: 'Integrating Dataflow with ClickHouse'
slug: /integrations/google-dataflow/dataflow
sidebar_position: 1
description: 'Users can ingest data into ClickHouse using Google Dataflow'
title: 'Integrating Google Dataflow with ClickHouse'
doc_type: 'guide'
keywords: ['Google Dataflow ClickHouse', 'Dataflow ClickHouse integration', 'Apache Beam ClickHouse', 'ClickHouseIO connector', 'Google Cloud ClickHouse integration']
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Integrating Google Dataflow with ClickHouse
Google Dataflow
is a fully managed stream and batch data processing service. It supports pipelines written in Java or Python and is built on the Apache Beam SDK.
There are two main ways to use Google Dataflow with ClickHouse, both of which leverage
ClickHouseIO Apache Beam connector
.
These are:
-
Java runner
-
Predefined templates
Java runner {#1-java-runner}
The
Java runner
allows users to implement custom Dataflow pipelines using the Apache Beam SDK
ClickHouseIO
integration. This approach provides full flexibility and control over the pipeline logic, enabling users to tailor the ETL process to specific requirements.
However, this option requires knowledge of Java programming and familiarity with the Apache Beam framework.
Key features {#key-features}
High degree of customization.
Ideal for complex or advanced use cases.
Requires coding and understanding of the Beam API.
Predefined templates {#2-predefined-templates}
ClickHouse offers
predefined templates
designed for specific use cases, such as importing data from BigQuery into ClickHouse. These templates are ready-to-use and simplify the integration process, making them an excellent choice for users who prefer a no-code solution.
Key features {#key-features-1}
No Beam coding required.
Quick and easy setup for simple use cases.
Also suitable for users with minimal programming expertise.
Both approaches are fully compatible with Google Cloud and the ClickHouse ecosystem, offering flexibility depending on your technical expertise and project requirements. | {"source_file": "dataflow.md"} | [
-0.1101168617606163,
-0.03696423023939133,
0.027858631685376167,
-0.036651745438575745,
-0.05018465220928192,
-0.04608842357993126,
0.007017721422016621,
-0.03278256580233574,
-0.09680286794900894,
-0.08173718303442001,
-0.011329853907227516,
0.015155903063714504,
0.04071341082453728,
-0.0... |
26a439e3-fec8-4085-b2a9-d9b155e103f8 | sidebar_label: 'Java Runner'
slug: /integrations/google-dataflow/java-runner
sidebar_position: 2
description: 'Users can ingest data into ClickHouse using Google Dataflow Java Runner'
title: 'Dataflow Java Runner'
doc_type: 'guide'
keywords: ['Dataflow Java Runner', 'Google Dataflow ClickHouse', 'Apache Beam Java ClickHouse', 'ClickHouseIO connector']
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Dataflow Java runner
The Dataflow Java Runner lets you execute custom Apache Beam pipelines on Google Cloud's Dataflow service. This approach provides maximum flexibility and is well-suited for advanced ETL workflows.
How it works {#how-it-works}
Pipeline Implementation
To use the Java Runner, you need to implement your Beam pipeline using the
ClickHouseIO
- our official Apache Beam connector. For code examples and instructions on how to use the
ClickHouseIO
, please visit
ClickHouse Apache Beam
.
Deployment
Once your pipeline is implemented and configured, you can deploy it to Dataflow using Google Cloud's deployment tools. Comprehensive deployment instructions are provided in the
Google Cloud Dataflow documentation - Java Pipeline
.
Note
: This approach assumes familiarity with the Beam framework and coding expertise. If you prefer a no-code solution, consider using
ClickHouse's predefined templates
. | {"source_file": "java-runner.md"} | [
-0.09022368490695953,
-0.00782293826341629,
0.08138799667358398,
-0.05688547343015671,
-0.04702956601977348,
-0.005938700400292873,
-0.02936263382434845,
-0.02478725090622902,
-0.09819395840167999,
-0.08541572093963623,
-0.03191623091697693,
0.00033077201806008816,
0.025910643860697746,
-0... |
4eb52403-8b50-46d5-9a37-8a7acfa15e76 | sidebar_label: 'Using the azureBlobStorage table function'
slug: /integrations/azure-data-factory/table-function
description: 'Using ClickHouse''s azureBlobStorage table function'
keywords: ['azure data factory', 'azure', 'microsoft', 'data', 'azureBlobStorage']
title: 'Using ClickHouse''s azureBlobStorage table function to bring Azure data into ClickHouse'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import azureDataStoreSettings from '@site/static/images/integrations/data-ingestion/azure-data-factory/azure-data-store-settings.png';
import azureDataStoreAccessKeys from '@site/static/images/integrations/data-ingestion/azure-data-factory/azure-data-store-access-keys.png';
Using ClickHouse's azureBlobStorage table function {#using-azureBlobStorage-function}
This is one of the most efficient and straightforward ways to copy data from
Azure Blob Storage or Azure Data Lake Storage into ClickHouse. With this table
function, you can instruct ClickHouse to connect directly to Azure storage and
read data on demand.
It provides a table-like interface that allows you to select, insert, and
filter data directly from the source. The function is highly optimized and
supports many widely used file formats, including
CSV
,
JSON
,
Parquet
,
Arrow
,
TSV
,
ORC
,
Avro
, and more. For the full list see
"Data formats"
.
In this section, we'll walk through a simple startup guide for transferring
data from Azure Blob Storage to ClickHouse, along with important considerations
for using this function effectively. For more details and advanced options,
refer to the official documentation:
azureBlobStorage
Table Function documentation page
Acquiring Azure Blob Storage Access Keys {#acquiring-azure-blob-storage-access-keys}
To allow ClickHouse to access your Azure Blob Storage, you'll need a connection string with an access key.
In the Azure portal, navigate to your
Storage Account
.
In the left-hand menu, select
Access keys
under the
Security +
networking
section.
Choose either
key1
or
key2
, and click the
Show
button next to
the
Connection string
field.
Copy the connection string β you'll use this as a parameter in the azureBlobStorage table function.
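An Azure connection string is a semicolon-separated list of key=value pairs. The sketch below (the values shown are placeholders, not real credentials) splits one into its components, which can help verify you copied the right string:

```python
# Placeholder connection string; a real one carries your actual account name and key.
conn_str = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=mystorageaccount;"
    "AccountKey=bXktc2VjcmV0LWtleQ==;"
    "EndpointSuffix=core.windows.net"
)

# Split on ';' and then on the first '=' only, since base64 keys may end in '='.
parts = dict(p.split("=", 1) for p in conn_str.split(";") if p)
```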
Querying the data from Azure Blob Storage {#querying-the-data-from-azure-blob-storage}
Open your preferred ClickHouse query console β this can be the ClickHouse Cloud
web interface, the ClickHouse CLI client, or any other tool you use to run
queries. Once you have both the connection string and your ClickHouse query
console ready, you can start querying data directly from Azure Blob Storage.
In the following example, we query all the data stored in JSON files located in
a container named data-container:
sql
SELECT * FROM azureBlobStorage(
'<YOUR CONNECTION STRING>',
'data-container',
'*.json',
'JSONEachRow'); | {"source_file": "using_azureblobstorage.md"} | [
0.02267673797905445,
-0.012731388211250305,
-0.04634227603673935,
0.07745484262704849,
-0.021977774798870087,
0.023685378953814507,
0.05571595951914787,
0.0018726529087871313,
-0.07824856042861938,
0.10047902166843414,
0.09956366568803787,
-0.02899017184972763,
0.1383400410413742,
-0.03169... |
befe7f56-1765-4fe0-b165-48b334f889ef | sql
SELECT * FROM azureBlobStorage(
'<YOUR CONNECTION STRING>',
'data-container',
'*.json',
'JSONEachRow');
If you'd like to copy that data into a local ClickHouse table (e.g., my_table),
you can use an
INSERT INTO ... SELECT
statement:
sql
INSERT INTO my_table
SELECT * FROM azureBlobStorage(
'<YOUR CONNECTION STRING>',
'data-container',
'*.json',
'JSONEachRow');
This allows you to efficiently pull external data into ClickHouse without
needing intermediate ETL steps.
A simple example using the Environmental sensors dataset {#simple-example-using-the-environmental-sensors-dataset}
As an example we will download a single file from the Environmental Sensors
Dataset.
Download a
sample file
from the
Environmental Sensors Dataset
In the Azure Portal, create a new storage account if you don't already have one.
:::warning
Make sure that
Allow storage account key access
is enabled for your storage
account, otherwise you will not be able to use the account keys to access the
data.
:::
Create a new container in your storage account. In this example, we name it sensors.
You can skip this step if you're using an existing container.
Upload the previously downloaded
2019-06_bmp180.csv.zst
file to the
container.
Follow the steps described earlier to obtain the Azure Blob Storage
connection string.
Now that everything is set up, you can query the data directly from Azure Blob Storage:
```sql
SELECT *
FROM azureBlobStorage(
'<YOUR CONNECTION STRING>',
'sensors',
'2019-06_bmp180.csv.zst',
'CSVWithNames')
LIMIT 10
SETTINGS format_csv_delimiter = ';'
```
To load the data into a table, create a simplified version of the
schema used in the original dataset:
sql
CREATE TABLE sensors
(
sensor_id UInt16,
lat Float32,
lon Float32,
timestamp DateTime,
temperature Float32
)
ENGINE = MergeTree
ORDER BY (timestamp, sensor_id);
:::info
For more information on configuration options and schema inference when
querying external sources like Azure Blob Storage, see
Automatic schema
inference from input data
:::
Now insert the data from Azure Blob Storage into the sensors table:
sql
INSERT INTO sensors
SELECT sensor_id, lat, lon, timestamp, temperature
FROM azureBlobStorage(
'<YOUR CONNECTION STRING>',
'sensors',
'2019-06_bmp180.csv.zst',
'CSVWithNames')
SETTINGS format_csv_delimiter = ';'
Your sensors table is now populated with data from the
2019-06_bmp180.csv.zst
file stored in Azure Blob Storage.
Additional resources {#additional-resources}
This is just a basic introduction to using the azureBlobStorage function. For
more advanced options and configuration details, please refer to the official
documentation:
azureBlobStorage Table Function
Formats for Input and Output Data | {"source_file": "using_azureblobstorage.md"} | [
0.00019487881218083203,
-0.0403764508664608,
-0.06142636388540268,
0.09198340028524399,
-0.0210876427590847,
-0.0002611284435261041,
0.057902850210666656,
-0.0270286463201046,
-0.005044323857873678,
0.12182801216840744,
0.05595288798213005,
-0.06395678967237473,
0.09102776646614075,
-0.024... |
16f1c177-00f4-47b9-8ec6-6e7716fd0148 | azureBlobStorage Table Function
Formats for Input and Output Data
Automatic schema inference from input data | {"source_file": "using_azureblobstorage.md"} | [
0.04150686785578728,
-0.03078656643629074,
-0.057245563715696335,
0.05089255049824715,
-0.05030608922243118,
0.08994453400373459,
-0.011602329090237617,
-0.022985151037573814,
-0.057638928294181824,
0.12858900427818298,
-0.01726415380835533,
-0.10017284750938416,
0.06991419196128845,
0.033... |
382b18d1-ef4d-4076-a999-8ec07815155f | sidebar_label: 'Overview'
slug: /integrations/azure-data-factory/overview
description: 'Bringing Azure Data into ClickHouse - Overview'
keywords: ['azure data factory', 'azure', 'microsoft', 'data']
title: 'Bringing Azure Data into ClickHouse'
doc_type: 'guide'
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Bringing Azure Data into ClickHouse
Microsoft Azure offers a wide range of tools to store, transform, and analyze
data. However, in many scenarios, ClickHouse can provide significantly better
performance for low-latency querying and processing of huge datasets. In
addition, ClickHouse's columnar storage and compression can greatly reduce the
cost of querying large volumes of analytical data compared to general-purpose
Azure databases.
In this section of the docs, we will explore two ways to ingest data from Microsoft Azure
into ClickHouse:
| Method | Description |
|----------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
Using the
azureBlobStorage
Table Function
| Involves using ClickHouse's
azureBlobStorage
Table Function
to transfer data directly from Azure Blob Storage. |
|
Using the ClickHouse HTTP interface
| Uses the
ClickHouse HTTP interface
as a data source within Azure Data Factory, allowing you to copy data or use it in data flow activities as part of your pipelines. | | {"source_file": "overview.md"} | [
-0.0219638179987669,
0.014467529021203518,
-0.0657753124833107,
0.05311887711286545,
-0.004602521657943726,
-0.008037613704800606,
0.0196370892226696,
-0.07010603696107864,
-0.027064789086580276,
0.09542370587587357,
0.020397262647747993,
0.02862822823226452,
0.07050073146820068,
-0.031800... |
f4296bdf-73ca-4044-80f6-5c826e489727 | sidebar_label: 'Using the HTTP interface'
slug: /integrations/azure-data-factory/http-interface
description: 'Using ClickHouse''s HTTP interface to bring data from Azure Data Factory into ClickHouse'
keywords: ['azure data factory', 'azure', 'microsoft', 'data', 'http interface']
title: 'Using ClickHouse HTTP Interface to bring Azure data into ClickHouse'
doc_type: 'guide'
import Image from '@theme/IdealImage'; | {"source_file": "using_http_interface.md"} | [
-0.011273160576820374,
0.0319642499089241,
-0.057176027446985245,
0.04087533429265022,
0.027939962223172188,
0.029626963660120964,
0.03429136425256729,
-0.03375483676791191,
-0.09542381763458252,
0.07511898875236511,
0.09236671775579453,
-0.006765192840248346,
0.07561919838190079,
-0.02260... |
8d02b446-74c4-4a1b-aefe-29cb340ce2c0 | import azureHomePage from '@site/static/images/integrations/data-ingestion/azure-data-factory/azure-home-page.png';
import azureNewResourceAnalytics from '@site/static/images/integrations/data-ingestion/azure-data-factory/azure-new-resource-analytics.png';
import azureNewDataFactory from '@site/static/images/integrations/data-ingestion/azure-data-factory/azure-new-data-factory.png';
import azureNewDataFactoryConfirm from '@site/static/images/integrations/data-ingestion/azure-data-factory/azure-new-data-factory-confirm.png';
import azureNewDataFactorySuccess from '@site/static/images/integrations/data-ingestion/azure-data-factory/azure-new-data-factory-success.png';
import azureHomeWithDataFactory from '@site/static/images/integrations/data-ingestion/azure-data-factory/azure-home-with-data-factory.png';
import azureDataFactoryPage from '@site/static/images/integrations/data-ingestion/azure-data-factory/azure-data-factory-page.png';
import adfCreateLinkedServiceButton from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-create-linked-service-button.png';
import adfNewLinkedServiceSearch from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-linked-service-search.png';
import adfNewLinedServicePane from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-lined-service-pane.png';
import adfNewLinkedServiceBaseUrlEmpty from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-linked-service-base-url-empty.png';
import adfNewLinkedServiceParams from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-linked-service-params.png';
import adfNewLinkedServiceExpressionFieldFilled from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-linked-service-expression-field-filled.png';
import adfNewLinkedServiceCheckConnection from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-linked-service-check-connection.png';
import adfLinkedServicesList from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-linked-services-list.png';
import adfNewDatasetItem from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-dataset-item.png';
import adfNewDatasetPage from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-dataset-page.png';
import adfNewDatasetProperties from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-dataset-properties.png';
import adfNewDatasetQuery from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-dataset-query.png'; | {"source_file": "using_http_interface.md"} | [
-0.01673753187060356,
-0.021090446040034294,
-0.04068692401051521,
0.016188153997063637,
0.003772808937355876,
0.007378647569566965,
0.04302172735333443,
-0.03374524787068367,
-0.05206700786948204,
0.08990851789712906,
0.10950492322444916,
-0.06564554572105408,
0.11281763017177582,
0.03935... |
973774e6-be99-47a2-afa8-3d9568764c9e | import adfNewDatasetQuery from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-dataset-query.png';
import adfNewDatasetConnectionSuccessful from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-dataset-connection-successful.png';
import adfNewPipelineItem from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-pipeline-item.png';
import adfNewCopyDataItem from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-new-copy-data-item.png';
import adfCopyDataSource from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-copy-data-source.png';
import adfCopyDataSinkSelectPost from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-copy-data-sink-select-post.png';
import adfCopyDataDebugSuccess from '@site/static/images/integrations/data-ingestion/azure-data-factory/adf-copy-data-debug-success.png'; | {"source_file": "using_http_interface.md"} | [
-0.0026676307898014784,
-0.07222715765237808,
-0.0796460434794426,
0.004275299143046141,
-0.016180619597434998,
-0.062018487602472305,
0.0666651502251625,
-0.0015135653084143996,
-0.0729239284992218,
0.07029702514410019,
0.12553703784942627,
-0.09535574913024902,
0.006582793779671192,
-0.0... |
abe3c3f7-2e73-4fc8-8dca-cfbefe6edc9f | Using ClickHouse HTTP interface in Azure data factory {#using-clickhouse-http-interface-in-azure-data-factory}
The
azureBlobStorage
Table Function
is a fast and convenient way to ingest data from Azure Blob Storage into
ClickHouse. However, it may not always be suitable, for the following reasons:
Your data might not be stored in Azure Blob Storage β for example, it could be in Azure SQL Database, Microsoft SQL Server, or Cosmos DB.
Security policies might prevent external access to Blob Storage
altogether β for example, if the storage account is locked down with no public endpoint.
In such scenarios, you can use Azure Data Factory together with the
ClickHouse HTTP interface
to send data from Azure services into ClickHouse.
This method reverses the flow: instead of having ClickHouse pull the data from
Azure, Azure Data Factory pushes the data to ClickHouse. This approach
typically requires your ClickHouse instance to be accessible from the public
internet.
:::info
It is possible to avoid exposing your ClickHouse instance to the internet by
using Azure Data Factory's Self-hosted Integration Runtime. This setup allows
data to be sent over a private network. However, it's beyond the scope of this
article. You can find more information in the official guide:
Create and configure a self-hosted integration
runtime
:::
Turning ClickHouse into a REST service {#turning-clickhouse-to-a-rest-service}
Azure Data Factory supports sending data to external systems over HTTP in JSON
format. We can use this capability to insert data directly into ClickHouse
using the
ClickHouse HTTP interface
.
You can learn more in the
ClickHouse HTTP Interface
documentation
.
For this example, we only need to specify the destination table, define the
input data format as JSON, and include options to allow more flexible timestamp
parsing.
sql
INSERT INTO my_table
SETTINGS
date_time_input_format='best_effort',
input_format_json_read_objects_as_strings=1
FORMAT JSONEachRow
To send this query as part of an HTTP request, you simply pass it as a
URL-encoded string to the query parameter in your ClickHouse endpoint:
text
https://your-clickhouse-url.com?query=INSERT%20INTO%20my_table%20SETTINGS%20date_time_input_format%3D%27best_effort%27%2C%20input_format_json_read_objects_as_strings%3D1%20FORMAT%20JSONEachRow%0A
:::info
Azure Data Factory can handle this encoding automatically using its built-in
encodeUriComponent
function, so you don't have to do it manually.
:::
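Outside Azure Data Factory, the same encoding can be reproduced with any URL-escaping routine. A minimal Python sketch (the hostname is a placeholder):

```python
from urllib.parse import quote

query = (
    "INSERT INTO my_table "
    "SETTINGS date_time_input_format='best_effort', "
    "input_format_json_read_objects_as_strings=1 "
    "FORMAT JSONEachRow"
)

# quote() percent-encodes spaces, quotes, '=' and ',', producing the
# URL-encoded query string shown above (minus the trailing newline).
url = "https://your-clickhouse-url.com?query=" + quote(query, safe="")
```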
Now you can send JSON-formatted data to this URL. The data should match the
structure of the target table. Here's a simple example using curl, assuming a
table with three columns:
col_1
,
col_2
, and
col_3
.
text
curl \
-XPOST "https://your-clickhouse-url.com?query=<our_URL_encoded_query>" \
--data '{"col_1":9119,"col_2":50.994,"col_3":"2019-06-01 00:00:00"}' | {"source_file": "using_http_interface.md"} | [
-0.0027870142366737127,
-0.035009853541851044,
-0.07480467855930328,
0.019291458651423454,
-0.09040507674217224,
0.04385429248213768,
0.04049362614750862,
-0.06821507215499878,
-0.0297667495906353,
0.11212367564439774,
0.04066028818488121,
0.0009732777834869921,
0.08570371568202972,
-0.021... |
216f6f4d-f4e1-42ed-a233-fe9191bbba4d | text
curl \
-XPOST "https://your-clickhouse-url.com?query=<our_URL_encoded_query>" \
--data '{"col_1":9119,"col_2":50.994,"col_3":"2019-06-01 00:00:00"}'
You can also send a JSON array of objects, or JSON Lines (newline-delimited
JSON objects). Azure Data Factory uses the JSON array format, which works
perfectly with ClickHouse's
JSONEachRow
input.
As you can see, for this step you don't need to do anything special on the ClickHouse
side. The HTTP interface already provides everything needed to act as a
REST-like endpoint β no additional configuration required.
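To illustrate the two accepted payload shapes, the sketch below (column names match the curl example above) renders the same rows both as a JSON array and as JSON Lines:

```python
import json

rows = [
    {"col_1": 9119, "col_2": 50.994, "col_3": "2019-06-01 00:00:00"},
    {"col_1": 9120, "col_2": 51.000, "col_3": "2019-06-01 00:01:00"},
]

# JSON array: the shape Azure Data Factory emits.
json_array = json.dumps(rows)

# JSON Lines: one object per line, the canonical JSONEachRow input.
json_lines = "\n".join(json.dumps(r) for r in rows)
```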
Now that we've made ClickHouse behave like a REST endpoint, it's time to
configure Azure Data Factory to use it.
In the next steps, we'll create an Azure Data Factory instance, set up a Linked
Service to your ClickHouse instance, define a Dataset for the
REST sink
,
and create a Copy Data activity to send data from Azure to ClickHouse.
Creating an Azure data factory instance {#create-an-azure-data-factory-instance}
This guide assumes that you have access to a Microsoft Azure account, and that you
already have configured a subscription and a resource group. If you have
an Azure Data Factory already configured, then you can safely skip this step
and move to the next one using your existing service.
Log in to the
Microsoft Azure Portal
and click
Create a resource
.
In the Categories pane on the left, select
Analytics
, then click on
Data Factory
in the list of popular services.
Select your subscription and resource group, enter a name for the new Data
Factory instance, choose the region and leave the version as V2.
Click
Review + Create
, then click
Create
to launch the deployment.
Once the deployment completes successfully, you can start using your new Azure
Data Factory instance.
Creating a new REST-Based linked service {#-creating-new-rest-based-linked-service}
Log in to the Microsoft Azure Portal and open your Data Factory instance.
On the Data Factory overview page, click
Launch Studio
.
In the left-hand menu, select
Manage
, then go to
Linked services
,
and click
+ New
to create a new linked service.
In the
New linked service search bar
, type
REST
, select
REST
, and click
Continue
to create
a REST connector
instance.
In the linked service configuration pane enter a name for your new service,
click the
Base URL
field, then click
Add dynamic content
(this link only
appears when the field is selected).
In the dynamic content pane you can create a parameterized URL, which
allows you to define the query later when creating datasets for different
tables β this makes the linked service reusable.
Click the
"+"
next to the filter input and add a new parameter, name it
pQuery
, set the type to String, and set the default value to
SELECT 1
.
Click
Save
.
In the expression field, enter the following and click
OK
. Replace
your-clickhouse-url.com
with the actual address of your ClickHouse
instance.
text
@{concat('https://your-clickhouse-url.com:8443/?query=', encodeUriComponent(linkedService().pQuery))}
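To sanity-check the expression, the same URL can be assembled outside Azure Data Factory. The sketch below uses the placeholder host from the expression above and the default pQuery value of SELECT 1; ADF's encodeUriComponent percent-encodes the space in the query.

```shell
# Build the URL the ADF expression produces for the default pQuery ('SELECT 1').
# 'your-clickhouse-url.com' is the same placeholder used in the expression above.
PQUERY='SELECT 1'
# Percent-encode the space; for this simple query that is all encodeUriComponent changes.
ENCODED=$(printf '%s' "$PQUERY" | sed 's/ /%20/g')
URL="https://your-clickhouse-url.com:8443/?query=${ENCODED}"
echo "$URL"   # https://your-clickhouse-url.com:8443/?query=SELECT%201
```

Opening the printed URL with your ClickHouse credentials should return 1, which is a quick way to verify the host and port before wiring up the linked service.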
Back in the main form select Basic authentication, enter the username and
password used to connect to your ClickHouse HTTP interface, click
Test
connection
. If everything is configured correctly, you'll see a success
message.
Click
Create
to finalize the setup.
You should now see your newly registered REST-based linked service in the list.
Creating a new dataset for the ClickHouse HTTP Interface {#creating-a-new-dataset-for-the-clickhouse-http-interface}
Now that we have a linked service configured for the ClickHouse HTTP interface,
we can create a dataset that Azure Data Factory will use to send data to
ClickHouse.
In this example, we'll insert a small portion of the
Environmental Sensors
Data
.
Open the ClickHouse query console of your choice (the ClickHouse Cloud
web UI, the CLI client, or any other interface you use to run queries)
and create the target table:
sql
CREATE TABLE sensors
(
sensor_id UInt16,
lat Float32,
lon Float32,
timestamp DateTime,
temperature Float32
)
ENGINE = MergeTree
ORDER BY (timestamp, sensor_id);
In Azure Data Factory Studio, select Author in the left-hand pane. Hover
over the Dataset item, click the three-dot icon, and choose New dataset.
In the search bar, type
REST
, select
REST
, and click
Continue
.
Enter a name for your dataset and select the
linked service
you created
in the previous step. Click
OK
to create the dataset.
You should now see your newly created dataset listed under the Datasets
section in the Factory Resources pane on the left. Select the dataset to
open its properties. You'll see the
pQuery
parameter that was defined in the
linked service. Click the
Value
text field. Then click
Add dynamic
content.
In the pane that opens, paste the following query:
sql
INSERT INTO sensors
SETTINGS
date_time_input_format=''best_effort'',
input_format_json_read_objects_as_strings=1
FORMAT JSONEachRow
:::danger
All single quotes
'
in the query must be replaced with two single quotes
''
. This is required by Azure Data Factory's expression parser. If you
don't escape them, you may not see an error immediately, but it will fail
later when you try to use or save the dataset. For example,
'best_effort'
must be written as
''best_effort''
.
:::
Click OK to save the expression. Click Test connection. If everything is
configured correctly, you'll see a Connection successful message. Click Publish
all at the top of the page to save your changes.
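For reference, each batch the Copy activity sends is an HTTP POST of JSONEachRow lines against the URL built from pQuery. Below is a hypothetical sketch of an equivalent request for a single sensors row; the host, credentials, and row values are placeholders, and note that outside of ADF expressions the query uses ordinary single quotes.

```shell
# Dry-run sketch of the request the Copy activity issues per batch.
# Placeholders: host, credentials, and the sample row. Drop 'echo' to actually send it.
HOST='https://your-clickhouse-url.com:8443'
PQUERY="INSERT INTO sensors SETTINGS date_time_input_format='best_effort', input_format_json_read_objects_as_strings=1 FORMAT JSONEachRow"
ROW='{"sensor_id":2048,"lat":52.5,"lon":13.4,"timestamp":"2019-06-01 00:00:00","temperature":21.5}'
echo curl -sS --user 'username:password' --data-binary "$ROW" "$HOST/?query=$PQUERY"
```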
Setting up an example dataset {#setting-up-an-example-dataset}
In this example, we will not use the full Environmental Sensors Dataset, but
just a small subset available at the
Sensors Dataset Sample
.
:::info
To keep this guide focused, we won't go into the exact steps for creating the
source dataset in Azure Data Factory. You can upload the sample data to any
storage service of your choice, for example, Azure Blob Storage, Microsoft SQL
Server, or even a different file format supported by Azure Data Factory.
:::
Upload the dataset to your Azure Blob Storage (or another preferred storage
service). Then, in Azure Data Factory Studio, go to the Factory Resources pane.
Create a new dataset that points to the uploaded data. Click Publish all to
save your changes.
Creating a Copy Activity to transfer data to ClickHouse {#creating-the-copy-activity-to-transfer-data-to-clickhouse}
Now that we've configured both the input and output datasets, we can set up a
Copy Data
activity to transfer data from our example dataset into the
sensors
table in ClickHouse.
Open
Azure Data Factory Studio
, go to the
Author tab
. In the
Factory Resources
pane, hover over
Pipeline
, click the three-dot
icon, and select
New pipeline
.
In the
Activities
pane, expand the
Move and transform
section and
drag the
Copy data
activity onto the canvas.
Select the
Source
tab, and choose the source dataset you created earlier.
Go to the
Sink
tab and select the ClickHouse dataset created for your
sensors table. Set
Request method
to POST. Ensure
HTTP compression
type
is set to
None
.
:::warning
HTTP compression does not work correctly in Azure Data Factory's Copy Data
activity. When enabled, Azure sends a payload consisting of zero bytes only,
which is likely a bug in the service. Be sure to leave compression disabled.
:::
:::info
We recommend keeping the default batch size of 10,000, or even increasing it
further. For more details, see
Selecting an Insert Strategy / Batch inserts if synchronous
.
:::
Click
Debug
at the top of the canvas to run the pipeline. After a short
wait, the activity will be queued and executed. If everything is configured
correctly, the task should finish with a
Success
status.
Once complete, click
Publish all
to save your pipeline and dataset changes.
Additional resources {#additional-resources-1}
HTTP Interface
Copy and transform data from and to a REST endpoint by using Azure Data Factory
Selecting an Insert Strategy
Create and configure a self-hosted integration runtime
slug: /integrations/azure-data-factory
description: 'Bringing Azure Data into ClickHouse'
keywords: ['azure data factory', 'azure', 'microsoft', 'data']
title: 'Bringing Azure Data into ClickHouse'
doc_type: 'guide'
| Page | Description |
|------|-------------|
| Overview | Overview of the two approaches used for bringing Azure Data into ClickHouse |
| Using ClickHouse's azureBlobStorage table function | Option 1 - an efficient and straightforward way to copy data from Azure Blob Storage or Azure Data Lake Storage into ClickHouse using the azureBlobStorage table function |
| Using ClickHouse's HTTP interface | Option 2 - instead of having ClickHouse pull the data from Azure, have Azure Data Factory push the data to ClickHouse using its HTTP interface |
sidebar_label: 'Google Cloud Storage (GCS)'
sidebar_position: 4
slug: /integrations/gcs
description: 'Google Cloud Storage (GCS) Backed MergeTree'
title: 'Integrate Google Cloud Storage with ClickHouse'
doc_type: 'guide'
keywords: ['Google Cloud Storage ClickHouse', 'GCS ClickHouse integration', 'GCS backed MergeTree', 'ClickHouse GCS storage', 'Google Cloud ClickHouse']
import BucketDetails from '@site/docs/_snippets/_GCS_authentication_and_bucket.md';
import Image from '@theme/IdealImage';
import GCS_examine_bucket_1 from '@site/static/images/integrations/data-ingestion/s3/GCS-examine-bucket-1.png';
import GCS_examine_bucket_2 from '@site/static/images/integrations/data-ingestion/s3/GCS-examine-bucket-2.png';
Integrate Google Cloud Storage with ClickHouse
:::note
If you are using ClickHouse Cloud on
Google Cloud
, this page does not apply as your services will already be using
Google Cloud Storage
. If you are looking to
SELECT
or
INSERT
data from GCS, please see the
gcs
table function
.
:::
ClickHouse recognizes that GCS represents an attractive storage solution for users seeking to separate storage and compute. To help achieve this, support is provided for using GCS as the storage for a MergeTree engine. This will enable users to exploit the scalability and cost benefits of GCS, and the insert and query performance of the MergeTree engine.
GCS backed MergeTree {#gcs-backed-mergetree}
Creating a disk {#creating-a-disk}
To utilize a GCS bucket as a disk, we must first declare it within the ClickHouse configuration in a file under
conf.d
. An example of a GCS disk declaration is shown below. This configuration includes multiple sections to configure the GCS "disk", the cache, and the policy that is specified in DDL queries when tables are to be created on the GCS disk. Each of these are described below.
Storage configuration > disks > gcs {#storage_configuration--disks--gcs}
This part of the configuration is shown in the highlighted section and specifies that:
- Batch deletes are not to be performed. GCS does not currently support batch deletes, so the autodetect is disabled to suppress error messages.
- The type of the disk is
s3
because the S3 API is in use.
- The endpoint as provided by GCS
- The service account HMAC key and secret
- The metadata path on the local disk
xml
<clickhouse>
<storage_configuration>
<disks>
<gcs>
<!--highlight-start-->
<support_batch_delete>false</support_batch_delete>
<type>s3</type>
<endpoint>https://storage.googleapis.com/BUCKET NAME/FOLDER NAME/</endpoint>
<access_key_id>SERVICE ACCOUNT HMAC KEY</access_key_id>
<secret_access_key>SERVICE ACCOUNT HMAC SECRET</secret_access_key>
<metadata_path>/var/lib/clickhouse/disks/gcs/</metadata_path>
<!--highlight-end-->
</gcs>
</disks>
<policies>
<gcs_main>
<volumes>
<main>
<disk>gcs</disk>
</main>
</volumes>
</gcs_main>
</policies>
</storage_configuration>
</clickhouse>
Storage configuration > disks > cache {#storage_configuration--disks--cache}
The example configuration highlighted below enables a 10Gi memory cache for the disk
gcs
.
xml
<clickhouse>
<storage_configuration>
<disks>
<gcs>
<support_batch_delete>false</support_batch_delete>
<type>s3</type>
<endpoint>https://storage.googleapis.com/BUCKET NAME/FOLDER NAME/</endpoint>
<access_key_id>SERVICE ACCOUNT HMAC KEY</access_key_id>
<secret_access_key>SERVICE ACCOUNT HMAC SECRET</secret_access_key>
<metadata_path>/var/lib/clickhouse/disks/gcs/</metadata_path>
</gcs>
<!--highlight-start-->
<gcs_cache>
<type>cache</type>
<disk>gcs</disk>
<path>/var/lib/clickhouse/disks/gcs_cache/</path>
<max_size>10Gi</max_size>
</gcs_cache>
<!--highlight-end-->
</disks>
<policies>
<gcs_main>
<volumes>
<main>
<disk>gcs_cache</disk>
</main>
</volumes>
</gcs_main>
</policies>
</storage_configuration>
</clickhouse>
Storage configuration > policies > gcs_main {#storage_configuration--policies--gcs_main}
Storage configuration policies allow choosing where data is stored. The policy highlighted below allows data to be stored on the disk
gcs
by specifying the policy
gcs_main
. For example,
CREATE TABLE ... SETTINGS storage_policy='gcs_main'
.
xml
<clickhouse>
<storage_configuration>
<disks>
<gcs>
<support_batch_delete>false</support_batch_delete>
<type>s3</type>
<endpoint>https://storage.googleapis.com/BUCKET NAME/FOLDER NAME/</endpoint>
<access_key_id>SERVICE ACCOUNT HMAC KEY</access_key_id>
<secret_access_key>SERVICE ACCOUNT HMAC SECRET</secret_access_key>
<metadata_path>/var/lib/clickhouse/disks/gcs/</metadata_path>
</gcs>
</disks>
<policies>
<!--highlight-start-->
<gcs_main>
<volumes>
<main>
<disk>gcs</disk>
</main>
</volumes>
</gcs_main>
<!--highlight-end-->
</policies>
</storage_configuration>
</clickhouse>
A complete list of settings relevant to this disk declaration can be found
here
.
Creating a table {#creating-a-table}
Assuming you have configured your disk to use a bucket with write access, you should be able to create a table such as in the example below. For purposes of brevity, we use a subset of the NYC taxi columns and stream data directly to the GCS-backed table:
sql
CREATE TABLE trips_gcs
(
`trip_id` UInt32,
`pickup_date` Date,
`pickup_datetime` DateTime,
`dropoff_datetime` DateTime,
`pickup_longitude` Float64,
`pickup_latitude` Float64,
`dropoff_longitude` Float64,
`dropoff_latitude` Float64,
`passenger_count` UInt8,
`trip_distance` Float64,
`tip_amount` Float32,
`total_amount` Float32,
`payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4)
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime
-- highlight-next-line
SETTINGS storage_policy='gcs_main'
sql
INSERT INTO trips_gcs SELECT trip_id, pickup_date, pickup_datetime, dropoff_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tip_amount, total_amount, payment_type FROM s3('https://ch-nyc-taxi.s3.eu-west-3.amazonaws.com/tsv/trips_{0..9}.tsv.gz', 'TabSeparatedWithNames') LIMIT 1000000;
Depending on the hardware, this latter insert of 1m rows may take a few minutes to execute. You can confirm the progress via the system.processes table. Feel free to adjust the row count up to the limit of 10m and explore some sample queries.
sql
SELECT passenger_count, avg(tip_amount) AS avg_tip, avg(total_amount) AS avg_amount FROM trips_gcs GROUP BY passenger_count;
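While the insert from S3 is running, you can also watch it via system.processes as mentioned above. A query along these lines (the columns are from the standard system.processes table) shows elapsed time and row counts for the statement:

```sql
SELECT
    elapsed,
    read_rows,
    written_rows,
    formatReadableSize(memory_usage) AS memory,
    query
FROM system.processes
WHERE query ILIKE 'INSERT INTO trips_gcs%'
FORMAT Vertical;
```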
Handling replication {#handling-replication}
Replication with GCS disks can be accomplished by using the
ReplicatedMergeTree
table engine. See the
replicating a single shard across two GCP regions using GCS
guide for details.
Learn more {#learn-more}
The
Cloud Storage XML API
is interoperable with some tools and libraries that work with services such as Amazon Simple Storage Service (Amazon S3).
For further information on tuning threads, see
Optimizing for Performance
.
Using Google Cloud Storage (GCS) {#gcs-multi-region}
:::tip
Object storage is used by default in ClickHouse Cloud; you do not need to follow this procedure if you are running in ClickHouse Cloud.
:::
Plan the deployment {#plan-the-deployment}
This tutorial is written to describe a replicated ClickHouse deployment running in Google Cloud and using Google Cloud Storage (GCS) as the ClickHouse storage disk "type".
In the tutorial, you will deploy ClickHouse server nodes in Google Compute Engine VMs, each with an associated GCS bucket for storage. Replication is coordinated by a set of ClickHouse Keeper nodes, also deployed as VMs.
Sample requirements for high availability:
- Two ClickHouse server nodes, in two GCP regions
- Two GCS buckets, deployed in the same regions as the two ClickHouse server nodes
- Three ClickHouse Keeper nodes, two of them are deployed in the same regions as the ClickHouse server nodes. The third can be in the same region as one of the first two Keeper nodes, but in a different availability zone.
ClickHouse Keeper requires two nodes to function, hence a requirement for three nodes for high availability.
Prepare virtual machines {#prepare-vms}
Deploy five VMs in three regions:

| Region | ClickHouse Server | Bucket | ClickHouse Keeper |
|--------|-------------------|-------------------|-------------------|
| 1 | chnode1 | bucket_regionname | keepernode1 |
| 2 | chnode2 | bucket_regionname | keepernode2 |
| 3 \* | | | keepernode3 |

\* This can be a different availability zone in the same region as 1 or 2.
Deploy ClickHouse {#deploy-clickhouse}
Deploy ClickHouse on two hosts, in the sample configurations these are named
chnode1
,
chnode2
.
Place
chnode1
in one GCP region, and
chnode2
in a second. In this guide
us-east1
and
us-east4
are used for the compute engine VMs, and also for GCS buckets.
:::note
Do not start
clickhouse server
until after it is configured. Just install it.
:::
Refer to the
installation instructions
when performing the deployment steps on the ClickHouse server nodes.
Deploy ClickHouse Keeper {#deploy-clickhouse-keeper}
Deploy ClickHouse Keeper on three hosts, in the sample configurations these are named
keepernode1
,
keepernode2
, and
keepernode3
.
keepernode1
can be deployed in the same region as
chnode1
,
keepernode2
with
chnode2
, and
keepernode3
in either region, but in a different availability zone from the ClickHouse node in that region.
Refer to the
installation instructions
when performing the deployment steps on the ClickHouse Keeper nodes.
Create two buckets {#create-two-buckets}
The two ClickHouse servers will be located in different regions for high availability. Each will have a GCS bucket in the same region.
In
Cloud Storage > Buckets
choose
CREATE BUCKET
. For this tutorial two buckets are created, one in each of
us-east1
and
us-east4
. The buckets are single region, standard storage class, and not public. When prompted, enable public access prevention. Do not create folders; they will be created when ClickHouse writes to the storage.
If you need step-by-step instructions to create buckets and an HMAC key, then expand
Create GCS buckets and an HMAC key
and follow along:
Configure ClickHouse Keeper {#configure-clickhouse-keeper}
All of the ClickHouse Keeper nodes have the same configuration file except for the
server_id
line (first highlighted line below). Modify the file with the hostnames for your ClickHouse Keeper servers, and on each of the servers set the
server_id
to match the appropriate
server
entry in the
raft_configuration
. Since this example has
server_id
set to
3
, we have highlighted the matching lines in the
raft_configuration
.
Edit the file with your hostnames, and make sure that they resolve from the ClickHouse server nodes and the Keeper nodes
Copy the file into place (/etc/clickhouse-keeper/keeper_config.xml) on each of the Keeper servers
Edit the
server_id
on each machine, based on its entry number in the
raft_configuration
```xml title=/etc/clickhouse-keeper/keeper_config.xml
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
<server_id>3</server_id>
<log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
<snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
<coordination_settings>
<operation_timeout_ms>10000</operation_timeout_ms>
<session_timeout_ms>30000</session_timeout_ms>
<raft_logs_level>warning</raft_logs_level>
</coordination_settings>
<raft_configuration>
<server>
<id>1</id>
<hostname>keepernode1.us-east1-b.c.clickhousegcs-374921.internal</hostname>
<port>9234</port>
</server>
<server>
<id>2</id>
<hostname>keepernode2.us-east4-c.c.clickhousegcs-374921.internal</hostname>
<port>9234</port>
</server>
<server>
<id>3</id>
<hostname>keepernode3.us-east5-a.c.clickhousegcs-374921.internal</hostname>
<port>9234</port>
</server>
</raft_configuration>
</keeper_server>
```
Configure ClickHouse server {#configure-clickhouse-server}
:::note best practice
Some of the steps in this guide will ask you to place a configuration file in
/etc/clickhouse-server/config.d/
. This is the default location on Linux systems for configuration override files. When you put these files into that directory ClickHouse will merge the content with the default configuration. By placing these files in the
config.d
directory you will avoid losing your configuration during an upgrade.
:::
Networking {#networking}
By default, ClickHouse listens on the loopback interface; in a replicated setup, networking between machines is necessary. Listen on all interfaces:
xml title=/etc/clickhouse-server/config.d/network.xml
<clickhouse>
<listen_host>0.0.0.0</listen_host>
</clickhouse>
Remote ClickHouse Keeper servers {#remote-clickhouse-keeper-servers}
Replication is coordinated by ClickHouse Keeper. This configuration file identifies the ClickHouse Keeper nodes by hostname and port number.
Edit the hostnames to match your Keeper hosts
xml title=/etc/clickhouse-server/config.d/use-keeper.xml
<clickhouse>
<zookeeper>
<node index="1">
<host>keepernode1.us-east1-b.c.clickhousegcs-374921.internal</host>
<port>9181</port>
</node>
<node index="2">
<host>keepernode2.us-east4-c.c.clickhousegcs-374921.internal</host>
<port>9181</port>
</node>
<node index="3">
<host>keepernode3.us-east5-a.c.clickhousegcs-374921.internal</host>
<port>9181</port>
</node>
</zookeeper>
</clickhouse>
Remote ClickHouse servers {#remote-clickhouse-servers}
This file configures the hostname and port of each ClickHouse server in the cluster. The default configuration file contains sample cluster definitions, in order to show only the clusters that are completely configured the tag
replace="true"
is added to the
remote_servers
entry so that when this configuration is merged with the default it replaces the
remote_servers
section instead of adding to it.
Edit the file with your hostnames, and make sure that they resolve from the ClickHouse server nodes
xml title=/etc/clickhouse-server/config.d/remote-servers.xml
<clickhouse>
<remote_servers replace="true">
<cluster_1S_2R>
<shard>
<replica>
<host>chnode1.us-east1-b.c.clickhousegcs-374921.internal</host>
<port>9000</port>
</replica>
<replica>
<host>chnode2.us-east4-c.c.clickhousegcs-374921.internal</host>
<port>9000</port>
</replica>
</shard>
</cluster_1S_2R>
</remote_servers>
</clickhouse>
Replica identification {#replica-identification}
This file configures settings related to the ClickHouse Keeper path. Specifically the macros used to identify which replica the data is part of. On one server the replica should be specified as
replica_1
, and on the other server
replica_2
. The names can be changed, based on our example of one replica being stored in South Carolina and the other in Northern Virginia the values could be
carolina
and
virginia
; just make sure that they are different on each machine.
```xml title=/etc/clickhouse-server/config.d/macros.xml
<clickhouse>
    <distributed_ddl>
        <path>/clickhouse/task_queue/ddl</path>
    </distributed_ddl>
    <macros>
        <cluster>cluster_1S_2R</cluster>
        <shard>1</shard>
        <replica>replica_1</replica>
    </macros>
</clickhouse>
```
Storage in GCS {#storage-in-gcs}
ClickHouse storage configuration includes
disks
and
policies
. The disk being configured below is named
gcs
, and is of
type
s3
. The type is s3 because ClickHouse accesses the GCS bucket as if it was an AWS S3 bucket. Two copies of this configuration will be needed, one for each of the ClickHouse server nodes.
These substitutions should be made in the configuration below.
These substitutions differ between the two ClickHouse server nodes:
-
REPLICA 1 BUCKET
should be set to the name of the bucket in the same region as the server
-
REPLICA 1 FOLDER
should be changed to
replica_1
on one of the servers, and
replica_2
on the other
These substitutions are common across the two nodes:
- The
access_key_id
should be set to the HMAC Key generated earlier
- The
secret_access_key
should be set to HMAC Secret generated earlier
xml title=/etc/clickhouse-server/config.d/storage.xml
<clickhouse>
<storage_configuration>
<disks>
<gcs>
<support_batch_delete>false</support_batch_delete>
<type>s3</type>
<endpoint>https://storage.googleapis.com/REPLICA 1 BUCKET/REPLICA 1 FOLDER/</endpoint>
<access_key_id>SERVICE ACCOUNT HMAC KEY</access_key_id>
<secret_access_key>SERVICE ACCOUNT HMAC SECRET</secret_access_key>
<metadata_path>/var/lib/clickhouse/disks/gcs/</metadata_path>
</gcs>
<cache>
<type>cache</type>
<disk>gcs</disk>
<path>/var/lib/clickhouse/disks/gcs_cache/</path>
<max_size>10Gi</max_size>
</cache>
</disks>
<policies>
<gcs_main>
<volumes>
<main>
<disk>gcs</disk>
</main>
</volumes>
</gcs_main>
</policies>
</storage_configuration>
</clickhouse>
Start ClickHouse Keeper {#start-clickhouse-keeper}
Use the commands for your operating system, for example:
bash
sudo systemctl enable clickhouse-keeper
sudo systemctl start clickhouse-keeper
sudo systemctl status clickhouse-keeper
Check ClickHouse Keeper status {#check-clickhouse-keeper-status}
Send commands to the ClickHouse Keeper with
netcat
. For example,
mntr
returns the state of the ClickHouse Keeper cluster. If you run the command on each of the Keeper nodes you will see that one is a leader, and the other two are followers:
bash
echo mntr | nc localhost 9181
```response
zk_version v22.7.2.15-stable-f843089624e8dd3ff7927b8a125cf3a7a769c069
zk_avg_latency 0
zk_max_latency 11
zk_min_latency 0
zk_packets_received 1783
zk_packets_sent 1783
highlight-start
zk_num_alive_connections 2
zk_outstanding_requests 0
zk_server_state leader
highlight-end
zk_znode_count 135
zk_watch_count 8
zk_ephemerals_count 3
zk_approximate_data_size 42533
zk_key_arena_size 28672
zk_latest_snapshot_size 0
zk_open_file_descriptor_count 182
zk_max_file_descriptor_count 18446744073709551615
highlight-start
zk_followers 2
zk_synced_followers 2
highlight-end
```
Start ClickHouse server {#start-clickhouse-server}
On
chnode1
and
chnode2
run:
bash
sudo service clickhouse-server start
bash
sudo service clickhouse-server status
Verification {#verification}
Verify disk configuration {#verify-disk-configuration}
system.disks
should contain records for each disk:
- default
- gcs
- cache
sql
SELECT *
FROM system.disks
FORMAT Vertical
```response
Row 1:
──────
name: cache
path: /var/lib/clickhouse/disks/gcs/
free_space: 18446744073709551615
total_space: 18446744073709551615
unreserved_space: 18446744073709551615
keep_free_space: 0
type: s3
is_encrypted: 0
is_read_only: 0
is_write_once: 0
is_remote: 1
is_broken: 0
cache_path: /var/lib/clickhouse/disks/gcs_cache/
Row 2:
──────
name: default
path: /var/lib/clickhouse/
free_space: 6555529216
total_space: 10331889664
unreserved_space: 6555529216
keep_free_space: 0
type: local
is_encrypted: 0
is_read_only: 0
is_write_once: 0
is_remote: 0
is_broken: 0
cache_path:
Row 3:
──────
name: gcs
path: /var/lib/clickhouse/disks/gcs/
free_space: 18446744073709551615
total_space: 18446744073709551615
unreserved_space: 18446744073709551615
keep_free_space: 0
type: s3
is_encrypted: 0
is_read_only: 0
is_write_once: 0
is_remote: 1
is_broken: 0
cache_path:
3 rows in set. Elapsed: 0.002 sec.
```
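As an additional check, you can confirm that both replicas appear in the cluster defined in remote-servers.xml; a query of this shape against the standard system.clusters table should return one shard with two replicas:

```sql
SELECT cluster, shard_num, replica_num, host_name
FROM system.clusters
WHERE cluster = 'cluster_1S_2R';
```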
Verify that tables created on the cluster are created on both nodes {#verify-that-tables-created-on-the-cluster-are-created-on-both-nodes}