| id (string, 36 chars) | document (string, 3-3k chars) | metadata (string, 23-69 chars) | embeddings (list of 384 floats) |
|---|---|---|---|
60fea322-bf4c-4a49-b9fa-2dc0585d3891 | |IPv6 |β |β |
|Object |β |β |
|Point |β |β |
|Nothing |β |β |
|MultiPolygon |β |β |
|Ring |β |β |
|Polygon |β |β |
|SimpleAggregateFunction|β |β |
|AggregateFunction |β |β |
|Variant |β |β |
|Dynamic |β |β |
|JSON |β |β | | {"source_file": "index.md"} | [
-0.008404366672039032,
0.020577218383550644,
-0.061440855264663696,
-0.02818080596625805,
-0.05130193009972572,
-0.01634921506047249,
-0.08616439998149872,
0.05550788715481758,
-0.008716101758182049,
-0.03697163611650467,
-0.0010163107654079795,
-0.01093334425240755,
-0.038041695952415466,
... |
683663bc-ae0f-4c30-8181-56996dded885 | ClickHouse Data Types
:::note
- AggregateFunction - :warning: does not support
SELECT * FROM table ...
- Decimal -
SET output_format_decimal_trailing_zeros=1
in 21.9+ for consistency
- Enum - can be treated as both string and integer
- UInt64 - mapped to
long
in client-v1
:::
Features {#features}
Table of features of the clients:
| Name | Client V2 | Client V1 | Comments |
|----------------------------------------------|:---------:|:---------:|:---------:|
| Http Connection |β |β | |
| Http Compression (LZ4) |β |β | |
| Server Response Compression - LZ4 |β |β | |
| Client Request Compression - LZ4 |β |β | |
| HTTPS |β |β | |
| Client SSL Cert (mTLS) |β |β | |
| Http Proxy |β |β | |
| POJO SerDe |β |β | |
| Connection Pool |β |β | When Apache HTTP Client is used |
| Named Parameters |β |β | |
| Retry on failure |β |β | |
| Failover |β |β | |
| Load-balancing |β |β | |
| Server auto-discovery |β |β | |
| Log Comment |β |β | |
| Session Roles |β |β | |
| SSL Client Authentication |β |β | |
| Session timezone |β |β | |
The JDBC driver inherits the same features as the underlying client implementation. Other JDBC features are listed on its
page
.
Compatibility {#compatibility}
All projects in this repo are tested with all
active LTS versions
of ClickHouse.
Support policy
We recommend upgrading the client regularly so you do not miss security fixes and new improvements.
If you have an issue with migrating to the v2 API -
create an issue
and we will respond!
Logging {#logging}
Our Java language client uses
SLF4J
for logging. You can use any SLF4J-compatible logging framework, such as
Logback
or
Log4j
.
For example, if you are using Maven you could add the following dependency to your
pom.xml
file:
```xml title="pom.xml"
<!-- SLF4J API -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.16</version>
</dependency>
<!-- Logback Core -->
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-core</artifactId>
<version>1.5.16</version> <!-- Use the latest version -->
</dependency>
<!-- Logback Classic (bridges SLF4J to Logback) -->
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.5.16</version> <!-- Use the latest version -->
</dependency>
``` | {"source_file": "index.md"} | [
-0.03514324501156807,
0.026939602568745613,
-0.056934963911771774,
0.06714755296707153,
-0.03455270454287529,
-0.015689672902226448,
-0.01073669083416462,
-0.018312273547053337,
0.020091090351343155,
-0.008307615295052528,
0.044653888791799545,
-0.09449347108602524,
0.09736808389425278,
-0... |
004a0293-905e-4993-b157-689ff299fda8 | ```
Configuring logging {#configuring-logging}
This is going to depend on the logging framework you are using. For example, if you are using
Logback
, you could configure logging in a file called
logback.xml
:
```xml title="logback.xml"
<configuration>
    <!-- Console Appender -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>[%d{yyyy-MM-dd HH:mm:ss}] [%level] [%thread] %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <!-- File Appender -->
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>logs/app.log</file>
        <append>true</append>
        <encoder>
            <pattern>[%d{yyyy-MM-dd HH:mm:ss}] [%level] [%thread] %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <!-- Root Logger -->
    <root level="info">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FILE" />
    </root>
    <!-- Custom Log Levels for Specific Packages -->
    <logger name="com.clickhouse" level="info" />
</configuration>
```
Changelog | {"source_file": "index.md"} | [
0.01869801990687847,
0.03615571931004524,
-0.053874045610427856,
0.017838383093476295,
0.0016178456135094166,
-0.028991933912038803,
-0.006104946136474609,
0.03515171632170677,
0.056949928402900696,
0.019018283113837242,
0.038277506828308105,
-0.0103018032386899,
0.009172886610031128,
0.02... |
b9bea760-270b-42c7-90f2-4cabd6b15702 | sidebar_label: 'R2DBC Driver'
sidebar_position: 5
keywords: ['clickhouse', 'java', 'driver', 'integrate', 'r2dbc']
description: 'ClickHouse R2DBC Driver'
slug: /integrations/java/r2dbc
title: 'R2DBC driver'
doc_type: 'reference'
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
R2DBC driver {#r2dbc-driver}
An
R2DBC
wrapper around the async Java client for ClickHouse.
Environment requirements {#environment-requirements}
OpenJDK
version >= 8
Setup {#setup}
```xml
<dependency>
    <groupId>com.clickhouse</groupId>
    <!-- change to clickhouse-r2dbc_0.9.1 for SPI 0.9.1.RELEASE -->
    <artifactId>clickhouse-r2dbc</artifactId>
    <version>0.7.1</version>
    <!-- use uber jar with all dependencies included, change classifier to http or grpc for smaller jar -->
    <classifier>all</classifier>
    <exclusions>
        <exclusion>
            <groupId>*</groupId>
            <artifactId>*</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```
Connect to ClickHouse {#connect-to-clickhouse}
```java showLineNumbers
ConnectionFactory connectionFactory = ConnectionFactories
    .get("r2dbc:clickhouse:http://{username}:{password}@{host}:{port}/{database}");

Mono.from(connectionFactory.create())
    .flatMapMany(connection -> connection
        // run a simple statement to verify connectivity
        .createStatement("SELECT 1")
        .execute())
    .subscribe();
```
Query {#query}
```java showLineNumbers
connection
    .createStatement("select domain, path, toDate(cdate) as d, count(1) as count from clickdb.clicks where domain = :domain group by domain, path, d")
    .bind("domain", domain)
    .execute()
    .flatMap(result -> result
        .map((row, rowMetadata) -> String.format("%s%s[%s]:%d",
            row.get("domain", String.class),
            row.get("path", String.class),
            row.get("d", LocalDate.class),
            row.get("count", Long.class)))
    )
    .doOnNext(System.out::println)
    .subscribe();
```
Insert {#insert}
```java showLineNumbers
connection
    .createStatement("insert into clickdb.clicks values (:domain, :path, :cdate, :count)")
    .bind("domain", click.getDomain())
    .bind("path", click.getPath())
    .bind("cdate", LocalDateTime.now())
    .bind("count", 1)
    .execute();
``` | {"source_file": "r2dbc.md"} | [
-0.028124088421463966,
-0.05027095600962639,
-0.08337748795747757,
0.035315681248903275,
0.0072505418211221695,
0.026471110060811043,
0.041224535554647446,
0.04600473493337631,
-0.13515706360340118,
-0.07424328476190567,
0.009161802008748055,
-0.03857674449682236,
0.015559909865260124,
-0.... |
65b1d332-99ec-48dd-86d5-78822983d563 | slug: /integrations/s3/performance
sidebar_position: 2
sidebar_label: 'Optimizing for performance'
title: 'Optimizing for S3 Insert and Read Performance'
description: 'Optimizing the performance of S3 read and insert'
doc_type: 'guide'
keywords: ['s3', 'performance', 'optimization', 'object storage', 'data loading']
import Image from '@theme/IdealImage';
import InsertMechanics from '@site/static/images/integrations/data-ingestion/s3/insert_mechanics.png';
import Pull from '@site/static/images/integrations/data-ingestion/s3/pull.png';
import Merges from '@site/static/images/integrations/data-ingestion/s3/merges.png';
import ResourceUsage from '@site/static/images/integrations/data-ingestion/s3/resource_usage.png';
import InsertThreads from '@site/static/images/integrations/data-ingestion/s3/insert_threads.png';
import S3Cluster from '@site/static/images/integrations/data-ingestion/s3/s3Cluster.png';
import HardwareSize from '@site/static/images/integrations/data-ingestion/s3/hardware_size.png';
This section focuses on optimizing performance when reading and inserting data from S3 using the
s3 table functions
.
:::info
The lessons described in this guide can be applied to other object storage implementations with their own dedicated table functions, such as
GCS
and
Azure Blob storage
.
:::
Before tuning threads and block sizes to improve insert performance, we recommend users understand the mechanics of S3 inserts. If you're familiar with the insert mechanics, or just want some quick tips, skip to our example
below
.
Insert Mechanics (single node) {#insert-mechanics-single-node}
Two main factors, in addition to hardware size, influence the performance and resource usage of ClickHouse's data insert mechanics (for a single node):
insert block size
and
insert parallelism
.
Insert block size {#insert-block-size}
When performing an
INSERT INTO SELECT
, ClickHouse receives some data portion, and ① forms (at least) one in-memory insert block (per
partitioning key
) from the received data. The block's data is sorted, and table engine-specific optimizations are applied. The data is then compressed and ② written to the database storage in the form of a new data part.
The insert block size impacts both the
disk file I/O usage
and memory usage of a ClickHouse server. Larger insert blocks use more memory but generate larger and fewer initial parts. The fewer parts ClickHouse needs to create for loading a large amount of data, the less disk file I/O and automatic
background merges required
.
When using an
INSERT INTO SELECT
query in combination with an integration table engine or a table function, the data is pulled by the ClickHouse server:
Until the data is completely loaded, the server executes a loop:
```bash
① Pull and parse the next portion of data and form an in-memory data block (one per partitioning key) from it.
② Write the block into a new part on storage.
Go to ①
``` | {"source_file": "performance.md"} | [
0.001134472549892962,
-0.0009356016525998712,
-0.06998753547668457,
0.017270460724830627,
0.03734198585152626,
-0.04272099584341049,
-0.011076747439801693,
0.05845081806182861,
-0.03786936029791832,
0.01504618488252163,
0.021931802853941917,
0.020203517749905586,
0.13620969653129578,
-0.07... |
b6776f1c-8e2d-4377-a5c6-c540682c21dc | ```bash
① Pull and parse the next portion of data and form an in-memory data block (one per partitioning key) from it.
② Write the block into a new part on storage.
Go to ①
```
In ①, the size depends on the insert block size, which can be controlled with two settings:
min_insert_block_size_rows
(default:
1048545
, i.e. ~1 million rows)
min_insert_block_size_bytes
(default:
256 MiB
)
When either the specified number of rows is collected in the insert block, or the configured amount of data is reached (whichever happens first), the block is written into a new part. The insert loop continues at step ①.
Note that the
min_insert_block_size_bytes
value denotes the uncompressed in-memory block size (and not the compressed on-disk part size). Also, note that the created blocks and parts rarely precisely contain the configured number of rows or bytes because ClickHouse streams and
processes
data row-
block
-wise. Therefore, these settings specify minimum thresholds.
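The two thresholds act as an either/or trigger: a block is flushed once it reaches the configured row count or the configured uncompressed byte size, whichever comes first. The sketch below is a small, purely illustrative Python model of this flush rule (the row stream and the function name are ours; real blocks are formed server-side):

```python
# Illustrative model of the "flush when either minimum threshold is reached"
# rule behind min_insert_block_size_rows / min_insert_block_size_bytes.

def form_blocks(rows, min_rows=1_048_545, min_bytes=256 * 1024 * 1024):
    """Group (row_id, byte_size) pairs into blocks, flushing when either
    the row-count or the uncompressed-byte threshold is reached."""
    blocks, cur_rows, cur_bytes = [], 0, 0
    for _, size in rows:
        cur_rows += 1
        cur_bytes += size
        if cur_rows >= min_rows or cur_bytes >= min_bytes:
            blocks.append((cur_rows, cur_bytes))
            cur_rows, cur_bytes = 0, 0
    if cur_rows:  # leftover rows form a final, smaller block
        blocks.append((cur_rows, cur_bytes))
    return blocks

# 10 wide rows of 64 MiB each: the byte threshold fires first, every 4 rows,
# leaving two full 256 MiB blocks and one smaller trailing block.
wide = [(i, 64 * 1024 * 1024) for i in range(10)]
print(form_blocks(wide))
```

Note how the trailing block is smaller than both thresholds, which is why the settings are minimums rather than exact block sizes.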
Be aware of merges {#be-aware-of-merges}
The smaller the configured insert block size is, the more initial parts get created for a large data load, and the more background part merges are executed concurrently with the data ingestion. This can cause resource contention (CPU and memory) and require additional time (for reaching a
healthy
number of parts, below ~3000) after the ingestion is finished.
:::important
ClickHouse query performance will be negatively impacted if the part count exceeds the
recommended limits
.
:::
ClickHouse will continuously
merge parts
into larger parts until they
reach
a compressed size of ~150 GiB. This diagram shows how a ClickHouse server merges parts:
A single ClickHouse server utilizes several
background merge threads
to execute concurrent
part merges
. Each thread executes a loop:
```bash
① Decide which parts to merge next, and load these parts as blocks into memory.
② Merge the loaded blocks in memory into a larger block.
③ Write the merged block into a new part on disk.
Go to ①
Note that
increasing
the number of CPU cores and the size of RAM increases the background merge throughput.
Parts that were merged into larger parts are marked as
inactive
and finally deleted after a
configurable
number of minutes. Over time, this creates a tree of merged parts (hence the name
MergeTree
table).
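As a toy model of this merge loop, the Python sketch below repeatedly merges the two smallest active parts until no further merge would stay under the ~150 GiB cap. It is a deliberate simplification (real merge selection uses heuristics, and sizes here are plain GiB numbers), but it shows how many initial parts collapse into few large ones:

```python
# Toy model of background part merging: repeatedly merge the two smallest
# active parts, stopping once a merge would exceed the ~150 GiB size cap.
# This ignores ClickHouse's real merge-selection heuristics entirely.

MAX_PART_GIB = 150

def merge_until_done(part_sizes):
    parts = sorted(part_sizes)
    merges = 0
    while len(parts) > 1 and parts[0] + parts[1] <= MAX_PART_GIB:
        a, b = parts.pop(0), parts.pop(0)  # the two smallest active parts
        parts.append(a + b)                # merged part replaces them
        parts.sort()
        merges += 1
    return parts, merges

# Eight 10 GiB initial parts collapse into a single 80 GiB part:
parts, merges = merge_until_done([10] * 8)
print(parts, merges)
```

The fewer initial parts the ingest produces, the fewer such merge steps are needed, which is exactly the argument for larger insert blocks.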
Insert parallelism {#insert-parallelism}
A ClickHouse server can process and insert data in parallel. The level of insert parallelism impacts the ingest throughput and memory usage of a ClickHouse server. Loading and processing data in parallel requires more main memory but increases the ingest throughput as data is processed faster. | {"source_file": "performance.md"} | [
-0.004419524688273668,
-0.030117949470877647,
-0.054742954671382904,
-0.01571212150156498,
-0.009702260605990887,
-0.07800404727458954,
0.0014407889684662223,
0.04250666871666908,
0.008842004463076591,
0.021812403574585915,
0.0598100945353508,
0.048210110515356064,
0.059120167046785355,
-0... |
31b561ea-5292-4735-9c42-4d3c1f92c488 | Table functions like s3 allow specifying sets of to-be-loaded-file names via glob patterns. When a glob pattern matches multiple existing files, ClickHouse can parallelize reads across and within these files and insert the data in parallel into a table by utilizing parallel running insert threads (per server):
Until all data from all files is processed, each insert thread executes a loop:
```bash
β Get the next portion of unprocessed file data (portion size is based on the configured block size) and create an in-memory data block from it.
β‘ Write the block into a new part on storage.
Go to β .
```
The number of such parallel insert threads can be configured with the
max_insert_threads
setting. The default value is
1
for open-source ClickHouse and 4 for
ClickHouse Cloud
.
With a large number of files, the parallel processing by multiple insert threads works well. It can fully saturate both the available CPU cores and the network bandwidth (for parallel file downloads). In scenarios where just a few large files will be loaded into a table, ClickHouse automatically establishes a high level of data processing parallelism and optimizes network bandwidth usage by spawning additional reader threads per insert thread for reading (downloading) more distinct ranges within large files in parallel.
For the s3 function and table, parallel downloading of an individual file is determined by the values
max_download_threads
and
max_download_buffer_size
. Files will only be downloaded in parallel if their size is greater than
2 * max_download_buffer_size
. By default,
max_download_buffer_size
is set to 10 MiB. In some cases, you can safely increase this buffer size to 50 MB (
max_download_buffer_size=52428800
), with the aim of ensuring smaller files are downloaded by a single thread. This can reduce the time each thread spends making S3 calls and thus also lower the S3 wait time. Furthermore, for files that are too small for parallel reading, ClickHouse automatically prefetches data by pre-reading such files asynchronously to increase throughput.
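The parallel-download condition can be expressed directly: a file is only split across download threads if its size exceeds twice the buffer, so the default 10 MiB buffer gives a ~20 MiB threshold and a 50 MB buffer pushes it to ~100 MB. A small Python check (the helper name is ours, not a ClickHouse API):

```python
# Whether an S3 file is downloaded in parallel: only if its size exceeds
# 2 * max_download_buffer_size (all values in bytes). Helper is illustrative.

DEFAULT_BUFFER = 10 * 1024 * 1024  # default max_download_buffer_size: 10 MiB

def is_parallel_download(file_size, max_download_buffer_size=DEFAULT_BUFFER):
    return file_size > 2 * max_download_buffer_size

# A 15 MiB file stays on one thread with the default buffer...
print(is_parallel_download(15 * 1024 * 1024))            # False
# ...a 25 MiB file is split across threads...
print(is_parallel_download(25 * 1024 * 1024))            # True
# ...but raising the buffer to 50 MB keeps it on a single thread.
print(is_parallel_download(25 * 1024 * 1024, 52428800))  # False
```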
Measuring performance {#measuring-performance}
Optimizing the performance of queries using the S3 table functions matters in two scenarios: when running queries against data in place, i.e. ad-hoc querying where only ClickHouse compute is used and the data remains in S3 in its original format, and when inserting data from S3 into a ClickHouse MergeTree table engine. Unless specified otherwise, the following recommendations apply to both scenarios.
Impact of hardware size {#impact-of-hardware-size}
The number of available CPU cores and the size of RAM impacts the:
supported
initial size of parts
possible level of
insert parallelism
throughput of
background part merges
and, therefore, the overall ingest throughput.
Region locality {#region-locality} | {"source_file": "performance.md"} | [
-0.041198767721652985,
-0.06372052431106567,
-0.08596455305814743,
-0.0142155010253191,
-0.010343728587031364,
-0.08907061815261841,
-0.0252086091786623,
-0.00860315840691328,
0.03380344808101654,
0.0836101844906807,
0.03614584729075432,
0.028988802805542946,
0.05365462228655815,
-0.094706... |
5c35358e-5bcc-413b-b6b2-acd58f49d742 | possible level of
insert parallelism
throughput of
background part merges
and, therefore, the overall ingest throughput.
Region locality {#region-locality}
Ensure your buckets are located in the same region as your ClickHouse instances. This simple optimization can dramatically improve throughput performance, especially if you deploy your ClickHouse instances on AWS infrastructure.
Formats {#formats}
ClickHouse can read files stored in S3 buckets in the
supported formats
using the
s3
function and
S3
engine. If reading raw files, some of these formats have distinct advantages:
Formats with encoded column names such as Native, Parquet, CSVWithNames, and TabSeparatedWithNames will be less verbose to query, since the user will not be required to specify the column names in the
s3
function. The column names allow this information to be inferred.
Formats will differ in performance with respect to read and write throughput. Native and Parquet represent the most optimal formats for read performance since they are already column-oriented and more compact. The Native format additionally benefits from alignment with how ClickHouse stores data in memory, thus reducing processing overhead as data is streamed into ClickHouse.
The block size will often impact the latency of reads on large files. This is very apparent if you only sample the data, e.g., returning the top N rows. In the case of formats such as CSV and TSV, files must be parsed to return a set of rows. Formats such as Native and Parquet will allow faster sampling as a result.
Each compression format brings pros and cons, often balancing the compression level for speed and biasing compression or decompression performance. If compressing raw files such as CSV or TSV, lz4 offers the fastest decompression performance, sacrificing the compression level. Gzip typically compresses better at the expense of slightly slower read speeds. Xz takes this further by usually offering the best compression with the slowest compression and decompression performance. If exporting, Gz and lz4 offer comparable compression speeds. Balance this against your connection speeds. Any gains from faster decompression or compression will be easily negated by a slower connection to your s3 buckets.
Formats such as native or parquet do not typically justify the overhead of compression. Any savings in data size are likely to be minimal since these formats are inherently compact. The time spent compressing and decompressing will rarely offset network transfer times - especially since s3 is globally available with higher network bandwidth.
Example dataset {#example-dataset}
To illustrate further potential optimizations, we will use
the posts from the Stack Overflow dataset
, optimizing both the query and insert performance of this data.
This dataset consists of 189 Parquet files, with one for every month between July 2008 and March 2024. | {"source_file": "performance.md"} | [
-0.009065316990017891,
-0.05202711001038551,
-0.10578922927379608,
-0.08021888881921768,
0.013640687800943851,
-0.05239797756075859,
-0.04945869743824005,
-0.007210900075733662,
0.06339135020971298,
0.0065217399969697,
-0.008054039441049099,
0.008669252507388592,
-0.0005648824153468013,
-0... |
1d47750d-3777-485a-ac9c-728f0665b0d7 | This dataset consists of 189 Parquet files, with one for every month between July 2008 and March 2024.
Note that we use Parquet for performance, per our
recommendations above
, executing all queries on a ClickHouse Cluster located in the same region as the bucket. This cluster has 3 nodes, each with 32GiB of RAM and 8 vCPUs.
With no tuning, we demonstrate the performance to insert this dataset into a MergeTree table engine as well as execute a query to compute the users asking the most questions. Both of these queries intentionally require a complete scan of the data.
```sql
-- Top usernames
SELECT
OwnerDisplayName,
count() AS num_posts
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet')
WHERE OwnerDisplayName NOT IN ('', 'anon')
GROUP BY OwnerDisplayName
ORDER BY num_posts DESC
LIMIT 5
ββOwnerDisplayNameββ¬βnum_postsββ
β user330315 β 10344 β
β user4039065 β 5316 β
β user149341 β 4102 β
β user529758 β 3700 β
β user3559349 β 3068 β
ββββββββββββββββββββ΄ββββββββββββ
5 rows in set. Elapsed: 3.013 sec. Processed 59.82 million rows, 24.03 GB (19.86 million rows/s., 7.98 GB/s.)
Peak memory usage: 603.64 MiB.
-- Load into posts table
INSERT INTO posts SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet')
0 rows in set. Elapsed: 191.692 sec. Processed 59.82 million rows, 24.03 GB (312.06 thousand rows/s., 125.37 MB/s.)
```
In our example we only return a few rows. If measuring the performance of
SELECT
queries, where large volumes of data are returned to the client, either utilize the
null format
for queries or direct results to the
Null
engine
. This should avoid the client being overwhelmed with data and network saturation.
:::info
When reading from S3, the initial query can often appear slower than when the same query is repeated. This can be attributed both to S3's own caching and to the
ClickHouse Schema Inference Cache
. This stores the inferred schema for files and means the inference step can be skipped on subsequent accesses, thus reducing query time.
:::
Using threads for reads {#using-threads-for-reads}
Read performance on S3 will scale linearly with the number of cores, provided you are not limited by network bandwidth or local I/O. Increasing the number of threads also has memory overhead implications that users should be aware of. The following can potentially be modified to improve read throughput performance:
0.016644593328237534,
-0.046858612447977066,
-0.03561006113886833,
-0.032135091722011566,
-0.044961944222450256,
-0.07946301996707916,
0.03816918283700943,
-0.02620096132159233,
-0.018240660429000854,
0.04284770041704178,
0.056543633341789246,
-0.0391431525349617,
0.062449563294649124,
-0.... |
af75af57-f660-4856-bcd6-1a59ade1b84a | Usually, the default value of
max_threads
is sufficient, i.e., the number of cores. If the amount of memory used for a query is high, and this needs to be reduced, or the
LIMIT
on results is low, this value can be set lower. Users with plenty of memory may wish to experiment with increasing this value for possible higher read throughput from S3. Typically, this is only beneficial on machines with lower core counts, i.e., < 10. The benefit from further parallelization typically diminishes as other resources become the bottleneck, e.g., network bandwidth and CPU contention.
Versions of ClickHouse before 22.3.1 only parallelized reads across multiple files when using the
s3
function or
S3
table engine. This required the user to ensure files were split into chunks on S3 and read using a glob pattern to achieve optimal read performance. Later versions now parallelize downloads within a file.
In low thread count scenarios, users may benefit from setting
remote_filesystem_read_method
to "read" to cause the synchronous reading of files from S3.
For the s3 function and table, parallel downloading of an individual file is determined by the values
max_download_threads
and
max_download_buffer_size
. While
max_download_threads
controls the number of threads used, files will only be downloaded in parallel if their size is greater than 2 *
max_download_buffer_size
By default,
max_download_buffer_size
is set to 10 MiB. In some cases, you can safely increase this buffer size to 50 MB (
max_download_buffer_size=52428800
), with the aim of ensuring smaller files are only downloaded by a single thread. This can reduce the time each thread spends making S3 calls and thus also lower the S3 wait time. See
this blog post
for an example of this.
Before making any changes to improve performance, ensure you measure appropriately. As S3 API calls are sensitive to latency and may impact client timings, use the query log for performance metrics, i.e.,
system.query_log
.
Consider our earlier query: doubling the
max_threads
to
16
(default
max_threads
is the number of cores on a node) improves our read query performance by 2x at the expense of higher memory. Further increasing
max_threads
has diminishing returns as shown.
```sql
SELECT
OwnerDisplayName,
count() AS num_posts
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet')
WHERE OwnerDisplayName NOT IN ('', 'anon')
GROUP BY OwnerDisplayName
ORDER BY num_posts DESC
LIMIT 5
SETTINGS max_threads = 16
ββOwnerDisplayNameββ¬βnum_postsββ
β user330315 β 10344 β
β user4039065 β 5316 β
β user149341 β 4102 β
β user529758 β 3700 β
β user3559349 β 3068 β
ββββββββββββββββββββ΄ββββββββββββ
5 rows in set. Elapsed: 1.505 sec. Processed 59.82 million rows, 24.03 GB (39.76 million rows/s., 15.97 GB/s.)
Peak memory usage: 178.58 MiB.
SETTINGS max_threads = 32 | {"source_file": "performance.md"} | [
-0.035720836371183395,
-0.08284108340740204,
-0.10992590337991714,
-0.048167988657951355,
-0.010691949166357517,
-0.023367779329419136,
-0.05627286434173584,
0.01654588058590889,
0.03317577391862869,
0.021880825981497765,
0.014407621696591377,
0.06972677260637283,
0.013721299357712269,
-0.... |
976e49cb-09ab-4680-9b0f-01ae48da40fe | 5 rows in set. Elapsed: 1.505 sec. Processed 59.82 million rows, 24.03 GB (39.76 million rows/s., 15.97 GB/s.)
Peak memory usage: 178.58 MiB.
SETTINGS max_threads = 32
5 rows in set. Elapsed: 0.779 sec. Processed 59.82 million rows, 24.03 GB (76.81 million rows/s., 30.86 GB/s.)
Peak memory usage: 369.20 MiB.
SETTINGS max_threads = 64
5 rows in set. Elapsed: 0.674 sec. Processed 59.82 million rows, 24.03 GB (88.81 million rows/s., 35.68 GB/s.)
Peak memory usage: 639.99 MiB.
```
Tuning threads and block size for inserts {#tuning-threads-and-block-size-for-inserts}
To achieve maximum ingestion performance, you must choose (1) an insert block size and (2) an appropriate level of insert parallelism, based on (3) the number of available CPU cores and the amount of available RAM. In summary:
The larger we
configure the insert block size
, the fewer parts ClickHouse has to create, and the fewer
disk file I/O
and
background merges
are required.
The higher we configure the
number of parallel insert threads
, the faster the data will be processed.
There is a conflicting tradeoff between these two performance factors (plus a tradeoff with the background part merging). The amount of available main memory of ClickHouse servers is limited. Larger blocks use more main memory, which limits the number of parallel insert threads we can utilize. Conversely, a higher number of parallel insert threads requires more main memory, as the number of insert threads determines the number of insert blocks created in memory concurrently. This limits the possible size of insert blocks. Additionally, there can be resource contention between insert threads and background merge threads. A high number of configured insert threads (1) creates more parts that need to be merged and (2) takes away CPU cores and memory space from background merge threads.
For a detailed description of how the behavior of these parameters impacts performance and resources, we recommend
reading this blog post
. As described in this blog post, tuning can involve a careful balance of the two parameters. This exhaustive testing is often impractical, so in summary, we recommend:
```bash
• max_insert_threads: choose ~ half of the available CPU cores for insert threads (to leave enough dedicated cores for background merges)
• peak_memory_usage_in_bytes: choose an intended peak memory usage; either all available RAM (if it is an isolated ingest) or half or less (to leave room for other concurrent tasks)
Then:
min_insert_block_size_bytes = peak_memory_usage_in_bytes / (~3 * max_insert_threads)
```
With this formula, you can set
min_insert_block_size_rows
to 0 (to disable the row based threshold) while setting
max_insert_threads
to the chosen value and
min_insert_block_size_bytes
to the calculated result from the above formula.
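The recommendation reduces to simple arithmetic. A quick Python check of the formula (the helper name is ours; only the setting names are real ClickHouse settings):

```python
# min_insert_block_size_bytes = peak_memory_usage_in_bytes / (~3 * max_insert_threads)
# The ~3 divisor accounts for insert blocks in flight per insert thread.

def insert_block_size_bytes(peak_memory_usage_in_bytes, max_insert_threads):
    return peak_memory_usage_in_bytes // (3 * max_insert_threads)

# 32 GiB of RAM and 4 insert threads (half of the 8 cores per node):
peak = 32 * 1024**3  # 34359738368 bytes
print(insert_block_size_bytes(peak, 4))  # 2863311530
```

Doubling the thread count halves the per-block budget, which is the memory tradeoff described above.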
Using this formula with our earlier Stack Overflow example:
max_insert_threads=4
(8 cores per node) | {"source_file": "performance.md"} | [
0.042195625603199005,
-0.017169974744319916,
-0.08547463268041611,
-0.010996241122484207,
-0.05974531173706055,
-0.05291253700852394,
-0.020351724699139595,
0.06389506906270981,
-0.053071342408657074,
0.026431478559970856,
-0.05931099131703377,
0.02470978908240795,
0.012210115790367126,
-0... |
de1e3fff-b62f-4037-b563-2efbe28168a2 | Using this formula with our earlier Stack Overflow example.
max_insert_threads=4
(8 cores per node)
peak_memory_usage_in_bytes
- 32 GiB (100% of node resources) or
34359738368
bytes.
min_insert_block_size_bytes
=
34359738368/(3*4) = 2863311530
```sql
INSERT INTO posts SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet') SETTINGS min_insert_block_size_rows=0, max_insert_threads=4, min_insert_block_size_bytes=2863311530
0 rows in set. Elapsed: 128.566 sec. Processed 59.82 million rows, 24.03 GB (465.28 thousand rows/s., 186.92 MB/s.)
```
As shown, tuning these settings has improved insert performance by over
33%
. We leave it to the reader to see if they can improve single-node performance further.
Scaling with resources and nodes {#scaling-with-resources-and-nodes}
Scaling with resources and nodes applies to both read and insert queries.
Vertical scaling {#vertical-scaling}
All previous tuning and queries have only used a single node in our ClickHouse Cloud cluster. Users will also often have more than one ClickHouse node available. We recommend users scale vertically initially, improving S3 throughput linearly with the number of cores. If we repeat our earlier insert and read queries on a larger ClickHouse Cloud node with twice the resources (64 GiB, 16 vCPUs) and appropriate settings, both execute approximately twice as fast.
```sql
INSERT INTO posts SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet') SETTINGS min_insert_block_size_rows=0, max_insert_threads=8, min_insert_block_size_bytes=2863311530
0 rows in set. Elapsed: 67.294 sec. Processed 59.82 million rows, 24.03 GB (888.93 thousand rows/s., 357.12 MB/s.)
SELECT
OwnerDisplayName,
count() AS num_posts
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet')
WHERE OwnerDisplayName NOT IN ('', 'anon')
GROUP BY OwnerDisplayName
ORDER BY num_posts DESC
LIMIT 5
SETTINGS max_threads = 92
5 rows in set. Elapsed: 0.421 sec. Processed 59.82 million rows, 24.03 GB (142.08 million rows/s., 57.08 GB/s.)
```
:::note
Individual nodes can also be bottlenecked by network and S3 GET requests, preventing linear scaling of performance vertically.
:::
Horizontal scaling {#horizontal-scaling}
Eventually, horizontal scaling is often necessary due to hardware availability and cost-efficiency. In ClickHouse Cloud, production clusters have at least 3 nodes. Users may therefore wish to utilize all nodes for an insert.
Utilizing a cluster for S3 reads requires using the
s3Cluster
function as described in
Utilizing Clusters
. This allows reads to be distributed across nodes.
The server that initially receives the insert query first resolves the glob pattern and then dispatches the processing of each matching file dynamically to itself and the other servers.
We repeat our earlier read query distributing the workload across 3 nodes, adjusting the query to use
s3Cluster
. This is performed automatically in ClickHouse Cloud, by referring to the
default
cluster.
As noted in
Utilizing Clusters
, this work is distributed at a file level. To benefit from this feature, users require a sufficient number of files, i.e. at least as many files as nodes.
```sql
SELECT
OwnerDisplayName,
count() AS num_posts
FROM s3Cluster('default', 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet')
WHERE OwnerDisplayName NOT IN ('', 'anon')
GROUP BY OwnerDisplayName
ORDER BY num_posts DESC
LIMIT 5
SETTINGS max_threads = 16
┌─OwnerDisplayName─┬─num_posts─┐
│ user330315       │     10344 │
│ user4039065      │      5316 │
│ user149341       │      4102 │
│ user529758       │      3700 │
│ user3559349      │      3068 │
└──────────────────┴───────────┘
5 rows in set. Elapsed: 0.622 sec. Processed 59.82 million rows, 24.03 GB (96.13 million rows/s., 38.62 GB/s.)
Peak memory usage: 176.74 MiB.
```
Likewise, our insert query can be distributed, using the improved settings identified earlier for a single node:
```sql
INSERT INTO posts SELECT *
FROM s3Cluster('default', 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet') SETTINGS min_insert_block_size_rows=0, max_insert_threads=4, min_insert_block_size_bytes=2863311530
0 rows in set. Elapsed: 171.202 sec. Processed 59.82 million rows, 24.03 GB (349.41 thousand rows/s., 140.37 MB/s.)
```
Readers will notice that distributing the file reads has improved query performance but not insert performance. By default, although reads are distributed using
s3Cluster
, inserts will occur against the initiator node. This means that while reads will occur on each node, the resulting rows will be routed to the initiator for distribution. In high throughput scenarios, this may prove a bottleneck. To address this, set the parameter
parallel_distributed_insert_select
for the
s3cluster
function.
Setting this to
parallel_distributed_insert_select=2
ensures the
SELECT
and
INSERT
will be executed on each shard from/to the underlying table of the distributed engine on each node.
```sql
INSERT INTO posts
SELECT *
FROM s3Cluster('default', 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet')
SETTINGS parallel_distributed_insert_select = 2, min_insert_block_size_rows=0, max_insert_threads=4, min_insert_block_size_bytes=2863311530
0 rows in set. Elapsed: 54.571 sec. Processed 59.82 million rows, 24.03 GB (1.10 million rows/s., 440.38 MB/s.)
Peak memory usage: 11.75 GiB.
```
As expected, this improves insert performance by roughly 3x.
Further tuning {#further-tuning}
Disable de-duplication {#disable-de-duplication}
Insert operations can sometimes fail due to errors such as timeouts. When inserts fail, data may or may not have been successfully inserted. To allow inserts to be safely re-tried by the client, by default in distributed deployments such as ClickHouse Cloud, ClickHouse tries to determine whether the data has already been successfully inserted. If the inserted data is marked as a duplicate, ClickHouse does not insert it into the destination table. However, the user will still receive a successful operation status as if the data had been inserted normally.
While this behavior, which incurs an insert overhead, makes sense when loading data from a client or in batches, it can be unnecessary when performing an
INSERT INTO SELECT
from object storage. By disabling this functionality at insert time, we can improve performance as shown below:
```sql
INSERT INTO posts
SETTINGS parallel_distributed_insert_select = 2, min_insert_block_size_rows = 0, max_insert_threads = 4, min_insert_block_size_bytes = 2863311530, insert_deduplicate = 0
SELECT *
FROM s3Cluster('default', 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet')
SETTINGS parallel_distributed_insert_select = 2, min_insert_block_size_rows = 0, max_insert_threads = 4, min_insert_block_size_bytes = 2863311530, insert_deduplicate = 0
0 rows in set. Elapsed: 52.992 sec. Processed 59.82 million rows, 24.03 GB (1.13 million rows/s., 453.50 MB/s.)
Peak memory usage: 26.57 GiB.
```
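Conceptually, this deduplication hashes each inserted block and silently skips blocks whose hash was recorded recently, which is what makes client retries safe. A toy model of that behaviour (the real mechanism uses block checksums tracked in Keeper; the class and method names here are illustrative):

```python
import hashlib

class DedupTable:
    """Toy model of retry-safe inserts: a block whose hash was already
    recorded is acknowledged as successful but not written again."""
    def __init__(self):
        self.rows = []
        self._seen_hashes = set()

    def insert(self, block: list[str]) -> bool:
        digest = hashlib.sha256("\n".join(block).encode()).hexdigest()
        if digest in self._seen_hashes:
            return True          # duplicate: report success, write nothing
        self._seen_hashes.add(digest)
        self.rows.extend(block)
        return True

t = DedupTable()
block = ["row1", "row2"]
t.insert(block)
t.insert(block)               # client retry after a timeout
assert len(t.rows) == 2       # the retry did not duplicate the data
```

Setting `insert_deduplicate = 0` skips the hash bookkeeping entirely, which is why the insert gets faster but retries are no longer safe.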
Optimize on insert {#optimize-on-insert}
In ClickHouse, the
optimize_on_insert
setting controls whether data parts are merged during the insert process. When enabled (
optimize_on_insert = 1
by default), small parts are merged into larger ones as they are inserted, improving query performance by reducing the number of parts that need to be read. However, this merging adds overhead to the insert process, potentially slowing down high-throughput insertions.
Disabling this setting (
optimize_on_insert = 0
) skips merging during inserts, allowing data to be written more quickly, especially when handling frequent small inserts. The merging process is deferred to the background, allowing for better insert performance but temporarily increasing the number of small parts, which may slow down queries until the background merges complete. This setting is ideal when insert performance is a priority and the background merge process can handle optimization efficiently later. As shown below, disabling the setting can improve insert throughput:
```sql
INSERT INTO posts
SELECT *
FROM s3Cluster('default', 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet')
SETTINGS parallel_distributed_insert_select = 2, min_insert_block_size_rows = 0, max_insert_threads = 4, min_insert_block_size_bytes = 2863311530, insert_deduplicate = 0, optimize_on_insert = 0
0 rows in set. Elapsed: 49.688 sec. Processed 59.82 million rows, 24.03 GB (1.20 million rows/s., 483.66 MB/s.)
```
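The trade-off can be pictured with a toy part-count model: with the setting enabled, the parts created by an insert are merged as they are written; disabled, small parts accumulate until background merges catch up. This is a deliberately simplified illustration, not ClickHouse's merge logic:

```python
def parts_after_inserts(n_inserts: int, parts_per_insert: int,
                        optimize_on_insert: bool) -> int:
    """Toy model: each insert produces several small parts; merging on
    insert collapses them to one part per insert immediately."""
    if optimize_on_insert:
        return n_inserts                     # merged as they are written
    return n_inserts * parts_per_insert      # deferred to background merges

assert parts_after_inserts(100, 4, True) == 100
assert parts_after_inserts(100, 4, False) == 400
```

The extra 300 small parts in the second case are what background merges must later clean up, and what can slow reads in the meantime.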
Misc notes {#misc-notes}
For low memory scenarios, consider lowering
max_insert_delayed_streams_for_parallel_write
if inserting into S3.
slug: /integrations/s3
sidebar_position: 1
sidebar_label: 'Integrating S3 with ClickHouse'
title: 'Integrating S3 with ClickHouse'
description: 'Page describing how to integrate S3 with ClickHouse'
keywords: ['Amazon S3', 'object storage', 'cloud storage', 'data lake', 'S3 integration']
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_ingestion'
import BucketDetails from '@site/docs/_snippets/_S3_authentication_and_bucket.md';
import S3J from '@site/static/images/integrations/data-ingestion/s3/s3-j.png';
import Bucket1 from '@site/static/images/integrations/data-ingestion/s3/bucket1.png';
import Bucket2 from '@site/static/images/integrations/data-ingestion/s3/bucket2.png';
import Image from '@theme/IdealImage';
Integrating S3 with ClickHouse
You can insert data from S3 into ClickHouse and also use S3 as an export destination, thus allowing interaction with "Data Lake" architectures. Furthermore, S3 can provide "cold" storage tiers and assist with separating storage and compute. In the sections below we use the New York City taxi dataset to demonstrate the process of moving data between S3 and ClickHouse, as well as identifying key configuration parameters and providing hints on optimizing performance.
S3 table functions {#s3-table-functions}
The
s3
table function allows you to read and write files from and to S3 compatible storage. The outline for this syntax is:
```sql
s3(path, [aws_access_key_id, aws_secret_access_key,] [format, [structure, [compression]]])
```
where:
- `path` - Bucket URL with a path to the file. This supports the following wildcards in read-only mode: `*`, `?`, `{abc,def}` and `{N..M}`, where `N`, `M` are numbers and `'abc'`, `'def'` are strings. For more information, see the docs on using wildcards in path.
- `format` - The format of the file.
- `structure` - Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
- `compression` - Optional. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. By default, compression is autodetected from the file extension.
Using wildcards in the path expression allows multiple files to be referenced and opens the door for parallelism.
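The wildcard forms described above behave like shell-style globs plus brace expansion. As an illustration only (this is not ClickHouse's implementation), a pattern can be expanded and matched against bucket keys like so:

```python
import fnmatch
import re

def expand_braces(pattern: str) -> list[str]:
    """Expand {N..M} numeric ranges and {a,b} alternations into plain patterns."""
    m = re.search(r"\{(\d+)\.\.(\d+)\}", pattern)
    if m:
        return [p
                for i in range(int(m.group(1)), int(m.group(2)) + 1)
                for p in expand_braces(pattern[:m.start()] + str(i) + pattern[m.end():])]
    m = re.search(r"\{([^{}]*,[^{}]*)\}", pattern)
    if m:
        return [p
                for alt in m.group(1).split(",")
                for p in expand_braces(pattern[:m.start()] + alt + pattern[m.end():])]
    return [pattern]

def match_keys(pattern: str, keys: list[str]) -> list[str]:
    """Keep the keys matched by any expanded pattern ('*' and '?' via fnmatch)."""
    expanded = expand_braces(pattern)
    return [k for k in keys if any(fnmatch.fnmatch(k, p) for p in expanded)]

keys = [f"nyc-taxi/trips_{i}.gz" for i in range(20)]
assert match_keys("nyc-taxi/trips_{0..9}.gz", keys) == keys[:10]
assert len(match_keys("nyc-taxi/trips_*.gz", keys)) == 20
```

Each matched key can then be read independently, which is what enables parallel reads.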
Preparation {#preparation}
Prior to creating the table in ClickHouse, you may want to first take a closer look at the data in the S3 bucket. You can do this directly from ClickHouse using the
DESCRIBE
statement:
sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_*.gz', 'TabSeparatedWithNames');
The output of the
DESCRIBE TABLE
statement should show you how ClickHouse would automatically infer this data, as viewed in the S3 bucket. Notice that it also automatically recognizes and decompresses the gzip compression format:
```sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_*.gz', 'TabSeparatedWithNames') SETTINGS describe_compact_output=1
┌─name──────────────────┬─type───────────────┐
│ trip_id               │ Nullable(Int64)    │
│ vendor_id             │ Nullable(Int64)    │
│ pickup_date           │ Nullable(Date)     │
│ pickup_datetime       │ Nullable(DateTime) │
│ dropoff_date          │ Nullable(Date)     │
│ dropoff_datetime      │ Nullable(DateTime) │
│ store_and_fwd_flag    │ Nullable(Int64)    │
│ rate_code_id          │ Nullable(Int64)    │
│ pickup_longitude      │ Nullable(Float64)  │
│ pickup_latitude       │ Nullable(Float64)  │
│ dropoff_longitude     │ Nullable(Float64)  │
│ dropoff_latitude      │ Nullable(Float64)  │
│ passenger_count       │ Nullable(Int64)    │
│ trip_distance         │ Nullable(String)   │
│ fare_amount           │ Nullable(String)   │
│ extra                 │ Nullable(String)   │
│ mta_tax               │ Nullable(String)   │
│ tip_amount            │ Nullable(String)   │
│ tolls_amount          │ Nullable(Float64)  │
│ ehail_fee             │ Nullable(Int64)    │
│ improvement_surcharge │ Nullable(String)   │
│ total_amount          │ Nullable(String)   │
│ payment_type          │ Nullable(String)   │
│ trip_type             │ Nullable(Int64)    │
│ pickup                │ Nullable(String)   │
│ dropoff               │ Nullable(String)   │
│ cab_type              │ Nullable(String)   │
│ pickup_nyct2010_gid   │ Nullable(Int64)    │
│ pickup_ctlabel        │ Nullable(Float64)  │
│ pickup_borocode       │ Nullable(Int64)    │
│ pickup_ct2010         │ Nullable(String)   │
│ pickup_boroct2010     │ Nullable(String)   │
│ pickup_cdeligibil     │ Nullable(String)   │
│ pickup_ntacode        │ Nullable(String)   │
│ pickup_ntaname        │ Nullable(String)   │
│ pickup_puma           │ Nullable(Int64)    │
│ dropoff_nyct2010_gid  │ Nullable(Int64)    │
│ dropoff_ctlabel       │ Nullable(Float64)  │
│ dropoff_borocode      │ Nullable(Int64)    │
│ dropoff_ct2010        │ Nullable(String)   │
│ dropoff_boroct2010    │ Nullable(String)   │
│ dropoff_cdeligibil    │ Nullable(String)   │
│ dropoff_ntacode       │ Nullable(String)   │
│ dropoff_ntaname       │ Nullable(String)   │
│ dropoff_puma          │ Nullable(Int64)    │
└───────────────────────┴────────────────────┘
```
To interact with our S3-based dataset, we prepare a standard
MergeTree
table as our destination. The statement below creates a table named
trips
in the default database. Note that we have chosen to modify some of those data types as inferred above, particularly to not use the
Nullable()
data type modifier, which could cause some unnecessary additional stored data and some additional performance overhead:
sql
CREATE TABLE trips
(
`trip_id` UInt32,
`vendor_id` Enum8('1' = 1, '2' = 2, '3' = 3, '4' = 4, 'CMT' = 5, 'VTS' = 6, 'DDS' = 7, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14, '' = 15),
`pickup_date` Date,
`pickup_datetime` DateTime,
`dropoff_date` Date,
`dropoff_datetime` DateTime,
`store_and_fwd_flag` UInt8,
`rate_code_id` UInt8,
`pickup_longitude` Float64,
`pickup_latitude` Float64,
`dropoff_longitude` Float64,
`dropoff_latitude` Float64,
`passenger_count` UInt8,
`trip_distance` Float64,
`fare_amount` Float32,
`extra` Float32,
`mta_tax` Float32,
`tip_amount` Float32,
`tolls_amount` Float32,
`ehail_fee` Float32,
`improvement_surcharge` Float32,
`total_amount` Float32,
`payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4),
`trip_type` UInt8,
`pickup` FixedString(25),
`dropoff` FixedString(25),
`cab_type` Enum8('yellow' = 1, 'green' = 2, 'uber' = 3),
`pickup_nyct2010_gid` Int8,
`pickup_ctlabel` Float32,
`pickup_borocode` Int8,
`pickup_ct2010` String,
`pickup_boroct2010` String,
`pickup_cdeligibil` String,
`pickup_ntacode` FixedString(4),
`pickup_ntaname` String,
`pickup_puma` UInt16,
`dropoff_nyct2010_gid` UInt8,
`dropoff_ctlabel` Float32,
`dropoff_borocode` UInt8,
`dropoff_ct2010` String,
`dropoff_boroct2010` String,
`dropoff_cdeligibil` String,
`dropoff_ntacode` FixedString(4),
`dropoff_ntaname` String,
`dropoff_puma` UInt16
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime
Note the use of
partitioning
on the
pickup_date
field. Usually a partition key is for data management, but later on we will use this key to parallelize writes to S3.
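The partition expression maps each row to a month-granularity partition ID. The computation behind `toYYYYMM` can be sketched as follows (an illustrative Python equivalent of the SQL function's result):

```python
from datetime import date

def to_yyyymm(d: date) -> int:
    """Equivalent of ClickHouse's toYYYYMM(): year * 100 + month."""
    return d.year * 100 + d.month

# Rows from the same month land in the same partition, so writes (and later
# per-partition operations) can be handled month by month.
assert to_yyyymm(date(2015, 7, 1)) == 201507
assert to_yyyymm(date(2015, 7, 31)) == to_yyyymm(date(2015, 7, 1))
assert to_yyyymm(date(2015, 8, 1)) != to_yyyymm(date(2015, 7, 31))
```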
Each entry in our taxi dataset contains a taxi trip. This anonymized data consists of 20M records compressed in the S3 bucket https://datasets-documentation.s3.eu-west-3.amazonaws.com/ under the folder
nyc-taxi
. The data is in the TSV format with approximately 1M rows per file.
Reading Data from S3 {#reading-data-from-s3}
We can query S3 data as a source without requiring persistence in ClickHouse. In the following query, we sample 10 rows. Note the absence of credentials here as the bucket is publicly accessible:
sql
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_*.gz', 'TabSeparatedWithNames')
LIMIT 10;
Note that we are not required to list the columns since the
TabSeparatedWithNames
format encodes the column names in the first row. Other formats, such as
CSV
or
TSV
, will return auto-generated columns for this query, e.g.,
c1
,
c2
,
c3
etc.
Queries additionally support
virtual columns
, like
_path
and
_file
, that provide information regarding the bucket path and filename respectively. For example:
sql
SELECT _path, _file, trip_id
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_0.gz', 'TabSeparatedWithNames')
LIMIT 5;
response
┌─_path──────────────────────────────────────┬─_file──────┬────trip_id─┐
│ datasets-documentation/nyc-taxi/trips_0.gz │ trips_0.gz │ 1199999902 │
│ datasets-documentation/nyc-taxi/trips_0.gz │ trips_0.gz │ 1199999919 │
│ datasets-documentation/nyc-taxi/trips_0.gz │ trips_0.gz │ 1199999944 │
│ datasets-documentation/nyc-taxi/trips_0.gz │ trips_0.gz │ 1199999969 │
│ datasets-documentation/nyc-taxi/trips_0.gz │ trips_0.gz │ 1199999990 │
└────────────────────────────────────────────┴────────────┴────────────┘
Confirm the number of rows in this sample dataset. Note the use of wildcards for file expansion, so we consider all twenty files. This query will take around 10 seconds, depending on the number of cores on the ClickHouse instance:
sql
SELECT count() AS count
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_*.gz', 'TabSeparatedWithNames');
response
┌────count─┐
│ 20000000 │
└──────────┘
While useful for sampling data and executing ad-hoc, exploratory queries, reading data directly from S3 is not something you want to do regularly. When it is time to get serious, import the data into a
MergeTree
table in ClickHouse.
Using clickhouse-local {#using-clickhouse-local}
The
clickhouse-local
program enables you to perform fast processing on local files without deploying and configuring the ClickHouse server. Any queries using the
s3
table function can be performed with this utility. For example:
sql
clickhouse-local --query "SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_*.gz', 'TabSeparatedWithNames') LIMIT 10"
Inserting Data from S3 {#inserting-data-from-s3}
To exploit the full capabilities of ClickHouse, we next read and insert the data into our instance.
We combine our
s3
function with a simple
INSERT
statement to achieve this. Note that we aren't required to list our columns because our target table provides the required structure. This requires the columns to appear in the order specified in the table DDL statement: columns are mapped according to their position in the
SELECT
clause. The insertion of all 20m rows can take a few minutes depending on the ClickHouse instance. Below we insert 1M rows to ensure a prompt response. Adjust the
LIMIT
clause or column selection to import subsets as required:
sql
INSERT INTO trips
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_*.gz', 'TabSeparatedWithNames')
LIMIT 1000000;
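Because no column list is given, values are paired with target columns strictly by position, not by name. A minimal sketch of positional mapping (illustrative only):

```python
def map_by_position(target_columns: list[str], selected_row: tuple) -> dict:
    """Pair each selected value with the target column at the same position."""
    if len(target_columns) != len(selected_row):
        raise ValueError("column count mismatch")
    return dict(zip(target_columns, selected_row))

# If the SELECT reorders columns relative to the table DDL, values silently
# land in the wrong columns - position is all that matters here.
row = (1199999902, "2015-07-01", 7.3)
mapped = map_by_position(["trip_id", "pickup_date", "total_amount"], row)
assert mapped["trip_id"] == 1199999902
assert mapped["total_amount"] == 7.3
```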
Remote Insert using ClickHouse Local {#remote-insert-using-clickhouse-local}
If network security policies prevent your ClickHouse cluster from making outbound connections, you can potentially insert S3 data using
clickhouse-local
. In the example below, we read from an S3 bucket and insert into ClickHouse using the
remote
function:
sql
clickhouse-local --query "INSERT INTO TABLE FUNCTION remote('localhost:9000', 'default.trips', 'username', 'password') (*) SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_*.gz', 'TabSeparatedWithNames') LIMIT 10"
:::note
To execute this over a secure SSL connection, utilize the
remoteSecure
function.
:::
Exporting data {#exporting-data}
You can write to files in S3 using the
s3
table function. This will require appropriate permissions. We pass the credentials needed in the request, but view the
Managing Credentials
page for more options.
In the simple example below, we use the table function as a destination instead of a source. Here we stream 10,000 rows from the
trips
table to a bucket, specifying
lz4
compression and output type of
CSV
:
sql
INSERT INTO FUNCTION
s3(
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/csv/trips.csv.lz4',
's3_key',
's3_secret',
'CSV'
)
SELECT *
FROM trips
LIMIT 10000;
Note here how the format of the file is inferred from the extension. We also don't need to specify the columns in the
s3
function - this can be inferred from the
SELECT
.
Splitting large files {#splitting-large-files}
It is unlikely you will want to export your data as a single file. Most tools, including ClickHouse, will achieve higher throughput performance when reading and writing to multiple files due to the possibility of parallelism. We could execute our
INSERT
command multiple times, targeting a subset of the data. ClickHouse offers a means of automatically splitting files using a
PARTITION
key.
In the example below, we create ten files using a modulus of the
rand()
function. Notice how the resulting partition ID is referenced in the filename. This results in ten files with a numerical suffix, e.g.
trips_0.csv.lz4
,
trips_1.csv.lz4
etc...:
sql
INSERT INTO FUNCTION
s3(
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/csv/trips_{_partition_id}.csv.lz4',
's3_key',
's3_secret',
'CSV'
)
PARTITION BY rand() % 10
SELECT *
FROM trips
LIMIT 100000;
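The `{_partition_id}` placeholder is substituted with each partition's ID, so a modulus of ten yields up to ten output files. A sketch of the filename substitution (illustrative; the helper name is ours):

```python
import random

def output_files(path_template: str, partition_ids) -> list[str]:
    """Substitute each distinct partition ID into the {_partition_id} placeholder."""
    return [path_template.replace("{_partition_id}", str(pid))
            for pid in sorted(set(partition_ids))]

# Simulate PARTITION BY rand() % 10 over many rows: ten distinct IDs appear,
# so ten output files are produced.
rng = random.Random(0)
ids = [rng.randrange(1_000_000) % 10 for _ in range(1000)]
files = output_files("csv/trips_{_partition_id}.csv.lz4", ids)
assert len(files) == 10
assert files[0] == "csv/trips_0.csv.lz4"
```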
Alternatively, we can reference a field in the data. For this dataset, the
payment_type
provides a natural partitioning key with a cardinality of 5.
sql
INSERT INTO FUNCTION
s3(
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/csv/trips_{_partition_id}.csv.lz4',
's3_key',
's3_secret',
'CSV'
)
PARTITION BY payment_type
SELECT *
FROM trips
LIMIT 100000;
Utilizing clusters {#utilizing-clusters}
The above functions are all limited to execution on a single node. Read speeds will scale linearly with CPU cores until other resources (typically network) are saturated, allowing users to vertically scale. However, this approach has its limitations. While users can alleviate some resource pressure by inserting into a distributed table when performing an
INSERT INTO SELECT
query, this still leaves a single node reading, parsing, and processing the data. To address this challenge and allow us to scale reads horizontally, we have the
s3Cluster
function.
The node which receives the query, known as the initiator, creates a connection to every node in the cluster. The glob pattern determining which files need to be read is resolved to a set of files. The initiator distributes files to the nodes in the cluster, which act as workers. These workers, in turn, request files to process as they complete reads. This process ensures that we can scale reads horizontally.
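The dispatch described above amounts to a work queue owned by the initiator: the glob resolves to a file list, and workers pull files as they finish. A simplified simulation (names are illustrative, not ClickHouse APIs; real workers pull on demand rather than in strict round-robin):

```python
from collections import deque

def dispatch(files: list[str], workers: list[str]) -> dict[str, list[str]]:
    """Initiator hands out files from a queue; here demand is simulated
    with round-robin pulls so the result is deterministic."""
    queue = deque(files)
    assignment = {w: [] for w in workers}
    while queue:
        for w in workers:
            if not queue:
                break
            assignment[w].append(queue.popleft())
    return assignment

files = [f"trips_{i}.gz" for i in range(7)]
a = dispatch(files, ["node1", "node2", "node3"])
assert sum(len(v) for v in a.values()) == 7          # every file processed once
assert a["node1"] == ["trips_0.gz", "trips_3.gz", "trips_6.gz"]
```

Note that with fewer files than workers some workers receive nothing, which is why file-level distribution needs at least as many files as nodes to be effective.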
The
s3Cluster
function takes the same format as the single node variants, except that a target cluster is required to denote the worker nodes:
```sql
s3Cluster(cluster_name, source, [access_key_id, secret_access_key,] format, structure)
```
- `cluster_name` - Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers.
- `source` - URL to a file or a bunch of files. Supports the following wildcards in read-only mode: `*`, `?`, `{'abc','def'}` and `{N..M}`, where N, M are numbers and abc, def are strings. For more information see Wildcards In Path.
- `access_key_id` and `secret_access_key` - Keys that specify credentials to use with the given endpoint. Optional.
- `format` - The format of the file.
- `structure` - Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
Like any
s3
functions, the credentials are optional if the bucket is insecure or you define security through the environment, e.g., IAM roles. Unlike the s3 function, however, the structure must be specified in the request as of 22.3.1, i.e., the schema is not inferred.
This function will be used as part of an
INSERT INTO SELECT
in most cases. In this case, you will often be inserting into a distributed table. We illustrate a simple example below where trips_all is a distributed table. While this table uses the events cluster, the consistency of the nodes used for reads and writes is not a requirement:
sql
INSERT INTO default.trips_all
SELECT *
FROM s3Cluster(
'events',
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_*.gz',
'TabSeparatedWithNames'
)
Inserts will occur against the initiator node. This means that while reads will occur on each node, the resulting rows will be routed to the initiator for distribution. In high throughput scenarios, this may prove a bottleneck. To address this, set the parameter
parallel_distributed_insert_select
for the
s3cluster
function.
S3 table engines {#s3-table-engines}
While the
s3
functions allow ad-hoc queries to be performed on data stored in S3, they are syntactically verbose. The
S3
table engine allows you to avoid specifying the bucket URL and credentials over and over again.
```sql
CREATE TABLE s3_engine_table (name String, value UInt32)
ENGINE = S3(path, [aws_access_key_id, aws_secret_access_key,] format, [compression])
[SETTINGS ...]
```
- `path` - Bucket URL with a path to the file. Supports the following wildcards in read-only mode: `*`, `?`, `{abc,def}` and `{N..M}`, where N, M are numbers and 'abc', 'def' are strings. For more information, see here.
- `format` - The format of the file.
- `aws_access_key_id`, `aws_secret_access_key` - Long-term credentials for the AWS account user. You can use these to authenticate your requests. Optional. If credentials are not specified, configuration file values are used. For more information, see Managing credentials.
- `compression` - Compression type. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Optional. By default, compression is autodetected from the file extension.
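Extension-based autodetection can be sketched as a simple suffix lookup. The mapping below is an assumption mirroring the values listed above (plus `lz4`, used elsewhere in this guide), not ClickHouse's internal table:

```python
# Hypothetical helper mirroring extension-based compression detection.
CODEC_BY_EXTENSION = {
    ".gz": "gzip", ".br": "brotli", ".xz": "xz",
    ".zst": "zstd", ".lz4": "lz4",
}

def detect_compression(path: str) -> str:
    for ext, codec in CODEC_BY_EXTENSION.items():
        if path.endswith(ext):
            return codec
    return "none"

assert detect_compression("nyc-taxi/trips_0.gz") == "gzip"
assert detect_compression("csv/trips.csv.lz4") == "lz4"
assert detect_compression("data/trips.csv") == "none"
```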
Reading data {#reading-data}
In the following example, we create a table named
trips_raw
using the first ten TSV files located in the
https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/
bucket. Each of these files contains 1M rows:
sql
CREATE TABLE trips_raw
(
`trip_id` UInt32,
`vendor_id` Enum8('1' = 1, '2' = 2, '3' = 3, '4' = 4, 'CMT' = 5, 'VTS' = 6, 'DDS' = 7, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14, '' = 15),
`pickup_date` Date,
`pickup_datetime` DateTime,
`dropoff_date` Date,
`dropoff_datetime` DateTime,
`store_and_fwd_flag` UInt8,
`rate_code_id` UInt8,
`pickup_longitude` Float64,
`pickup_latitude` Float64,
`dropoff_longitude` Float64,
`dropoff_latitude` Float64,
`passenger_count` UInt8,
`trip_distance` Float64,
`fare_amount` Float32,
`extra` Float32,
`mta_tax` Float32,
`tip_amount` Float32,
`tolls_amount` Float32,
`ehail_fee` Float32,
`improvement_surcharge` Float32,
`total_amount` Float32,
`payment_type_` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4),
`trip_type` UInt8,
`pickup` FixedString(25),
`dropoff` FixedString(25),
`cab_type` Enum8('yellow' = 1, 'green' = 2, 'uber' = 3),
`pickup_nyct2010_gid` Int8,
`pickup_ctlabel` Float32,
`pickup_borocode` Int8,
`pickup_ct2010` String,
`pickup_boroct2010` FixedString(7),
`pickup_cdeligibil` String,
`pickup_ntacode` FixedString(4),
`pickup_ntaname` String,
`pickup_puma` UInt16,
`dropoff_nyct2010_gid` UInt8,
`dropoff_ctlabel` Float32,
`dropoff_borocode` UInt8,
`dropoff_ct2010` String,
`dropoff_boroct2010` FixedString(7),
`dropoff_cdeligibil` String,
`dropoff_ntacode` FixedString(4),
`dropoff_ntaname` String,
`dropoff_puma` UInt16
) ENGINE = S3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_{0..9}.gz', 'TabSeparatedWithNames', 'gzip');
Notice the use of the
{0..9}
pattern to limit to the first ten files. Once created, we can query this table like any other table:
```sql
SELECT DISTINCT(pickup_ntaname)
FROM trips_raw
LIMIT 10;
┌─pickup_ntaname───────────────────────────────────┐
│ Lenox Hill-Roosevelt Island                      │
│ Airport                                          │
│ SoHo-TriBeCa-Civic Center-Little Italy           │
│ West Village                                     │
│ Chinatown                                        │
│ Hudson Yards-Chelsea-Flatiron-Union Square       │
│ Turtle Bay-East Midtown                          │
│ Upper West Side                                  │
│ Murray Hill-Kips Bay                             │
│ DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill │
└──────────────────────────────────────────────────┘
```
Inserting data {#inserting-data}
The
S3
table engine supports parallel reads. Writes are only supported if the table definition does not contain glob patterns. The above table, therefore, would block writes.
To demonstrate writes, create a table that points to a writable S3 bucket:
sql
CREATE TABLE trips_dest
(
`trip_id` UInt32,
`pickup_date` Date,
`pickup_datetime` DateTime,
`dropoff_datetime` DateTime,
`tip_amount` Float32,
`total_amount` Float32
) ENGINE = S3('<bucket path>/trips.bin', 'Native');
sql
INSERT INTO trips_dest
SELECT
trip_id,
pickup_date,
pickup_datetime,
dropoff_datetime,
tip_amount,
total_amount
FROM trips
LIMIT 10;
sql
SELECT * FROM trips_dest LIMIT 5;
response
┌────trip_id─┬─pickup_date─┬─────pickup_datetime─┬────dropoff_datetime─┬─tip_amount─┬─total_amount─┐
│ 1200018648 │ 2015-07-01  │ 2015-07-01 00:00:16 │ 2015-07-01 00:02:57 │          0 │          7.3 │
│ 1201452450 │ 2015-07-01  │ 2015-07-01 00:00:20 │ 2015-07-01 00:11:07 │       1.96 │        11.76 │
│ 1202368372 │ 2015-07-01  │ 2015-07-01 00:00:40 │ 2015-07-01 00:05:46 │          0 │          7.3 │
│ 1200831168 │ 2015-07-01  │ 2015-07-01 00:01:06 │ 2015-07-01 00:09:23 │          2 │         12.3 │
│ 1201362116 │ 2015-07-01  │ 2015-07-01 00:01:07 │ 2015-07-01 00:03:31 │          0 │          5.3 │
└────────────┴─────────────┴─────────────────────┴─────────────────────┴────────────┴──────────────┘
Note that rows can only be inserted into new files. There are no merge cycles or file split operations. Once a file is written, subsequent inserts will fail. Users have two options here:
- Specify the setting `s3_create_new_file_on_insert=1`. This will cause the creation of new files on each insert. A numeric suffix will be appended to the end of each file that will monotonically increase for each insert operation. For the above example, a subsequent insert would cause the creation of a trips_1.bin file.
- Specify the setting `s3_truncate_on_insert=1`. This will cause a truncation of the file, i.e. it will only contain the newly inserted rows once complete.

Both of these settings default to 0 - thus forcing the user to set one of them. `s3_truncate_on_insert` will take precedence if both are set.
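As a sketch, either setting can be supplied per query on the `trips_dest` table above, allowing repeated inserts to succeed:

```sql
-- Each insert writes a new file (trips_1.bin, trips_2.bin, ...):
INSERT INTO trips_dest
SELECT trip_id, pickup_date, pickup_datetime, dropoff_datetime, tip_amount, total_amount
FROM trips LIMIT 10
SETTINGS s3_create_new_file_on_insert = 1;

-- Alternatively, truncate and rewrite the target file on each insert:
INSERT INTO trips_dest
SELECT trip_id, pickup_date, pickup_datetime, dropoff_datetime, tip_amount, total_amount
FROM trips LIMIT 10
SETTINGS s3_truncate_on_insert = 1;
```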
Some notes about the `S3` table engine:
- Unlike a traditional `MergeTree` family table, dropping an `S3` table will not delete the underlying data.
- Full settings for this table type can be found here.
- Be aware of the following caveats when using this engine:
  - ALTER queries are not supported
  - SAMPLE operations are not supported
  - There is no notion of indexes, i.e. primary or skip.
Managing credentials {#managing-credentials}
In the previous examples, we have passed credentials in the `s3` function or `S3` table definition. While this may be acceptable for occasional usage, users require less explicit authentication mechanisms in production. To address this, ClickHouse has several options:
Specify the connection details in the `config.xml` or an equivalent configuration file under `conf.d`. The contents of an example file are shown below, assuming installation using the debian package.
```xml
ubuntu@single-node-clickhouse:/etc/clickhouse-server/config.d$ cat s3.xml
<clickhouse>
    <s3>
        <endpoint-name>
            <endpoint>https://dalem-files.s3.amazonaws.com/test/</endpoint>
            <access_key_id>key</access_key_id>
            <secret_access_key>secret</secret_access_key>
            <!-- <use_environment_credentials>false</use_environment_credentials> -->
            <!-- <header>Authorization: Bearer SOME-TOKEN</header> -->
        </endpoint-name>
    </s3>
</clickhouse>
```
These credentials will be used for any requests where the endpoint above is an exact prefix match for the requested URL. Also, note the ability in this example to declare an authorization header as an alternative to access and secret keys. A complete list of supported settings can be found here.
The example above highlights the availability of the configuration parameter `use_environment_credentials`. This configuration parameter can also be set globally at the `s3` level:
```xml
<clickhouse>
    <s3>
        <use_environment_credentials>true</use_environment_credentials>
    </s3>
</clickhouse>
```
This setting turns on an attempt to retrieve S3 credentials from the environment, thus allowing access through IAM roles. Specifically, the following order of retrieval is performed:
1. A lookup for the environment variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` and `AWS_SESSION_TOKEN`
2. A check performed in `$HOME/.aws`
3. Temporary credentials obtained via the AWS Security Token Service - i.e. via the `AssumeRole` API
4. A check for credentials in the ECS environment variables `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` or `AWS_CONTAINER_CREDENTIALS_FULL_URI` and `AWS_ECS_CONTAINER_AUTHORIZATION_TOKEN`
5. Credentials obtained via Amazon EC2 instance metadata, provided `AWS_EC2_METADATA_DISABLED` is not set to true.

These same settings can also be set for a specific endpoint, using the same prefix matching rule.
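For example, the flag could be scoped to a single endpoint rather than set globally (the endpoint name and bucket URL below are placeholders):

```xml
<clickhouse>
    <s3>
        <endpoint-name>
            <!-- placeholder bucket; only requests matching this prefix use IAM credentials -->
            <endpoint>https://sample-bucket.s3.us-east-2.amazonaws.com/tables/</endpoint>
            <use_environment_credentials>true</use_environment_credentials>
        </endpoint-name>
    </s3>
</clickhouse>
```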
Optimizing for performance {#s3-optimizing-performance}
For how to optimize reading and inserting using the S3 function, see the dedicated performance guide.
S3 storage tuning {#s3-storage-tuning}
Internally, the ClickHouse merge tree uses two primary storage formats: `Wide` and `Compact`. While the current implementation uses the default behavior of ClickHouse (controlled through the settings `min_bytes_for_wide_part` and `min_rows_for_wide_part`), we expect behavior to diverge for S3 in future releases, e.g., a larger default value of `min_bytes_for_wide_part` encouraging a more `Compact` format and thus fewer files. Users may now wish to tune these settings when using exclusively S3 storage.
S3 backed MergeTree {#s3-backed-mergetree}
The `s3` functions and associated table engine allow us to query data in S3 using familiar ClickHouse syntax. However, concerning data management features and performance, they are limited. There is no support for primary indexes, no cache support, and file inserts need to be managed by the user.
ClickHouse recognizes that S3 represents an attractive storage solution, especially where query performance on "colder" data is less critical, and users seek to separate storage and compute. To help achieve this, support is provided for using S3 as the storage for a MergeTree engine. This will enable users to exploit the scalability and cost benefits of S3, and the insert and query performance of the MergeTree engine.
Storage Tiers {#storage-tiers}
ClickHouse storage volumes allow physical disks to be abstracted from the MergeTree table engine. Any single volume can be composed of an ordered set of disks. Whilst principally allowing multiple block devices to be potentially used for data storage, this abstraction also allows other storage types, including S3. ClickHouse data parts can be moved between volumes and fill rates according to storage policies, thus creating the concept of storage tiers.
Storage tiers unlock hot-cold architectures where the most recent data, which is typically also the most queried, requires only a small amount of space on high-performing storage, e.g., NVMe SSDs. As the data ages, SLAs for query times increase, as does query frequency. This fat tail of data can be stored on slower, less performant storage such as HDD or object storage such as S3.
Creating a disk {#creating-a-disk}
To utilize an S3 bucket as a disk, we must first declare it within the ClickHouse configuration file. Either extend `config.xml` or preferably provide a new file under `conf.d`. An example of an S3 disk declaration is shown below:
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3>
                <type>s3</type>
                <endpoint>https://sample-bucket.s3.us-east-2.amazonaws.com/tables/</endpoint>
                <access_key_id>your_access_key_id</access_key_id>
                <secret_access_key>your_secret_access_key</secret_access_key>
                <metadata_path>/var/lib/clickhouse/disks/s3/</metadata_path>
            </s3>
            <s3_cache>
                <type>cache</type>
                <disk>s3</disk>
                <path>/var/lib/clickhouse/disks/s3_cache/</path>
                <max_size>10Gi</max_size>
            </s3_cache>
        </disks>
    </storage_configuration>
</clickhouse>
```
A complete list of settings relevant to this disk declaration can be found here. Note that credentials can be managed here using the same approaches described in Managing credentials, i.e., `use_environment_credentials` can be set to true in the above settings block to use IAM roles.
Creating a storage policy {#creating-a-storage-policy}
Once configured, this "disk" can be used by a storage volume declared within a policy. For the example below, we assume s3 is our only storage. This ignores more complex hot-cold architectures where data can be relocated based on TTLs and fill rates.
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3>
            ...
            </s3>
            <s3_cache>
            ...
            </s3_cache>
        </disks>
        <policies>
            <s3_main>
                <volumes>
                    <main>
                        <disk>s3</disk>
                    </main>
                </volumes>
            </s3_main>
        </policies>
    </storage_configuration>
</clickhouse>
```
Creating a table {#creating-a-table}
Assuming you have configured your disk to use a bucket with write access, you should be able to create a table such as in the example below. For purposes of brevity, we use a subset of the NYC taxi columns and stream data directly to the s3 backed table:
```sql
CREATE TABLE trips_s3
(
   `trip_id` UInt32,
   `pickup_date` Date,
   `pickup_datetime` DateTime,
   `dropoff_datetime` DateTime,
   `pickup_longitude` Float64,
   `pickup_latitude` Float64,
   `dropoff_longitude` Float64,
   `dropoff_latitude` Float64,
   `passenger_count` UInt8,
   `trip_distance` Float64,
   `tip_amount` Float32,
   `total_amount` Float32,
   `payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4)
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime
SETTINGS storage_policy='s3_main'
```
```sql
INSERT INTO trips_s3 SELECT trip_id, pickup_date, pickup_datetime, dropoff_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tip_amount, total_amount, payment_type FROM s3('https://ch-nyc-taxi.s3.eu-west-3.amazonaws.com/tsv/trips_{0..9}.tsv.gz', 'TabSeparatedWithNames') LIMIT 1000000;
```
Depending on the hardware, this latter insert of 1m rows may take a few minutes to execute. You can confirm the progress via the system.processes table. Feel free to adjust the row count up to the limit of 10m and explore some sample queries.
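For instance, insert progress might be checked with a query along these lines (a sketch; the `ILIKE` filter simply matches the running insert):

```sql
-- Show rows read/written and elapsed time for the in-flight INSERT
SELECT query_id, read_rows, written_rows, elapsed
FROM system.processes
WHERE query ILIKE 'INSERT INTO trips_s3%';
```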
```sql
SELECT passenger_count, avg(tip_amount) AS avg_tip, avg(total_amount) AS avg_amount FROM trips_s3 GROUP BY passenger_count;
```
Modifying a table {#modifying-a-table}
Occasionally users may need to modify the storage policy of a specific table. Whilst this is possible, it comes with limitations. The new target policy must contain all of the disks and volumes of the previous policy, i.e., data will not be migrated to satisfy a policy change. When validating these constraints, volumes and disks will be identified by their name, with attempts to violate resulting in an error. However, assuming you use the previous examples, the following changes are valid.
```xml
<policies>
   <s3_main>
       <volumes>
           <main>
               <disk>s3</disk>
           </main>
       </volumes>
   </s3_main>
   <s3_tiered>
       <volumes>
           <hot>
               <disk>default</disk>
           </hot>
           <main>
               <disk>s3</disk>
           </main>
       </volumes>
       <move_factor>0.2</move_factor>
   </s3_tiered>
</policies>
```
```sql
ALTER TABLE trips_s3 MODIFY SETTING storage_policy='s3_tiered'
```
Here we reuse the main volume in our new s3_tiered policy and introduce a new hot volume. This uses the default disk, which consists of only one disk configured via the parameter `<path>`. Note that our volume names and disks do not change. New inserts to our table will reside on the default disk until this reaches `move_factor * disk_size` - at which point data will be relocated to S3.
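The effective policies, their volumes, and move factors can be inspected from system tables, for example:

```sql
-- List volumes and disks for the tiered policy defined above
SELECT policy_name, volume_name, disks, move_factor
FROM system.storage_policies
WHERE policy_name = 's3_tiered';
```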
Handling replication {#handling-replication}
Replication with S3 disks can be accomplished by using the `ReplicatedMergeTree` table engine. See the replicating a single shard across two AWS regions using S3 Object Storage guide for details.
Read & writes {#read--writes}
The following notes cover the implementation of S3 interactions with ClickHouse. Whilst generally only informative, they may help readers when optimizing for performance:
By default, the maximum number of query processing threads used by any stage of the query processing pipeline is equal to the number of cores. Some stages are more parallelizable than others, so this value provides an upper bound. Multiple query stages may execute at once since data is streamed from the disk. The exact number of threads used for a query may thus exceed this. Modify through the setting `max_threads`.
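For example, thread usage can be capped for a single query (the value 8 here is arbitrary):

```sql
SELECT passenger_count, count()
FROM trips_s3
GROUP BY passenger_count
SETTINGS max_threads = 8;
```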
Reads on S3 are asynchronous by default. This behavior is determined by the setting `remote_filesystem_read_method`, set to the value `threadpool` by default. When serving a request, ClickHouse reads granules in stripes. Each of these stripes potentially contains many columns. A thread will read the columns for its granules one by one. Rather than doing this synchronously, a prefetch is made for all columns before waiting for the data. This offers significant performance improvements over synchronous waits on each column. Users will not need to change this setting in most cases - see Optimizing for Performance.
Writes are performed in parallel, with a maximum of 100 concurrent file writing threads. `max_insert_delayed_streams_for_parallel_write`, which has a default value of 1000, controls the number of S3 blobs written in parallel. Since a buffer is required for each file being written (~1MB), this effectively limits the memory consumption of an INSERT. It may be appropriate to lower this value in low server memory scenarios.
Use S3 object storage as a ClickHouse disk {#configuring-s3-for-clickhouse-use}
If you need step-by-step instructions to create buckets and an IAM role, then expand "Create S3 buckets and an IAM role" and follow along:
Configure ClickHouse to use the S3 bucket as a disk {#configure-clickhouse-to-use-the-s3-bucket-as-a-disk}
The following example is based on a Linux Deb package installed as a service with default ClickHouse directories.
Create a new file in the ClickHouse `config.d` directory to store the storage configuration:
```bash
vim /etc/clickhouse-server/config.d/storage_config.xml
```
Add the following for the storage configuration, substituting the bucket path, access key and secret key from earlier steps:
```xml
<clickhouse>
  <storage_configuration>
    <disks>
      <s3_disk>
        <type>s3</type>
        <endpoint>https://mars-doc-test.s3.amazonaws.com/clickhouse3/</endpoint>
        <access_key_id>ABC123</access_key_id>
        <secret_access_key>Abc+123</secret_access_key>
        <metadata_path>/var/lib/clickhouse/disks/s3_disk/</metadata_path>
      </s3_disk>
      <s3_cache>
        <type>cache</type>
        <disk>s3_disk</disk>
        <path>/var/lib/clickhouse/disks/s3_cache/</path>
        <max_size>10Gi</max_size>
      </s3_cache>
    </disks>
    <policies>
      <s3_main>
        <volumes>
          <main>
            <disk>s3_disk</disk>
          </main>
        </volumes>
      </s3_main>
    </policies>
  </storage_configuration>
</clickhouse>
```
:::note
The tags `s3_disk` and `s3_cache` within the `<disks>` tag are arbitrary labels. These can be set to something else, but the same label must be used in the `<disk>` tag under the `<policies>` tag to reference the disk.
The `<s3_main>` tag is also arbitrary and is the name of the policy which will be used as the identifier storage target when creating resources in ClickHouse.
The configuration shown above is for ClickHouse version 22.8 or higher; if you are using an older version please see the storing data docs.
For more information about using S3:
Integrations Guide: S3 Backed MergeTree
:::
Update the owner of the file to the `clickhouse` user and group:
```bash
chown clickhouse:clickhouse /etc/clickhouse-server/config.d/storage_config.xml
```
Restart the ClickHouse instance to have the changes take effect:
```bash
service clickhouse-server restart
```
Testing {#testing}
Log in with the ClickHouse client, something like the following:
```bash
clickhouse-client --user default --password ClickHouse123!
```
Create a table specifying the new S3 storage policy:
```sql
CREATE TABLE s3_table1
(
   `id` UInt64,
   `column1` String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 's3_main';
```
Show that the table was created with the correct policy:
```sql
SHOW CREATE TABLE s3_table1;
```
```response
ββstatementββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CREATE TABLE default.s3_table1
(
    `id` UInt64,
    `column1` String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 's3_main', index_granularity = 8192 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
Insert test rows into the table:
```sql
INSERT INTO s3_table1
   (id, column1)
VALUES
   (1, 'abc'),
   (2, 'xyz');
```
```response
INSERT INTO s3_table1 (id, column1) FORMAT Values

Query id: 0265dd92-3890-4d56-9d12-71d4038b85d5

Ok.

2 rows in set. Elapsed: 0.337 sec.
```
5. View the rows:
```sql
SELECT * FROM s3_table1;
```
```response
ββidββ¬βcolumn1ββ
β  1 β abc     β
β  2 β xyz     β
ββββββ΄ββββββββββ

2 rows in set. Elapsed: 0.284 sec.
```
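To confirm where the parts physically reside, you could additionally inspect `system.parts` (a sketch; `disk_name` should report the S3-backed disk):

```sql
SELECT name, disk_name, path
FROM system.parts
WHERE table = 's3_table1' AND active;
```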
6. In the AWS console, navigate to the buckets, and select the new one and the folder.
You should see something like the following:
Replicating a single shard across two AWS regions using S3 Object Storage {#s3-multi-region}
:::tip
Object storage is used by default in ClickHouse Cloud; you do not need to follow this procedure if you are running in ClickHouse Cloud.
:::
Plan the deployment {#plan-the-deployment}
This tutorial is based on deploying two ClickHouse Server nodes and three ClickHouse Keeper nodes in AWS EC2. The data store for the ClickHouse servers is S3. Two AWS regions, with a ClickHouse Server and an S3 Bucket in each region, are used in order to support disaster recovery.
ClickHouse tables are replicated across the two servers, and therefore across the two regions.
Install software {#install-software}
ClickHouse server nodes {#clickhouse-server-nodes}
Refer to the installation instructions when performing the deployment steps on the ClickHouse server nodes.
Deploy ClickHouse {#deploy-clickhouse}
Deploy ClickHouse on two hosts; in the sample configurations these are named `chnode1` and `chnode2`.
Place `chnode1` in one AWS region, and `chnode2` in a second.
Deploy ClickHouse Keeper {#deploy-clickhouse-keeper}
Deploy ClickHouse Keeper on three hosts; in the sample configurations these are named `keepernode1`, `keepernode2`, and `keepernode3`. `keepernode1` can be deployed in the same region as `chnode1`, `keepernode2` with `chnode2`, and `keepernode3` in either region but in a different availability zone from the ClickHouse node in that region.
Refer to the installation instructions when performing the deployment steps on the ClickHouse Keeper nodes.
Create S3 buckets {#create-s3-buckets}
Create two S3 buckets, one in each of the regions in which you have placed `chnode1` and `chnode2`.
If you need step-by-step instructions to create buckets and an IAM role, then expand "Create S3 buckets and an IAM role" and follow along:
The configuration files will then be placed in `/etc/clickhouse-server/config.d/`. Here is a sample configuration file for one bucket; the other is similar, with the three highlighted lines differing:
```xml title="/etc/clickhouse-server/config.d/storage_config.xml"
<clickhouse>
  <storage_configuration>
    <disks>
      <s3_disk>
        <type>s3</type>
        <endpoint>https://docs-clickhouse-s3.s3.us-east-2.amazonaws.com/clickhouses3/</endpoint>
        <access_key_id>ABCDEFGHIJKLMNOPQRST</access_key_id>
        <secret_access_key>Tjdm4kf5snfkj303nfljnev79wkjn2l3knr81007</secret_access_key>
        <metadata_path>/var/lib/clickhouse/disks/s3_disk/</metadata_path>
      </s3_disk>
      <s3_cache>
        <type>cache</type>
        <disk>s3_disk</disk>
        <path>/var/lib/clickhouse/disks/s3_cache/</path>
        <max_size>10Gi</max_size>
      </s3_cache>
    </disks>
    <policies>
      <s3_main>
        <volumes>
          <main>
            <disk>s3_disk</disk>
          </main>
        </volumes>
      </s3_main>
    </policies>
  </storage_configuration>
</clickhouse>
```
:::note
Many of the steps in this guide will ask you to place a configuration file in `/etc/clickhouse-server/config.d/`. This is the default location on Linux systems for configuration override files. When you put these files into that directory, ClickHouse will use the content to override the default configuration. By placing these files in the override directory you will avoid losing your configuration during an upgrade.
:::
Configure ClickHouse Keeper {#configure-clickhouse-keeper}
When running ClickHouse Keeper standalone (separate from ClickHouse server) the configuration is a single XML file. In this tutorial, the file is `/etc/clickhouse-keeper/keeper_config.xml`. All three Keeper servers use the same configuration with one setting different: `<server_id>`.
`server_id` indicates the ID to be assigned to the host where the configuration file is used. In the example below, the `server_id` is `3`, and if you look further down in the file in the `<raft_configuration>` section, you will see that server 3 has the hostname `keepernode3`. This is how the ClickHouse Keeper process knows which other servers to connect to when choosing a leader and all other activities.
```xml title="/etc/clickhouse-keeper/keeper_config.xml"
<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>3</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>warning</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>keepernode1</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>keepernode2</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>keepernode3</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>
```
Copy the configuration file for ClickHouse Keeper into place (remembering to set the `<server_id>`):
```bash
sudo -u clickhouse \
  cp keeper.xml /etc/clickhouse-keeper/keeper.xml
```
Configure ClickHouse server {#configure-clickhouse-server}
Define a cluster {#define-a-cluster}
ClickHouse cluster(s) are defined in the `<remote_servers>` section of the configuration. In this sample one cluster, `cluster_1S_2R`, is defined, and it consists of a single shard with two replicas. The replicas are located on the hosts `chnode1` and `chnode2`.
```xml title="/etc/clickhouse-server/config.d/remote-servers.xml"
<clickhouse>
    <remote_servers replace="true">
        <cluster_1S_2R>
            <shard>
                <replica>
                    <host>chnode1</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>chnode2</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster_1S_2R>
    </remote_servers>
</clickhouse>
```
When working with clusters it is handy to define macros that populate DDL queries with the cluster, shard, and replica settings. This sample allows you to specify the use of a replicated table engine without providing `shard` and `replica` details. When you create a table you can see how the `shard` and `replica` macros are used by querying `system.tables`.
```xml title="/etc/clickhouse-server/config.d/macros.xml"
<clickhouse>
    <distributed_ddl>
        <path>/clickhouse/task_queue/ddl</path>
    </distributed_ddl>
    <macros>
        <cluster>cluster_1S_2R</cluster>
        <shard>1</shard>
        <replica>replica_1</replica>
    </macros>
</clickhouse>
```
:::note
The above macros are for `chnode1`; on `chnode2` set `replica` to `replica_2`.
:::
Disable zero-copy replication {#disable-zero-copy-replication}
In ClickHouse versions 22.7 and lower the setting `allow_remote_fs_zero_copy_replication` is set to `true` by default for S3 and HDFS disks. This setting should be set to `false` for this disaster recovery scenario, and in version 22.8 and higher it is set to `false` by default.
This setting should be false for two reasons: 1) this feature is not production ready; 2) in a disaster recovery scenario both the data and metadata need to be stored in multiple regions. Set `allow_remote_fs_zero_copy_replication` to `false`.
```xml title="/etc/clickhouse-server/config.d/remote-servers.xml"
<clickhouse>
   <merge_tree>
       <allow_remote_fs_zero_copy_replication>false</allow_remote_fs_zero_copy_replication>
   </merge_tree>
</clickhouse>
```
ClickHouse Keeper is responsible for coordinating the replication of data across the ClickHouse nodes. To inform ClickHouse about the ClickHouse Keeper nodes add a configuration file to each of the ClickHouse nodes.
```xml title="/etc/clickhouse-server/config.d/use_keeper.xml"
<clickhouse>
    <zookeeper>
        <node index="1">
            <host>keepernode1</host>
            <port>9181</port>
        </node>
        <node index="2">
            <host>keepernode2</host>
            <port>9181</port>
        </node>
        <node index="3">
            <host>keepernode3</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>
```
Configure networking {#configure-networking}
See the network ports list when you configure the security settings in AWS so that your servers can communicate with each other, and you can communicate with them.
All three servers must listen for network connections so that they can communicate between the servers and with S3. By default, ClickHouse listens only on the loopback address, so this must be changed. This is configured in `/etc/clickhouse-server/config.d/`. Here is a sample that configures ClickHouse and ClickHouse Keeper to listen on all IPv4 interfaces. See the documentation or the default configuration file `/etc/clickhouse/config.xml` for more information.
```xml title="/etc/clickhouse-server/config.d/networking.xml"
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
</clickhouse>
```
Start the servers {#start-the-servers}
Run ClickHouse Keeper {#run-clickhouse-keeper}
On each Keeper server run the commands for your operating system, for example:
```bash
sudo systemctl enable clickhouse-keeper
sudo systemctl start clickhouse-keeper
sudo systemctl status clickhouse-keeper
```
Check ClickHouse Keeper status {#check-clickhouse-keeper-status}
Send commands to the ClickHouse Keeper with `netcat`. For example, `mntr` returns the state of the ClickHouse Keeper cluster. If you run the command on each of the Keeper nodes you will see that one is a leader, and the other two are followers:
```bash
echo mntr | nc localhost 9181
```
```response
zk_version v22.7.2.15-stable-f843089624e8dd3ff7927b8a125cf3a7a769c069
zk_avg_latency 0
zk_max_latency 11
zk_min_latency 0
zk_packets_received 1783
zk_packets_sent 1783
# highlight-start
zk_num_alive_connections	2
zk_outstanding_requests	0
zk_server_state	leader
# highlight-end
zk_znode_count	135
zk_watch_count	8
zk_ephemerals_count	3
zk_approximate_data_size	42533
zk_key_arena_size	28672
zk_latest_snapshot_size	0
zk_open_file_descriptor_count	182
zk_max_file_descriptor_count	18446744073709551615
# highlight-start
zk_followers	2
zk_synced_followers	2
# highlight-end
```
Run ClickHouse server {#run-clickhouse-server}
On each ClickHouse server run:
```bash
sudo service clickhouse-server start
```
Verify ClickHouse server {#verify-clickhouse-server}
When you added the cluster configuration, a single shard replicated across the two ClickHouse nodes was defined. In this verification step you will check that the cluster was built when ClickHouse was started, and you will create a replicated table using that cluster.
- Verify that the cluster exists:
```sql
show clusters
```
```response
ββclusterββββββββ
β cluster_1S_2R β
βββββββββββββββββ

1 row in set. Elapsed: 0.009 sec.
```
- Create a table in the cluster using the `ReplicatedMergeTree` table engine:
```sql
create table trips on cluster 'cluster_1S_2R' (
 `trip_id` UInt32,
 `pickup_date` Date,
 `pickup_datetime` DateTime,
 `dropoff_datetime` DateTime,
 `pickup_longitude` Float64,
 `pickup_latitude` Float64,
 `dropoff_longitude` Float64,
 `dropoff_latitude` Float64,
 `passenger_count` UInt8,
 `trip_distance` Float64,
 `tip_amount` Float32,
 `total_amount` Float32,
 `payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4))
ENGINE = ReplicatedMergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime
SETTINGS storage_policy='s3_main'
```
```response
ββhostβββββ¬βportββ¬βstatusββ¬βerrorββ¬βnum_hosts_remainingββ¬βnum_hosts_activeββ
β chnode1 β 9000 β      0 β       β                   1 β                0 β
β chnode2 β 9000 β      0 β       β                   0 β                0 β
βββββββββββ΄βββββββ΄βββββββββ΄ββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββββββ
```
Understand the use of the macros defined earlier
The macros `shard` and `replica` were defined earlier, and in the highlighted line below you can see where the values are substituted on each ClickHouse node. Additionally, the value `uuid` is used; `uuid` is not defined in the macros as it is generated by the system.
```sql
SELECT create_table_query
FROM system.tables
WHERE name = 'trips'
FORMAT Vertical
```
```response
Query id: 4d326b66-0402-4c14-9c2f-212bedd282c0

Row 1:
ββββββ
create_table_query: CREATE TABLE default.trips (`trip_id` UInt32, `pickup_date` Date, `pickup_datetime` DateTime, `dropoff_datetime` DateTime, `pickup_longitude` Float64, `pickup_latitude` Float64, `dropoff_longitude` Float64, `dropoff_latitude` Float64, `passenger_count` UInt8, `trip_distance` Float64, `tip_amount` Float32, `total_amount` Float32, `payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4))
# highlight-next-line
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PARTITION BY toYYYYMM(pickup_date) ORDER BY pickup_datetime SETTINGS storage_policy = 's3_main'

1 row in set. Elapsed: 0.012 sec.
```
:::note
You can customize the zookeeper path `'/clickhouse/tables/{uuid}/{shard}'` shown above by setting `default_replica_path` and `default_replica_name`. The docs are here.
:::
Testing {#testing-1}
These tests will verify that data is being replicated across the two servers, and that it is stored in the S3 Buckets and not on local disk.
Add data from the New York City taxi dataset:
sql
INSERT INTO trips
SELECT trip_id,
pickup_date,
pickup_datetime,
dropoff_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude,
passenger_count,
trip_distance,
tip_amount,
total_amount,
payment_type
FROM s3('https://ch-nyc-taxi.s3.eu-west-3.amazonaws.com/tsv/trips_{0..9}.tsv.gz', 'TabSeparatedWithNames') LIMIT 1000000;
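The `{0..9}` in the URL above is a numeric range that ClickHouse expands into ten file names, `trips_0.tsv.gz` through `trips_9.tsv.gz`. A rough sketch of that expansion (illustrative only, not ClickHouse's actual implementation):

```python
import re

def expand_numeric_range(url: str) -> list:
    """Expand a single {lo..hi} numeric range in a URL, as used by the s3() table function."""
    m = re.search(r"\{(\d+)\.\.(\d+)\}", url)
    if m is None:
        return [url]
    lo, hi = int(m.group(1)), int(m.group(2))
    return [url[:m.start()] + str(i) + url[m.end():] for i in range(lo, hi + 1)]

urls = expand_numeric_range(
    "https://ch-nyc-taxi.s3.eu-west-3.amazonaws.com/tsv/trips_{0..9}.tsv.gz"
)
```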
Verify that data is stored in S3.
This query shows the size of the data on disk, and the policy used to determine which disk is used.
sql
SELECT
engine,
data_paths,
metadata_path,
storage_policy,
formatReadableSize(total_bytes)
FROM system.tables
WHERE name = 'trips'
FORMAT Vertical
```response
Query id: af7a3d1b-7730-49e0-9314-cc51c4cf053c
Row 1:
ββββββ
engine: ReplicatedMergeTree
data_paths: ['/var/lib/clickhouse/disks/s3_disk/store/551/551a859d-ec2d-4512-9554-3a4e60782853/']
metadata_path: /var/lib/clickhouse/store/e18/e18d3538-4c43-43d9-b083-4d8e0f390cf7/trips.sql
storage_policy: s3_main
formatReadableSize(total_bytes): 36.42 MiB
1 row in set. Elapsed: 0.009 sec.
```
Check the size of data on the local disk. From above, the size on disk for the one million rows stored is 36.42 MiB. This should be on S3, and not on the local disk. The query above also tells us where on the local disk the data and metadata are stored. Check the local data:
response
root@chnode1:~# du -sh /var/lib/clickhouse/disks/s3_disk/store/551
536K /var/lib/clickhouse/disks/s3_disk/store/551
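The `du` check above can also be reproduced with a short script that walks a directory and sums file sizes (a generic sketch; in practice you would point it at the path reported by `data_paths`):

```python
import os
import tempfile

def dir_size_bytes(path: str) -> int:
    """Sum the sizes of all regular files below path, similar to `du -sb`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

# Demo on a throwaway directory instead of /var/lib/clickhouse.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "part.bin"), "wb") as f:
        f.write(b"\0" * 1024)
    size = dir_size_bytes(d)
```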
Check the S3 data in each S3 Bucket (the totals are not shown, but both buckets have approximately 36 MiB stored after the inserts):
S3Express {#s3express}
S3Express
is a new high-performance, single-Availability Zone storage class in Amazon S3.
You can refer to this
blog
to read about our experience testing S3Express with ClickHouse.
:::note
S3Express stores data within a single AZ, which means data will be unavailable in the event of an AZ outage.
:::
S3 disk {#s3-disk}
Creating a table with storage backed by an S3Express bucket involves the following steps:
Create a bucket of
Directory
type
Install appropriate bucket policy to grant all required permissions to your S3 user (e.g.
"Action": "s3express:*"
to simply allow unrestricted access)
When configuring the storage policy please provide the
region
parameter
Storage configuration is the same as for ordinary S3 and, for example, might look like the following:
xml
<storage_configuration>
<disks>
<s3_express>
<type>s3</type>
<endpoint>https://my-test-bucket--eun1-az1--x-s3.s3express-eun1-az1.eu-north-1.amazonaws.com/store/</endpoint>
<region>eu-north-1</region>
<access_key_id>...</access_key_id>
<secret_access_key>...</secret_access_key>
</s3_express>
</disks>
<policies>
<s3_express>
<volumes>
<main>
<disk>s3_express</disk>
</main>
</volumes>
</s3_express>
</policies>
</storage_configuration>
And then create a table on the new storage:
sql
CREATE TABLE t
(
a UInt64,
s String
)
ENGINE = MergeTree
ORDER BY a
SETTINGS storage_policy = 's3_express';
S3 storage {#s3-storage}
S3 storage is also supported but only for
Object URL
paths. Example:
sql
SELECT * FROM s3('https://test-bucket--eun1-az1--x-s3.s3express-eun1-az1.eu-north-1.amazonaws.com/file.csv', ...)
It also requires specifying the bucket region in the config:
xml
<s3>
<perf-bucket-url>
<endpoint>https://test-bucket--eun1-az1--x-s3.s3express-eun1-az1.eu-north-1.amazonaws.com</endpoint>
<region>eu-north-1</region>
</perf-bucket-url>
</s3>
Backups {#backups}
It is possible to store a backup on the disk we created above:
``` sql
BACKUP TABLE t TO Disk('s3_express', 't.zip')
ββidββββββββββββββββββββββββββββββββββββ¬βstatusββββββββββ
β c61f65ac-0d76-4390-8317-504a30ba7595 β BACKUP_CREATED β
ββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββ
```
``` sql
RESTORE TABLE t AS t_restored FROM Disk('s3_express', 't.zip')
ββidββββββββββββββββββββββββββββββββββββ¬βstatusββββ
β 4870e829-8d76-4171-ae59-cffaf58dea04 β RESTORED β
ββββββββββββββββββββββββββββββββββββββββ΄βββββββββββ
```
sidebar_label: 'Azure Synapse'
slug: /integrations/azure-synapse
description: 'Introduction to Azure Synapse with ClickHouse'
keywords: ['clickhouse', 'azure synapse', 'azure', 'synapse', 'microsoft', 'azure spark', 'data']
title: 'Integrating Azure Synapse with ClickHouse'
doc_type: 'guide'
import TOCInline from '@theme/TOCInline';
import Image from '@theme/IdealImage';
import sparkConfigViaNotebook from '@site/static/images/integrations/data-ingestion/azure-synapse/spark_notebook_conf.png';
import sparkUICHSettings from '@site/static/images/integrations/data-ingestion/azure-synapse/spark_ui_ch_settings.png';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Integrating Azure Synapse with ClickHouse
Azure Synapse
is an integrated analytics service that combines big data, data science and warehousing to enable fast, large-scale data analysis.
Within Synapse, Spark pools provide on-demand, scalable
Apache Spark
clusters that let users run complex data transformations, machine learning, and integrations with external systems.
This article will show you how to integrate the
ClickHouse Spark connector
when working with Apache Spark within Azure Synapse.
Add the connector's dependencies {#add-connector-dependencies}
Azure Synapse supports three levels of
packages maintenance
:
1. Default packages
2. Spark pool level
3. Session level
Follow the
Manage libraries for Apache Spark pools guide
and add the following required dependencies to your Spark application
-
clickhouse-spark-runtime-{spark_version}_{scala_version}-{connector_version}.jar
-
official maven
-
clickhouse-jdbc-{java_client_version}-all.jar
-
official maven
Please visit our
Spark Connector Compatibility Matrix
docs to understand which versions suit your needs.
Add ClickHouse as a catalog {#add-clickhouse-as-catalog}
There are a variety of ways to add Spark configs to your session:
* Custom configuration file to load with your session
* Add configurations via Azure Synapse UI
* Add configurations in your Synapse notebook
Follow this
Manage Apache Spark configuration
and add the
connector required Spark configurations
.
For instance, you can configure your Spark session in your notebook with these settings:
python
%%configure -f
{
"conf": {
"spark.sql.catalog.clickhouse": "com.clickhouse.spark.ClickHouseCatalog",
"spark.sql.catalog.clickhouse.host": "<clickhouse host>",
"spark.sql.catalog.clickhouse.protocol": "https",
"spark.sql.catalog.clickhouse.http_port": "<port>",
"spark.sql.catalog.clickhouse.user": "<username>",
"spark.sql.catalog.clickhouse.password": "password",
"spark.sql.catalog.clickhouse.database": "default"
}
}
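If you prefer to build the cell body programmatically (for example, to pull credentials from a secret store), a sketch like the following produces the same JSON; the angle-bracketed values are placeholders, as in the cell above:

```python
import json

# Placeholder connection details -- substitute your own values.
settings = {
    "spark.sql.catalog.clickhouse": "com.clickhouse.spark.ClickHouseCatalog",
    "spark.sql.catalog.clickhouse.host": "<clickhouse host>",
    "spark.sql.catalog.clickhouse.protocol": "https",
    "spark.sql.catalog.clickhouse.http_port": "<port>",
    "spark.sql.catalog.clickhouse.user": "<username>",
    "spark.sql.catalog.clickhouse.password": "<password>",
    "spark.sql.catalog.clickhouse.database": "default",
}

# Render the %%configure cell body.
configure_cell = "%%configure -f\n" + json.dumps({"conf": settings}, indent=2)
```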
Make sure it is placed in the first cell, as follows:
Please visit the
ClickHouse Spark configurations page
for additional settings.
:::info
When working with ClickHouse Cloud, please make sure to set the
required Spark settings
.
:::
Setup verification {#setup-verification}
To verify that the dependencies and configurations were set successfully, please visit your session's Spark UI, and go to your
Environment
tab.
There, look for your ClickHouse related settings:
Additional resources {#additional-resources}
ClickHouse Spark Connector Docs
Azure Synapse Spark Pools Overview
Optimize performance for Apache Spark workloads
Manage libraries for Apache Spark pools in Synapse
Manage Apache Spark configuration in Synapse
sidebar_label: 'Amazon Glue'
sidebar_position: 1
slug: /integrations/glue
description: 'Integrate ClickHouse and Amazon Glue'
keywords: ['clickhouse', 'amazon', 'aws', 'glue', 'migrating', 'data', 'spark']
title: 'Integrating Amazon Glue with ClickHouse and Spark'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import notebook_connections_config from '@site/static/images/integrations/data-ingestion/aws-glue/notebook-connections-config.png';
import dependent_jars_path_option from '@site/static/images/integrations/data-ingestion/aws-glue/dependent_jars_path_option.png';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Integrating Amazon Glue with ClickHouse and Spark
Amazon Glue
is a fully managed, serverless data integration service provided by Amazon Web Services (AWS). It simplifies the process of discovering, preparing, and transforming data for analytics, machine learning, and application development.
Installation {#installation}
To integrate your Glue code with ClickHouse, you can use our official Spark connector in Glue via one of the following:
- Installing the ClickHouse Glue connector from the AWS Marketplace (recommended).
- Manually adding the Spark Connector's jars to your Glue job.
Subscribe to the Connector
To access the connector in your account, subscribe to the ClickHouse AWS Glue Connector from AWS Marketplace.
Grant Required Permissions
Ensure your Glue job's IAM role has the necessary permissions, as described in the minimum privileges
guide
.
Activate the Connector & Create a Connection
You can activate the connector and create a connection directly by clicking
this link
, which opens the Glue connection creation page with key fields pre-filled. Give the connection a name, and press create (no need to provide the ClickHouse connection details at this stage).
Use in Glue Job
In your Glue job, select the
Job details
tab, and expand the
Advanced properties
window. Under the
Connections
section, select the connection you just created. The connector automatically injects the required JARs into the job runtime.
:::note
The JARs used in the Glue connector are built for
Spark 3.3
,
Scala 2
, and
Python 3
. Make sure to select these versions when configuring your Glue job.
:::
To add the required JARs manually, follow these steps:
1. Upload the following jars to an S3 bucket -
clickhouse-jdbc-0.6.X-all.jar
and
clickhouse-spark-runtime-3.X_2.X-0.8.X.jar
.
2. Make sure the Glue job has access to this bucket.
3. Under the
Job details
tab, scroll down and expand the
Advanced properties
dropdown, and fill in the JARs path under
Dependent JARs path
:
Examples {#example}
```java
import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.util.GlueArgParser
import com.amazonaws.services.glue.util.Job
import com.clickhouseScala.Native.NativeSparkRead.spark
import org.apache.spark.sql.SparkSession
import scala.collection.JavaConverters._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
object ClickHouseGlueExample {
def main(sysArgs: Array[String]) {
val args = GlueArgParser.getResolvedOptions(sysArgs, Seq("JOB_NAME").toArray)
val sparkSession: SparkSession = SparkSession.builder
.config("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog")
.config("spark.sql.catalog.clickhouse.host", "<your-clickhouse-host>")
.config("spark.sql.catalog.clickhouse.protocol", "https")
.config("spark.sql.catalog.clickhouse.http_port", "<your-clickhouse-port>")
.config("spark.sql.catalog.clickhouse.user", "default")
.config("spark.sql.catalog.clickhouse.password", "<your-password>")
.config("spark.sql.catalog.clickhouse.database", "default")
// for ClickHouse cloud
.config("spark.sql.catalog.clickhouse.option.ssl", "true")
.config("spark.sql.catalog.clickhouse.option.ssl_mode", "NONE")
.getOrCreate
val glueContext = new GlueContext(sparkSession.sparkContext)
Job.init(args("JOB_NAME"), glueContext, args.asJava)
import sparkSession.implicits._
val url = "s3://{path_to_cell_tower_data}/cell_towers.csv.gz"
val schema = StructType(Seq(
StructField("radio", StringType, nullable = false),
StructField("mcc", IntegerType, nullable = false),
StructField("net", IntegerType, nullable = false),
StructField("area", IntegerType, nullable = false),
StructField("cell", LongType, nullable = false),
StructField("unit", IntegerType, nullable = false),
StructField("lon", DoubleType, nullable = false),
StructField("lat", DoubleType, nullable = false),
StructField("range", IntegerType, nullable = false),
StructField("samples", IntegerType, nullable = false),
StructField("changeable", IntegerType, nullable = false),
StructField("created", TimestampType, nullable = false),
StructField("updated", TimestampType, nullable = false),
StructField("averageSignal", IntegerType, nullable = false)
))
val df = sparkSession.read
.option("header", "true")
.schema(schema)
.csv(url)
// Write to ClickHouse
df.writeTo("clickhouse.default.cell_towers").append()
// Read from ClickHouse
val dfRead = spark.sql("select * from clickhouse.default.cell_towers")
Job.commit()
}
}
```
```python
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql import Row
# @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
logger = glueContext.get_logger()
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
spark.conf.set("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog")
spark.conf.set("spark.sql.catalog.clickhouse.host", "<your-clickhouse-host>")
spark.conf.set("spark.sql.catalog.clickhouse.protocol", "https")
spark.conf.set("spark.sql.catalog.clickhouse.http_port", "<your-clickhouse-port>")
spark.conf.set("spark.sql.catalog.clickhouse.user", "default")
spark.conf.set("spark.sql.catalog.clickhouse.password", "<your-password>")
spark.conf.set("spark.sql.catalog.clickhouse.database", "default")
spark.conf.set("spark.clickhouse.write.format", "json")
spark.conf.set("spark.clickhouse.read.format", "arrow")
# For ClickHouse Cloud
spark.conf.set("spark.sql.catalog.clickhouse.option.ssl", "true")
spark.conf.set("spark.sql.catalog.clickhouse.option.ssl_mode", "NONE")
# Create DataFrame
data = [Row(id=11, name="John"), Row(id=12, name="Doe")]
df = spark.createDataFrame(data)
# Write DataFrame to ClickHouse
df.writeTo("clickhouse.default.example_table").append()
# Read DataFrame from ClickHouse
df_read = spark.sql("select * from clickhouse.default.example_table")
logger.info(str(df.take(10)))
job.commit()
```
For more details, please visit our
Spark documentation
.
slug: /integrations/clickpipes/secure-kinesis
sidebar_label: 'Kinesis Role-Based Access'
title: 'Kinesis Role-Based Access'
description: 'This article demonstrates how ClickPipes customers can leverage role-based access to authenticate with Amazon Kinesis and access their data streams securely.'
doc_type: 'guide'
keywords: ['Amazon Kinesis']
import secure_kinesis from '@site/static/images/integrations/data-ingestion/clickpipes/securekinesis.jpg';
import secures3_arn from '@site/static/images/cloud/security/secures3_arn.png';
import Image from '@theme/IdealImage';
This article demonstrates how ClickPipes customers can leverage role-based access to authenticate with Amazon Kinesis and access their data streams securely.
Prerequisites {#prerequisite}
To follow this guide, you will need:
- An active ClickHouse Cloud service
- An AWS account
Introduction {#introduction}
Before diving into the setup for secure Kinesis access, it's important to understand the mechanism. Here's an overview of how ClickPipes can access Amazon Kinesis streams by assuming a role within customers' AWS accounts.
Using this approach, customers can manage all access to their Kinesis data streams in a single place (the IAM policy of the assumed-role) without having to modify each stream's access policy individually.
Setup {#setup}
Obtaining the ClickHouse service IAM role Arn {#obtaining-the-clickhouse-service-iam-role-arn}
Log in to your ClickHouse Cloud account.
Select the ClickHouse service you want to create the integration for
Select the
Settings
tab
Scroll down to the
Network security information
section at the bottom of the page
Copy the
Service role ID (IAM)
value belonging to the service, as shown below.
Setting up IAM assume role {#setting-up-iam-assume-role}
Manually create an IAM role {#manually-create-iam-role}
Log in to your AWS account in the web browser with an IAM user that has permission to create and manage IAM roles.
Browse to IAM Service Console
Create a new IAM role with Trusted Entity Type of
AWS account
. Note that the name of the IAM role
must start with
ClickHouseAccessRole-
for this to work.
i. Configure the Trust Policy
The trust policy allows the ClickHouse IAM role to assume this role. Replace
{ClickHouse_IAM_ARN}
with the IAM Role ARN from your ClickHouse service (obtained in the previous step).
json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "{ClickHouse_IAM_ARN}"
},
"Action": "sts:AssumeRole"
}
]
}
ii. Configure the Permission Policy
The permission policy grants access to your Kinesis stream. Replace the following placeholders:
-
{REGION}
: Your AWS region (e.g.,
us-east-1
)
-
{ACCOUNT_ID}
: Your AWS account ID
-
{STREAM_NAME}
: Your Kinesis stream name
json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kinesis:DescribeStream",
"kinesis:GetShardIterator",
"kinesis:GetRecords",
"kinesis:ListShards",
"kinesis:RegisterStreamConsumer",
"kinesis:DeregisterStreamConsumer",
"kinesis:ListStreamConsumers"
],
"Resource": [
"arn:aws:kinesis:{REGION}:{ACCOUNT_ID}:stream/{STREAM_NAME}"
]
},
{
"Effect": "Allow",
"Action": [
"kinesis:SubscribeToShard",
"kinesis:DescribeStreamConsumer"
],
"Resource": [
"arn:aws:kinesis:{REGION}:{ACCOUNT_ID}:stream/{STREAM_NAME}/*"
]
},
{
"Effect": "Allow",
"Action": [
"kinesis:ListStreams"
],
"Resource": "*"
}
]
}
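Filling in the placeholders can be scripted; this sketch renders the stream ARN used in the policy's `Resource` entries for given values (an illustrative helper, not an AWS API):

```python
def stream_arn(region: str, account_id: str, stream_name: str) -> str:
    """Build the Kinesis stream ARN used in the policy's Resource entries."""
    return f"arn:aws:kinesis:{region}:{account_id}:stream/{stream_name}"

# Hypothetical values for illustration.
arn = stream_arn("us-east-1", "123456789012", "my-stream")
# The second statement's Resource covers per-stream consumers with a /* suffix.
consumer_arn_pattern = arn + "/*"
```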
Copy the new
IAM Role Arn
after creation. This is what is needed to access your Kinesis stream.
sidebar_label: 'AWS PrivateLink for ClickPipes'
description: 'Establish a secure connection between ClickPipes and a data source using AWS PrivateLink.'
slug: /integrations/clickpipes/aws-privatelink
title: 'AWS PrivateLink for ClickPipes'
doc_type: 'guide'
keywords: ['aws privatelink', 'ClickPipes security', 'vpc endpoint', 'private connectivity', 'vpc resource']
import cp_service from '@site/static/images/integrations/data-ingestion/clickpipes/cp_service.png';
import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
import cp_rpe_select from '@site/static/images/integrations/data-ingestion/clickpipes/cp_rpe_select.png';
import cp_rpe_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_rpe_step0.png';
import cp_rpe_step1 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_rpe_step1.png';
import cp_rpe_step2 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_rpe_step2.png';
import cp_rpe_step3 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_rpe_step3.png';
import cp_rpe_settings0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_rpe_settings0.png';
import cp_rpe_settings1 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_rpe_settings1.png';
import Image from '@theme/IdealImage';
AWS PrivateLink for ClickPipes
You can use
AWS PrivateLink
to establish secure connectivity between VPCs,
AWS services, your on-premises systems, and ClickHouse Cloud without exposing traffic to the public Internet.
This document outlines the ClickPipes reverse private endpoint functionality
that allows setting up an AWS PrivateLink VPC endpoint.
Supported ClickPipes data sources {#supported-sources}
ClickPipes reverse private endpoint functionality is limited to the following
data source types:
- Kafka
- Postgres
- MySQL
Supported AWS PrivateLink endpoint types {#aws-privatelink-endpoint-types}
ClickPipes reverse private endpoint can be configured with one of the following AWS PrivateLink approaches:
VPC resource
MSK multi-VPC connectivity for MSK ClickPipe
VPC endpoint service
VPC resource {#vpc-resource}
Your VPC resources can be accessed in ClickPipes using PrivateLink and
AWS VPC Lattice
. This approach doesn't require setting up a load balancer in front of your data source.
Resource configuration can be targeted with a specific host or RDS cluster ARN.
Cross-region is not supported.
It's the preferred choice for Postgres CDC ingesting data from an RDS cluster.
To set up PrivateLink with VPC resource:
1. Create a resource gateway
2. Create a resource configuration
3. Create a resource share
1. Create a Resource-Gateway {#create-resource-gateway}
Resource-Gateway is the point that receives traffic for specified resources in your VPC.
You can create a Resource-Gateway from the
AWS console
or with the following command:
bash
aws vpc-lattice create-resource-gateway \
--vpc-identifier <VPC_ID> \
--subnet-ids <SUBNET_IDS> \
--security-group-ids <SG_IDs> \
--name <RESOURCE_GATEWAY_NAME>
The output will contain a Resource-Gateway id, which you will need for the next step.
Before you can proceed, you'll need to wait for the Resource-Gateway to enter into an
Active
state. You can check the state by running the following command:
bash
aws vpc-lattice get-resource-gateway \
--resource-gateway-identifier <RESOURCE_GATEWAY_ID>
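The wait can be scripted as a simple polling loop. The sketch below stubs out the status call for illustration; in practice `poll` would shell out to the `aws vpc-lattice get-resource-gateway` command above and read the status field:

```python
import time

def wait_until_active(poll, attempts: int = 30, delay_s: float = 0.0) -> bool:
    """Poll until the resource gateway reports ACTIVE, or give up."""
    for _ in range(attempts):
        if poll() == "ACTIVE":
            return True
        time.sleep(delay_s)
    return False

# Stubbed status sequence standing in for real CLI calls.
statuses = iter(["CREATE_IN_PROGRESS", "CREATE_IN_PROGRESS", "ACTIVE"])
ready = wait_until_active(lambda: next(statuses))
```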
2. Create a VPC Resource-Configuration {#create-resource-configuration}
Resource-Configuration is associated with Resource-Gateway to make your resource accessible.
You can create a Resource-Configuration from the
AWS console
or with the following command:
bash
aws vpc-lattice create-resource-configuration \
--resource-gateway-identifier <RESOURCE_GATEWAY_ID> \
--type <RESOURCE_CONFIGURATION_TYPE> \
--resource-configuration-definition <RESOURCE_CONFIGURATION_DEFINITION> \
--name <RESOURCE_CONFIGURATION_NAME>
The simplest
resource configuration type
is a single Resource-Configuration. You can configure with the ARN directly, or share an IP address or a domain name that is publicly resolvable.
For example, to configure with the ARN of an RDS Cluster:
bash
aws vpc-lattice create-resource-configuration \
--name my-rds-cluster-config \
--type ARN \
--resource-gateway-identifier rgw-0bba03f3d56060135 \
--resource-configuration-definition 'arnResource={arn=arn:aws:rds:us-east-1:123456789012:cluster:my-rds-cluster}'
The output will contain a Resource-Configuration ARN, which you will need for the next step. It will also contain a Resource-Configuration ID, which you will need to set up a ClickPipe connection with VPC resource.
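An RDS cluster ARN like the one above packs several fields into one colon-separated string; a quick sketch of pulling them apart:

```python
def parse_rds_cluster_arn(arn: str) -> dict:
    """Split an RDS cluster ARN into its named components."""
    _arn, partition, service, region, account, resource = arn.split(":", 5)
    resource_type, _, name = resource.partition(":")
    return {
        "partition": partition,
        "service": service,
        "region": region,
        "account": account,
        "resource_type": resource_type,
        "name": name,
    }

info = parse_rds_cluster_arn("arn:aws:rds:us-east-1:123456789012:cluster:my-rds-cluster")
```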
3. Create a Resource-Share {#create-resource-share}
Sharing your resource requires a Resource-Share. This is facilitated through the Resource Access Manager (RAM).
You can put the Resource-Configuration into the Resource-Share through
AWS console
or by running the following command with ClickPipes account ID
072088201116
:
bash
aws ram create-resource-share \
--principals 072088201116 \
--resource-arns <RESOURCE_CONFIGURATION_ARN> \
--name <RESOURCE_SHARE_NAME>
The output will contain a Resource-Share ARN, which you will need to set up a ClickPipe connection with VPC resource.
You are ready to
create a ClickPipe with Reverse private endpoint
using VPC resource. You will need to:
- Set
VPC endpoint type
to
VPC Resource
.
- Set
Resource configuration ID
to the ID of the Resource-Configuration created in step 2.
- Set
Resource share ARN
to the ARN of the Resource-Share created in step 3.
For more details on PrivateLink with VPC resource, see
AWS documentation
.
MSK multi-VPC connectivity {#msk-multi-vpc}
The
Multi-VPC connectivity
is a built-in feature of AWS MSK that allows you to connect multiple VPCs to a single MSK cluster.
Private DNS support is out of the box and does not require any additional configuration.
Cross-region is not supported.
It is a recommended option for ClickPipes for MSK.
See the
getting started
guide for more details.
:::info
Update your MSK cluster policy and add
072088201116
to the allowed principals to your MSK cluster.
See AWS guide for
attaching a cluster policy
for more details.
:::
Follow our
MSK setup guide for ClickPipes
to learn how to set up the connection.
VPC endpoint service {#vpc-endpoint-service}
VPC endpoint service
is another approach to share your data source with ClickPipes.
It requires setting up a NLB (Network Load Balancer) in front of your data source
and configuring the VPC endpoint service to use the NLB.
VPC endpoint service can be
configured with a private DNS
,
that will be accessible in a ClickPipes VPC.
It's a preferred choice for:
Any on-premise Kafka setup that requires private DNS support
Cross-region connectivity for Postgres CDC
Cross-region connectivity for MSK cluster. Please reach out to the ClickHouse support team for assistance.
See the
getting started
guide for more details.
:::info
Add ClickPipes account ID
072088201116
to the allowed principals to your VPC endpoint service.
See AWS guide for
managing permissions
for more details.
:::
:::info
Cross-region access
can be configured for ClickPipes. Add
your ClickPipe region
to the allowed regions in your VPC endpoint service.
:::
Creating a ClickPipe with reverse private endpoint {#creating-clickpipe}
Access the SQL Console for your ClickHouse Cloud Service.
Select the
Data Sources
button on the left-side menu and click on "Set up a ClickPipe"
Select either Kafka or Postgres as a data source.
Select the
Reverse private endpoint
option.
Select any of existing reverse private endpoints or create a new one.
:::info
If cross-region access is required for RDS, you need to create a VPC endpoint service and
this guide should provide
a good starting point to set it up.
For same-region access, creating a VPC Resource is the recommended approach.
:::
Provide the required parameters for the selected endpoint type.
- For VPC resource, provide the configuration share ARN and configuration ID.
- For MSK multi-VPC, provide the cluster ARN and authentication method used with a created endpoint.
- For VPC endpoint service, provide the service name.
Click on
Create
and wait for the reverse private endpoint to be ready.
If you are creating a new endpoint, it will take some time to set up the endpoint.
The page will refresh automatically once the endpoint is ready.
VPC endpoint service might require accepting the connection request in your AWS console.
Once the endpoint is ready, you can use a DNS name to connect to the data source.
On a list of endpoints, you can see the DNS name for the available endpoint.
It can be either a ClickPipes internally provisioned DNS name or a private DNS name supplied by a PrivateLink service.
The DNS name is not a complete network address; add the port appropriate to the data source.
MSK connection string can be accessed in the AWS console.
To see a full list of DNS names, access it in the cloud service settings.
Managing existing reverse private endpoints {#managing-existing-endpoints}
You can manage existing reverse private endpoints in the ClickHouse Cloud service settings:
On the sidebar, find the Settings button and click on it.
Click on Reverse private endpoints in the ClickPipe reverse private endpoints section.
Extended information about the reverse private endpoint is shown in the flyout.
The endpoint can be removed from here; this will affect any ClickPipes using it.
Supported AWS regions {#aws-privatelink-regions}
AWS PrivateLink support is limited to specific AWS regions for ClickPipes.
Please refer to the
ClickPipes regions list
to see the available regions.
This restriction does not apply to PrivateLink VPC endpoint service with a cross-region connectivity enabled.
Limitations {#limitations}
AWS PrivateLink endpoints for ClickPipes created in ClickHouse Cloud are not guaranteed to be created
in the same AWS region as the ClickHouse Cloud service.
Currently, only VPC endpoint service supports
cross-region connectivity.
Private endpoints are linked to a specific ClickHouse service and are not transferable between services.
Multiple ClickPipes for a single ClickHouse service can reuse the same endpoint.
sidebar_label: 'Introduction'
description: 'Seamlessly connect your external data sources to ClickHouse Cloud.'
slug: /integrations/clickpipes
title: 'Integrating with ClickHouse Cloud'
doc_type: 'guide'
keywords: ['ClickPipes', 'data ingestion platform', 'streaming data', 'integration platform', 'ClickHouse Cloud']
import Kafkasvg from '@site/static/images/integrations/logos/kafka.svg';
import Confluentsvg from '@site/static/images/integrations/logos/confluent.svg';
import Msksvg from '@site/static/images/integrations/logos/msk.svg';
import Azureeventhubssvg from '@site/static/images/integrations/logos/azure_event_hubs.svg';
import Warpstreamsvg from '@site/static/images/integrations/logos/warpstream.svg';
import S3svg from '@site/static/images/integrations/logos/amazon_s3_logo.svg';
import Amazonkinesis from '@site/static/images/integrations/logos/amazon_kinesis_logo.svg';
import Gcssvg from '@site/static/images/integrations/logos/gcs.svg';
import DOsvg from '@site/static/images/integrations/logos/digitalocean.svg';
import ABSsvg from '@site/static/images/integrations/logos/azureblobstorage.svg';
import Postgressvg from '@site/static/images/integrations/logos/postgresql.svg';
import Mysqlsvg from '@site/static/images/integrations/logos/mysql.svg';
import Mongodbsvg from '@site/static/images/integrations/logos/mongodb.svg';
import redpanda_logo from '@site/static/images/integrations/logos/logo_redpanda.png';
import clickpipes_stack from '@site/static/images/integrations/data-ingestion/clickpipes/clickpipes_stack.png';
import cp_custom_role from '@site/static/images/integrations/data-ingestion/clickpipes/cp_custom_role.png';
import cp_advanced_settings from '@site/static/images/integrations/data-ingestion/clickpipes/cp_advanced_settings.png';
import Image from '@theme/IdealImage';
Integrating with ClickHouse Cloud
Introduction {#introduction}
ClickPipes
is a managed integration platform that makes ingesting data from a diverse set of sources as simple as clicking a few buttons. Designed for the most demanding workloads, ClickPipes's robust and scalable architecture ensures consistent performance and reliability. ClickPipes can be used for long-term streaming needs or a one-time data loading job.
Supported data sources {#supported-data-sources}
| Name | Logo |Type| Status | Description |
|----------------------------------------------------|--------------------------------------------------------------------------------------------------|----|------------------|------------------------------------------------------------------------------------------------------|
|
Apache Kafka
|
|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Apache Kafka into ClickHouse Cloud. |
| Confluent Cloud |
|Streaming| Stable | Unlock the combined power of Confluent and ClickHouse Cloud through our direct integration. |
| Redpanda |
|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Redpanda into ClickHouse Cloud. |
| AWS MSK |
|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from AWS MSK into ClickHouse Cloud. |
| Azure Event Hubs |
|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Azure Event Hubs into ClickHouse Cloud. See the
Azure Event Hubs FAQ
for guidance. |
| WarpStream |
|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from WarpStream into ClickHouse Cloud. |
| Amazon S3 |
|Object Storage| Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
| Google Cloud Storage |
|Object Storage| Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
| DigitalOcean Spaces |
| Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage.
| Azure Blob Storage |
| Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage.
|
Amazon Kinesis
|
|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Amazon Kinesis into ClickHouse cloud. |
|
Postgres
|
|DBMS| Stable | Configure ClickPipes and start ingesting data from Postgres into ClickHouse Cloud. |
|
MySQL
|
|DBMS| Public Beta | Configure ClickPipes and start ingesting data from MySQL into ClickHouse Cloud. |
|
MongoDB
|
|DBMS| Private Preview | Configure ClickPipes and start ingesting data from MongoDB into ClickHouse Cloud. |
More connectors will be added to ClickPipes; you can find out more by contacting us.
List of Static IPs {#list-of-static-ips}
The following are the static NAT IPs (separated by region) that ClickPipes uses to connect to your external services. Add your related instance region IPs to your IP allow list to allow traffic.
For all services, ClickPipes traffic will originate from a default region based on your service's location:
-
eu-central-1
: For all services in EU regions. (this includes GCP and Azure EU regions)
-
us-east-1
: For all services in AWS
us-east-1
.
-
ap-south-1
: For services in AWS
ap-south-1
created on or after 25 Jun 2025 (services created before this date use
us-east-2
IPs).
-
ap-southeast-2
: For services in AWS
ap-southeast-2
created on or after 25 Jun 2025 (services created before this date use
us-east-2
IPs).
-
us-west-2
: For services in AWS
us-west-2
created on or after 24 Jun 2025 (services created before this date use
us-east-2
IPs).
-
us-east-2
: For all other regions not explicitly listed. (this includes GCP and Azure US regions)
| AWS region | IP Addresses |
|---------------------------------------| ------------------------------------------------------------------------------------------------------------------------------------------------ |
|
eu-central-1
|
18.195.233.217
,
3.127.86.90
,
35.157.23.2
,
18.197.167.47
,
3.122.25.29
,
52.28.148.40
|
|
us-east-1
|
54.82.38.199
,
3.90.133.29
,
52.5.177.8
,
3.227.227.145
,
3.216.6.184
,
54.84.202.92
,
3.131.130.196
,
3.23.172.68
,
3.20.208.150
|
|
us-east-2
|
3.131.130.196
,
3.23.172.68
,
3.20.208.150
,
3.132.20.192
,
18.119.76.110
,
3.134.185.180
|
|
ap-south-1
(from 25 Jun 2025) |
13.203.140.189
,
13.232.213.12
,
13.235.145.208
,
35.154.167.40
,
65.0.39.245
,
65.1.225.89
|
|
ap-southeast-2
(from 25 Jun 2025) |
3.106.48.103
,
52.62.168.142
,
13.55.113.162
,
3.24.61.148
,
54.206.77.184
,
54.79.253.17
|
|
us-west-2
(from 24 Jun 2025) |
52.42.100.5
,
44.242.47.162
,
52.40.44.52
,
44.227.206.163
,
44.246.241.23
,
35.83.230.19
|
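The region-routing rules and IP table above can be sketched as a small lookup helper. This is an illustrative sketch only — the `egress_region` function, the simplified `eu-` prefix heuristic, and the partial `CLICKPIPES_IPS` dictionary are assumptions for the example, not a ClickPipes API; always take the authoritative IP list from the table above.

```python
from datetime import date

# Static NAT IPs per ClickPipes egress region (subset copied from the table above).
CLICKPIPES_IPS = {
    "eu-central-1": {"18.195.233.217", "3.127.86.90", "35.157.23.2",
                     "18.197.167.47", "3.122.25.29", "52.28.148.40"},
    "us-east-2": {"3.131.130.196", "3.23.172.68", "3.20.208.150",
                  "3.132.20.192", "18.119.76.110", "3.134.185.180"},
    "us-west-2": {"52.42.100.5", "44.242.47.162", "52.40.44.52",
                  "44.227.206.163", "44.246.241.23", "35.83.230.19"},
}

# Cut-over dates after which these AWS regions use their own egress IPs;
# services created earlier keep using us-east-2 IPs, per the list above.
CUTOVER = {
    "ap-south-1": date(2025, 6, 25),
    "ap-southeast-2": date(2025, 6, 25),
    "us-west-2": date(2025, 6, 24),
}

def egress_region(cloud: str, service_region: str, created: date) -> str:
    """Return the ClickPipes egress region for a service, per the rules above."""
    if service_region.startswith("eu-"):        # EU regions (simplified heuristic)
        return "eu-central-1"
    if cloud == "aws" and service_region == "us-east-1":
        return "us-east-1"
    if cloud == "aws" and service_region in CUTOVER:
        return service_region if created >= CUTOVER[service_region] else "us-east-2"
    return "us-east-2"                          # all other regions, incl. GCP/Azure US
```

For example, an AWS `us-west-2` service created in January 2025 predates the cut-over, so its allow list needs the `us-east-2` IPs, not the `us-west-2` ones.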
Adjusting ClickHouse settings {#adjusting-clickhouse-settings}
ClickHouse Cloud provides sensible defaults for most of the use cases. However, if you need to adjust some ClickHouse settings for the ClickPipes destination tables, a dedicated role for ClickPipes is the most flexible solution.
Steps:
1. Create a custom role: CREATE ROLE my_clickpipes_role SETTINGS .... See the CREATE ROLE syntax for details.
2. Add the custom role to the ClickPipes user in the Details and Settings step during ClickPipe creation.
Adjusting ClickPipes advanced settings {#clickpipes-advanced-settings}
ClickPipes provides sensible defaults that cover the requirements of most use cases. If your use case requires additional fine-tuning, you can adjust the following settings:
Object Storage ClickPipes {#clickpipes-advanced-settings-object-storage}
| Setting | Default value | Description |
|------------------------------------|---------------|---------------------------------------------------------------------------------------|
|
Max insert bytes
| 10GB | Number of bytes to process in a single insert batch. |
|
Max file count
| 100 | Maximum number of files to process in a single insert batch. |
|
Max threads
| auto(3) |
Maximum number of concurrent threads
for file processing. |
|
Max insert threads
| 1 |
Maximum number of concurrent insert threads
for file processing. |
|
Min insert block size bytes
| 1GB |
Minimum size of bytes in the block
which can be inserted into a table. |
|
Max download threads
| 4 |
Maximum number of concurrent download threads
. |
|
Object storage polling interval
| 30s | Configures the maximum wait period before inserting data into the ClickHouse cluster. |
|
Parallel distributed insert select
| 2 |
Parallel distributed insert select setting
. |
|
Parallel view processing
| false | Whether to enable pushing to attached views
concurrently instead of sequentially
. |
|
Use cluster function
| true | Whether to process files in parallel across multiple nodes. |
Streaming ClickPipes {#clickpipes-advanced-settings-streaming}
| Setting | Default value | Description |
|------------------------------------|---------------|---------------------------------------------------------------------------------------|
|
Streaming max insert wait time
| 5s | Configures the maximum wait period before inserting data into the ClickHouse cluster. |
Error reporting {#error-reporting}
ClickPipes will store errors in two separate tables depending on the type of error encountered during the ingestion process.
Record Errors {#record-errors}
ClickPipes will create a table next to your destination table with the postfix
<destination_table_name>_clickpipes_error
. This table will contain any errors from malformed data or mismatched schema and will include the entirety of the invalid message. This table has a
TTL
of 7 days.
System Errors {#system-errors}
Errors related to the operation of the ClickPipe will be stored in the
system.clickpipes_log
table. This will store all other errors related to the operation of your ClickPipe (network, connectivity, etc.). This table has a
TTL
of 7 days.
If ClickPipes cannot connect to a data source after 15 min or to a destination after 1 hr, the ClickPipes instance stops and stores an appropriate message in the system error table (provided the ClickHouse instance is available).
FAQ {#faq}
What is ClickPipes?
ClickPipes is a ClickHouse Cloud feature that makes it easy for users to connect their ClickHouse services to external data sources, specifically Kafka. With ClickPipes for Kafka, users can easily continuously load data into ClickHouse, making it available for real-time analytics.
Does ClickPipes support data transformation?
Yes, ClickPipes supports basic data transformation by exposing the DDL creation. You can then apply more advanced transformations to the data as it is loaded into its destination table in a ClickHouse Cloud service leveraging ClickHouse's
materialized views feature
.
Does using ClickPipes incur an additional cost?
ClickPipes is billed on two dimensions: Ingested Data and Compute. The full details of the pricing are available on
this page
. Running ClickPipes might also generate an indirect compute and storage cost on the destination ClickHouse Cloud service similar to any ingest workload.
Is there a way to handle errors or failures when using ClickPipes for Kafka?
Yes, ClickPipes for Kafka will automatically retry in the event of failures when consuming data from Kafka for any operational issue including network issues, connectivity issues, etc. In the event of malformed data or invalid schema, ClickPipes will store the record in the record_error table and continue processing.
sidebar_label: 'ClickPipes for Amazon Kinesis'
description: 'Seamlessly connect your Amazon Kinesis data sources to ClickHouse Cloud.'
slug: /integrations/clickpipes/kinesis
title: 'Integrating Amazon Kinesis with ClickHouse Cloud'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'clickpipes'
keywords: ['clickpipes', 'kinesis', 'streaming', 'aws', 'data ingestion']
import cp_service from '@site/static/images/integrations/data-ingestion/clickpipes/cp_service.png';
import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
import cp_step1 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step1.png';
import cp_step2_kinesis from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step2_kinesis.png';
import cp_step3_kinesis from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step3_kinesis.png';
import cp_step4a from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a.png';
import cp_step4a3 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a3.png';
import cp_step4b from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4b.png';
import cp_step5 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step5.png';
import cp_success from '@site/static/images/integrations/data-ingestion/clickpipes/cp_success.png';
import cp_remove from '@site/static/images/integrations/data-ingestion/clickpipes/cp_remove.png';
import cp_destination from '@site/static/images/integrations/data-ingestion/clickpipes/cp_destination.png';
import cp_overview from '@site/static/images/integrations/data-ingestion/clickpipes/cp_overview.png';
import Image from '@theme/IdealImage';
Integrating Amazon Kinesis with ClickHouse Cloud
Prerequisite {#prerequisite}
You have familiarized yourself with the
ClickPipes intro
and setup
IAM credentials
or an
IAM Role
. Follow the
Kinesis Role-Based Access guide
for information on how to setup a role that works with ClickHouse Cloud.
Creating your first ClickPipe {#creating-your-first-clickpipe}
Access the SQL Console for your ClickHouse Cloud Service.
Select the
Data Sources
button on the left-side menu and click on "Set up a ClickPipe"
Select your data source.
Fill out the form by providing your ClickPipe with a name, a description (optional), your IAM role or credentials, and other connection details.
Select Kinesis Stream and starting offset. The UI will display a sample document from the selected source. You can also enable Enhanced Fan-out for Kinesis streams to improve the performance and stability of your ClickPipe (more information on Enhanced Fan-out can be found here).
In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions on the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.
You can also customize the advanced settings using the controls provided
Alternatively, you can decide to ingest your data in an existing ClickHouse table. In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.
Finally, you can configure permissions for the internal ClickPipes user.
Permissions:
ClickPipes will create a dedicated user for writing data into a destination table. You can select a role for this internal user using a custom role or one of the predefined roles:
- Full access: full access to the cluster. This might be useful if you use a materialized view or dictionary with the destination table.
- Only destination table: INSERT permissions on the destination table only.
By clicking on "Complete Setup", the system will register your ClickPipe, and you'll be able to see it listed in the summary table.
The summary table provides controls to display sample data from the source or the destination table in ClickHouse
As well as controls to remove the ClickPipe and display a summary of the ingest job.
Congratulations!
You have successfully set up your first ClickPipe. If this is a streaming ClickPipe, it will run continuously, ingesting data in real time from your remote data source. Otherwise, it will ingest the batch and complete.
Supported data formats {#supported-data-formats}
The supported formats are:
-
JSON
Supported data types {#supported-data-types}
Standard types support {#standard-types-support}
The following ClickHouse data types are currently supported in ClickPipes:
Base numeric types - [U]Int8/16/32/64, Float32/64, and BFloat16
Large integer types - [U]Int128/256
Decimal Types
Boolean
String
FixedString
Date, Date32
DateTime, DateTime64 (UTC timezones only)
Enum8/Enum16
UUID
IPv4
IPv6
all ClickHouse LowCardinality types
Map with keys and values using any of the above types (including Nullables)
Tuple and Array with elements using any of the above types (including Nullables, one level depth only)
SimpleAggregateFunction types (for AggregatingMergeTree or SummingMergeTree destinations)
Variant type support {#variant-type-support}
You can manually specify a Variant type (such as
Variant(String, Int64, DateTime)
) for any JSON field
in the source data stream. Because of the way ClickPipes determines the correct variant subtype to use, only one integer or datetime
type can be used in the Variant definition - for example,
Variant(Int64, UInt32)
is not supported.
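The restriction above — at most one integer subtype and at most one datetime subtype per Variant definition — can be checked mechanically. The `clickpipes_variant_ok` helper below is a hypothetical illustration of that rule, not part of ClickPipes itself:

```python
import re

# ClickHouse integer and datetime base type names relevant to the restriction.
INTEGER_TYPES = {f"{sign}Int{width}" for sign in ("", "U")
                 for width in (8, 16, 32, 64, 128, 256)}
DATETIME_TYPES = {"DateTime", "DateTime64"}

def clickpipes_variant_ok(definition: str) -> bool:
    """Return True if a Variant(...) definition uses at most one integer
    and at most one datetime subtype, per the restriction described above."""
    m = re.fullmatch(r"Variant\((.*)\)", definition.strip())
    if not m:
        return False
    # Split on top-level commas only, since e.g. DateTime64(3) may carry arguments.
    subtypes, depth, cur = [], 0, ""
    for ch in m.group(1):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == "," and depth == 0:
            subtypes.append(cur.strip())
            cur = ""
        else:
            cur += ch
    subtypes.append(cur.strip())
    bases = [s.split("(")[0] for s in subtypes]
    n_int = sum(b in INTEGER_TYPES for b in bases)
    n_dt = sum(b in DATETIME_TYPES for b in bases)
    return n_int <= 1 and n_dt <= 1
```

Under this check, `Variant(String, Int64, DateTime)` passes while `Variant(Int64, UInt32)` is rejected, matching the examples in the text.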
JSON type support {#json-type-support}
JSON fields that are always a JSON object can be assigned to a JSON destination column. You will have to manually change the destination
column to the desired JSON type, including any fixed or skipped paths.
Kinesis virtual columns {#kinesis-virtual-columns}
The following virtual columns are supported for Kinesis streams. When creating a new destination table, virtual columns can be added by using the Add Column button.
| Name | Description | Recommended Data Type |
|------------------|---------------------------------------------------------------|-----------------------|
| _key | Kinesis Partition Key | String |
| _timestamp | Kinesis Approximate Arrival Timestamp (millisecond precision) | DateTime64(3) |
| _stream | Kinesis Stream Name | String |
| _sequence_number | Kinesis Sequence Number | String |
| _raw_message | Full Kinesis Message | String |
The _raw_message field can be used in cases where only the full Kinesis JSON record is required (such as using the ClickHouse JSONExtract* functions to populate a downstream materialized view). For such pipes, it may improve ClickPipes performance to delete all the "non-virtual" columns.
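To make the mapping concrete, the sketch below shows how one incoming record could populate the virtual columns from the table above. The record field names (`PartitionKey`, `SequenceNumber`, etc.) follow the Kinesis record shape but are assumptions for this example, as is the `to_virtual_columns` helper:

```python
from datetime import datetime, timezone

def to_virtual_columns(record: dict, stream_name: str) -> dict:
    """Map a (hypothetical) Kinesis record dict to the ClickPipes virtual columns."""
    ms = record["ApproximateArrivalTimestampMillis"]
    return {
        "_key": record["PartitionKey"],                        # String
        # DateTime64(3): millisecond precision, kept here as an aware UTC datetime
        "_timestamp": datetime.fromtimestamp(ms / 1000, tz=timezone.utc),
        "_stream": stream_name,                                # String
        "_sequence_number": record["SequenceNumber"],          # String
        "_raw_message": record["Data"],                        # full message as String
    }

row = to_virtual_columns(
    {"PartitionKey": "user-42",
     "ApproximateArrivalTimestampMillis": 1735689600123,
     "SequenceNumber": "49628130000000000000",
     "Data": '{"id": 1}'},
    stream_name="events",
)
```

A _raw_message-only pipe would keep just the last key and drop the parsed columns.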
Limitations {#limitations}
DEFAULT
is not supported.
Performance {#performance}
Batching {#batching}
ClickPipes inserts data into ClickHouse in batches. This is to avoid creating too many parts in the database which can lead to performance issues in the cluster.
Batches are inserted when one of the following criteria has been met:
- The batch size has reached the maximum size (100,000 rows or 32MB per 1GB of replica memory)
- The batch has been open for a maximum amount of time (5 seconds)
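The two flush criteria above can be sketched as a tiny accumulator. This is an illustrative model only — the `Batcher` class and its parameter names are assumptions, not ClickPipes internals; the defaults mirror the numbers in the text (100,000 rows or 32MB per 1GB of replica memory, 5-second maximum age):

```python
import time

class Batcher:
    """Sketch of the batching rules above: flush when the batch reaches a
    row/byte limit or has been open for the maximum amount of time."""
    def __init__(self, max_rows=100_000, replica_memory_gb=1, max_open_seconds=5.0):
        self.max_rows = max_rows
        self.max_bytes = 32 * 1024 * 1024 * replica_memory_gb  # 32MB per 1GB of memory
        self.max_open_seconds = max_open_seconds
        self.rows, self.bytes, self.opened_at = 0, 0, None

    def add(self, message: bytes) -> bool:
        """Buffer one message; return True when the batch should be flushed."""
        if self.opened_at is None:
            self.opened_at = time.monotonic()
        self.rows += 1
        self.bytes += len(message)
        return self.should_flush()

    def should_flush(self) -> bool:
        if self.opened_at is None:
            return False
        return (self.rows >= self.max_rows
                or self.bytes >= self.max_bytes
                or time.monotonic() - self.opened_at >= self.max_open_seconds)

    def flush(self):
        self.rows, self.bytes, self.opened_at = 0, 0, None

# With a tiny row limit, the third message triggers the flush signal.
demo = Batcher(max_rows=3, max_open_seconds=60.0)
flush_signals = [demo.add(b"msg") for _ in range(3)]
```

Flushing on whichever limit is hit first is what keeps the number of parts created in ClickHouse bounded.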
Latency {#latency}
Latency (defined as the time between the Kinesis message being sent to the stream and the message being available in ClickHouse) will depend on a number of factors (e.g., Kinesis latency, network latency, message size/format). The
batching
described in the section above will also impact latency. We always recommend testing your specific use case to understand the latency you can expect.
If you have specific low-latency requirements, please
contact us
.
Scaling {#scaling}
ClickPipes for Kinesis is designed to scale both horizontally and vertically. By default, we create a consumer group with one consumer. This can be configured during ClickPipe creation, or at any other point under
Settings
->
Advanced Settings
->
Scaling
.
ClickPipes provides high-availability with an availability zone distributed architecture.
This requires scaling to at least two consumers.
Regardless of the number of running consumers, fault tolerance is available by design.
If a consumer or its underlying infrastructure fails,
the ClickPipe will automatically restart the consumer and continue processing messages.
Authentication {#authentication}
To access Amazon Kinesis streams, you can use
IAM credentials
or an
IAM Role
. For more details on how to setup an IAM role, you can
refer to this guide
for information on how to set up a role that works with ClickHouse Cloud.
slug: /integrations/clickpipes/secure-rds
sidebar_label: 'AWS IAM DB Authentication (RDS/Aurora)'
title: 'AWS IAM DB Authentication (RDS/Aurora)'
description: 'This article demonstrates how ClickPipes customers can leverage role-based access to authenticate with Amazon RDS/Aurora and access their database securely.'
doc_type: 'guide'
keywords: ['clickpipes', 'rds', 'security', 'aws', 'private connection']
import secures3_arn from '@site/static/images/cloud/security/secures3_arn.png';
import Image from '@theme/IdealImage';
This article demonstrates how ClickPipes customers can leverage role-based access to authenticate with Amazon Aurora and RDS and access their databases securely.
:::warning
For AWS RDS Postgres and Aurora Postgres you can only run
Initial Load Only
ClickPipes due to the limitations of the AWS IAM DB Authentication.
For MySQL and MariaDB, this limitation does not apply, and you can run both
Initial Load Only
and
CDC
ClickPipes.
:::
Setup {#setup}
Obtaining the ClickHouse service IAM role Arn {#obtaining-the-clickhouse-service-iam-role-arn}
1 - Login to your ClickHouse cloud account.
2 - Select the ClickHouse service you want to create the integration for
3 - Select the
Settings
tab
4 - Scroll down to the
Network security information
section at the bottom of the page
5 - Copy the
Service role ID (IAM)
value belonging to the service, as shown below.
Let's call this value
{ClickHouse_IAM_ARN}
. This is the IAM role that will be used to access your RDS/Aurora instance.
Configuring the RDS/Aurora instance {#configuring-the-rds-aurora-instance}
Enabling IAM DB Authentication {#enabling-iam-db-authentication}
Login to your AWS Account and navigate to the RDS instance you want to configure.
Click on the
Modify
button.
Scroll down to the
Database authentication
section.
Enable the
Password and IAM database authentication
option.
Click on the
Continue
button.
Review the changes and click on the
Apply immediately
option.
Obtaining the RDS/Aurora Resource ID {#obtaining-the-rds-resource-id}
Login to your AWS Account and navigate to the RDS instance/Aurora Cluster you want to configure.
Click on the
Configuration
tab.
Note the
Resource ID
value. It should look like
db-xxxxxxxxxxxxxx
for RDS or
cluster-xxxxxxxxxxxxxx
for Aurora cluster. Let's call this value
{RDS_RESOURCE_ID}
. This is the resource ID that will be used in the IAM policy to allow access to the RDS instance.
Setting up the Database User {#setting-up-the-database-user}
PostgreSQL {#setting-up-the-database-user-postgres}
Connect to your RDS/Aurora instance and create a new database user with the following command:
sql
CREATE USER clickpipes_iam_user;
GRANT rds_iam TO clickpipes_iam_user;
Follow the rest of the steps in the
PostgreSQL source setup guide
to configure your RDS instance for ClickPipes.
MySQL / MariaDB {#setting-up-the-database-user-mysql}
Connect to your RDS/Aurora instance and create a new database user with the following command:
sql
CREATE USER 'clickpipes_iam_user' IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
Follow the rest of the steps in the
MySQL source setup guide
to configure your RDS/Aurora instance for ClickPipes.
Setting up the IAM role {#setting-up-iam-role}
Manually create IAM role. {#manually-create-iam-role}
1 - Login to your AWS Account in the web browser with an IAM user that has permission to create & manage IAM roles.
2 - Browse to IAM Service Console
3 - Create a new IAM role with the following IAM & Trust policy.
Trust policy (Please replace
{ClickHouse_IAM_ARN}
with the IAM role ARN belonging to your ClickHouse instance):
json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "{ClickHouse_IAM_ARN}"
},
"Action": [
"sts:AssumeRole",
"sts:TagSession"
]
}
]
}
IAM policy (Please replace
{RDS_RESOURCE_ID}
with the Resource ID of your RDS instance). Please make sure to replace
{RDS_REGION}
with the region of your RDS/Aurora instance and
{AWS_ACCOUNT}
with your AWS account ID:
json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds-db:connect"
],
"Resource": [
"arn:aws:rds-db:{RDS_REGION}:{AWS_ACCOUNT}:dbuser:{RDS_RESOURCE_ID}/clickpipes_iam_user"
]
}
]
}
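As a sanity check, the placeholders in the two policy documents above can be filled in programmatically before pasting them into the AWS console. The sketch below is illustrative only — the account ID, region, resource ID, and ClickHouse role ARN are made-up example values; substitute the real ones from your own account and service:

```python
import json

TRUST_POLICY = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "{ClickHouse_IAM_ARN}"},
    "Action": ["sts:AssumeRole", "sts:TagSession"]
  }]
}"""

IAM_POLICY = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["rds-db:connect"],
    "Resource": ["arn:aws:rds-db:{RDS_REGION}:{AWS_ACCOUNT}:dbuser:{RDS_RESOURCE_ID}/clickpipes_iam_user"]
  }]
}"""

# Example values only — replace with the ones from your account/service.
values = {
    "{ClickHouse_IAM_ARN}": "arn:aws:iam::111111111111:role/example-clickhouse-role",
    "{RDS_REGION}": "us-east-1",
    "{AWS_ACCOUNT}": "123456789012",
    "{RDS_RESOURCE_ID}": "db-ABCDEFGHIJKL",
}

def render(template: str) -> dict:
    """Fill in the {PLACEHOLDER} markers and parse the result as JSON."""
    for placeholder, value in values.items():
        template = template.replace(placeholder, value)
    return json.loads(template)

iam_policy = render(IAM_POLICY)
```

Parsing the rendered document with `json.loads` also catches any stray placeholder or syntax slip before it reaches AWS.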
4 - Copy the new
IAM Role Arn
after creation. This is what is needed to access your AWS database securely from ClickPipes. Let's call this
{RDS_ACCESS_IAM_ROLE_ARN}
.
You can now use this IAM role to authenticate with your RDS/Aurora instance from ClickPipes.
sidebar_label: 'Spark JDBC'
sidebar_position: 3
slug: /integrations/apache-spark/spark-jdbc
description: 'Introduction to Apache Spark with ClickHouse'
keywords: ['clickhouse', 'Apache Spark', 'jdbc', 'migrating', 'data']
title: 'Spark JDBC'
doc_type: 'guide'
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import TOCInline from '@theme/TOCInline';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Spark JDBC
JDBC is one of the most commonly used data sources in Spark.
In this section, we will provide details on how to
use the
ClickHouse official JDBC connector
with Spark.
Read data {#read-data}
```java
public static void main(String[] args) {
// Initialize Spark session
SparkSession spark = SparkSession.builder().appName("example").master("local").getOrCreate();
String jdbcURL = "jdbc:ch://localhost:8123/default";
String query = "select * from example_table where id > 2";
//---------------------------------------------------------------------------------------------------
// Load the table from ClickHouse using jdbc method
//---------------------------------------------------------------------------------------------------
Properties jdbcProperties = new Properties();
jdbcProperties.put("user", "default");
jdbcProperties.put("password", "123456");
Dataset<Row> df1 = spark.read().jdbc(jdbcURL, String.format("(%s)", query), jdbcProperties);
df1.show();
//---------------------------------------------------------------------------------------------------
// Load the table from ClickHouse using load method
//---------------------------------------------------------------------------------------------------
Dataset<Row> df2 = spark.read()
.format("jdbc")
.option("url", jdbcURL)
.option("user", "default")
.option("password", "123456")
.option("query", query)
.load();
df2.show();
// Stop the Spark session
spark.stop();
}
```
```java
object ReadData extends App {
// Initialize Spark session
val spark: SparkSession = SparkSession.builder.appName("example").master("local").getOrCreate
val jdbcURL = "jdbc:ch://localhost:8123/default"
val query: String = "select * from example_table where id > 2"
//---------------------------------------------------------------------------------------------------
// Load the table from ClickHouse using jdbc method
//---------------------------------------------------------------------------------------------------
val connectionProperties = new Properties()
connectionProperties.put("user", "default")
connectionProperties.put("password", "123456")
val df1: Dataset[Row] = spark.read.
jdbc(jdbcURL, s"($query)", connectionProperties)
df1.show()
//---------------------------------------------------------------------------------------------------
// Load the table from ClickHouse using load method
//---------------------------------------------------------------------------------------------------
val df2: Dataset[Row] = spark.read
.format("jdbc")
.option("url", jdbcURL)
.option("user", "default")
.option("password", "123456")
.option("query", query)
.load()
df2.show()
// Stop the Spark session
spark.stop()
}
```
```python
from pyspark.sql import SparkSession
jar_files = [
"jars/clickhouse-jdbc-X.X.X-SNAPSHOT-all.jar"
]
# Initialize Spark session with JARs
spark = SparkSession.builder \
.appName("example") \
.master("local") \
.config("spark.jars", ",".join(jar_files)) \
.getOrCreate()
url = "jdbc:ch://localhost:8123/default"
user = "your_user"
password = "your_password"
query = "select * from example_table where id > 2"
driver = "com.clickhouse.jdbc.ClickHouseDriver"
df = (spark.read
.format('jdbc')
.option('driver', driver)
.option('url', url)
.option('user', user)
.option('password', password).option(
'query', query).load())
df.show()
```
```sql
CREATE TEMPORARY VIEW jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
url "jdbc:ch://localhost:8123/default",
dbtable "schema.tablename",
user "username",
password "password",
driver "com.clickhouse.jdbc.ClickHouseDriver"
);
SELECT * FROM jdbcTable;
```
Write data {#write-data}
```java
public static void main(String[] args) {
// Initialize Spark session
SparkSession spark = SparkSession.builder().appName("example").master("local").getOrCreate();
// JDBC connection details
String jdbcUrl = "jdbc:ch://localhost:8123/default";
Properties jdbcProperties = new Properties();
jdbcProperties.put("user", "default");
jdbcProperties.put("password", "123456");
// Create a sample DataFrame
StructType schema = new StructType(new StructField[]{
DataTypes.createStructField("id", DataTypes.IntegerType, false),
DataTypes.createStructField("name", DataTypes.StringType, false)
});
List<Row> rows = new ArrayList<Row>();
rows.add(RowFactory.create(1, "John"));
rows.add(RowFactory.create(2, "Doe"));
Dataset<Row> df = spark.createDataFrame(rows, schema);
//---------------------------------------------------------------------------------------------------
// Write the df to ClickHouse using the jdbc method
//---------------------------------------------------------------------------------------------------
df.write()
.mode(SaveMode.Append)
.jdbc(jdbcUrl, "example_table", jdbcProperties);
//---------------------------------------------------------------------------------------------------
// Write the df to ClickHouse using the save method
//---------------------------------------------------------------------------------------------------
df.write()
.format("jdbc")
.mode("append")
.option("url", jdbcUrl)
.option("dbtable", "example_table")
.option("user", "default")
.option("password", "123456")
.save();
// Stop the Spark session
spark.stop();
}
```
```java
object WriteData extends App {
val spark: SparkSession = SparkSession.builder.appName("example").master("local").getOrCreate
// JDBC connection details
val jdbcUrl: String = "jdbc:ch://localhost:8123/default"
val jdbcProperties: Properties = new Properties
jdbcProperties.put("user", "default")
jdbcProperties.put("password", "123456")
// Create a sample DataFrame
val rows = Seq(Row(1, "John"), Row(2, "Doe"))
val schema = List(
StructField("id", DataTypes.IntegerType, nullable = false),
StructField("name", StringType, nullable = true)
)
val df: DataFrame = spark.createDataFrame(
spark.sparkContext.parallelize(rows),
StructType(schema)
)
//---------------------------------------------------------------------------------------------------
// Write the df to ClickHouse using the jdbc method
//---------------------------------------------------------------------------------------------------
df.write
.mode(SaveMode.Append)
.jdbc(jdbcUrl, "example_table", jdbcProperties)
//---------------------------------------------------------------------------------------------------
// Write the df to ClickHouse using the save method
//---------------------------------------------------------------------------------------------------
df.write
.format("jdbc")
.mode("append")
.option("url", jdbcUrl)
.option("dbtable", "example_table")
.option("user", "default")
.option("password", "123456")
.save()
// Stop the Spark session
spark.stop()
}
```
```python
from pyspark.sql import SparkSession
from pyspark.sql import Row
jar_files = [
"jars/clickhouse-jdbc-X.X.X-SNAPSHOT-all.jar"
]
# Initialize Spark session with JARs
spark = SparkSession.builder \
.appName("example") \
.master("local") \
.config("spark.jars", ",".join(jar_files)) \
.getOrCreate()
# Create DataFrame
data = [Row(id=11, name="John"), Row(id=12, name="Doe")]
df = spark.createDataFrame(data)
url = "jdbc:ch://localhost:8123/default"
user = "your_user"
password = "your_password"
driver = "com.clickhouse.jdbc.ClickHouseDriver"
# Write DataFrame to ClickHouse
df.write \
.format("jdbc") \
.option("driver", driver) \
.option("url", url) \
.option("user", user) \
.option("password", password) \
.option("dbtable", "example_table") \
.mode("append") \
.save()
```
```sql
CREATE TEMPORARY VIEW jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
url "jdbc:ch://localhost:8123/default",
dbtable "schema.tablename",
user "username",
password "password",
driver "com.clickhouse.jdbc.ClickHouseDriver"
);
-- resultTable could be created with df.createTempView or with Spark SQL
INSERT INTO TABLE jdbcTable
SELECT * FROM resultTable;
```
Parallelism {#parallelism}
When using Spark JDBC, Spark reads the data using a single partition. To achieve higher concurrency, you must specify partitionColumn, lowerBound, upperBound, and numPartitions, which describe how to partition the table when reading in parallel from multiple workers.
Please visit Apache Spark's official documentation for more information on JDBC configurations.
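Conceptually, Spark turns these four options into one stride-based WHERE clause per partition on partitionColumn. The sketch below is a rough emulation of that derivation (it simplifies Spark's internal boundary handling, so treat it as an illustration rather than Spark's exact logic):

```python
def jdbc_partition_predicates(column: str, lower: int, upper: int,
                              num_partitions: int) -> list[str]:
    """Approximate the per-partition WHERE clauses Spark builds for a JDBC read.
    The first partition also picks up NULLs; the last is unbounded above."""
    stride = (upper - lower) // num_partitions
    preds = []
    for i in range(num_partitions):
        lo = lower + i * stride
        if i == 0:
            preds.append(f"{column} < {lo + stride} OR {column} IS NULL")
        elif i == num_partitions - 1:
            preds.append(f"{column} >= {lo}")
        else:
            preds.append(f"{column} >= {lo} AND {column} < {lo + stride}")
    return preds

# With lowerBound=0, upperBound=100, numPartitions=4, each worker scans one range:
for p in jdbc_partition_predicates("id", 0, 100, 4):
    print(p)
```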
JDBC limitations {#jdbc-limitations}
As of today, you can insert data using JDBC only into existing tables (currently there is no way to auto create the
table on DF insertion, as Spark does with other connectors).
sidebar_label: 'Integrating Apache Spark with ClickHouse'
sidebar_position: 1
slug: /integrations/apache-spark
description: 'Introduction to Apache Spark with ClickHouse'
keywords: ['clickhouse', 'Apache Spark', 'migrating', 'data']
title: 'Integrating Apache Spark with ClickHouse'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_ingestion'
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import TOCInline from '@theme/TOCInline';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Integrating Apache Spark with ClickHouse
Apache Spark is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters.
There are two main ways to connect Apache Spark and ClickHouse:
Spark Connector - The Spark connector implements the DataSourceV2 API and has its own catalog management. As of today, this is the recommended way to integrate ClickHouse and Spark.
Spark JDBC - Integrate Spark and ClickHouse using a JDBC data source.
Both solutions have been successfully tested and are fully compatible with various APIs, including Java, Scala, PySpark, and Spark SQL.
sidebar_label: 'Spark Native Connector'
sidebar_position: 2
slug: /integrations/apache-spark/spark-native-connector
description: 'Introduction to Apache Spark with ClickHouse'
keywords: ['clickhouse', 'Apache Spark', 'migrating', 'data']
title: 'Spark Connector'
doc_type: 'guide'
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import TOCInline from '@theme/TOCInline';
Spark connector
This connector leverages ClickHouse-specific optimizations, such as advanced partitioning and predicate pushdown, to
improve query performance and data handling.
The connector is based on ClickHouse's official JDBC connector, and manages its own catalog.
Before Spark 3.0, Spark lacked a built-in catalog concept, so users typically relied on external catalog systems such as Hive Metastore or AWS Glue.
With these external solutions, users had to register their data source tables manually before accessing them in Spark.
However, since Spark 3.0 introduced the catalog concept, Spark can now automatically discover tables by registering catalog plugins.
Spark's default catalog is spark_catalog, and tables are identified by {catalog name}.{database}.{table}. With the new catalog feature, it is now possible to add and work with multiple catalogs in a single Spark application.
Requirements {#requirements}
- Java 8 or 17
- Scala 2.12 or 2.13
- Apache Spark 3.3, 3.4, or 3.5
Compatibility matrix {#compatibility-matrix}
| Version | Compatible Spark Versions | ClickHouse JDBC version |
|---------|---------------------------|-------------------------|
| main | Spark 3.3, 3.4, 3.5 | 0.6.3 |
| 0.8.1 | Spark 3.3, 3.4, 3.5 | 0.6.3 |
| 0.8.0 | Spark 3.3, 3.4, 3.5 | 0.6.3 |
| 0.7.3 | Spark 3.3, 3.4 | 0.4.6 |
| 0.6.0 | Spark 3.3 | 0.3.2-patch11 |
| 0.5.0 | Spark 3.2, 3.3 | 0.3.2-patch11 |
| 0.4.0 | Spark 3.2, 3.3 | Not depend on |
| 0.3.0 | Spark 3.2, 3.3 | Not depend on |
| 0.2.1 | Spark 3.2 | Not depend on |
| 0.1.2 | Spark 3.2 | Not depend on |
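Reading the matrix amounts to a lookup: given a connector version, check whether your Spark version appears in its supported set. A small sketch (the data below is copied from the recent rows of the table; the helper name is ours):

```python
# Compatibility matrix rows, expressed as data (copied from the table above).
COMPAT = {
    "0.8.1": {"spark": {"3.3", "3.4", "3.5"}, "jdbc": "0.6.3"},
    "0.8.0": {"spark": {"3.3", "3.4", "3.5"}, "jdbc": "0.6.3"},
    "0.7.3": {"spark": {"3.3", "3.4"}, "jdbc": "0.4.6"},
    "0.6.0": {"spark": {"3.3"}, "jdbc": "0.3.2-patch11"},
}

def is_compatible(connector: str, spark: str) -> bool:
    """True when the given connector release supports the given Spark minor version."""
    row = COMPAT.get(connector)
    return row is not None and spark in row["spark"]

print(is_compatible("0.8.0", "3.5"))  # True
print(is_compatible("0.7.3", "3.5"))  # False
```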
Installation & setup {#installation--setup}
For integrating ClickHouse with Spark, there are multiple installation options to suit different project setups.
You can add the ClickHouse Spark connector as a dependency directly in your project's build file (such as in pom.xml for Maven or build.sbt for SBT).
Alternatively, you can put the required JAR files in your $SPARK_HOME/jars/ folder, or pass them directly as a Spark option using the --jars flag in the spark-submit command.
Both approaches ensure the ClickHouse connector is available in your Spark environment.
Import as a Dependency {#import-as-a-dependency}
```maven
<dependency>
    <groupId>com.clickhouse.spark</groupId>
    <artifactId>clickhouse-spark-runtime-{{ spark_binary_version }}_{{ scala_binary_version }}</artifactId>
    <version>{{ stable_version }}</version>
</dependency>
<dependency>
    <groupId>com.clickhouse</groupId>
    <artifactId>clickhouse-jdbc</artifactId>
    <classifier>all</classifier>
    <version>{{ clickhouse_jdbc_version }}</version>
    <exclusions>
        <exclusion>
            <groupId>*</groupId>
            <artifactId>*</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```
Add the following repository if you want to use the SNAPSHOT version:
```maven
<repositories>
    <repository>
        <id>sonatype-oss-snapshots</id>
        <name>Sonatype OSS Snapshots Repository</name>
        <url>https://s01.oss.sonatype.org/content/repositories/snapshots</url>
    </repository>
</repositories>
```
```gradle
dependencies {
    implementation("com.clickhouse.spark:clickhouse-spark-runtime-{{ spark_binary_version }}_{{ scala_binary_version }}:{{ stable_version }}")
    implementation("com.clickhouse:clickhouse-jdbc:{{ clickhouse_jdbc_version }}:all") { transitive = false }
}
```
Add the following repository if you want to use the SNAPSHOT version:
```gradle
repositories {
    maven { url = "https://s01.oss.sonatype.org/content/repositories/snapshots" }
}
```
```sbt
libraryDependencies += "com.clickhouse" % "clickhouse-jdbc" % "{{ clickhouse_jdbc_version }}" classifier "all"
libraryDependencies += "com.clickhouse.spark" %% "clickhouse-spark-runtime-{{ spark_binary_version }}_{{ scala_binary_version }}" % "{{ stable_version }}"
```
When working with Spark's shell options (Spark SQL CLI, Spark Shell CLI, and Spark Submit command), the dependencies can be
registered by passing the required jars:
```text
$SPARK_HOME/bin/spark-sql \
  --jars /path/clickhouse-spark-runtime-{{ spark_binary_version }}_{{ scala_binary_version }}-{{ stable_version }}.jar,/path/clickhouse-jdbc-{{ clickhouse_jdbc_version }}-all.jar
```
If you want to avoid copying the JAR files to your Spark client node, you can use the following instead:
```text
  --repositories https://{maven-central-mirror or private-nexus-repo} \
  --packages com.clickhouse.spark:clickhouse-spark-runtime-{{ spark_binary_version }}_{{ scala_binary_version }}:{{ stable_version }},com.clickhouse:clickhouse-jdbc:{{ clickhouse_jdbc_version }}
```
Note: For SQL-only use cases, Apache Kyuubi is recommended for production.
Download the library {#download-the-library}
The name pattern of the binary JAR is:
```bash
clickhouse-spark-runtime-${spark_binary_version}_${scala_binary_version}-${version}.jar
```
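The pattern above can be rendered with a trivial helper (a sketch; the function name is ours, and the versions in the example are hypothetical):

```python
def runtime_jar_name(spark_binary: str, scala_binary: str, version: str) -> str:
    """Build the connector JAR file name from the pattern above."""
    return f"clickhouse-spark-runtime-{spark_binary}_{scala_binary}-{version}.jar"

print(runtime_jar_name("3.5", "2.12", "0.8.0"))
# clickhouse-spark-runtime-3.5_2.12-0.8.0.jar
```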
You can find all available released JAR files in the Maven Central Repository, and all daily build SNAPSHOT JAR files in the Sonatype OSS Snapshots Repository.
:::important
It's essential to include the clickhouse-jdbc JAR with the "all" classifier, as the connector relies on clickhouse-http and clickhouse-client, both of which are bundled in clickhouse-jdbc:all.
Alternatively, you can add the clickhouse-client JAR and clickhouse-http individually if you prefer not to use the full JDBC package.
In any case, ensure that the package versions are compatible according to the Compatibility Matrix.
:::
Register the catalog (required) {#register-the-catalog-required}
In order to access your ClickHouse tables, you must configure a new Spark catalog with the following configs:
| Property                                     | Value                                    | Default Value  | Required |
|----------------------------------------------|------------------------------------------|----------------|----------|
| spark.sql.catalog.<catalog_name>             | com.clickhouse.spark.ClickHouseCatalog   | N/A            | Yes      |
| spark.sql.catalog.<catalog_name>.host        | <clickhouse_host>                        | localhost      | No       |
| spark.sql.catalog.<catalog_name>.protocol    | http                                     | http           | No       |
| spark.sql.catalog.<catalog_name>.http_port   | <clickhouse_port>                        | 8123           | No       |
| spark.sql.catalog.<catalog_name>.user        | <clickhouse_username>                    | default        | No       |
| spark.sql.catalog.<catalog_name>.password    | <clickhouse_password>                    | (empty string) | No       |
| spark.sql.catalog.<catalog_name>.database    | <database>                               | default        | No       |
| spark.<catalog_name>.write.format            | json                                     | arrow          | No       |
These settings could be set via one of the following:
- Edit/Create spark-defaults.conf.
- Pass the configuration to your spark-submit command (or to your spark-shell/spark-sql CLI commands).
- Add the configuration when initiating your context.
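Since every entry in the table shares the `spark.sql.catalog.<catalog_name>` prefix, generating the key/value pairs for one catalog is straightforward. A sketch (the keys come from the table above; host and credentials here are placeholders, and the helper name is ours):

```python
def clickhouse_catalog_conf(name: str, host: str, port: int = 8123,
                            user: str = "default", password: str = "",
                            database: str = "default", protocol: str = "http") -> dict:
    """Return the Spark conf entries from the table above for one catalog name."""
    prefix = f"spark.sql.catalog.{name}"
    return {
        prefix: "com.clickhouse.spark.ClickHouseCatalog",
        f"{prefix}.host": host,
        f"{prefix}.protocol": protocol,
        f"{prefix}.http_port": str(port),
        f"{prefix}.user": user,
        f"{prefix}.password": password,
        f"{prefix}.database": database,
    }

# Example: entries you could feed into spark-defaults.conf or SparkSession configs.
for key, value in clickhouse_catalog_conf("clickhouse", "127.0.0.1").items():
    print(key, value)
```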
:::important
When working with a ClickHouse cluster, you need to set a unique catalog name for each instance.
For example:
```text
spark.sql.catalog.clickhouse1 com.clickhouse.spark.ClickHouseCatalog
spark.sql.catalog.clickhouse1.host 10.0.0.1
spark.sql.catalog.clickhouse1.protocol https
spark.sql.catalog.clickhouse1.http_port 8443
spark.sql.catalog.clickhouse1.user default
spark.sql.catalog.clickhouse1.password
spark.sql.catalog.clickhouse1.database default
spark.sql.catalog.clickhouse1.option.ssl true

spark.sql.catalog.clickhouse2 com.clickhouse.spark.ClickHouseCatalog
spark.sql.catalog.clickhouse2.host 10.0.0.2
spark.sql.catalog.clickhouse2.protocol https
spark.sql.catalog.clickhouse2.http_port 8443
spark.sql.catalog.clickhouse2.user default
spark.sql.catalog.clickhouse2.password
spark.sql.catalog.clickhouse2.database default
spark.sql.catalog.clickhouse2.option.ssl true
```
That way, you would be able to access the clickhouse1 table <ck_db>.<ck_table> from Spark SQL via clickhouse1.<ck_db>.<ck_table>, and the clickhouse2 table <ck_db>.<ck_table> via clickhouse2.<ck_db>.<ck_table>.
:::
ClickHouse Cloud settings {#clickhouse-cloud-settings}
When connecting to ClickHouse Cloud, make sure to enable SSL and set the appropriate SSL mode. For example:
```text
spark.sql.catalog.clickhouse.option.ssl true
spark.sql.catalog.clickhouse.option.ssl_mode NONE
```
Read data {#read-data}
```java
public static void main(String[] args) {
// Create a Spark session
SparkSession spark = SparkSession.builder()
.appName("example")
.master("local[*]")
.config("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog")
.config("spark.sql.catalog.clickhouse.host", "127.0.0.1")
.config("spark.sql.catalog.clickhouse.protocol", "http")
.config("spark.sql.catalog.clickhouse.http_port", "8123")
.config("spark.sql.catalog.clickhouse.user", "default")
.config("spark.sql.catalog.clickhouse.password", "123456")
.config("spark.sql.catalog.clickhouse.database", "default")
.config("spark.clickhouse.write.format", "json")
.getOrCreate();
Dataset<Row> df = spark.sql("select * from clickhouse.default.example_table");
df.show();
spark.stop();
}
```
```java
object NativeSparkRead extends App {
val spark = SparkSession.builder
.appName("example")
.master("local[*]")
.config("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog")
.config("spark.sql.catalog.clickhouse.host", "127.0.0.1")
.config("spark.sql.catalog.clickhouse.protocol", "http")
.config("spark.sql.catalog.clickhouse.http_port", "8123")
.config("spark.sql.catalog.clickhouse.user", "default")
.config("spark.sql.catalog.clickhouse.password", "123456")
.config("spark.sql.catalog.clickhouse.database", "default")
.config("spark.clickhouse.write.format", "json")
.getOrCreate
val df = spark.sql("select * from clickhouse.default.example_table")
df.show()
spark.stop()
}
```
```python
from pyspark.sql import SparkSession
packages = [
"com.clickhouse.spark:clickhouse-spark-runtime-3.4_2.12:0.8.0",
"com.clickhouse:clickhouse-client:0.7.0",
"com.clickhouse:clickhouse-http-client:0.7.0",
"org.apache.httpcomponents.client5:httpclient5:5.2.1"
]
spark = (SparkSession.builder
.config("spark.jars.packages", ",".join(packages))
.getOrCreate())
spark.conf.set("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog")
spark.conf.set("spark.sql.catalog.clickhouse.host", "127.0.0.1")
spark.conf.set("spark.sql.catalog.clickhouse.protocol", "http")
spark.conf.set("spark.sql.catalog.clickhouse.http_port", "8123")
spark.conf.set("spark.sql.catalog.clickhouse.user", "default")
spark.conf.set("spark.sql.catalog.clickhouse.password", "123456")
spark.conf.set("spark.sql.catalog.clickhouse.database", "default")
spark.conf.set("spark.clickhouse.write.format", "json")
df = spark.sql("select * from clickhouse.default.example_table")
df.show()
```
```sql
CREATE TEMPORARY VIEW jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
url "jdbc:ch://localhost:8123/default",
dbtable "schema.tablename",
user "username",
password "password",
driver "com.clickhouse.jdbc.ClickHouseDriver"
);
SELECT * FROM jdbcTable;
```
Write data {#write-data}
```java
public static void main(String[] args) throws AnalysisException {
// Create a Spark session
SparkSession spark = SparkSession.builder()
.appName("example")
.master("local[*]")
.config("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog")
.config("spark.sql.catalog.clickhouse.host", "127.0.0.1")
.config("spark.sql.catalog.clickhouse.protocol", "http")
.config("spark.sql.catalog.clickhouse.http_port", "8123")
.config("spark.sql.catalog.clickhouse.user", "default")
.config("spark.sql.catalog.clickhouse.password", "123456")
.config("spark.sql.catalog.clickhouse.database", "default")
.config("spark.clickhouse.write.format", "json")
.getOrCreate();
// Define the schema for the DataFrame
StructType schema = new StructType(new StructField[]{
DataTypes.createStructField("id", DataTypes.IntegerType, false),
DataTypes.createStructField("name", DataTypes.StringType, false),
});
List<Row> data = Arrays.asList(
RowFactory.create(1, "Alice"),
RowFactory.create(2, "Bob")
);
// Create a DataFrame
Dataset<Row> df = spark.createDataFrame(data, schema);
df.writeTo("clickhouse.default.example_table").append();
spark.stop();
}
```
```java
object NativeSparkWrite extends App {
// Create a Spark session
val spark: SparkSession = SparkSession.builder
.appName("example")
.master("local[*]")
.config("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog")
.config("spark.sql.catalog.clickhouse.host", "127.0.0.1")
.config("spark.sql.catalog.clickhouse.protocol", "http")
.config("spark.sql.catalog.clickhouse.http_port", "8123")
.config("spark.sql.catalog.clickhouse.user", "default")
.config("spark.sql.catalog.clickhouse.password", "123456")
.config("spark.sql.catalog.clickhouse.database", "default")
.config("spark.clickhouse.write.format", "json")
.getOrCreate
// Define the schema for the DataFrame
val rows = Seq(Row(1, "John"), Row(2, "Doe"))
val schema = List(
StructField("id", DataTypes.IntegerType, nullable = false),
StructField("name", StringType, nullable = true)
)
// Create the df
val df: DataFrame = spark.createDataFrame(
spark.sparkContext.parallelize(rows),
StructType(schema)
)
df.writeTo("clickhouse.default.example_table").append()
spark.stop()
}
```
```python
from pyspark.sql import SparkSession
from pyspark.sql import Row
# Feel free to use any other package combination satisfying the compatibility matrix provided above.
packages = [
"com.clickhouse.spark:clickhouse-spark-runtime-3.4_2.12:0.8.0",
"com.clickhouse:clickhouse-client:0.7.0",
"com.clickhouse:clickhouse-http-client:0.7.0",
"org.apache.httpcomponents.client5:httpclient5:5.2.1"
]
spark = (SparkSession.builder
.config("spark.jars.packages", ",".join(packages))
.getOrCreate())
spark.conf.set("spark.sql.catalog.clickhouse", "com.clickhouse.spark.ClickHouseCatalog")
spark.conf.set("spark.sql.catalog.clickhouse.host", "127.0.0.1")
spark.conf.set("spark.sql.catalog.clickhouse.protocol", "http")
spark.conf.set("spark.sql.catalog.clickhouse.http_port", "8123")
spark.conf.set("spark.sql.catalog.clickhouse.user", "default")
spark.conf.set("spark.sql.catalog.clickhouse.password", "123456")
spark.conf.set("spark.sql.catalog.clickhouse.database", "default")
spark.conf.set("spark.clickhouse.write.format", "json")
# Create DataFrame
data = [Row(id=11, name="John"), Row(id=12, name="Doe")]
df = spark.createDataFrame(data)
# Write DataFrame to ClickHouse
df.writeTo("clickhouse.default.example_table").append()
```
```sql
-- resultTable is the Spark intermediate df we want to insert into clickhouse.default.example_table
INSERT INTO TABLE clickhouse.default.example_table
SELECT * FROM resultTable;
```
DDL operations {#ddl-operations}
You can perform DDL operations on your ClickHouse instance using Spark SQL, with all changes immediately persisted in ClickHouse.
Spark SQL allows you to write queries exactly as you would in ClickHouse, so you can directly execute commands such as CREATE TABLE, TRUNCATE, and more without modification, for instance:
:::note
When using Spark SQL, only one statement can be executed at a time.
:::
```sql
USE clickhouse;
```
```sql
CREATE TABLE test_db.tbl_sql (
create_time TIMESTAMP NOT NULL,
m INT NOT NULL COMMENT 'part key',
id BIGINT NOT NULL COMMENT 'sort key',
value STRING
) USING ClickHouse
PARTITIONED BY (m)
TBLPROPERTIES (
engine = 'MergeTree()',
order_by = 'id',
settings.index_granularity = 8192
);
```
The above examples demonstrate Spark SQL queries, which you can run within your application using any API: Java, Scala, PySpark, or shell.
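The TBLPROPERTIES in the example above map directly onto ClickHouse engine clauses. The following sketch is illustrative only (the connector performs this translation internally; the helper name and exact output shape are ours):

```python
def clickhouse_engine_clause(props: dict) -> str:
    """Show how engine/order_by/settings.* table properties correspond to
    a ClickHouse engine clause. Purely illustrative."""
    parts = [f"ENGINE = {props['engine']}"]
    if "order_by" in props:
        parts.append(f"ORDER BY {props['order_by']}")
    # Collect settings.* keys into a single SETTINGS clause.
    settings = {k.split(".", 1)[1]: v for k, v in props.items()
                if k.startswith("settings.")}
    if settings:
        parts.append("SETTINGS " + ", ".join(f"{k} = {v}" for k, v in settings.items()))
    return "\n".join(parts)

print(clickhouse_engine_clause({
    "engine": "MergeTree()",
    "order_by": "id",
    "settings.index_granularity": 8192,
}))
```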
Configurations {#configurations}
The following are the adjustable configurations available in the connector:
| Key                                                | Default                                                | Description | Since |
|----------------------------------------------------|--------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|
| spark.clickhouse.ignoreUnsupportedTransform        | false                                                  | ClickHouse supports using complex expressions as sharding keys or partition values, e.g. cityHash64(col_1, col_2), which are currently not supported by Spark. If true, ignore the unsupported expressions, otherwise fail fast w/ an exception. Note, when spark.clickhouse.write.distributed.convertLocal is enabled, ignoring unsupported sharding keys may corrupt the data. | 0.4.0 |
| spark.clickhouse.read.compression.codec | lz4 | The codec used to decompress data for reading. Supported codecs: none, lz4. | 0.5.0 |
| spark.clickhouse.read.distributed.convertLocal | true | When reading Distributed table, read local table instead of itself. If
true
, ignore
spark.clickhouse.read.distributed.useClusterNodes
. | 0.1.0 | | {"source_file": "spark-native-connector.md"} | [
| spark.clickhouse.read.fixedStringAs | binary | Read ClickHouse FixedString type as the specified Spark data type. Supported types: binary, string | 0.8.0 |
| spark.clickhouse.read.format | json | Serialize format for reading. Supported formats: json, binary | 0.6.0 |
| spark.clickhouse.read.runtimeFilter.enabled | false | Enable runtime filter for reading. | 0.8.0 |
| spark.clickhouse.read.splitByPartitionId | true | If `true`, construct the input partition filter by the virtual column `_partition_id` instead of the partition value. There are known issues with assembling SQL predicates by partition value. This feature requires ClickHouse Server v21.6+ | 0.4.0 |
| spark.clickhouse.useNullableQuerySchema | false | If `true`, mark all the fields of the query schema as nullable when executing `CREATE/REPLACE TABLE ... AS SELECT ...` on creating the table. Note, this configuration requires SPARK-43390 (available in Spark 3.5); without this patch, it always acts as `true`. | 0.8.0 |
| spark.clickhouse.write.batchSize | 10000 | The number of records per batch on writing to ClickHouse. | 0.1.0 |
| spark.clickhouse.write.compression.codec | lz4 | The codec used to compress data for writing. Supported codecs: none, lz4. | 0.3.0 |
| spark.clickhouse.write.distributed.convertLocal | false | When writing a Distributed table, write to the local table instead of the Distributed table itself. If `true`, ignore `spark.clickhouse.write.distributed.useClusterNodes`. | 0.1.0 |
| spark.clickhouse.write.distributed.useClusterNodes | true | Write to all nodes of cluster when writing Distributed table. | 0.1.0 |
| spark.clickhouse.write.format | arrow | Serialize format for writing. Supported formats: json, arrow | 0.4.0 |
| spark.clickhouse.write.localSortByKey | true | If `true`, do a local sort by sort keys before writing. | 0.3.0 |
| spark.clickhouse.write.localSortByPartition | value of spark.clickhouse.write.repartitionByPartition | If `true`, do a local sort by partition before writing. If not set, it equals `spark.clickhouse.write.repartitionByPartition`. | 0.3.0 |
| spark.clickhouse.write.maxRetry | 3 | The maximum number of retries for a single batch write that failed with retryable codes. | 0.1.0 |
| spark.clickhouse.write.repartitionByPartition | true | Whether to repartition data by ClickHouse partition keys to meet the distributions of ClickHouse table before writing. | 0.3.0 |
| spark.clickhouse.write.repartitionNum | 0 | Repartitioning data to meet the distribution of the ClickHouse table is required before writing; use this conf to specify the repartition number. A value less than 1 means no requirement. | 0.1.0 |
| spark.clickhouse.write.repartitionStrictly | false | If `true`, Spark will strictly distribute incoming records across partitions to satisfy the required distribution before passing the records to the data source table on write. Otherwise, Spark may apply certain optimizations to speed up the query but break the distribution requirement. Note, this configuration requires SPARK-37523 (available in Spark 3.4); without this patch, it always acts as `true`. | 0.3.0 |
| spark.clickhouse.write.retryInterval | 10s | The interval in seconds between write retries. | 0.1.0 |
| spark.clickhouse.write.retryableErrorCodes | 241 | The retryable error codes returned by the ClickHouse server when a write fails. | 0.1.0 |
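The options above are ordinary Spark configuration keys, so they can be passed like any other Spark conf. A minimal sketch of collecting a few of them and rendering `spark-submit --conf` flags; the chosen values and the helper function are illustrative, not part of the connector:

```python
# Sketch: rendering connector options (keys from the table above) as
# spark-submit --conf flags. The values and helper are illustrative.
def to_submit_args(conf):
    """Render a dict of Spark options as sorted --conf flags."""
    return [f"--conf {key}={value}" for key, value in sorted(conf.items())]

conf = {
    "spark.clickhouse.write.batchSize": "20000",
    "spark.clickhouse.write.compression.codec": "lz4",
    "spark.clickhouse.read.format": "json",
}
for flag in to_submit_args(conf):
    print(flag)
```

The same keys can equally be set via `SparkConf` or `spark-defaults.conf`.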
Supported data types {#supported-data-types}
This section outlines the mapping of data types between Spark and ClickHouse. The tables below provide quick references
for converting data types when reading from ClickHouse into Spark and when inserting data from Spark into ClickHouse.
Reading data from ClickHouse into Spark {#reading-data-from-clickhouse-into-spark}
| ClickHouse Data Type | Spark Data Type | Supported | Is Primitive | Notes |
|----------------------------------------------------------|--------------------------------|-----------|--------------|----------------------------------------------------|
| Nothing | NullType | ✅ | Yes | |
| Bool | BooleanType | ✅ | Yes | |
| UInt8, Int16 | ShortType | ✅ | Yes | |
| Int8 | ByteType | ✅ | Yes | |
| UInt16, Int32 | IntegerType | ✅ | Yes | |
| UInt32, Int64, UInt64 | LongType | ✅ | Yes | |
| Int128, UInt128, Int256, UInt256 | DecimalType(38, 0) | ✅ | Yes | |
| Float32 | FloatType | ✅ | Yes | |
| Float64 | DoubleType | ✅ | Yes | |
| String, JSON, UUID, Enum8, Enum16, IPv4, IPv6 | StringType | ✅ | Yes | |
| FixedString | BinaryType, StringType | ✅ | Yes | Controlled by configuration READ_FIXED_STRING_AS |
| Decimal | DecimalType | ✅ | Yes | Precision and scale up to Decimal128 |
| Decimal32 | DecimalType(9, scale) | ✅ | Yes | |
| Decimal64 | DecimalType(18, scale) | ✅ | Yes | |
| Decimal128 | DecimalType(38, scale) | ✅ | Yes | |
| Date, Date32 | DateType | ✅ | Yes | |
| DateTime, DateTime32, DateTime64 | TimestampType | ✅ | Yes | |
| Array | ArrayType | ✅ | No | Array element type is also converted |
| Map | MapType | ✅ | No | Keys are limited to StringType |
| IntervalYear | YearMonthIntervalType(Year) | ✅ | Yes | |
| IntervalMonth | YearMonthIntervalType(Month) | ✅ | Yes | |
| IntervalDay, IntervalHour, IntervalMinute, IntervalSecond | DayTimeIntervalType | ✅ | No | Specific interval type is used |
| Object | | ❌ | | |
| Nested | | ❌ | | |
| Tuple | | ❌ | | |
| Point | | ❌ | | |
| Polygon | | ❌ | | |
| MultiPolygon | | ❌ | | |
| Ring | | ❌ | | |
| IntervalQuarter | | ❌ | | |
| IntervalWeek | | ❌ | | |
| Decimal256 | | ❌ | | |
| AggregateFunction | | ❌ | | |
| SimpleAggregateFunction | | ❌ | | |
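For quick programmatic checks, the primitive part of this read-side mapping can be expressed as a plain lookup table. The dict below is transcribed from the table above for illustration; it is not the connector's internal code:

```python
# Illustrative mapping of ClickHouse primitive types to Spark SQL type
# names, transcribed from the read-side table above.
CLICKHOUSE_TO_SPARK = {
    "Nothing": "NullType",
    "Bool": "BooleanType",
    "UInt8": "ShortType", "Int16": "ShortType",
    "Int8": "ByteType",
    "UInt16": "IntegerType", "Int32": "IntegerType",
    "UInt32": "LongType", "Int64": "LongType", "UInt64": "LongType",
    "Float32": "FloatType",
    "Float64": "DoubleType",
    "String": "StringType", "UUID": "StringType", "IPv4": "StringType",
    "Date": "DateType", "Date32": "DateType",
    "DateTime": "TimestampType", "DateTime64": "TimestampType",
}

def spark_type(clickhouse_type):
    """Return the Spark type name, or None for unsupported/unlisted types."""
    return CLICKHOUSE_TO_SPARK.get(clickhouse_type)

print(spark_type("UInt64"))  # LongType
print(spark_type("Point"))   # None (unsupported)
```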
Inserting data from Spark into ClickHouse {#inserting-data-from-spark-into-clickhouse}
| Spark Data Type | ClickHouse Data Type | Supported | Is Primitive | Notes |
|-------------------------------------|----------------------|-----------|--------------|----------------------------------------|
| BooleanType | UInt8 | ✅ | Yes | |
| ByteType | Int8 | ✅ | Yes | |
| ShortType | Int16 | ✅ | Yes | |
| IntegerType | Int32 | ✅ | Yes | |
| LongType | Int64 | ✅ | Yes | |
| FloatType | Float32 | ✅ | Yes | |
| DoubleType | Float64 | ✅ | Yes | |
| StringType | String | ✅ | Yes | |
| VarcharType | String | ✅ | Yes | |
| CharType | String | ✅ | Yes | |
| DecimalType | Decimal(p, s) | ✅ | Yes | Precision and scale up to Decimal128 |
| DateType | Date | ✅ | Yes | |
| TimestampType | DateTime | ✅ | Yes | |
| ArrayType (list, tuple, or array) | Array | ✅ | No | Array element type is also converted |
| MapType | Map | ✅ | No | Keys are limited to StringType |
| Object | | ❌ | | |
| Nested | | ❌ | | |
Contributing and support {#contributing-and-support}
If you'd like to contribute to the project or report any issues, we welcome your input!
Visit our GitHub repository to open an issue, suggest improvements, or submit a pull request.
Contributions are welcome! Please check the contribution guidelines in the repository before starting.
Thank you for helping improve our ClickHouse Spark connector!
sidebar_label: 'dlt'
keywords: ['clickhouse', 'dlt', 'connect', 'integrate', 'etl', 'data integration']
description: 'Load data into Clickhouse using dlt integration'
title: 'Connect dlt to ClickHouse'
slug: /integrations/data-ingestion/etl-tools/dlt-and-clickhouse
doc_type: 'guide'
import PartnerBadge from '@theme/badges/PartnerBadge';
Connect dlt to ClickHouse
dlt is an open-source library that you can add to your Python scripts to load data from various and often messy data sources into well-structured, live datasets.
Install dlt with ClickHouse {#install-dlt-with-clickhouse}
To install the dlt library with ClickHouse dependencies: {#to-install-the-dlt-library-with-clickhouse-dependencies}
```bash
pip install "dlt[clickhouse]"
```
Setup guide {#setup-guide}
Initialize the dlt Project {#1-initialize-the-dlt-project}
Start by initializing a new dlt project as follows:
```bash
dlt init chess clickhouse
```
:::note
This command will initialize your pipeline with chess as the source and ClickHouse as the destination.
:::
The above command generates several files and directories, including .dlt/secrets.toml and a requirements file for ClickHouse. You can install the necessary dependencies specified in the requirements file by running:
```bash
pip install -r requirements.txt
```
or with pip install dlt[clickhouse], which installs the dlt library and the necessary dependencies for working with ClickHouse as a destination.
Setup ClickHouse Database {#2-setup-clickhouse-database}
To load data into ClickHouse, you need to create a ClickHouse database. Here's a rough outline of what you should do:
You can use an existing ClickHouse database or create a new one. To create a new database, connect to your ClickHouse server using the clickhouse-client command line tool or a SQL client of your choice.
Run the following SQL commands to create a new database, user and grant the necessary permissions:
```sql
CREATE DATABASE IF NOT EXISTS dlt;
CREATE USER dlt IDENTIFIED WITH sha256_password BY 'Dlt*12345789234567';
GRANT CREATE, ALTER, SELECT, DELETE, DROP, TRUNCATE, OPTIMIZE, SHOW, INSERT, dictGet ON dlt.* TO dlt;
GRANT SELECT ON INFORMATION_SCHEMA.COLUMNS TO dlt;
GRANT CREATE TEMPORARY TABLE, S3 ON *.* TO dlt;
```
Add credentials {#3-add-credentials}
Next, set up the ClickHouse credentials in the .dlt/secrets.toml file as shown below: | {"source_file": "dlt-and-clickhouse.md"}
```bash
[destination.clickhouse.credentials]
database = "dlt" # The database name you created
username = "dlt" # ClickHouse username, default is usually "default"
password = "Dlt*12345789234567" # ClickHouse password if any
host = "localhost" # ClickHouse server host
port = 9000 # ClickHouse native TCP port, default is 9000
http_port = 8443 # HTTP Port to connect to ClickHouse server's HTTP interface. Defaults to 8443.
secure = 1 # Set to 1 if using HTTPS, else 0.
[destination.clickhouse]
dataset_table_separator = "___" # Separator for dataset table names from dataset.
```
:::note HTTP_PORT
The http_port parameter specifies the port number to use when connecting to the ClickHouse server's HTTP interface. This is different from the default port 9000, which is used for the native TCP protocol.
You must set http_port if you are not using external staging (i.e. you don't set the staging parameter in your pipeline). This is because the built-in ClickHouse local storage staging uses the clickhouse-connect library, which communicates with ClickHouse over HTTP.
Make sure your ClickHouse server is configured to accept HTTP connections on the port specified by http_port. For example, if you set http_port = 8443, then ClickHouse should be listening for HTTP requests on port 8443. If you are using external staging, you can omit the http_port parameter, since clickhouse-connect will not be used in this case.
:::
You can pass a database connection string similar to the one used by the clickhouse-driver library. The credentials above will look like this:
```bash
# keep it at the top of your toml file, before any section starts.
destination.clickhouse.credentials="clickhouse://dlt:Dlt*12345789234567@localhost:9000/dlt?secure=1"
```
Write disposition {#write-disposition}
All write dispositions are supported.
Write dispositions in the dlt library define how the data should be written to the destination. There are three types of write dispositions:
Replace: This disposition replaces the data in the destination with the data from the resource. It deletes all the classes and objects and recreates the schema before loading the data. You can learn more about it here.
Merge: This write disposition merges the data from the resource with the data at the destination. For merge disposition, you would need to specify a primary_key for the resource. You can learn more about it here.
Append: This is the default disposition. It will append the data to the existing data in the destination, ignoring the primary_key field.
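The semantics of the three dispositions can be made concrete with a plain-Python sketch over an in-memory "table" (a list of dicts). This illustrates the behavior only; dlt's real implementation operates on the destination database:

```python
# Plain-Python sketch of replace / merge / append semantics over an
# in-memory "table". Illustrative only, not dlt's API.
def load(table, rows, disposition="append", primary_key=None):
    if disposition == "replace":
        return list(rows)                   # drop existing data entirely
    if disposition == "merge":
        if primary_key is None:
            raise ValueError("merge requires a primary_key")
        merged = {row[primary_key]: row for row in table}
        for row in rows:
            merged[row[primary_key]] = row  # upsert by primary key
        return list(merged.values())
    return list(table) + list(rows)         # append (the default)

dest = [{"id": 1, "v": "old"}]
print(load(dest, [{"id": 1, "v": "new"}], "merge", "id"))
```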
Data loading {#data-loading}
Data is loaded into ClickHouse using the most efficient method depending on the data source:
For local files, the clickhouse-connect library is used to directly load files into ClickHouse tables using the INSERT command.
For files in remote storage like S3, Google Cloud Storage, or Azure Blob Storage, ClickHouse table functions like s3, gcs and azureBlobStorage are used to read the files and insert the data into tables.
Datasets {#datasets}
ClickHouse does not support multiple datasets in one database, whereas dlt relies on datasets for multiple reasons. In order to make ClickHouse work with dlt, tables generated by dlt in your ClickHouse database will have their names prefixed with the dataset name, separated by the configurable dataset_table_separator. Additionally, a special sentinel table that does not contain any data will be created, allowing dlt to recognize which virtual datasets already exist in a ClickHouse destination.
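The naming convention can be illustrated with a one-line helper (hypothetical code showing the scheme, not dlt's internals):

```python
# Hypothetical helper showing how a dataset name, the configurable
# separator, and a table name combine into the physical ClickHouse name.
def physical_table_name(dataset, table, separator="___"):
    return f"{dataset}{separator}{table}"

print(physical_table_name("chess_data", "players"))  # chess_data___players
```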
Supported file formats {#supported-file-formats}
jsonl is the preferred format for both direct loading and staging.
parquet is supported for both direct loading and staging.
The clickhouse destination has a few specific deviations from the default sql destinations:
ClickHouse has an experimental object datatype, but we have found it to be a bit unpredictable, so the dlt clickhouse destination will load the complex datatype to a text column. If you need this feature, get in touch with our Slack community, and we will consider adding it.
ClickHouse does not support the time datatype. Time will be loaded to a text column.
ClickHouse does not support the binary datatype. Instead, binary data will be loaded into a text column. When loading from jsonl, the binary data will be a base64 string, and when loading from parquet, the binary object will be converted to text.
ClickHouse accepts adding columns to a populated table that are not null.
ClickHouse can produce rounding errors under certain conditions when using the float or double datatype. If you cannot afford rounding errors, make sure to use the decimal datatype. For example, loading the value 12.7001 into a double column with the loader file format set to jsonl will predictably produce a rounding error.
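The rounding caveat is easy to reproduce in plain Python, since the same IEEE-754 doubles are involved:

```python
# 12.7001 has no exact binary (double) representation, which is the source
# of the rounding error mentioned above; Decimal stores it exactly.
from decimal import Decimal

exact = Decimal("12.7001")      # exact decimal value
via_double = Decimal(12.7001)   # the value actually stored in a double
print(exact == via_double)      # False: the double is slightly off
print(via_double - exact)       # the tiny rounding error
```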
Supported column hints {#supported-column-hints}
ClickHouse supports the following column hints:
primary_key - marks the column as part of the primary key. Multiple columns can have this hint to create a composite primary key.
Table engine {#table-engine}
By default, tables are created using the
ReplicatedMergeTree
table engine in ClickHouse. You can specify an alternate table engine using the
table_engine_type
with the clickhouse adapter: | {"source_file": "dlt-and-clickhouse.md"} | [
```python
import dlt
from dlt.destinations.adapters import clickhouse_adapter

@dlt.resource()
def my_resource():
    ...

clickhouse_adapter(my_resource, table_engine_type="merge_tree")
```
Supported values are:
merge_tree - creates tables using the MergeTree engine
replicated_merge_tree (default) - creates tables using the ReplicatedMergeTree engine
Staging support {#staging-support}
ClickHouse supports Amazon S3, Google Cloud Storage and Azure Blob Storage as file staging destinations. dlt will upload Parquet or jsonl files to the staging location and use ClickHouse table functions to load the data directly from the staged files.
Please refer to the filesystem documentation to learn how to configure credentials for the staging destinations:
Amazon S3
Google Cloud Storage
Azure Blob Storage
To run a pipeline with staging enabled:
```python
pipeline = dlt.pipeline(
    pipeline_name='chess_pipeline',
    destination='clickhouse',
    staging='filesystem',  # add this to activate staging
    dataset_name='chess_data'
)
```
Using Google Cloud Storage as a staging area {#using-google-cloud-storage-as-a-staging-area}
dlt supports using Google Cloud Storage (GCS) as a staging area when loading data into ClickHouse. This is handled automatically by ClickHouse's GCS table function, which dlt uses under the hood.
The clickhouse GCS table function only supports authentication using Hash-based Message Authentication Code (HMAC) keys. To enable this, GCS provides an S3 compatibility mode that emulates the Amazon S3 API. ClickHouse takes advantage of this to allow accessing GCS buckets via its S3 integration.
To set up GCS staging with HMAC authentication in dlt:
Create HMAC keys for your GCS service account by following the Google Cloud guide.
Configure the HMAC keys as well as the client_email, project_id and private_key for your service account in your dlt project's ClickHouse destination settings in config.toml:
```toml
[destination.filesystem]
bucket_url = "gs://dlt-ci"

[destination.filesystem.credentials]
project_id = "a-cool-project"
client_email = "my-service-account@a-cool-project.iam.gserviceaccount.com"
private_key = "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkaslkdjflasjnkdcopauihj...wEiEx7y+mx\nNffxQBqVVej2n/D93xY99pM=\n-----END PRIVATE KEY-----\n"

[destination.clickhouse.credentials]
database = "dlt"
username = "dlt"
password = "Dlt*12345789234567"
host = "localhost"
port = 9440
secure = 1
gcp_access_key_id = "JFJ$$f2058024835jFffsadf"
gcp_secret_access_key = "DFJdwslf2hf57)%$02jaflsedjfasoi"
```
Note: In addition to the HMAC keys (gcp_access_key_id and gcp_secret_access_key), you now need to provide the client_email, project_id and private_key for your service account under [destination.filesystem.credentials]. This is because the GCS staging support is currently implemented as a temporary workaround and is still unoptimized.
dlt will pass these credentials to ClickHouse, which will handle the authentication and GCS access.
There is active work in progress to simplify and improve the GCS staging setup for the ClickHouse dlt destination in the future. Proper GCS staging support is being tracked in these GitHub issues:
Make filesystem destination work with gcs in s3 compatibility mode
Google Cloud Storage staging area support
Dbt support {#dbt-support}
Integration with dbt is generally supported via dbt-clickhouse.
Syncing of dlt state {#syncing-of-dlt-state}
This destination fully supports dlt state sync.
sidebar_label: 'Vector'
sidebar_position: 220
slug: /integrations/vector
description: 'How to tail a log file into ClickHouse using Vector'
title: 'Integrating Vector with ClickHouse'
show_related_blogs: true
doc_type: 'guide'
integration:
- support_level: 'partner'
- category: 'data_ingestion'
- website: 'https://vector.dev/'
keywords: ['vector', 'log collection', 'observability', 'data ingestion', 'pipeline']
import Image from '@theme/IdealImage';
import vector01 from '@site/static/images/integrations/data-ingestion/etl-tools/vector_01.png';
import vector02 from '@site/static/images/integrations/data-ingestion/etl-tools/vector_02.png';
import PartnerBadge from '@theme/badges/PartnerBadge';
Integrating Vector with ClickHouse
Being able to analyze your logs in real time is critical for production applications.
ClickHouse excels at storing and analyzing log data due to its excellent compression (up to 170x for logs) and ability to aggregate large amounts of data quickly.
This guide shows you how to use the popular data pipeline Vector to tail an Nginx log file and send it to ClickHouse. The steps below are similar for tailing any type of log file.
Prerequisites:
- You already have ClickHouse up and running
- You have Vector installed
Create a database and table {#1-create-a-database-and-table}
Define a table to store the log events:
Begin with a new database named nginxdb:
```sql
CREATE DATABASE IF NOT EXISTS nginxdb
```
Insert the entire log event as a single string. Obviously this is not a great format for performing analytics on the log data, but we will figure that part out below using materialized views.
```sql
CREATE TABLE IF NOT EXISTS nginxdb.access_logs (
    message String
)
ENGINE = MergeTree()
ORDER BY tuple()
```
:::note
ORDER BY is set to tuple() (an empty tuple) as there is no need for a primary key yet.
:::
Configure Nginx {#2--configure-nginx}
This step shows you how to configure Nginx logging.
The following access_log property sends logs to /var/log/nginx/my_access.log in the combined format. This value goes in the http section of your nginx.conf file:
```bash
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/my_access.log combined;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/*.conf;
}
```
Be sure to restart Nginx if you had to modify nginx.conf.
Generate some log events in the access log by visiting pages on your web server. Logs in the combined format look as follows: | {"source_file": "vector-to-clickhouse.md"}
```bash
192.168.208.1 - - [12/Oct/2021:03:31:44 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36"
192.168.208.1 - - [12/Oct/2021:03:31:44 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://localhost/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36"
192.168.208.1 - - [12/Oct/2021:03:31:49 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36"
```
Configure Vector {#3-configure-vector}
Vector collects, transforms and routes logs, metrics, and traces (referred to as sources) to many different vendors (referred to as sinks), including out-of-the-box compatibility with ClickHouse. Sources and sinks are defined in a configuration file named vector.toml.
The following vector.toml file defines a source of type file that tails the end of my_access.log, and it also defines a sink as the access_logs table defined above:
```toml
[sources.nginx_logs]
type = "file"
include = [ "/var/log/nginx/my_access.log" ]
read_from = "end"

[sinks.clickhouse]
type = "clickhouse"
inputs = ["nginx_logs"]
endpoint = "http://clickhouse-server:8123"
database = "nginxdb"
table = "access_logs"
skip_unknown_fields = true
```
Start Vector using the configuration above. Visit the Vector documentation for more details on defining sources and sinks.
Verify that the access logs are being inserted into ClickHouse by running the following query. You should see the access logs in your table:
```sql
SELECT * FROM nginxdb.access_logs
```
Parse the Logs {#4-parse-the-logs}
Having the logs in ClickHouse is great, but storing each event as a single string does not allow for much data analysis. We'll next look at how to parse the log events using a materialized view.
A materialized view functions similarly to an insert trigger in SQL. When rows of data are inserted into a source table, the materialized view makes some transformation of these rows and inserts the results into a target table. The materialized view can be used to store a parsed representation of the log events in access_logs.
An example of one such log event is shown below:
```bash
192.168.208.1 - - [12/Oct/2021:15:32:43 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36"
```
There are various functions in ClickHouse to parse the above string. The splitByWhitespace function parses a string by whitespace and returns each token in an array. To demonstrate, run the following command:
```sql title="Query"
SELECT splitByWhitespace('192.168.208.1 - - [12/Oct/2021:15:32:43 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36"')
```
```text title="Response"
["192.168.208.1","-","-","[12/Oct/2021:15:32:43","+0000]","\"GET","/","HTTP/1.1\"","304","0","\"-\"","\"Mozilla/5.0","(Macintosh;","Intel","Mac","OS","X","10_15_7)","AppleWebKit/537.36","(KHTML,","like","Gecko)","Chrome/93.0.4577.63","Safari/537.36\""]
```
A few of the strings have some extra characters, and the user agent (the browser details) did not need to be parsed, but
the resulting array is close to what is needed.
Similar to splitByWhitespace, the splitByRegexp function splits a string into an array based on a regular expression. Run the following command, which returns two strings.
```sql
SELECT splitByRegexp('\S \d+ "([^"]*)"', '192.168.208.1 - - [12/Oct/2021:15:32:43 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36"')
```
Notice that the second string returned is the user agent successfully parsed from the log:
```text
["192.168.208.1 - - [12/Oct/2021:15:32:43 +0000] \"GET / HTTP/1.1\" 30"," \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36\""]
```
Before looking at the final CREATE MATERIALIZED VIEW command, let's view a couple more functions used to clean up the data. For example, the value of RequestMethod is "GET, containing an unwanted double quote. You can use the trimBoth (alias trim) function to remove the double quote:
```sql
SELECT trim(LEADING '"' FROM '"GET')
```
The time string has a leading square bracket and is not in a format that ClickHouse can parse into a date.
However, if we change the separator between the date and the time from a colon (`:`) to a space, then the parsing works great:

```sql
SELECT parseDateTimeBestEffort(replaceOne(trim(LEADING '[' FROM '[12/Oct/2021:15:32:43'), ':', ' '))
```
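The same cleanup can be sketched in Python; `datetime.strptime` with an explicit format stands in for the more lenient `parseDateTimeBestEffort` (an illustration under that assumption):

```python
from datetime import datetime

raw = '[12/Oct/2021:15:32:43'

# trim(LEADING '[' ...) then replaceOne(':', ' ') -- only the first colon changes
cleaned = raw.lstrip('[').replace(':', ' ', 1)   # '12/Oct/2021 15:32:43'

# parseDateTimeBestEffort is lenient; Python needs the format spelled out
parsed = datetime.strptime(cleaned, '%d/%b/%Y %H:%M:%S')
print(parsed)   # 2021-10-12 15:32:43
```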
We are now ready to define the materialized view.
The definition below includes `POPULATE`, which means the existing rows in `access_logs` will be processed and inserted right away.
Run the following SQL statement: | {"source_file": "vector-to-clickhouse.md"} | [
-0.008556390181183815,
-0.002297996310517192,
0.042552780359983444,
0.0356750413775444,
-0.02154877595603466,
-0.08663090318441391,
0.03138631582260132,
-0.0698942095041275,
-0.009072075597941875,
-0.03558596596121788,
0.038761939853429794,
-0.04216606914997101,
0.04253998398780823,
-0.094... |
a5d1930b-d12f-4dce-8472-765b21416950 | ```sql
CREATE MATERIALIZED VIEW nginxdb.access_logs_view
(
    RemoteAddr String,
    Client String,
    RemoteUser String,
    TimeLocal DateTime,
    RequestMethod String,
    Request String,
    HttpVersion String,
    Status Int32,
    BytesSent Int64,
    UserAgent String
)
ENGINE = MergeTree()
ORDER BY RemoteAddr
POPULATE AS
WITH
    splitByWhitespace(message) AS split,
    splitByRegexp('\S \d+ "([^"]*)"', message) AS referer
SELECT
    split[1] AS RemoteAddr,
    split[2] AS Client,
    split[3] AS RemoteUser,
    parseDateTimeBestEffort(replaceOne(trim(LEADING '[' FROM split[4]), ':', ' ')) AS TimeLocal,
    trim(LEADING '"' FROM split[6]) AS RequestMethod,
    split[7] AS Request,
    trim(TRAILING '"' FROM split[8]) AS HttpVersion,
    split[9] AS Status,
    split[10] AS BytesSent,
    trim(BOTH '"' FROM referer[2]) AS UserAgent
FROM
    (SELECT message FROM nginxdb.access_logs)
```
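To see how the view's SELECT maps tokens to columns, here is a Python sketch of the same logic (note that ClickHouse arrays are 1-based while Python lists are 0-based; this is an illustration, not the materialized view itself):

```python
import re
from datetime import datetime

msg = ('192.168.208.1 - - [12/Oct/2021:15:32:43 +0000] "GET / HTTP/1.1" 304 0 "-" '
       '"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 '
       '(KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36"')

split = msg.split()                         # splitByWhitespace (0-based here)
referer = re.split(r'\S \d+ "[^"]*"', msg)  # splitByRegexp stand-in

row = {
    'RemoteAddr': split[0],                 # split[1] in the SQL definition
    'Client': split[1],
    'RemoteUser': split[2],
    'TimeLocal': datetime.strptime(
        split[3].lstrip('[').replace(':', ' ', 1), '%d/%b/%Y %H:%M:%S'),
    'RequestMethod': split[5].lstrip('"'),
    'Request': split[6],
    'HttpVersion': split[7].rstrip('"'),
    'Status': int(split[8]),
    'BytesSent': int(split[9]),
    'UserAgent': referer[1].strip().strip('"'),
}
print(row['RequestMethod'], row['Status'])  # GET 304
```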
Now verify it worked.
You should see the access logs nicely parsed into columns:
```sql
SELECT * FROM nginxdb.access_logs_view
```
:::note
The lesson above stored the data in two tables, but you could change the initial `nginxdb.access_logs` table to use the `Null` table engine.
The parsed data will still end up in the `nginxdb.access_logs_view` table, but the raw data will not be stored in a table.
:::
By using Vector, which only requires a simple install and quick configuration, you can send logs from an Nginx server to a table in ClickHouse. By using a materialized view, you can parse those logs into columns for easier analytics. | {"source_file": "vector-to-clickhouse.md"} | [
0.022296437993645668,
0.0016324277967214584,
0.016942592337727547,
0.01138066966086626,
-0.0806623324751854,
-0.0169107336550951,
0.00763298524543643,
0.016482025384902954,
-0.018038848415017128,
0.11991938948631287,
-0.041688576340675354,
-0.04612789675593376,
0.00424167700111866,
-0.0153... |
597b8903-07e5-4ca6-afdb-f214e550a63d | sidebar_label: 'NiFi'
sidebar_position: 12
keywords: ['clickhouse', 'NiFi', 'connect', 'integrate', 'etl', 'data integration']
slug: /integrations/nifi
description: 'Stream data into ClickHouse using NiFi data pipelines'
title: 'Connect Apache NiFi to ClickHouse'
doc_type: 'guide'
integration:
- support_level: 'community'
- category: 'data_ingestion'
import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
import Image from '@theme/IdealImage';
import nifi01 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_01.png';
import nifi02 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_02.png';
import nifi03 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_03.png';
import nifi04 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_04.png';
import nifi05 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_05.png';
import nifi06 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_06.png';
import nifi07 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_07.png';
import nifi08 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_08.png';
import nifi09 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_09.png';
import nifi10 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_10.png';
import nifi11 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_11.png';
import nifi12 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_12.png';
import nifi13 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_13.png';
import nifi14 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_14.png';
import nifi15 from '@site/static/images/integrations/data-ingestion/etl-tools/nifi_15.png';
import CommunityMaintainedBadge from '@theme/badges/CommunityMaintained';
Connect Apache NiFi to ClickHouse
Apache NiFi
is open-source workflow management software designed to automate data flow between software systems. It allows the creation of ETL data pipelines and ships with more than 300 data processors. This step-by-step tutorial shows how to connect Apache NiFi to ClickHouse as both a source and a destination, and how to load a sample dataset.
Gather your connection details {#1-gather-your-connection-details}
Download and run Apache NiFi {#2-download-and-run-apache-nifi}
For a new setup, download the binary from https://nifi.apache.org/download.html and start by running:

```bash
./bin/nifi.sh start
```
Download the ClickHouse JDBC driver {#3-download-the-clickhouse-jdbc-driver}
Visit the
ClickHouse JDBC driver release page
on GitHub and look for the latest JDBC release version
In the release version, click on "Show all xx assets" and look for the JAR file containing the keyword "shaded" or "all", for example,
clickhouse-jdbc-0.5.0-all.jar | {"source_file": "nifi-and-clickhouse.md"} | [
0.0030410210601985455,
0.03768576309084892,
-0.04462990164756775,
-0.031125428155064583,
0.018886011093854904,
-0.06952419877052307,
0.03612799197435379,
0.00639612739905715,
-0.08883252739906311,
-0.044106170535087585,
0.028762077912688255,
-0.0390014573931694,
0.08457189053297043,
-0.021... |
fa709e27-2175-4783-8aea-83c4ddb24c19 | In the release version, click on "Show all xx assets" and look for the JAR file containing the keyword "shaded" or "all", for example,
clickhouse-jdbc-0.5.0-all.jar
Place the JAR file in a folder accessible by Apache NiFi and take note of the absolute path
Add
DBCPConnectionPool
Controller Service and configure its properties {#4-add-dbcpconnectionpool-controller-service-and-configure-its-properties}
To configure a Controller Service in Apache NiFi, visit the NiFi Flow Configuration page by clicking on the "gear" button
Select the Controller Services tab and add a new Controller Service by clicking on the
+
button at the top right
Search for
DBCPConnectionPool
and click on the "Add" button
The newly added
DBCPConnectionPool
will be in an Invalid state by default. Click on the "gear" button to start configuring
Under the "Properties" section, input the following values
| Property | Value | Remark |
| --------------------------- | ------------------------------------------------------------------ | ----------------------------------------------------------------------------- |
| Database Connection URL | jdbc:ch:https://HOSTNAME:8443/default?ssl=true | Replace HOSTNAME in the connection URL accordingly |
| Database Driver Class Name | com.clickhouse.jdbc.ClickHouseDriver ||
| Database Driver Location(s) | /etc/nifi/nifi-X.XX.X/lib/clickhouse-jdbc-0.X.X-patchXX-shaded.jar | Absolute path to the ClickHouse JDBC driver JAR file |
| Database User | default | ClickHouse username |
| Password | password | ClickHouse password |
In the Settings section, change the name of the Controller Service to "ClickHouse JDBC" for easy reference
Activate the
DBCPConnectionPool
Controller Service by clicking on the "lightning" button and then the "Enable" button
Check the Controller Services tab and ensure that the Controller Service is enabled
Read from a table using the
ExecuteSQL
processor {#5-read-from-a-table-using-the-executesql-processor}
Add an `ExecuteSQL` processor, along with the appropriate upstream and downstream processors
Under the "Properties" section of the `ExecuteSQL` processor, input the following values | {"source_file": "nifi-and-clickhouse.md"} | [
-0.04331203177571297,
-0.08548363298177719,
-0.052487365901470184,
-0.06490185111761093,
-0.039190974086523056,
0.07615785300731659,
0.03883181884884834,
-0.04052403196692467,
-0.03738091140985489,
-0.0007232907228171825,
-0.015521473251283169,
-0.03323144465684891,
0.050534527748823166,
0... |
3d3c640f-91a8-4327-a8fd-af9925be4959 | Add an `ExecuteSQL` processor, along with the appropriate upstream and downstream processors
Under the "Properties" section of the `ExecuteSQL` processor, input the following values
| Property | Value | Remark |
|-------------------------------------|--------------------------------------|---------------------------------------------------------|
| Database Connection Pooling Service | ClickHouse JDBC | Select the Controller Service configured for ClickHouse |
| SQL select query | SELECT * FROM system.metrics | Input your query here |
Start the `ExecuteSQL` processor
To confirm that the query has been processed successfully, inspect one of the FlowFiles in the output queue
Switch view to "formatted" to view the result of the output
FlowFile
Write to a table using
MergeRecord
and
PutDatabaseRecord
processor {#6-write-to-a-table-using-mergerecord-and-putdatabaserecord-processor}
To write multiple rows in a single insert, we first need to merge multiple records into a single record. This can be done using the
MergeRecord
processor
Under the "Properties" section of the
MergeRecord
processor, input the following values
| Property                  | Value                 | Remark                                                                                                                   |
|---------------------------|-----------------------|--------------------------------------------------------------------------------------------------------------------------|
| Record Reader             | `JsonTreeReader`      | Select the appropriate record reader                                                                                     |
| Record Writer             | `JsonRecordSetWriter` | Select the appropriate record writer                                                                                     |
| Minimum Number of Records | 1000                  | Change this to a higher number so that the minimum number of rows are merged to form a single record. Defaults to 1 row  |
| Maximum Number of Records | 10000                 | Change this to a higher number than "Minimum Number of Records". Defaults to 1,000 rows                                  |
To confirm that multiple records are merged into one, examine the input and output of the
MergeRecord
processor. Note that the output is an array of multiple input records
Input
Output
Under the "Properties" section of the
PutDatabaseRecord
processor, input the following values | {"source_file": "nifi-and-clickhouse.md"} | [
-0.0038681025616824627,
-0.09738507866859436,
-0.10175935178995132,
0.05042660981416702,
-0.14182835817337036,
-0.018781522288918495,
0.06420818716287613,
-0.034286655485630035,
-0.0018749114824458957,
0.031253837049007416,
-0.015602734871208668,
-0.09174858033657074,
0.029824143275618553,
... |
0f9d0580-a113-458b-a7d9-02093ad59dd5 | Input
Output
Under the "Properties" section of the
PutDatabaseRecord
processor, input the following values
| Property                            | Value            | Remark                                                                                                                                           |
| ----------------------------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| Record Reader                       | `JsonTreeReader` | Select the appropriate record reader                                                                                                             |
| Database Type                       | Generic          | Leave as default                                                                                                                                 |
| Statement Type                      | INSERT           |                                                                                                                                                  |
| Database Connection Pooling Service | ClickHouse JDBC  | Select the ClickHouse controller service                                                                                                         |
| Table Name                          | tbl              | Input your table name here                                                                                                                       |
| Translate Field Names               | false            | Set to "false" so that field names inserted must match the column names                                                                          |
| Maximum Batch Size                  | 1000             | Maximum number of rows per insert. This value should not be lower than the value of "Minimum Number of Records" in the `MergeRecord` processor |
To confirm that each insert contains multiple rows, check that the row count in the table is incrementing by at least the value of "Minimum Number of Records" defined in `MergeRecord`.
Congratulations - you have successfully loaded your data into ClickHouse using Apache NiFi! | {"source_file": "nifi-and-clickhouse.md"} | [
0.0587029829621315,
0.015739453956484795,
0.0134219229221344,
0.0733509287238121,
-0.12404897063970566,
0.055489327758550644,
0.027116313576698303,
0.05592508614063263,
-0.019576480612158775,
0.03350502625107765,
0.020848555490374565,
-0.061734117567539215,
0.01551116444170475,
-0.05022466... |
41ddb82b-c47e-4c81-be5d-8632be656b04 | sidebar_label: 'BladePipe'
sidebar_position: 20
keywords: ['clickhouse', 'BladePipe', 'connect', 'integrate', 'cdc', 'etl', 'data integration']
slug: /integrations/bladepipe
description: 'Stream data into ClickHouse using BladePipe data pipelines'
title: 'Connect BladePipe to ClickHouse'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import bp_ck_1 from '@site/static/images/integrations/data-ingestion/etl-tools/bp_ck_1.png';
import bp_ck_2 from '@site/static/images/integrations/data-ingestion/etl-tools/bp_ck_2.png';
import bp_ck_3 from '@site/static/images/integrations/data-ingestion/etl-tools/bp_ck_3.png';
import bp_ck_4 from '@site/static/images/integrations/data-ingestion/etl-tools/bp_ck_4.png';
import bp_ck_5 from '@site/static/images/integrations/data-ingestion/etl-tools/bp_ck_5.png';
import bp_ck_6 from '@site/static/images/integrations/data-ingestion/etl-tools/bp_ck_6.png';
import bp_ck_7 from '@site/static/images/integrations/data-ingestion/etl-tools/bp_ck_7.png';
import bp_ck_8 from '@site/static/images/integrations/data-ingestion/etl-tools/bp_ck_8.png';
import bp_ck_9 from '@site/static/images/integrations/data-ingestion/etl-tools/bp_ck_9.png';
import PartnerBadge from '@theme/badges/PartnerBadge';
Connect BladePipe to ClickHouse
BladePipe
is a real-time, end-to-end data integration tool with sub-second latency, enabling seamless data flow across platforms.
ClickHouse is one of BladePipe's pre-built connectors, allowing users to integrate data from various sources into ClickHouse automatically. This page shows step by step how to load data into ClickHouse in real time.
Supported sources {#supported-sources}
Currently BladePipe supports data integration to ClickHouse from the following sources:
- MySQL/MariaDB/AuroraMySQL
- Oracle
- PostgreSQL/AuroraPostgreSQL
- MongoDB
- Kafka
- PolarDB-MySQL
- OceanBase
- TiDB
More sources will be supported in the future.
Download and run BladePipe {#1-run-bladepipe}
Log in to
BladePipe Cloud
.
Follow the instructions in
Install Worker (Docker)
or
Install Worker (Binary)
to download and install a BladePipe Worker.
:::note
Alternatively, you can download and deploy
BladePipe Enterprise
.
:::
Add ClickHouse as a target {#2-add-clickhouse-as-a-target}
:::note
1. BladePipe supports ClickHouse version
20.12.3.3
or above.
2. To use ClickHouse as a target, make sure that the user has SELECT, INSERT and common DDL permissions.
:::
In BladePipe, click "DataSource" > "Add DataSource".
Select
ClickHouse
, and fill out the settings by providing your ClickHouse host and port, username and password, and click "Test Connection".
Click "Add DataSource" at the bottom, and a ClickHouse instance is added.
Add MySQL as a source {#3-add-mysql-as-a-source}
In this tutorial, we use a MySQL instance as the source, and explain the process of loading MySQL data to ClickHouse. | {"source_file": "bladepipe-and-clickhouse.md"} | [
-0.003042469033971429,
0.004820000380277634,
-0.054766520857810974,
-0.019133945927023888,
-0.020021021366119385,
0.00410148361697793,
0.0057081421837210655,
0.060833632946014404,
-0.020627131685614586,
-0.07386364787817001,
0.05856308341026306,
-0.08122167736291885,
0.03057854436337948,
-... |
8de3cfad-b6d3-42fc-8543-04ed9375ac28 | Add MySQL as a source {#3-add-mysql-as-a-source}
In this tutorial, we use a MySQL instance as the source, and explain the process of loading MySQL data to ClickHouse.
:::note
To use MySQL as a source, make sure that the user has the
required permissions
.
:::
In BladePipe, click "DataSource" > "Add DataSource".
Select
MySQL
, and fill out the settings by providing your MySQL host and port, username and password, and click "Test Connection".
Click "Add DataSource" at the bottom, and a MySQL instance is added.
Create a pipeline {#4-create-a-pipeline}
In BladePipe, click "DataJob" > "Create DataJob".
Select the added MySQL and ClickHouse instances and click "Test Connection" to ensure BladePipe is connected to the instances. Then, select the databases to be moved.
Select "Incremental" for DataJob Type, together with the "Full Data" option.
Select the tables to be replicated.
Select the columns to be replicated.
Confirm the DataJob creation, and the DataJob runs automatically.
Verify the data {#5-verify-the-data}
Stop data writes in the MySQL instance and wait for ClickHouse to merge data.
:::note
Due to the unpredictable timing of ClickHouse's automatic merges, you can manually trigger a merge by running the `OPTIMIZE TABLE xxx FINAL;` command. Note that this manual merge may not always succeed.
Alternatively, you can run the `CREATE VIEW xxx_v AS SELECT * FROM xxx FINAL;` command to create a view and query the view to ensure the data is fully merged.
:::
Create a
Verification DataJob
. Once the Verification DataJob is completed, review the results to confirm that the data in ClickHouse is the same as the data in MySQL. | {"source_file": "bladepipe-and-clickhouse.md"} | [
0.06576225161552429,
-0.13582338392734528,
-0.0805894061923027,
0.025344697758555412,
-0.12608414888381958,
-0.03878209739923477,
-0.0218205526471138,
0.012235535308718681,
-0.025035947561264038,
0.03787064924836159,
0.04469037801027298,
-0.08856602758169174,
0.09614631533622742,
-0.023130... |
fcd88929-fc54-4ec6-bae3-7a0180319510 | sidebar_label: 'Apache Beam'
slug: /integrations/apache-beam
description: 'Users can ingest data into ClickHouse using Apache Beam'
title: 'Integrating Apache Beam and ClickHouse'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_ingestion'
keywords: ['apache beam', 'stream processing', 'batch processing', 'jdbc connector', 'data pipeline']
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Integrating Apache Beam and ClickHouse
Apache Beam
is an open-source, unified programming model that enables developers to define and execute both batch and stream (continuous) data processing pipelines. The flexibility of Apache Beam lies in its ability to support a wide range of data processing scenarios, from ETL (Extract, Transform, Load) operations to complex event processing and real-time analytics.
This integration leverages ClickHouse's official
JDBC connector
for the underlying insertion layer.
Integration package {#integration-package}
The integration package required to integrate Apache Beam and ClickHouse is maintained and developed under
Apache Beam I/O Connectors
- an integration bundle covering many popular data storage systems and databases.
The `org.apache.beam.sdk.io.clickhouse.ClickHouseIO` implementation is located within the
Apache Beam repo
.
Setup of the Apache Beam ClickHouse package {#setup-of-the-apache-beam-clickhouse-package}
Package installation {#package-installation}
Add the following dependency to your package management framework:
```xml
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-io-clickhouse</artifactId>
    <version>${beam.version}</version>
</dependency>
```
:::important Recommended Beam version
The
ClickHouseIO
connector is recommended for use starting from Apache Beam version
2.59.0
.
Earlier versions may not fully support the connector's functionality.
:::
The artifacts can be found in the official Maven repository.
Code example {#code-example}
The following example reads a CSV file named
input.csv
as a
PCollection
, converts it to a Row object (using the defined schema) and inserts it into a local ClickHouse instance using
ClickHouseIO
:
```java
package org.example;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.clickhouse.ClickHouseIO;
import org.apache.beam.sdk.schemas.Schema;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.Row;
import org.joda.time.DateTime;
public class Main {
public static void main(String[] args) {
// Create a Pipeline object.
Pipeline p = Pipeline.create();
```
| {"source_file": "apache-beam.md"} | [
-0.06139872968196869,
-0.042081378400325775,
-0.028182893991470337,
0.007731707766652107,
-0.02937368117272854,
-0.05000075697898865,
0.014450923539698124,
-0.029107633978128433,
-0.1159796193242073,
-0.09235343337059021,
-0.006996147334575653,
0.02230244129896164,
-0.013332552276551723,
-... |
6bd76f78-265e-4756-901e-c1c76e054236 | ```java
public class Main {
public static void main(String[] args) {
// Create a Pipeline object.
Pipeline p = Pipeline.create();
Schema SCHEMA =
Schema.builder()
.addField(Schema.Field.of("name", Schema.FieldType.STRING).withNullable(true))
.addField(Schema.Field.of("age", Schema.FieldType.INT16).withNullable(true))
.addField(Schema.Field.of("insertion_time", Schema.FieldType.DATETIME).withNullable(false))
.build();
// Apply transforms to the pipeline.
PCollection<String> lines = p.apply("ReadLines", TextIO.read().from("src/main/resources/input.csv"));
PCollection<Row> rows = lines.apply("ConvertToRow", ParDo.of(new DoFn<String, Row>() {
@ProcessElement
public void processElement(@Element String line, OutputReceiver<Row> out) {
String[] values = line.split(",");
Row row = Row.withSchema(SCHEMA)
.addValues(values[0], Short.parseShort(values[1]), DateTime.now())
.build();
out.output(row);
}
})).setRowSchema(SCHEMA);
rows.apply("Write to ClickHouse",
ClickHouseIO.write("jdbc:clickhouse://localhost:8123/default?user=default&password=******", "test_table"));
// Run the pipeline.
p.run().waitUntilFinish();
}
}
```
Supported data types {#supported-data-types} | {"source_file": "apache-beam.md"} | [
0.029922524467110634,
0.016326796263456345,
-0.0885293111205101,
0.006626151502132416,
-0.1113014817237854,
0.007733601611107588,
-0.07189926505088806,
0.09980817884206772,
-0.02910132147371769,
0.00005775822137366049,
-0.018266500905156136,
-0.118467316031456,
-0.007712837308645248,
-0.09... |
898bb0eb-9c81-4026-9826-e9cbe1c8d9f6 | | ClickHouse | Apache Beam | Is Supported | Notes |
|------------------------------------|----------------------------|--------------|-------|
| `TableSchema.TypeName.FLOAT32` | `Schema.TypeName#FLOAT` | ✅ | |
| `TableSchema.TypeName.FLOAT64` | `Schema.TypeName#DOUBLE` | ✅ | |
| `TableSchema.TypeName.INT8` | `Schema.TypeName#BYTE` | ✅ | |
| `TableSchema.TypeName.INT16` | `Schema.TypeName#INT16` | ✅ | |
| `TableSchema.TypeName.INT32` | `Schema.TypeName#INT32` | ✅ | |
| `TableSchema.TypeName.INT64` | `Schema.TypeName#INT64` | ✅ | |
| `TableSchema.TypeName.STRING` | `Schema.TypeName#STRING` | ✅ | |
| `TableSchema.TypeName.UINT8` | `Schema.TypeName#INT16` | ✅ | |
| `TableSchema.TypeName.UINT16` | `Schema.TypeName#INT32` | ✅ | |
| `TableSchema.TypeName.UINT32` | `Schema.TypeName#INT64` | ✅ | |
| `TableSchema.TypeName.UINT64` | `Schema.TypeName#INT64` | ✅ | |
| `TableSchema.TypeName.DATE` | `Schema.TypeName#DATETIME` | {"source_file": "apache-beam.md"} | [
0.04052043706178665,
-0.07071799039840698,
-0.08207487314939499,
0.009155241772532463,
-0.06603574007749557,
-0.031102048233151436,
0.013664541766047478,
-0.04617266729474068,
-0.07459436357021332,
-0.006575973238795996,
0.08826403319835663,
-0.057607900351285934,
-0.03633362054824829,
-0.... |