| id (string, len 36) | document (string, len 3 to 3k) | metadata (string, len 23 to 69) | embeddings (list, len 384) |
|---|---|---|---|
2701a404-240c-40d5-bd27-3cc719b3b87a | 3 rows in set. Elapsed: 0.002 sec.
```
Verify that tables created on the cluster are created on both nodes {#verify-that-tables-created-on-the-cluster-are-created-on-both-nodes}
```sql
-- highlight-next-line
create table trips on cluster 'cluster_1S_2R' (
`trip_id` UInt32,
`pickup_date` Date,
`pickup_datetime` DateTime,
`dropoff_datetime` DateTime,
`pickup_longitude` Float64,
`pickup_latitude` Float64,
`dropoff_longitude` Float64,
`dropoff_latitude` Float64,
`passenger_count` UInt8,
`trip_distance` Float64,
`tip_amount` Float32,
`total_amount` Float32,
`payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4))
ENGINE = ReplicatedMergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime
-- highlight-next-line
SETTINGS storage_policy='gcs_main'
```
```response
┌─host───────────────────────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chnode2.us-east4-c.c.gcsqa-375100.internal │ 9000 │      0 │       │                   1 │                1 │
└────────────────────────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host───────────────────────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chnode1.us-east1-b.c.gcsqa-375100.internal │ 9000 │      0 │       │                   0 │                0 │
└────────────────────────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
2 rows in set. Elapsed: 0.641 sec.
```
Verify that data can be inserted {#verify-that-data-can-be-inserted}
```sql
INSERT INTO trips SELECT
trip_id,
pickup_date,
pickup_datetime,
dropoff_datetime,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude,
passenger_count,
trip_distance,
tip_amount,
total_amount,
payment_type
FROM s3('https://ch-nyc-taxi.s3.eu-west-3.amazonaws.com/tsv/trips_{0..9}.tsv.gz', 'TabSeparatedWithNames')
LIMIT 1000000
```
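As a quick sanity check, the row count should match the LIMIT above (a minimal sketch, assuming a freshly created table):
```sql
SELECT count() FROM trips;
```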
Verify that the storage policy gcs_main is used for the table. {#verify-that-the-storage-policy-gcs_main-is-used-for-the-table}
```sql
SELECT
engine,
data_paths,
metadata_path,
storage_policy,
formatReadableSize(total_bytes)
FROM system.tables
WHERE name = 'trips'
FORMAT Vertical
```
```response
Row 1:
──────
engine: ReplicatedMergeTree
data_paths: ['/var/lib/clickhouse/disks/gcs/store/631/6315b109-d639-4214-a1e7-afbd98f39727/']
metadata_path: /var/lib/clickhouse/store/e0f/e0f3e248-7996-44d4-853e-0384e153b740/trips.sql
storage_policy: gcs_main
formatReadableSize(total_bytes): 36.42 MiB
1 row in set. Elapsed: 0.002 sec.
```
Verify in Google Cloud console {#verify-in-google-cloud-console} | {"source_file": "index.md"} | [
0.007757487706840038,
-0.0011249907547608018,
-0.009624301455914974,
0.04120983928442001,
-0.02400621771812439,
-0.05081600323319435,
0.0220231581479311,
-0.007576657924801111,
-0.040802787989377975,
0.0347956120967865,
0.08350909501314163,
-0.1581423133611679,
0.04771886765956879,
-0.0825... |
a9cb0d33-b275-40c3-aa4f-cac6b9b516a9 | 1 row in set. Elapsed: 0.002 sec.
```
Verify in Google Cloud console {#verify-in-google-cloud-console}
Looking at the buckets you will see that a folder was created in each bucket with the name that was used in the
storage.xml
configuration file. Expand the folders and you will see many files, representing the data partitions.
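You can also cross-check from within ClickHouse; a quick sketch using the standard system.parts system table, shown here for the trips table:
```sql
SELECT partition, name, disk_name
FROM system.parts
WHERE table = 'trips' AND active;
```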
Bucket for replica one {#bucket-for-replica-one}
Bucket for replica two {#bucket-for-replica-two} | {"source_file": "index.md"} | [
-0.03162876516580582,
-0.012474101036787033,
-0.01634831167757511,
-0.0350191593170166,
0.0697677806019783,
-0.05141915753483772,
0.040906455367803574,
-0.022474979981780052,
0.09457005560398102,
0.04673769325017929,
0.09320531040430069,
-0.018808888271450996,
0.1133706122636795,
-0.078245... |
fb25cc8d-1586-43ce-9f22-91d23c6f2839 | title: 'Pausing and Resuming a MongoDB ClickPipe'
description: 'Pausing and Resuming a MongoDB ClickPipe'
sidebar_label: 'Pause table'
slug: /integrations/clickpipes/mongodb/pause_and_resume
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
import Image from '@theme/IdealImage';
import pause_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/pause_button.png'
import pause_dialog from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/pause_dialog.png'
import pause_status from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/pause_status.png'
import resume_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/resume_button.png'
import resume_dialog from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/resume_dialog.png'
There are scenarios where it would be useful to pause a MongoDB ClickPipe. For example, you may want to run some analytics on existing data in a static state. Or, you might be performing upgrades on MongoDB. Here is how you can pause and resume a MongoDB ClickPipe.
Steps to pause a MongoDB ClickPipe {#pause-clickpipe-steps}
In the Data Sources tab, click on the MongoDB ClickPipe you wish to pause.
Head over to the
Settings
tab.
Click on the
Pause
button.
A dialog box should appear for confirmation. Click on Pause again.
Head over to the
Metrics
tab.
Wait for the status of the pipe to be
Paused
.
Steps to resume a MongoDB ClickPipe {#resume-clickpipe-steps}
In the Data Sources tab, click on the MongoDB ClickPipe you wish to resume. The status of the mirror should be
Paused
initially.
Head over to the
Settings
tab.
Click on the
Resume
button.
A dialog box should appear for confirmation. Click on Resume again.
Head over to the
Metrics
tab.
Wait for the status of the pipe to be
Running
. | {"source_file": "pause_and_resume.md"} | [
-0.04746651649475098,
-0.024791786447167397,
-0.05438533052802086,
0.0456552729010582,
0.018195968121290207,
0.015097037889063358,
-0.002050067763775587,
-0.017724817618727684,
-0.07220213115215302,
0.00449224142357707,
-0.03877826780080795,
0.0033015485387295485,
-0.08330746740102768,
-0.... |
05956a3a-5a66-419a-a0d9-7c02f2ad293a | sidebar_label: 'Lifecycle of a MongoDB ClickPipe'
description: 'Various pipe statuses and their meanings'
slug: /integrations/clickpipes/mongodb/lifecycle
title: 'Lifecycle of a MongoDB ClickPipe'
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
Lifecycle of a MongoDB ClickPipe {#lifecycle}
This is a document on the various phases of a MongoDB ClickPipe, the different statuses it can have, and what they mean.
Provisioning {#provisioning}
When you click on the Create ClickPipe button, the ClickPipe is created in a
Provisioning
state. The provisioning process is where we spin up the underlying infrastructure to run ClickPipes for the service, along with registering some initial metadata for the pipe. Since compute for ClickPipes within a service is shared, your second ClickPipe will be created much faster than the first one -- as the infrastructure is already in place.
Setup {#setup}
After a pipe is provisioned, it enters the
Setup
state. This state is where we create the destination ClickHouse tables.
Snapshot {#snapshot}
Once setup is complete, we enter the
Snapshot
state (unless it's a CDC-only pipe, which would transition to
Running
).
Snapshot
,
Initial Snapshot
and
Initial Load
(more common) are interchangeable terms. In this state, we take a snapshot of the source MongoDB collections and load them into ClickHouse. Retention setting for the oplog should account for initial load time. The pipe will also enter the
Snapshot
state when a resync is triggered or when new tables are added to an existing pipe.
Running {#running}
Once the initial load is complete, the pipe enters the
Running
state (unless it's a snapshot-only pipe, which would transition to
Completed
). This is where the pipe begins
Change-Data Capture
. In this state, we start streaming changes from the source MongoDB cluster to ClickHouse. For information on controlling CDC, see
the doc on controlling CDC
.
Paused {#paused}
Once the pipe is in the
Running
state, you can pause it. This will stop the CDC process and the pipe will enter the
Paused
state. In this state, no new data is pulled from the source MongoDB, but the existing data in ClickHouse remains intact. You can resume the pipe from this state.
Pausing {#pausing}
:::note
This state is coming soon. If you're using our
OpenAPI
, consider adding support for it now to ensure your integration continues working when it's released.
:::
When you click on the Pause button, the pipe enters the
Pausing
state. This is a transient state where we are in the process of stopping the CDC process. Once the CDC process is fully stopped, the pipe will enter the
Paused
state.
Modifying {#modifying} | {"source_file": "lifecycle.md"} | [
-0.041107773780822754,
-0.0016853299457579851,
-0.02458510547876358,
0.02646023966372013,
0.014353848062455654,
-0.0918227881193161,
-0.0068057505413889885,
-0.0017431052401661873,
0.004558353219181299,
0.006204589270055294,
-0.004471713211387396,
-0.008496597409248352,
-0.04333343356847763,... |
916097e7-4a99-434d-b22f-c4543c48d0ea | Modifying {#modifying}
:::note
This state is coming soon. If you're using our
OpenAPI
, consider adding support for it now to ensure your integration continues working when it's released.
:::
Currently, this indicates the pipe is in the process of removing tables.
Resync {#resync}
:::note
This state is coming soon. If you're using our
OpenAPI
, consider adding support for it now to ensure your integration continues working when it's released.
:::
This state indicates the pipe is in the phase of resync where it is performing an atomic swap of the _resync tables with the original tables. More information on resync can be found in the
resync documentation
.
Completed {#completed}
This state applies to snapshot-only pipes and indicates that the snapshot has been completed and there's no more work to do.
Failed {#failed}
If there is an irrecoverable error in the pipe, it will enter the
Failed
state. You can reach out to support or
resync
your pipe to recover from this state. | {"source_file": "lifecycle.md"} | [
-0.02199043333530426,
0.013243435882031918,
0.016959795728325844,
0.03290228918194771,
0.0045092301443219185,
-0.10363936424255371,
-0.06875810027122498,
0.02169446088373661,
0.0057298303581774235,
-0.017965691164135933,
-0.005108298733830452,
-0.008274088613688946,
0.04224623367190361,
-0... |
0c6746f2-e814-4e2e-8303-4207f45ac564 | title: 'Resyncing Specific Tables'
description: 'Resyncing specific tables in a MongoDB ClickPipe'
slug: /integrations/clickpipes/mongodb/table_resync
sidebar_label: 'Resync table'
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
Resyncing specific tables {#resync-tables}
There are scenarios where it would be useful to have specific tables of a pipe re-synced. Sample use cases include major schema changes on MongoDB, or data re-modelling on the ClickHouse side.
While resyncing individual tables with a button click is a work-in-progress, this guide will share steps on how you can achieve this today in the MongoDB ClickPipe.
1. Remove the table from the pipe {#removing-table}
This can be done by following the
table removal guide
.
2. Truncate or drop the table on ClickHouse {#truncate-drop-table}
This step is to avoid data duplication when we add this table again in the next step. You can do this by heading over to the
SQL Console
tab in ClickHouse Cloud and running a query.
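For example (a sketch with hypothetical names; substitute your destination database and table):
```sql
-- Truncate keeps the table definition but removes all rows:
TRUNCATE TABLE db1.orders;
-- Or drop the table entirely; it will be recreated when the table is re-added:
DROP TABLE db1.orders;
```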
Note that we have validation to block table addition if the table already exists in ClickHouse and is not empty.
3. Add the table to the ClickPipe again {#add-table-again}
This can be done by following the
table addition guide
. | {"source_file": "table_resync.md"} | [
-0.02208808995783329,
-0.023120209574699402,
0.016165336593985558,
0.038960233330726624,
0.0019312306540086865,
-0.08184171468019485,
-0.05109109729528427,
-0.035904549062252045,
-0.026360386982560158,
0.022160954773426056,
-0.04507708176970482,
0.005252881441265345,
-0.02389252372086048,
... |
36ea5b84-ebdf-47c5-a1a3-602c20dac616 | title: 'Adding specific tables to a ClickPipe'
description: 'Describes the steps needed to add specific tables to a ClickPipe.'
sidebar_label: 'Add table'
slug: /integrations/clickpipes/mongodb/add_table
show_title: false
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
import Image from '@theme/IdealImage';
import add_table from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/add_table.png'
Adding specific tables to a ClickPipe
There are scenarios where it would be useful to add specific tables to a pipe. This becomes a common necessity as your transactional or analytical workload scales.
Steps to add specific tables to a ClickPipe {#add-tables-steps}
This can be done by the following steps:
1.
Pause
the pipe.
2. Click on Edit Table settings.
3. Locate your table - this can be done by searching it in the search bar.
4. Select the table by clicking on the checkbox.
Click update.
Upon successful update, the pipe will have statuses
Setup
,
Snapshot
and
Running
in that order. The table's initial load can be tracked in the
Tables
tab.
:::info
CDC for existing tables resumes automatically after the new table's snapshot completes.
::: | {"source_file": "add_table.md"} | [
0.02349075861275196,
-0.043812096118927,
-0.05173391476273537,
0.042787425220012665,
-0.02011977508664131,
-0.034451425075531006,
0.01101994514465332,
0.05521714687347412,
-0.04124929755926132,
0.032654836773872375,
-0.008202028460800648,
-0.06487420946359634,
0.009911440312862396,
0.00133... |
26169fcf-6d18-4b9b-a7b9-be136ef0c12d | title: 'Supported data types'
slug: /integrations/clickpipes/mongodb/datatypes
description: 'Page describing MongoDB ClickPipe datatype mapping from MongoDB to ClickHouse'
doc_type: 'reference'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
MongoDB stores data records as BSON documents. In ClickPipes, you can configure ingestion of BSON documents into ClickHouse as either JSON or JSON String. The following table shows the supported BSON-to-JSON field type mapping:
| MongoDB BSON Type | ClickHouse JSON Type | Notes |
| ------------------------ | -------------------------------------- | ------------------------ |
| ObjectId | String | |
| String | String | |
| 32-bit integer | Int64 | |
| 64-bit integer | Int64 | |
| Double | Float64 | |
| Boolean | Bool | |
| Date | String | ISO 8601 format |
| Regular Expression | {Options: String, Pattern: String} | MongoDB regex with fixed fields: Options (regex flags) and Pattern (regex pattern) |
| Timestamp | {T: Int64, I: Int64} | MongoDB internal timestamp format with fixed fields: T (timestamp) and I (increment) |
| Decimal128 | String | |
| Binary data              | {Data: String, Subtype: Int64}         | MongoDB binary data with fixed fields: Data (base64-encoded) and Subtype (type of binary) |
| JavaScript | String | |
| Null | Null | |
| Array | Dynamic | Arrays with homogeneous types become Array(Nullable(T)); arrays with mixed primitive types are promoted to the most general common type; arrays with complex incompatible types become Tuples |
| Object | Dynamic | Each nested field is mapped recursively |
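For example, because BSON Dates arrive as ISO 8601 strings, you can parse them back into a DateTime at query time; a sketch assuming a destination table t1 with the standard doc JSON column:
```sql
SELECT parseDateTimeBestEffortOrNull(doc.order_date) AS order_date
FROM t1;
```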
:::info
To learn more about ClickHouse's JSON data types, see
our documentation
.
::: | {"source_file": "datatypes.md"} | [
-0.03653349354863167,
-0.02223583683371544,
-0.06546589732170105,
0.06493407487869263,
-0.052156154066324234,
-0.0165437962859869,
-0.05584677308797836,
0.01587536185979843,
-0.026720669120550156,
-0.04760574549436569,
-0.02257165126502514,
-0.004559987690299749,
-0.05117025971412659,
0.05... |
d338d7a0-a4b2-4dd6-922a-4ae6209d4fcc | title: 'Resyncing a Database ClickPipe'
description: 'Doc for resyncing a database ClickPipe'
slug: /integrations/clickpipes/mongodb/resync
sidebar_label: 'Resync ClickPipe'
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
import resync_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/resync_button.png'
import Image from '@theme/IdealImage';
What does Resync do? {#what-mongodb-resync-do}
Resync involves the following operations in order:
The existing ClickPipe is dropped, and a new "resync" ClickPipe is kicked off. Thus, changes to source table structures will be picked up when you resync.
The resync ClickPipe creates (or replaces) a new set of destination tables which have the same names as the original tables except with a
_resync
suffix.
Initial load is performed on the
_resync
tables.
The
_resync
tables are then swapped with the original tables. Soft deleted rows are transferred from the original tables to the
_resync
tables before the swap.
All the settings of the original ClickPipe are retained in the resync ClickPipe. The statistics of the original ClickPipe are cleared in the UI.
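Conceptually, the final swap step behaves like ClickHouse's atomic table exchange; a sketch with hypothetical names, not the connector's literal implementation:
```sql
EXCHANGE TABLES db1.orders AND db1.orders_resync;
```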
Use cases for resyncing a ClickPipe {#use-cases-mongodb-resync}
Here are a few scenarios:
You may need to perform major schema changes on the source tables which would break the existing ClickPipe and you would need to restart. You can just click Resync after performing the changes.
Specifically for ClickHouse, maybe you needed to change the ORDER BY keys on the target tables. You can Resync to re-populate data into the new table with the right sorting key.
Resync ClickPipe Guide {#guide-mongodb-resync}
In the Data Sources tab, click on the MongoDB ClickPipe you wish to resync.
Head over to the
Settings
tab.
Click on the
Resync
button.
A dialog box should appear for confirmation. Click on Resync again.
Head over to the
Metrics
tab.
Wait for the status of the pipe to be
Setup
or
Snapshot
.
The initial load of the resync can be monitored in the
Tables
tab - in the
Initial Load Stats
section.
Once the initial load is complete, the pipe will atomically swap the
_resync
tables with the original tables. During the swap, the status will be
Resync
.
Once the swap is complete, the pipe will enter the
Running
state and perform CDC if enabled. | {"source_file": "resync.md"} | [
-0.08422785252332687,
-0.043938662856817245,
-0.019909778609871864,
0.06445016711950302,
0.06792579591274261,
-0.10438989847898483,
0.009205883368849754,
-0.01307111606001854,
0.025376975536346436,
0.019227320328354836,
-0.0230509452521801,
0.0606912299990654,
-0.0031343623995780945,
-0.08... |
5a2fd5db-4ed0-4946-848f-b8abd8a944c8 | title: 'Removing specific tables from a ClickPipe'
description: 'Removing specific tables from a ClickPipe'
sidebar_label: 'Remove table'
slug: /integrations/clickpipes/mongodb/removing_tables
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
import Image from '@theme/IdealImage';
import remove_table from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/remove_table.png'
In some cases, it makes sense to exclude specific tables from a MongoDB ClickPipe - for example, if a table isn't needed for your analytics workload, skipping it can reduce storage and replication costs in ClickHouse.
Steps to remove specific tables {#remove-tables-steps}
The first step is to remove the table from the pipe. This can be done by the following steps:
Pause
the pipe.
Click on Edit Table Settings.
Locate your table - this can be done by searching it in the search bar.
Deselect the table by clicking on the selected checkbox.
Click update.
Upon successful update, in the
Metrics
tab the status will be
Running
. This table will no longer be replicated by this ClickPipe. | {"source_file": "remove_table.md"} | [
0.011208103969693184,
0.06278381496667862,
0.012016214430332184,
0.024692477658391,
0.041437841951847076,
-0.1035080999135971,
0.035511311143636703,
-0.04843711480498314,
-0.002111609559506178,
0.03201867267489433,
-0.013533093966543674,
0.011760929599404335,
0.029515113681554794,
-0.00579... |
7b05a715-5159-4f1f-a1ca-08e1a428c88b | title: 'Working with JSON in ClickHouse'
sidebar_label: 'Working with JSON'
slug: /integrations/clickpipes/mongodb/quickstart
description: 'Common patterns for working with JSON data replicated from MongoDB to ClickHouse via ClickPipes'
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
Working with JSON in ClickHouse
This guide provides common patterns for working with JSON data replicated from MongoDB to ClickHouse via ClickPipes.
Suppose we created a collection
t1
in MongoDB to track customer orders:
javascript
db.t1.insertOne({
"order_id": "ORD-001234",
"customer_id": 98765,
"status": "completed",
"total_amount": 299.97,
"order_date": new Date(),
"shipping": {
"method": "express",
"city": "Seattle",
"cost": 19.99
},
"items": [
{
"category": "electronics",
"price": 149.99
},
{
"category": "accessories",
"price": 24.99
}
]
})
MongoDB CDC Connector replicates MongoDB documents to ClickHouse using the native JSON data type. The replicated table
t1
in ClickHouse will contain the following row:
shell
Row 1:
──────
_id: "68a4df4b9fe6c73b541703b0"
doc: {"_id":"68a4df4b9fe6c73b541703b0","customer_id":"98765","items":[{"category":"electronics","price":149.99},{"category":"accessories","price":24.99}],"order_date":"2025-08-19T20:32:11.705Z","order_id":"ORD-001234","shipping":{"city":"Seattle","cost":19.99,"method":"express"},"status":"completed","total_amount":299.97}
_peerdb_synced_at: 2025-08-19 20:50:42.005000000
_peerdb_is_deleted: 0
_peerdb_version: 0
Table schema {#table-schema}
The replicated tables use this standard schema:
shell
┌─name───────────────┬─type──────────┐
│ _id                │ String        │
│ doc                │ JSON          │
│ _peerdb_synced_at  │ DateTime64(9) │
│ _peerdb_version    │ Int64         │
│ _peerdb_is_deleted │ Int8          │
└────────────────────┴───────────────┘
_id
: Primary key from MongoDB
doc
: MongoDB document replicated as JSON data type
_peerdb_synced_at
: Records when the row was last synced
_peerdb_version
: Tracks the version of the row; incremented when the row is updated or deleted
_peerdb_is_deleted
: Marks whether the row is deleted
ReplacingMergeTree table engine {#replacingmergetree-table-engine}
ClickPipes maps MongoDB collections into ClickHouse using the
ReplacingMergeTree
table engine family. With this engine, updates are modeled as inserts with a newer version (
_peerdb_version
) of the document for a given primary key (
_id
), enabling efficient handling of updates, replaces, and deletes as versioned inserts.
ReplacingMergeTree
clears out duplicates asynchronously in the background. To guarantee the absence of duplicates for the same row, use the
FINAL
modifier
. For example:
sql
SELECT * FROM t1 FINAL;
Handling deletes {#handling-deletes} | {"source_file": "quickstart.md"} | [
-0.08295239508152008,
0.04813949391245842,
0.036030493676662445,
0.03915201127529144,
-0.06931330263614655,
-0.04572930186986923,
-0.060068514198064804,
-0.03397006541490555,
0.019166234880685806,
-0.05470122769474983,
0.04888443648815155,
-0.011200056411325932,
-0.02363906241953373,
0.000... |
c6868b69-6fd5-46b3-851b-ce29d8099f10 | sql
SELECT * FROM t1 FINAL;
Handling deletes {#handling-deletes}
Deletes from MongoDB are propagated as new rows marked as deleted using the
_peerdb_is_deleted
column. You typically want to filter these out in your queries:
sql
SELECT * FROM t1 FINAL WHERE _peerdb_is_deleted = 0;
You can also create a row-level policy to automatically filter out deleted rows instead of specifying the filter in each query:
sql
CREATE ROW POLICY policy_name ON t1
FOR SELECT USING _peerdb_is_deleted = 0;
Querying JSON data {#querying-json-data}
You can directly query JSON fields using dot syntax:
sql title="Query"
SELECT
doc.order_id,
doc.shipping.method
FROM t1;
shell title="Result"
┌─doc.order_id─┬─doc.shipping.method─┐
│ ORD-001234   │ express             │
└──────────────┴─────────────────────┘
When querying
nested object fields
using dot syntax, make sure to add the
^
operator:
sql title="Query"
SELECT doc.^shipping as shipping_info FROM t1;
shell title="Result"
┌─shipping_info──────────────────────────────────────┐
│ {"city":"Seattle","cost":19.99,"method":"express"} │
└────────────────────────────────────────────────────┘
Dynamic type {#dynamic-type}
In ClickHouse, each field in JSON has
Dynamic
type. Dynamic type allows ClickHouse to store values of any type without knowing the type in advance. You can verify this with the
toTypeName
function:
sql title="Query"
SELECT toTypeName(doc.customer_id) AS type FROM t1;
shell title="Result"
┌─type────┐
│ Dynamic │
└─────────┘
To examine the underlying data type(s) for a field, you can check with the
dynamicType
function. Note that it's possible to have different data types for the same field name in different rows:
sql title="Query"
SELECT dynamicType(doc.customer_id) AS type FROM t1;
shell title="Result"
┌─type──┐
│ Int64 │
└───────┘
Regular functions
work for dynamic type just like they do for regular columns:
Example 1: Date parsing
sql title="Query"
SELECT parseDateTimeBestEffortOrNull(doc.order_date) AS order_date FROM t1;
shell title="Result"
┌─order_date──────────┐
│ 2025-08-19 20:32:11 │
└─────────────────────┘
Example 2: Conditional logic
sql title="Query"
SELECT multiIf(
doc.total_amount < 100, 'less_than_100',
doc.total_amount < 1000, 'less_than_1000',
'1000+') AS spendings
FROM t1;
shell title="Result"
┌─spendings──────┐
│ less_than_1000 │
└────────────────┘
Example 3: Array operations
sql title="Query"
SELECT length(doc.items) AS item_count FROM t1;
shell title="Result"
┌─item_count─┐
│          2 │
└────────────┘
Field casting {#field-casting}
Aggregation functions
in ClickHouse don't work with dynamic type directly. For example, if you attempt to directly use the
sum
function on a dynamic type, you get the following error:
sql
SELECT sum(doc.shipping.cost) AS shipping_cost FROM t1;
-- DB::Exception: Illegal type Dynamic of argument for aggregate function sum. (ILLEGAL_TYPE_OF_ARGUMENT) | {"source_file": "quickstart.md"} | [
-0.05141005665063858,
0.06640836596488953,
0.0669005736708641,
0.039620935916900635,
0.004025165922939777,
-0.06767840683460236,
0.015364016406238079,
-0.034035567194223404,
0.03126472607254982,
0.05976060777902603,
0.01709855906665325,
0.026117052882909775,
-0.014059695415198803,
0.000313... |
4aae399d-c76b-4c76-b55e-2ae76aa48547 | sql
SELECT sum(doc.shipping.cost) AS shipping_cost FROM t1;
-- DB::Exception: Illegal type Dynamic of argument for aggregate function sum. (ILLEGAL_TYPE_OF_ARGUMENT)
To use aggregation functions, cast the field to the appropriate type with the
CAST
function or
::
syntax:
sql title="Query"
SELECT sum(doc.shipping.cost::Float32) AS shipping_cost FROM t1;
shell title="Result"
┌─shipping_cost─┐
│         19.99 │
└───────────────┘
:::note
Casting from dynamic type to the underlying data type (determined by
dynamicType
) is very performant, as ClickHouse already stores the value in its underlying type internally.
:::
Flattening JSON {#flattening-json}
Normal view {#normal-view}
You can create normal views on top of the JSON table to encapsulate flattening/casting/transformation logic in order to query data similar to a relational table. Normal views are lightweight as they only store the query itself, not the underlying data. For example:
sql
CREATE VIEW v1 AS
SELECT
CAST(doc._id, 'String') AS object_id,
CAST(doc.order_id, 'String') AS order_id,
CAST(doc.customer_id, 'Int64') AS customer_id,
CAST(doc.status, 'String') AS status,
CAST(doc.total_amount, 'Decimal64(2)') AS total_amount,
CAST(parseDateTime64BestEffortOrNull(doc.order_date, 3), 'DATETIME(3)') AS order_date,
doc.^shipping AS shipping_info,
doc.items AS items
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;
This view will have the following schema:
shell
┌─name──────────┬─type───────────┐
│ object_id     │ String         │
│ order_id      │ String         │
│ customer_id   │ Int64          │
│ status        │ String         │
│ total_amount  │ Decimal(18, 2) │
│ order_date    │ DateTime64(3)  │
│ shipping_info │ JSON           │
│ items         │ Dynamic        │
└───────────────┴────────────────┘
You can now query the view similar to how you would query a flattened table:
sql
SELECT
customer_id,
sum(total_amount)
FROM v1
WHERE shipping_info.city = 'Seattle'
GROUP BY customer_id
ORDER BY customer_id DESC
LIMIT 10;
Refreshable materialized view {#refreshable-materialized-view}
You can create
Refreshable Materialized Views
, which enable you to schedule query execution for deduplicating rows and storing the results in a flattened destination table. With each scheduled refresh, the destination table is replaced with the latest query results.
The key advantage of this method is that the query using the
FINAL
keyword runs only once during the refresh, eliminating the need for subsequent queries on the destination table to use
FINAL
.
A drawback is that the data in the destination table is only as up-to-date as the most recent refresh. For many use cases, refresh intervals ranging from several minutes to a few hours provide a good balance between data freshness and query performance. | {"source_file": "quickstart.md"} | [
-0.034219589084386826,
0.004914211109280586,
-0.015911836177110672,
0.1150016039609909,
-0.08665592223405838,
0.0009923665784299374,
0.04532254859805107,
0.07159660756587982,
-0.019856229424476624,
-0.013551984913647175,
-0.0025325806345790625,
-0.07532578706741333,
0.0007977411733008921,
... |
080eb6f7-c514-4dd9-be43-69b090caab14 | ``sql
CREATE TABLE flattened_t1 (
_id
String,
order_id
String,
customer_id
Int64,
status
String,
total_amount
Decimal(18, 2),
order_date
DateTime64(3),
shipping_info
JSON,
items` Dynamic
)
ENGINE = ReplacingMergeTree()
PRIMARY KEY _id
ORDER BY _id;
CREATE MATERIALIZED VIEW rmv REFRESH EVERY 1 HOUR TO flattened_t1 AS
SELECT
CAST(doc._id, 'String') AS _id,
CAST(doc.order_id, 'String') AS order_id,
CAST(doc.customer_id, 'Int64') AS customer_id,
CAST(doc.status, 'String') AS status,
CAST(doc.total_amount, 'Decimal64(2)') AS total_amount,
CAST(parseDateTime64BestEffortOrNull(doc.order_date, 3), 'DATETIME(3)') AS order_date,
doc.^shipping AS shipping_info,
doc.items AS items
FROM t1 FINAL
WHERE _peerdb_is_deleted = 0;
```
You can now query the table
flattened_t1
directly without the
FINAL
modifier:
sql
SELECT
customer_id,
sum(total_amount)
FROM flattened_t1
WHERE shipping_info.city = 'Seattle'
GROUP BY customer_id
ORDER BY customer_id DESC
LIMIT 10;
Incremental materialized view {#incremental-materialized-view}
If you want to access flattened columns in real-time, you can create
Incremental Materialized Views
. If your table has frequent updates, it's not recommended to use the
FINAL
modifier in your materialized view as every update will trigger a merge. Instead, you can deduplicate the data at query time by building a normal view on top of the materialized view.
```sql
CREATE TABLE flattened_t1 (
    `_id` String,
    `order_id` String,
    `customer_id` Int64,
    `status` String,
    `total_amount` Decimal(18, 2),
    `order_date` DateTime64(3),
    `shipping_info` JSON,
    `items` Dynamic,
    `_peerdb_version` Int64,
    `_peerdb_synced_at` DateTime64(9),
    `_peerdb_is_deleted` Int8
)
ENGINE = ReplacingMergeTree()
PRIMARY KEY _id
ORDER BY _id;
CREATE MATERIALIZED VIEW imv TO flattened_t1 AS
SELECT
CAST(doc._id, 'String') AS _id,
CAST(doc.order_id, 'String') AS order_id,
CAST(doc.customer_id, 'Int64') AS customer_id,
CAST(doc.status, 'String') AS status,
CAST(doc.total_amount, 'Decimal64(2)') AS total_amount,
CAST(parseDateTime64BestEffortOrNull(doc.order_date, 3), 'DATETIME(3)') AS order_date,
doc.^shipping AS shipping_info,
doc.items,
_peerdb_version,
_peerdb_synced_at,
_peerdb_is_deleted
FROM t1;
CREATE VIEW flattened_t1_final AS
SELECT * FROM flattened_t1 FINAL WHERE _peerdb_is_deleted = 0;
```
You can now query the view
flattened_t1_final
as follows:
sql
SELECT
customer_id,
sum(total_amount)
FROM flattened_t1_final
WHERE shipping_info.city = 'Seattle'
GROUP BY customer_id
ORDER BY customer_id DESC
LIMIT 10; | {"source_file": "quickstart.md"} | [
-0.006345268804579973,
-0.0012876152759417892,
0.03310776129364967,
0.0750221386551857,
-0.11944848299026489,
-0.005077729467302561,
0.011981762945652008,
0.058974891901016235,
-0.06079471856355667,
0.026186686009168625,
0.037663474678993225,
-0.044125109910964966,
0.035340745002031326,
-0... |
f9dcd259-b24c-40fb-8178-1eb2568dcb4a | sidebar_label: 'FAQ'
description: 'Frequently asked questions about ClickPipes for MongoDB.'
slug: /integrations/clickpipes/mongodb/faq
sidebar_position: 2
title: 'ClickPipes for MongoDB FAQ'
doc_type: 'reference'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
ClickPipes for MongoDB FAQ
Can I query for individual fields in the JSON datatype? {#can-i-query-for-individual-fields-in-the-json-datatype}
For direct field access, such as
{"user_id": 123}
, you can use
dot notation
:
sql
SELECT doc.user_id as user_id FROM your_table;
For direct field access of nested object fields, such as
{"address": { "city": "San Francisco", "state": "CA" }}
, use the
^
operator:
sql
SELECT doc.^address.city AS city FROM your_table;
For aggregations, cast the field to the appropriate type with the
CAST
function or
::
syntax:
sql
SELECT sum(doc.shipping.cost::Float32) AS total_shipping_cost FROM t1;
To learn more about working with JSON, see our
Working with JSON guide
.
How do I flatten the nested MongoDB documents in ClickHouse? {#how-do-i-flatten-the-nested-mongodb-documents-in-clickhouse}
MongoDB documents are replicated as JSON type in ClickHouse by default, preserving the nested structure. You have several options to flatten this data. If you want to flatten the data to columns, you can use normal views, materialized views, or query-time access.
Normal Views
: Use normal views to encapsulate flattening logic.
Materialized Views
: For smaller datasets, you can use refreshable materialized views with the
FINAL
modifier
to periodically flatten and deduplicate data. For larger datasets, we recommend using incremental materialized views without
FINAL
to flatten the data in real-time, and then deduplicate data at query time.
Query-time Access
: Instead of flattening, use dot notation to access nested fields directly in queries.
For detailed examples, see our
Working with JSON guide
.
Can I connect MongoDB databases that don't have a public IP or are in private networks? {#can-i-connect-mongodb-databases-that-dont-have-a-public-ip-or-are-in-private-networks}
We support AWS PrivateLink for connecting to MongoDB databases that don't have a public IP or are in private networks. Azure Private Link and GCP Private Service Connect are currently not supported.
What happens if I delete a database/table from my MongoDB database? {#what-happens-if-i-delete-a-database-table-from-my-mongodb-database}
When you delete a database/table from MongoDB, ClickPipes will continue running, but the dropped database/table will stop replicating changes. The corresponding tables in ClickHouse are preserved.
How does MongoDB CDC Connector handle transactions? {#how-does-mongodb-cdc-connector-handle-transactions} | {"source_file": "faq.md"} | [
0.02195761539041996,
0.06896297633647919,
0.010444042272865772,
0.03695934638381004,
-0.017550582066178322,
-0.019253592938184738,
0.03013281710445881,
0.0225811954587698,
-0.043879393488168716,
-0.06357799470424652,
-0.03577835485339165,
-0.05048007890582085,
-0.03513554856181145,
0.01388... |
c5a69af4-8c3c-429e-bd20-1ab0a8deceab | How does MongoDB CDC Connector handle transactions? {#how-does-mongodb-cdc-connector-handle-transactions}
Each document change within a transaction is processed individually to ClickHouse. Changes are applied in the order they appear in the oplog; and only committed changes are replicated to ClickHouse. If a MongoDB transaction is rolled back, those changes won't appear in the change stream.
For more examples, see our
Working with JSON guide
.
How do I handle the "resume of change stream was not possible, as the resume point may no longer be in the oplog." error? {#resume-point-may-no-longer-be-in-the-oplog-error}
This error typically occurs when the oplog is truncated and ClickPipe is unable to resume the change stream at the expected point. To resolve this issue,
resync the ClickPipe
. To prevent this issue from recurring, we recommend
increasing the oplog retention period
(or
here
if you are on a self-managed MongoDB).
How is replication managed? {#how-is-replication-managed}
We use MongoDB's native Change Streams API to track changes in the database. Change Streams API provides a resumable stream of database changes by leveraging MongoDB's oplog (operations log). ClickPipe uses MongoDB's resume tokens to track the position in the oplog and ensure every change is replicated to ClickHouse.
Which read preference should I use? {#which-read-preference-should-i-use}
Which read preference to use depends on your specific use case. If you want to minimize the load on your primary node, we recommend using
secondaryPreferred
read preference. If you want to optimize ingestion latency, we recommend using
primaryPreferred
read preference. For more details, see
MongoDB documentation
.
Does the MongoDB ClickPipe support Sharded Cluster? {#does-the-mongodb-clickpipe-support-sharded-cluster}
Yes, the MongoDB ClickPipe supports both Replica Set and Sharded Cluster. | {"source_file": "faq.md"} | [
-0.042170386761426926,
-0.011148987337946892,
0.014416049234569073,
0.05518389865756035,
0.032673511654138565,
-0.06413564831018448,
-0.07500254362821579,
-0.022943908348679543,
0.1027398630976677,
0.06283457577228546,
0.009337973780930042,
0.0943211242556572,
-0.047402359545230865,
0.0212... |
fcd0b0da-1939-4883-bc3e-a33542a19a98 | sidebar_label: 'Ingesting Data from MongoDB to ClickHouse'
description: 'Describes how to seamlessly connect your MongoDB to ClickHouse Cloud.'
slug: /integrations/clickpipes/mongodb
title: 'Ingesting data from MongoDB to ClickHouse (using CDC)'
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
import BetaBadge from '@theme/badges/BetaBadge';
import cp_service from '@site/static/images/integrations/data-ingestion/clickpipes/cp_service.png';
import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
import mongodb_tile from '@site/static/images/integrations/data-ingestion/clickpipes/mongodb/mongodb-tile.png'
import mongodb_connection_details from '@site/static/images/integrations/data-ingestion/clickpipes/mongodb/mongodb-connection-details.png'
import select_destination_db from '@site/static/images/integrations/data-ingestion/clickpipes/mongodb/select-destination-db.png'
import ch_permissions from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/ch-permissions.jpg'
import Image from '@theme/IdealImage';
Ingesting data from MongoDB to ClickHouse (using CDC)
:::info
Ingesting data from MongoDB to ClickHouse Cloud via ClickPipes is in public beta.
:::
:::note
In the ClickHouse Cloud console and documentation, "table" and "collection" are used interchangeably for MongoDB.
:::
You can use ClickPipes to ingest data from your MongoDB database into ClickHouse Cloud. The source MongoDB database can be hosted on-premises or in the cloud using services like MongoDB Atlas.
Prerequisites {#prerequisites}
To get started, you first need to ensure that your MongoDB database is correctly configured for replication. The configuration steps depend on how you're deploying MongoDB, so please follow the relevant guide below:
MongoDB Atlas
Generic MongoDB
Once your source MongoDB database is set up, you can continue creating your ClickPipe.
Create your ClickPipe {#create-your-clickpipe}
Make sure you are logged in to your ClickHouse Cloud account. If you don't have an account yet, you can sign up
here
.
In the ClickHouse Cloud console, navigate to your ClickHouse Cloud Service.
Select the
Data Sources
button on the left-side menu and click on "Set up a ClickPipe".
Select the
MongoDB CDC
tile.
Add your source MongoDB database connection {#add-your-source-mongodb-database-connection}
Fill in the connection details for your source MongoDB database which you configured in the prerequisites step.
:::info
Before you start adding your connection details make sure that you have whitelisted ClickPipes IP addresses in your firewall rules. On the following page you can find a
list of ClickPipes IP addresses
.
For more information refer to the source MongoDB setup guides linked at
the top of this page
.
:::
Once the connection details are filled in, click
Next
. | {"source_file": "index.md"} | [
0.00011955732043134049,
0.019645826891064644,
-0.04389380291104317,
0.03594658523797989,
0.0135352723300457,
-0.039978332817554474,
0.01864863932132721,
0.040351685136556625,
-0.0638895258307457,
0.0026567892637103796,
0.02394571155309677,
-0.05057338997721672,
-0.00861448422074318,
-0.000... |
4d5844fc-f602-4b6e-a60a-779a87d53f4f | Once the connection details are filled in, click
Next
.
Configure advanced settings {#advanced-settings}
You can configure the advanced settings if needed. A brief description of each setting is provided below:
Sync interval
: This is the interval at which ClickPipes will poll the source database for changes. It has cost implications for the destination ClickHouse service; for cost-sensitive users, we recommend keeping this at a higher value (over
3600
).
Pull batch size
: The number of rows to fetch in a single batch. This is a best effort setting and may not be respected in all cases.
Snapshot number of tables in parallel
: This is the number of tables that will be fetched in parallel during the initial snapshot. This is useful when you have a large number of tables and you want to control the number of tables fetched in parallel.
Configure the tables {#configure-the-tables}
Here you can select the destination database for your ClickPipe. You can either select an existing database or create a new one.
You can select the tables you want to replicate from the source MongoDB database. While selecting the tables, you can also choose to rename the tables in the destination ClickHouse database.
Review permissions and start the ClickPipe {#review-permissions-and-start-the-clickpipe}
Select the "Full access" role from the permissions dropdown and click "Complete Setup".
What's next? {#whats-next}
Once you've set up your ClickPipe to replicate data from MongoDB to ClickHouse Cloud, you can focus on how to query and model your data for optimal performance.
Caveats {#caveats}
Here are a few caveats to note when using this connector:
We require MongoDB version 5.1.0+.
We use MongoDB's native Change Streams API for CDC, which relies on the MongoDB oplog to capture real-time changes.
Documents from MongoDB are replicated into ClickHouse as JSON type by default. This allows for flexible schema management and makes it possible to use the rich set of JSON operators in ClickHouse for querying and analytics. You can learn more about querying JSON data
here
.
Self-serve PrivateLink configuration is not currently available. If you are on AWS and require PrivateLink, please reach out to db-integrations-support@clickhouse.com or create a support ticket β we will work with you to enable it. | {"source_file": "index.md"} | [
-0.019094906747341156,
-0.050528544932603836,
-0.09397456049919128,
0.003478369442746043,
-0.08572852611541748,
-0.06673944741487503,
-0.05451888591051102,
-0.03134476765990257,
-0.020927945151925087,
0.052233997732400894,
-0.014962837100028992,
-0.03506872430443764,
0.03911779448390007,
-... |
d5e2b77e-e971-485c-a197-d3085b1d343f | title: 'Controlling the Syncing of a MongoDB ClickPipe'
description: 'Doc for controlling the sync of a MongoDB ClickPipe'
slug: /integrations/clickpipes/mongodb/sync_control
sidebar_label: 'Controlling syncs'
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
import edit_sync_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/edit_sync_button.png'
import create_sync_settings from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/create_sync_settings.png'
import edit_sync_settings from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/sync_settings_edit.png'
import cdc_syncs from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/cdc_syncs.png'
import Image from '@theme/IdealImage';
This document describes how to control the sync of a MongoDB ClickPipe when the ClickPipe is in
CDC (Running) mode
.
Overview {#overview}
Database ClickPipes have an architecture that consists of two parallel processes - pulling from the source database and pushing to the target database. The pulling process is controlled by a sync configuration that defines how often the data should be pulled and how much data should be pulled at a time. By "at a time", we mean one batch - since the ClickPipe pulls and pushes data in batches.
There are two main ways to control the sync of a MongoDB ClickPipe. The ClickPipe will start pushing when one of the below settings kicks in.
Sync interval {#interval}
The sync interval of the pipe is the amount of time (in seconds) for which the ClickPipe will pull records from the source database. The time to push what we have to ClickHouse is not included in this interval.
The default is
1 minute
.
Sync interval can be set to any positive integer value, but it is recommended to keep it above 10 seconds.
Pull batch size {#batch-size}
The pull batch size is the number of records that the ClickPipe will pull from the source database in one batch. Records mean inserts, updates and deletes done on the collections that are part of the pipe.
The default is
100,000
records.
A safe maximum is 10 million.
Configuring sync settings {#configuring}
You can set the sync interval and pull batch size when you create a ClickPipe or edit an existing one.
When creating a ClickPipe it will be seen in the second step of the creation wizard, as shown below:
When editing an existing ClickPipe, you can head over to the
Settings
tab of the pipe, pause the pipe and then click on
Configure
here:
This will open a flyout with the sync settings, where you can change the sync interval and pull batch size:
Monitoring sync control behaviour {#monitoring} | {"source_file": "controlling_sync.md"} | [
-0.026905620470643044,
-0.019101472571492195,
-0.07087460905313492,
0.06487582623958588,
-0.04375484213232994,
-0.009759563952684402,
-0.024412354454398155,
0.011258474551141262,
-0.051927436143159866,
0.048053111881017685,
-0.026709292083978653,
-0.0077065336517989635,
-0.031455740332603455... |
b5208d66-b9a9-4960-806b-fd6f0f21c942 | This will open a flyout with the sync settings, where you can change the sync interval and pull batch size:
Monitoring sync control behaviour {#monitoring}
You can see how long each batch takes in the
CDC Syncs
table in the
Metrics
tab of the ClickPipe. Note that the duration here includes push time and also if there are no rows incoming, the ClickPipe waits and the wait time is also included in the duration. | {"source_file": "controlling_sync.md"} | [
0.043662428855895996,
-0.043097011744976044,
-0.0564122200012207,
0.009613648056983948,
-0.012086136266589165,
-0.012476942501962185,
-0.04981831833720207,
-0.02765718102455139,
0.003400817047804594,
0.003038879716768861,
0.03913873806595802,
-0.004420088604092598,
-0.10040156543254852,
-0... |
3f2f8199-c1a6-4b34-ba7f-d86b189ca8ed | title: 'Pausing and Resuming a Postgres ClickPipe'
description: 'Pausing and Resuming a Postgres ClickPipe'
sidebar_label: 'Pause table'
slug: /integrations/clickpipes/postgres/pause_and_resume
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
import Image from '@theme/IdealImage';
import pause_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/pause_button.png'
import pause_dialog from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/pause_dialog.png'
import pause_status from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/pause_status.png'
import resume_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/resume_button.png'
import resume_dialog from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/resume_dialog.png'
There are scenarios where it would be useful to pause a Postgres ClickPipe. For example, you may want to run some analytics on existing data in a static state. Or, you might be performing upgrades on Postgres. Here is how you can pause and resume a Postgres ClickPipe.
Steps to pause a Postgres ClickPipe {#pause-clickpipe-steps}
In the Data Sources tab, click on the Postgres ClickPipe you wish to pause.
Head over to the
Settings
tab.
Click on the
Pause
button.
A dialog box should appear for confirmation. Click on Pause again.
Head over to the
Metrics
tab.
In around 5 seconds (and also on page refresh), the status of the pipe should be
Paused
.
:::warning
Pausing a Postgres ClickPipe will not pause the growth of replication slots.
:::
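Since the slot keeps retaining WAL while the pipe is paused, you may want to monitor its growth on the Postgres side; a sketch using standard catalog views and functions:
```sql
SELECT slot_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```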
Steps to resume a Postgres ClickPipe {#resume-clickpipe-steps}
In the Data Sources tab, click on the Postgres ClickPipe you wish to resume. The status of the mirror should be
Paused
initially.
Head over to the
Settings
tab.
Click on the
Resume
button.
A dialog box should appear for confirmation. Click on Resume again.
Head over to the
Metrics
tab.
In around 5 seconds (and also on page refresh), the status of the pipe should be
Running
. | {"source_file": "pause_and_resume.md"} | [
-0.058163341134786606,
-0.05943167209625244,
-0.060722716152668,
0.029973730444908142,
-0.019372228533029556,
0.033315811306238174,
0.021492086350917816,
-0.020490482449531555,
-0.07818444818258286,
-0.01462484896183014,
-0.016852030530571938,
0.028748665004968643,
-0.0704699456691742,
-0.... |
28d859ff-95f6-4f24-a9f5-be6f6129c612 | sidebar_label: 'Lifecycle of a Postgres ClickPipe'
description: 'Various pipe statuses and their meanings'
slug: /integrations/clickpipes/postgres/lifecycle
title: 'Lifecycle of a Postgres ClickPipe'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
Lifecycle of a Postgres ClickPipe {#lifecycle}
This is a document on the various phases of a Postgres ClickPipe, the different statuses it can have, and what they mean.
Provisioning {#provisioning}
When you click on the Create ClickPipe button, the ClickPipe is created in a
Provisioning
state. The provisioning process is where we spin up the underlying infrastructure to run ClickPipes for the service, along with registering some initial metadata for the pipe. Since compute for ClickPipes within a service is shared, your second ClickPipe will be created much faster than the first one -- as the infrastructure is already in place.
Setup {#setup}
After a pipe is provisioned, it enters the
Setup
state. This state is where we create the destination ClickHouse tables. We also obtain and record the table definitions of your source tables here.
Snapshot {#snapshot}
Once setup is complete, we enter the
Snapshot
state (unless it's a CDC-only pipe, which would transition to
Running
).
Snapshot
,
Initial Snapshot
and
Initial Load
(more common) are interchangeable terms. In this state, we take a snapshot of the source database tables and load them into ClickHouse. This does not use logical replication, but the replication slot is created at this step, therefore your
max_slot_wal_keep_size
and storage parameters should account for slot growth during initial load. For more information on initial load, see the
parallel initial load documentation
. The pipe will also enter the
Snapshot
state when a resync is triggered or when new tables are added to an existing pipe.
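On self-managed Postgres, one way to bound slot-driven WAL growth is the max_slot_wal_keep_size setting (PostgreSQL 13+); a sketch only, size the value to your own initial load:
```sql
-- Too small a value can invalidate the slot mid-load, so err on the large side.
ALTER SYSTEM SET max_slot_wal_keep_size = '100GB';
SELECT pg_reload_conf();
```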
Running {#running}
Once the initial load is complete, the pipe enters the
Running
state (unless it's a snapshot-only pipe, which would transition to
Completed
). This is where the pipe begins
Change-Data Capture
. In this state, we start logical replication from the source database to ClickHouse. For information on controlling CDC, see
the doc on controlling CDC
.
Paused {#paused}
Once the pipe is in the
Running
state, you can pause it. This will stop the CDC process and the pipe will enter the
Paused
state. In this state, no new data is pulled from the source database, but the existing data in ClickHouse remains intact. You can resume the pipe from this state.
Pausing {#pausing} | {"source_file": "lifecycle.md"} | [
-0.056828271597623825,
-0.04325595125555992,
-0.026335425674915314,
-0.0052124690264463425,
-0.0394839309155941,
-0.06259335577487946,
0.011613426730036736,
-0.02124142460525036,
-0.0014897782821208239,
-0.006548273377120495,
0.015224345028400421,
0.022124355658888817,
-0.04208317771553993,
... |
9243b6dd-20d8-4401-8880-9ba2b0f62af5 | Pausing {#pausing}
:::note
This state is coming soon. If you're using our
OpenAPI
, consider adding support for it now to ensure your integration continues working when it's released.
:::
When you click on the Pause button, the pipe enters the
Pausing
state. This is a transient state where we are in the process of stopping the CDC process. Once the CDC process is fully stopped, the pipe will enter the
Paused
state.
Modifying {#modifying}
:::note
This state is coming soon. If you're using our
OpenAPI
, consider adding support for it now to ensure your integration continues working when it's released.
:::
Currently, this indicates the pipe is in the process of removing tables.
Resync {#resync}
:::note
This state is coming soon. If you're using our
OpenAPI
, consider adding support for it now to ensure your integration continues working when it's released.
:::
This state indicates the pipe is in the phase of resync where it is performing an atomic swap of the _resync tables with the original tables. More information on resync can be found in the
resync documentation
.
Completed {#completed}
This state applies to snapshot-only pipes and indicates that the snapshot has been completed and there's no more work to do.
Failed {#failed}
If there is an irrecoverable error in the pipe, it will enter the
Failed
state. You can reach out to support or
resync
your pipe to recover from this state. | {"source_file": "lifecycle.md"} | [
-0.001806991291232407,
-0.010660860687494278,
-0.0171977411955595,
0.007635333575308323,
0.054602596908807755,
-0.09076477587223053,
-0.06132722273468971,
0.017927302047610283,
0.05875764414668083,
0.05014177784323692,
0.0396098867058754,
-0.03216424584388733,
-0.04395679384469986,
-0.0232... |
ed61b12d-f644-4cb4-90d5-03e3278ccb82 | title: 'Handling TOAST Columns'
description: 'Learn how to handle TOAST columns when replicating data from PostgreSQL to ClickHouse.'
slug: /integrations/clickpipes/postgres/toast
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
When replicating data from PostgreSQL to ClickHouse, it's important to understand the limitations and special considerations for TOAST (The Oversized-Attribute Storage Technique) columns. This guide will help you identify and properly handle TOAST columns in your replication process.
What are TOAST columns in PostgreSQL? {#what-are-toast-columns-in-postgresql}
TOAST (The Oversized-Attribute Storage Technique) is PostgreSQL's mechanism for handling large field values. When a row exceeds the maximum row size (typically 2KB, but this can vary depending on the PostgreSQL version and exact settings), PostgreSQL automatically moves large field values into a separate TOAST table, storing only a pointer in the main table.
It's important to note that during Change Data Capture (CDC), unchanged TOAST columns are not included in the replication stream. This can lead to incomplete data replication if not handled properly.
During the initial load (snapshot), all column values, including TOAST columns, will be replicated correctly regardless of their size. The limitations described in this guide primarily affect the ongoing CDC process after the initial load.
You can read more about TOAST and its implementation in PostgreSQL here: https://www.postgresql.org/docs/current/storage-toast.html
Identifying TOAST columns in a table {#identifying-toast-columns-in-a-table}
To identify if a table has TOAST columns, you can use the following SQL query:
sql
SELECT a.attname, pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_attribute a
JOIN pg_class c ON a.attrelid = c.oid
WHERE c.relname = 'your_table_name'
AND a.attlen = -1
AND a.attstorage != 'p'
AND a.attnum > 0;
This query will return the names and data types of columns that could potentially be TOASTed. However, it's important to note that this query only identifies columns that are eligible for TOAST storage based on their data type and storage attributes. To determine if these columns actually contain TOASTed data, you'll need to consider whether the values in these columns exceed the TOAST threshold. The actual TOASTing of data depends on the specific content stored in these columns.
Ensuring proper handling of TOAST columns {#ensuring-proper-handling-of-toast-columns}
To ensure that TOAST columns are handled correctly during replication, you should set the REPLICA IDENTITY of the table to FULL. This tells PostgreSQL to include the full old row in the WAL for UPDATE and DELETE operations, ensuring that all column values (including TOAST columns) are available for replication.
You can set the REPLICA IDENTITY to FULL using the following SQL command:
```sql
ALTER TABLE your_table_name REPLICA IDENTITY FULL;
```
Refer to this blog post for performance considerations when setting REPLICA IDENTITY FULL.
Replication behavior when REPLICA IDENTITY FULL is not set {#replication-behavior-when-replica-identity-full-is-not-set}
If REPLICA IDENTITY FULL is not set for a table with TOAST columns, you may encounter the following issues when replicating to ClickHouse:
For INSERT operations, all columns (including TOAST columns) will be replicated correctly.
For UPDATE operations:
If a TOAST column is not modified, its value will appear as NULL or empty in ClickHouse.
If a TOAST column is modified, it will be replicated correctly.
For DELETE operations, TOAST column values will appear as NULL or empty in ClickHouse.
These behaviors can lead to data inconsistencies between your PostgreSQL source and ClickHouse destination. Therefore, it's crucial to set REPLICA IDENTITY FULL for tables with TOAST columns to ensure accurate and complete data replication.
Conclusion {#conclusion}
Properly handling TOAST columns is essential for maintaining data integrity when replicating from PostgreSQL to ClickHouse. By identifying TOAST columns and setting the appropriate REPLICA IDENTITY, you can ensure that your data is replicated accurately and completely.
---
title: 'Resyncing Specific Tables'
description: 'Resyncing specific tables in a Postgres ClickPipe'
slug: /integrations/clickpipes/postgres/table_resync
sidebar_label: 'Resync table'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
---
Resyncing specific tables {#resync-tables}
There are scenarios where it would be useful to have specific tables of a pipe re-synced. Some sample use cases are major schema changes on Postgres, or data re-modelling on the ClickHouse side.
While resyncing individual tables with a button click is a work-in-progress, this guide will share steps on how you can achieve this today in the Postgres ClickPipe.
1. Remove the table from the pipe {#removing-table}
You can do this by following the table removal guide.
2. Truncate or drop the table on ClickHouse {#truncate-drop-table}
This step is to avoid data duplication when we add this table again in the next step. You can do this by heading over to the SQL Console tab in ClickHouse Cloud and running a query.
Note that we have validation to block table addition if the table already exists in ClickHouse and is not empty.
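For example, either of the following would work; my_table is a placeholder, and if you drop the table, the ClickPipe should recreate it when the table is added back:

```sql
-- Option 1: keep the table definition but remove all rows
TRUNCATE TABLE my_table;
-- Option 2: remove the table entirely
DROP TABLE my_table;
```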
3. Add the table to the ClickPipe again {#add-table-again}
You can do this by following the table addition guide.
---
title: 'Adding specific tables to a ClickPipe'
description: 'Describes the steps need to add specific tables to a ClickPipe.'
sidebar_label: 'Add table'
slug: /integrations/clickpipes/postgres/add_table
show_title: false
keywords: ['clickpipes postgres', 'add table', 'table configuration', 'initial load', 'snapshot']
doc_type: 'guide'
---
import Image from '@theme/IdealImage';
import add_table from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/add_table.png'
Adding specific tables to a ClickPipe
There are scenarios where it would be useful to add specific tables to a pipe. This becomes a common necessity as your transactional or analytical workload scales.
Steps to add specific tables to a ClickPipe {#add-tables-steps}
This can be done by the following steps:
1. Pause the pipe.
2. Click on Edit Table settings.
3. Locate your table - this can be done by searching it in the search bar.
4. Select the table by clicking on the checkbox.
5. Click update.
Upon successful update, the pipe will have statuses Setup, Snapshot, and Running in that order. The table's initial load can be tracked in the Tables tab.
:::info
CDC for existing tables resumes automatically after the new table's snapshot completes.
:::
---
title: 'Resyncing a Database ClickPipe'
description: 'Doc for resyncing a database ClickPipe'
slug: /integrations/clickpipes/postgres/resync
sidebar_label: 'Resync ClickPipe'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
---
import resync_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/resync_button.png'
import Image from '@theme/IdealImage';
What does Resync do? {#what-postgres-resync-do}
Resync involves the following operations in order:
1. The existing ClickPipe is dropped, and a new "resync" ClickPipe is kicked off. Thus, changes to source table structures will be picked up when you resync.
2. The resync ClickPipe creates (or replaces) a new set of destination tables which have the same names as the original tables except with a _resync suffix.
3. Initial load is performed on the _resync tables.
4. The _resync tables are then swapped with the original tables. Soft-deleted rows are transferred from the original tables to the _resync tables before the swap.
All the settings of the original ClickPipe are retained in the resync ClickPipe. The statistics of the original ClickPipe are cleared in the UI.
Use cases for resyncing a ClickPipe {#use-cases-postgres-resync}
Here are a few scenarios:
You may need to perform major schema changes on the source tables which would break the existing ClickPipe and you would need to restart. You can just click Resync after performing the changes.
Specifically for ClickHouse, you may need to change the ORDER BY keys on the target tables. You can resync to re-populate data into the new table with the right sorting key.
The replication slot of the ClickPipe is invalidated: Resync creates a new ClickPipe and a new slot on the source database.
:::note
You can resync multiple times, however please account for the load on the source database when you resync,
since initial load with parallel threads is involved each time.
:::
Resync ClickPipe Guide {#guide-postgres-resync}
In the Data Sources tab, click on the Postgres ClickPipe you wish to resync.
Head over to the Settings tab.
Click on the Resync button.
A dialog box should appear for confirmation. Click on Resync again.
Head over to the Metrics tab.
In around 5 seconds (and also on page refresh), the status of the pipe should be Setup or Snapshot.
The initial load of the resync can be monitored in the Tables tab, in the Initial Load Stats section.
Once the initial load is complete, the pipe will atomically swap the _resync tables with the original tables. During the swap, the status will be Resync.
Once the swap is complete, the pipe will enter the Running state and perform CDC if enabled.
---
title: 'Removing specific tables from a ClickPipe'
description: 'Removing specific tables from a ClickPipe'
sidebar_label: 'Remove Table'
slug: /integrations/clickpipes/postgres/removing_tables
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
---
import Image from '@theme/IdealImage';
import remove_table from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/remove_table.png'
In some cases, it makes sense to exclude specific tables from a Postgres ClickPipe - for example, if a table isn't needed for your analytics workload, skipping it can reduce storage and replication costs in ClickHouse.
Steps to remove specific tables {#remove-tables-steps}
The first step is to remove the table from the pipe. This can be done by the following steps:
1. Pause the pipe.
2. Click on Edit Table Settings.
3. Locate your table - this can be done by searching it in the search bar.
4. Deselect the table by clicking on the selected checkbox.
5. Click update.
Upon successful update, in the Metrics tab the status will be Running. This table will no longer be replicated by this ClickPipe.
---
sidebar_label: 'Ordering keys'
description: 'How to define custom ordering keys.'
slug: /integrations/clickpipes/postgres/ordering_keys
title: 'Ordering Keys'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
---
Ordering Keys (a.k.a. sorting keys) define how data is sorted on disk and indexed for a table in ClickHouse. When replicating from Postgres, ClickPipes by default uses the Postgres primary key of a table as the ordering key for the corresponding table in ClickHouse. In most cases, the Postgres primary key serves as a sufficient ordering key, as ClickHouse is already optimized for fast scans, and custom ordering keys are often not required.
As described in the migration guide, for larger use cases you should include additional columns beyond the Postgres primary key in the ClickHouse ordering key to optimize queries.
By default with CDC, choosing an ordering key different from the Postgres primary key can cause data deduplication issues in ClickHouse. This happens because the ordering key in ClickHouse serves a dual role: it controls data indexing and sorting while acting as the deduplication key. The easiest way to address this issue is by defining refreshable materialized views.
Use refreshable materialized views {#use-refreshable-materialized-views}
A simple way to define custom ordering keys (ORDER BY) is using refreshable materialized views (MVs). These allow you to periodically (e.g., every 5 or 10 minutes) copy the entire table with the desired ordering key.
Below is an example of a Refreshable MV with a custom ORDER BY and required deduplication:
```sql
CREATE MATERIALIZED VIEW posts_final
REFRESH EVERY 10 SECOND ENGINE = ReplacingMergeTree(_peerdb_version)
ORDER BY (owneruserid, id) -- different ordering key but with suffixed postgres pkey
AS
SELECT * FROM posts FINAL
WHERE _peerdb_is_deleted = 0; -- this does the deduplication
```
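Queries can then read from the copy directly without the FINAL modifier, since each refresh writes an already-deduplicated result. A sketch using the same hypothetical posts_final table:

```sql
-- No FINAL needed; the refresh already deduplicated the data
SELECT owneruserid, count() AS posts
FROM posts_final
GROUP BY owneruserid
ORDER BY posts DESC
LIMIT 10;
```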
Custom ordering keys without refreshable materialized views {#custom-ordering-keys-without-refreshable-materialized-views}
If refreshable materialized views don't work due to the scale of data, here are a few recommendations you can follow to define custom ordering keys on larger tables and overcome deduplication-related issues.
Choose ordering key columns that don't change for a given row {#choose-ordering-key-columns-that-dont-change-for-a-given-row}
When including additional columns in the ordering key for ClickHouse (besides the primary key from Postgres), we recommend selecting columns that don't change for each row. This helps prevent data consistency and deduplication issues with ReplacingMergeTree.
For example, in a multi-tenant SaaS application, using (tenant_id, id) as the ordering key is a good choice. These columns uniquely identify each row, and tenant_id remains constant for an id even if other columns change. Since deduplication by id aligns with deduplication by (tenant_id, id), it helps avoid data deduplication issues that could arise if tenant_id were to change.
Set Replica Identity on Postgres tables to custom ordering key {#set-replica-identity-on-postgres-tables-to-custom-ordering-key}
For Postgres CDC to function as expected, it is important to modify the REPLICA IDENTITY on tables to include the ordering key columns. This is essential for handling DELETEs accurately.
If the REPLICA IDENTITY does not include the ordering key columns, Postgres CDC will not capture the values of columns other than the primary key - this is a limitation of Postgres logical decoding. All ordering key columns besides the primary key in Postgres will be null. This affects deduplication, meaning the previous version of the row may not be deduplicated with the latest deleted version (where _peerdb_is_deleted is set to 1).
In the above example with owneruserid and id, if the primary key does not already include owneruserid, you need to have a UNIQUE INDEX on (owneruserid, id) and set it as the REPLICA IDENTITY for the table. This ensures that Postgres CDC captures the necessary column values for accurate replication and deduplication.
Below is an example of how to do this on the posts table. Make sure to apply this to all tables with modified ordering keys.
```sql
-- Create a UNIQUE INDEX on (owneruserid, id)
CREATE UNIQUE INDEX posts_unique_owneruserid_idx ON posts(owneruserid, id);
-- Set REPLICA IDENTITY to use this index
ALTER TABLE posts REPLICA IDENTITY USING INDEX posts_unique_owneruserid_idx;
```
---
sidebar_label: 'FAQ'
description: 'Frequently asked questions about ClickPipes for Postgres.'
slug: /integrations/clickpipes/postgres/faq
sidebar_position: 2
title: 'ClickPipes for Postgres FAQ'
keywords: ['postgres faq', 'clickpipes', 'toast columns', 'replication slot', 'publications']
doc_type: 'reference'
---
import failover_slot from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/failover_slot.png'
import Image from '@theme/IdealImage';
ClickPipes for Postgres FAQ
How does idling affect my Postgres CDC ClickPipe? {#how-does-idling-affect-my-postgres-cdc-clickpipe}
If your ClickHouse Cloud service is idling, your Postgres CDC ClickPipe will continue to sync data; your service will wake up at the next sync interval to handle the incoming data. Once the sync is finished and the idle period is reached, your service will go back to idling.
As an example, if your sync interval is set to 30 mins and your service idle time is set to 10 mins, your service will wake up every 30 mins, be active for 10 mins, then go back to idling.
How are TOAST columns handled in ClickPipes for Postgres? {#how-are-toast-columns-handled-in-clickpipes-for-postgres}
Please refer to the Handling TOAST Columns page for more information.
How are generated columns handled in ClickPipes for Postgres? {#how-are-generated-columns-handled-in-clickpipes-for-postgres}
Please refer to the Postgres Generated Columns: Gotchas and Best Practices page for more information.
Do tables need to have primary keys to be part of Postgres CDC? {#do-tables-need-to-have-primary-keys-to-be-part-of-postgres-cdc}
For a table to be replicated using ClickPipes for Postgres, it must have either a primary key or a REPLICA IDENTITY defined.
Primary Key: The most straightforward approach is to define a primary key on the table. This provides a unique identifier for each row, which is crucial for tracking updates and deletions. You can have REPLICA IDENTITY set to DEFAULT (the default behavior) in this case.
Replica Identity: If a table does not have a primary key, you can set a replica identity. The replica identity can be set to FULL, which means that the entire row will be used to identify changes. Alternatively, you can set it to use a unique index if one exists on the table, and then set REPLICA IDENTITY to USING INDEX index_name.
To set the replica identity to FULL, you can use the following SQL command:
```sql
ALTER TABLE your_table_name REPLICA IDENTITY FULL;
```
REPLICA IDENTITY FULL also enables replication of unchanged TOAST columns. More on that here.
Note that using REPLICA IDENTITY FULL can have performance implications and cause faster WAL growth, especially for tables without a primary key and with frequent updates or deletes, as it requires more data to be logged for each change. If you have any doubts or need assistance with setting up primary keys or replica identities for your tables, please reach out to our support team for guidance.
It's important to note that if neither a primary key nor a replica identity is defined, ClickPipes will not be able to replicate changes for that table, and you may encounter errors during the replication process. Therefore, it's recommended to review your table schemas and ensure that they meet these requirements before setting up your ClickPipe.
Do you support partitioned tables as part of Postgres CDC? {#do-you-support-partitioned-tables-as-part-of-postgres-cdc}
Yes, partitioned tables are supported out of the box, as long as they have a PRIMARY KEY or REPLICA IDENTITY defined. The PRIMARY KEY and REPLICA IDENTITY must be present on both the parent table and its partitions. You can read more about it here.
Can I connect Postgres databases that don't have a public IP or are in private networks? {#can-i-connect-postgres-databases-that-dont-have-a-public-ip-or-are-in-private-networks}
Yes! ClickPipes for Postgres offers two ways to connect to databases in private networks:
- SSH Tunneling: works well for most use cases and works across all regions. See the setup instructions here.
- AWS PrivateLink: available in three AWS regions: us-east-1, us-east-2, and eu-central-1. For detailed setup instructions, see our PrivateLink documentation. For regions where PrivateLink is not available, please use SSH tunneling.
How do you handle UPDATEs and DELETEs? {#how-do-you-handle-updates-and-deletes}
ClickPipes for Postgres captures both INSERTs and UPDATEs from Postgres as new rows with different versions (using the _peerdb_version column) in ClickHouse. The ReplacingMergeTree table engine periodically performs deduplication in the background based on the ordering key (ORDER BY columns), retaining only the row with the latest _peerdb_version.
DELETEs from Postgres are propagated as new rows marked as deleted (using the _peerdb_is_deleted column). Since the deduplication process is asynchronous, you might temporarily see duplicates. To address this, you need to handle deduplication at the query layer.
Also note that by default, Postgres does not send column values of columns that are not part of the primary key or replica identity during DELETE operations. If you want to capture the full row data during DELETEs, you can set the REPLICA IDENTITY to FULL.
For more details, refer to:
ReplacingMergeTree table engine best practices
Postgres-to-ClickHouse CDC internals blog
Can I update primary key columns in PostgreSQL? {#can-i-update-primary-key-columns-in-postgresql}
:::warning
Primary key updates in PostgreSQL cannot be properly replayed in ClickHouse by default.
This limitation exists because ReplacingMergeTree deduplication works based on the ORDER BY columns (which typically correspond to the primary key). When a primary key is updated in PostgreSQL, it appears as a new row with a different key in ClickHouse, rather than an update to the existing row. This can lead to both the old and new primary key values existing in your ClickHouse table.
:::
Note that updating primary key columns is not a common practice in PostgreSQL database design, as primary keys are intended to be immutable identifiers. Most applications avoid primary key updates by design, making this limitation rarely encountered in typical use cases.
There is an experimental setting available that can enable primary key update handling, but it comes with significant performance implications and is not recommended for production use without careful consideration.
If your use case requires updating primary key columns in PostgreSQL and having those changes properly reflected in ClickHouse, please reach out to our support team at db-integrations-support@clickhouse.com to discuss your specific requirements and potential solutions.
Do you support schema changes? {#do-you-support-schema-changes}
Please refer to the ClickPipes for Postgres: Schema Changes Propagation Support page for more information.
What are the costs for ClickPipes for Postgres CDC? {#what-are-the-costs-for-clickpipes-for-postgres-cdc}
For detailed pricing information, please refer to the ClickPipes for Postgres CDC pricing section on our main billing overview page.
My replication slot size is growing or not decreasing; what might be the issue? {#my-replication-slot-size-is-growing-or-not-decreasing-what-might-be-the-issue}
If you're noticing that the size of your Postgres replication slot keeps increasing or isn't coming back down, it usually means that WAL (Write-Ahead Log) records aren't being consumed (or "replayed") quickly enough by your CDC pipeline or replication process. Below are the most common causes and how you can address them.
Sudden Spikes in Database Activity
Large batch updates, bulk inserts, or significant schema changes can quickly generate a lot of WAL data.
The replication slot will hold these WAL records until they are consumed, causing a temporary spike in size.
Long-Running Transactions
An open transaction forces Postgres to keep all WAL segments generated since the transaction began, which can dramatically increase slot size.
Set statement_timeout and idle_in_transaction_session_timeout to reasonable values to prevent transactions from staying open indefinitely:
```sql
SELECT
    pid,
    state,
    age(now(), xact_start) AS transaction_duration,
    query AS current_query
FROM
    pg_stat_activity
WHERE
    xact_start IS NOT NULL
ORDER BY
    age(now(), xact_start) DESC;
```
Use this query to identify unusually long-running transactions.
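As a sketch of applying the timeout settings mentioned above (the values are illustrative and should be tuned to your workload; ALTER SYSTEM typically requires superuser privileges, and overly aggressive global timeouts can break legitimate queries):

```sql
-- Illustrative values only
ALTER SYSTEM SET statement_timeout = '1h';
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
SELECT pg_reload_conf(); -- apply without a restart
```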
Maintenance or Utility Operations (e.g., pg_repack)
Tools like pg_repack can rewrite entire tables, generating large amounts of WAL data in a short time.
Schedule these operations during slower traffic periods or monitor your WAL usage closely while they run.
VACUUM and VACUUM ANALYZE
Although necessary for database health, these operations can create extra WAL traffic, especially if they scan large tables.
Consider using autovacuum tuning parameters or scheduling manual VACUUM operations during off-peak hours.
Replication Consumer Not Actively Reading the Slot
If your CDC pipeline (e.g., ClickPipes) or another replication consumer stops, pauses, or crashes, WAL data will accumulate in the slot.
Ensure your pipeline is continuously running and check logs for connectivity or authentication errors.
For an excellent deep dive into this topic, check out our blog post: Overcoming Pitfalls of Postgres Logical Decoding.
How are Postgres data types mapped to ClickHouse? {#how-are-postgres-data-types-mapped-to-clickhouse}
ClickPipes for Postgres aims to map Postgres data types as natively as possible on the ClickHouse side. This document provides a comprehensive list of each data type and its mapping: Data Type Matrix.
Can I define my own data type mapping while replicating data from Postgres to ClickHouse? {#can-i-define-my-own-data-type-mapping-while-replicating-data-from-postgres-to-clickhouse}
Currently, we don't support defining custom data type mappings as part of the pipe. However, note that the default data type mapping used by ClickPipes is highly native. Most column types in Postgres are replicated as closely as possible to their native equivalents on ClickHouse. Integer array types in Postgres, for instance, are replicated as integer array types on ClickHouse.
How are JSON and JSONB columns replicated from Postgres? {#how-are-json-and-jsonb-columns-replicated-from-postgres}
JSON and JSONB columns are replicated as String type in ClickHouse. Since ClickHouse supports a native JSON type, you can create a materialized view over the ClickPipes tables to perform the translation if needed. Alternatively, you can use JSON functions directly on the String column(s). We are actively working on a feature that replicates JSON and JSONB columns directly to the JSON type in ClickHouse. This feature is expected to be available in a few months.
What happens to inserts when a mirror is paused? {#what-happens-to-inserts-when-a-mirror-is-paused}
When you pause the mirror, the messages are queued up in the replication slot on the source Postgres, ensuring they are buffered and not lost. However, pausing and resuming the mirror will re-establish the connection, which could take some time depending on the source.
During this process, both the sync (pulling data from Postgres and streaming it into the ClickHouse raw table) and normalize (from raw table to target table) operations are aborted. However, they retain the state required to resume durably.
For sync, if it is canceled mid-way, the confirmed_flush_lsn in Postgres is not advanced, so the next sync will start from the same position as the aborted one, ensuring data consistency.
For normalize, the ReplacingMergeTree insert order handles deduplication.
In summary, while sync and normalize processes are terminated during a pause, it is safe to do so as they can resume without data loss or inconsistency.
Can ClickPipe creation be automated or done via API or CLI? {#can-clickpipe-creation-be-automated-or-done-via-api-or-cli}
A Postgres ClickPipe can also be created and managed via OpenAPI endpoints. This feature is in beta, and the API reference can be found here. We are actively working on Terraform support to create Postgres ClickPipes as well.
How do I speed up my initial load? {#how-do-i-speed-up-my-initial-load}
You cannot speed up an already running initial load. However, you can optimize future initial loads by adjusting certain settings. By default, the settings are configured with 4 parallel threads and a snapshot number of rows per partition set to 100,000. These are advanced settings and are generally sufficient for most use cases.
For Postgres versions 13 or lower, CTID range scans are slower, and these settings become more critical. In such cases, consider the following process to improve performance:
Drop the existing pipe: This is necessary to apply new settings.
Delete destination tables on ClickHouse: Ensure that the tables created by the previous pipe are removed.
Create a new pipe with optimized settings: Typically, increase the snapshot number of rows per partition to between 1 million and 10 million, depending on your specific requirements and the load your Postgres instance can handle.
These adjustments should significantly enhance the performance of the initial load, especially for older Postgres versions. If you are using Postgres 14 or later, these settings are less impactful due to improved support for CTID range scans.
How should I scope my publications when setting up replication? {#how-should-i-scope-my-publications-when-setting-up-replication}
You can let ClickPipes manage your publications (requires additional permissions) or create them yourself. With ClickPipes-managed publications, we automatically handle table additions and removals as you edit the pipe. If self-managing, carefully scope your publications to only include tables you need to replicate - including unnecessary tables will slow down Postgres WAL decoding.
If you include any table in your publication, make sure it has either a primary key or REPLICA IDENTITY FULL. If you have tables without a primary key, creating a publication for all tables will cause DELETE and UPDATE operations to fail on those tables.
To identify tables without primary keys in your database, you can use this query:
```sql
SELECT table_schema, table_name
FROM information_schema.tables
WHERE
  (table_catalog, table_schema, table_name) NOT IN (
    SELECT table_catalog, table_schema, table_name
    FROM information_schema.table_constraints
    WHERE constraint_type = 'PRIMARY KEY') AND
  table_schema NOT IN ('information_schema', 'pg_catalog', 'pgq', 'londiste');
```
You have two options when dealing with tables without primary keys:
Exclude tables without primary keys from ClickPipes: create the publication with only the tables that have a primary key:
```sql
CREATE PUBLICATION clickpipes_publication FOR TABLE table_with_primary_key1, table_with_primary_key2, ...;
```
Include tables without primary keys in ClickPipes: if you want to include tables without a primary key, you need to alter their replica identity to FULL. This ensures that UPDATE and DELETE operations work correctly:
```sql
ALTER TABLE table_without_primary_key1 REPLICA IDENTITY FULL;
ALTER TABLE table_without_primary_key2 REPLICA IDENTITY FULL;
CREATE PUBLICATION clickpipes_publication FOR TABLE <...>, <...>;
```
:::tip
If you're creating a publication manually instead of letting ClickPipes manage it, we don't recommend creating a publication FOR ALL TABLES; this leads to more traffic from Postgres to ClickPipes (sending changes for tables not in the pipe) and reduces overall efficiency.
For manually created publications, please add any tables you want to the publication before adding them to the pipe.
:::
:::warning
If you're replicating from a Postgres read replica/hot standby, you will need to create your own publication on the primary instance, which will automatically propagate to the standby. The ClickPipe will not be able to manage the publication in this case as you're unable to create publications on a standby.
:::
Recommended max_slot_wal_keep_size settings {#recommended-max_slot_wal_keep_size-settings}
At Minimum: Set max_slot_wal_keep_size to retain at least two days' worth of WAL data.
For Large Databases (High Transaction Volume): Retain at least 2-3 times the peak WAL generation per day.
For Storage-Constrained Environments: Tune this conservatively to avoid disk exhaustion while ensuring replication stability.
How to calculate the right value {#how-to-calculate-the-right-value}
To determine the right setting, measure the WAL generation rate:
For PostgreSQL 10+ {#for-postgresql-10}
```sql
SELECT pg_wal_lsn_diff(pg_current_wal_insert_lsn(), '0/0') / 1024 / 1024 AS wal_generated_mb;
```
For PostgreSQL 9.6 and below: {#for-postgresql-96-and-below}
```sql
SELECT pg_xlog_location_diff(pg_current_xlog_insert_location(), '0/0') / 1024 / 1024 AS wal_generated_mb;
```
Run the above query at different times of the day, especially during highly transactional periods.
Calculate how much WAL is generated per 24-hour period.
Multiply that number by 2 or 3 to provide sufficient retention.
Set max_slot_wal_keep_size to the resulting value in MB or GB.
Example {#example}
If your database generates 100 GB of WAL per day, set:
```
max_slot_wal_keep_size = 200GB
```
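Equivalently, on PostgreSQL 13 and later (where this setting exists) it can be applied from SQL without a restart; a sketch with the same illustrative value:

```sql
ALTER SYSTEM SET max_slot_wal_keep_size = '200GB';
SELECT pg_reload_conf(); -- the setting is reload-safe
```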
I'm seeing a ReceiveMessage EOF error in the logs. What does it mean? {#im-seeing-a-receivemessage-eof-error-in-the-logs-what-does-it-mean}
ReceiveMessage is a function in the Postgres logical decoding protocol that reads messages from the replication stream. An EOF (End of File) error indicates that the connection to the Postgres server was unexpectedly closed while trying to read from the replication stream.
It is a recoverable, completely non-fatal error. ClickPipes will automatically attempt to reconnect and resume the replication process.
It can happen for a few reasons:
- Low wal_sender_timeout: Make sure wal_sender_timeout is 5 minutes or higher (see the sketch after this list). This setting controls how long the server waits for a response from the client before closing the connection. If the timeout is too low, it can lead to premature disconnections.
- Network Issues: Temporary network disruptions can cause the connection to drop.
- Postgres Server Restart: If the Postgres server is restarted or crashes, the connection will be lost.
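A sketch of checking and raising wal_sender_timeout; the value shown is the suggested minimum, not a prescription:

```sql
SHOW wal_sender_timeout;                      -- check the current value
ALTER SYSTEM SET wal_sender_timeout = '5min'; -- 5 minutes or higher
SELECT pg_reload_conf();                      -- apply without a restart
```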
My replication slot is invalidated. What should I do? {#my-replication-slot-is-invalidated-what-should-i-do}
The only way to recover the ClickPipe is by triggering a resync, which you can do in the Settings page.
The most common cause of replication slot invalidation is a low max_slot_wal_keep_size setting on your PostgreSQL database (e.g., a few gigabytes). We recommend increasing this value. Refer to this section on tuning max_slot_wal_keep_size. Ideally, it should be set to at least 200GB to prevent replication slot invalidation.
In rare cases, we have seen this issue occur even when max_slot_wal_keep_size is not configured. This could be due to an intricate and rare bug in PostgreSQL, although the cause remains unclear.
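To check slot health on the source, a read-only query against pg_replication_slots helps; the wal_status and safe_wal_size columns exist on PostgreSQL 13 and later, and wal_status reports lost once a slot has been invalidated:

```sql
SELECT slot_name, active, restart_lsn, wal_status, safe_wal_size
FROM pg_replication_slots;
```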
I am seeing out of memory (OOMs) on ClickHouse while my ClickPipe is ingesting data. Can you help? {#i-am-seeing-out-of-memory-ooms-on-clickhouse-while-my-clickpipe-is-ingesting-data-can-you-help}
One common reason for OOMs on ClickHouse is that your service is undersized. This means that your current service configuration doesn't have enough resources (e.g., memory or CPU) to handle the ingestion load effectively. We strongly recommend scaling up the service to meet the demands of your ClickPipe data ingestion.
Another reason we've observed is the presence of downstream Materialized Views with potentially unoptimized joins:
A common optimization technique for JOINs applies when you have a LEFT JOIN where the right-hand side table is very large. In this case, rewrite the query to use a RIGHT JOIN and move the larger table to the left-hand side. This allows the query planner to be more memory efficient.
Another optimization for JOINs is to explicitly filter the tables through subqueries or CTEs and then perform the JOIN across these subqueries. This provides the planner with hints on how to efficiently filter rows and perform the JOIN.
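As a sketch of the CTE approach, using hypothetical posts/users tables (all column names here are assumptions for illustration):

```sql
WITH filtered_posts AS (
    SELECT owneruserid, viewcount
    FROM posts
    WHERE creationdate >= now() - INTERVAL 30 DAY
),
filtered_users AS (
    SELECT id, displayname
    FROM users
    WHERE lastaccessdate >= now() - INTERVAL 30 DAY
)
SELECT u.displayname, sum(p.viewcount) AS views
FROM filtered_posts AS p
INNER JOIN filtered_users AS u ON u.id = p.owneruserid
GROUP BY u.displayname
ORDER BY views DESC
LIMIT 10;
```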
I am seeing an invalid snapshot identifier during the initial load. What should I do? {#i-am-seeing-an-invalid-snapshot-identifier-during-the-initial-load-what-should-i-do}
The invalid snapshot identifier error occurs when there is a connection drop between ClickPipes and your Postgres database. This can happen due to gateway timeouts, database restarts, or other transient issues.
It is recommended that you do not carry out any disruptive operations like upgrades or restarts on your Postgres database while Initial Load is in progress and ensure that the network connection to your database is stable.
To resolve this issue, you can trigger a resync from the ClickPipes UI. This will restart the initial load process from the beginning.
What happens if I drop a publication in Postgres? {#what-happens-if-i-drop-a-publication-in-postgres}
Dropping a publication in Postgres will break your ClickPipe connection since the publication is required for the ClickPipe to pull changes from the source. When this happens, you'll typically receive an error alert indicating that the publication no longer exists.
To recover your ClickPipe after dropping a publication:
1. Create a new publication with the same name and required tables in Postgres
2. Click the 'Resync tables' button in the Settings tab of your ClickPipe
This resync is necessary because the recreated publication will have a different Object Identifier (OID) in Postgres, even if it has the same name. The resync process refreshes your destination tables and restores the connection.
Alternatively, you can create an entirely new pipe if preferred.
Note that if you're working with partitioned tables, make sure to create your publication with the appropriate settings:
```sql
CREATE PUBLICATION clickpipes_publication
FOR TABLE <...>, <...>
WITH (publish_via_partition_root = true);
```
What if I am seeing Unexpected Datatype errors or Cannot parse type XX ... {#what-if-i-am-seeing-unexpected-datatype-errors}
This error typically occurs when the source Postgres database has a datatype which cannot be mapped during ingestion.
For more specific issues, refer to the possibilities below.
Cannot parse type Decimal(XX, YY), expected non-empty binary data with size equal to or less than ... {#cannot-parse-type-decimal-expected-non-empty-binary-data-with-size-equal-to-or-less-than}
Postgres NUMERICs have very high precision (up to 131072 digits before the decimal point; up to 16383 digits after the decimal point), while the ClickHouse Decimal type allows a maximum of 76 digits and a scale of 39.
The system assumes that the size will not usually get that high and performs an optimistic cast, since the source table can have a large number of rows, or rows can arrive during the CDC phase.
The current workaround is to map the NUMERIC type to String on ClickHouse. To enable this, please raise a ticket with the support team and it will be enabled for your ClickPipes.
I'm seeing errors like invalid memory alloc request size <XXX> during replication/slot creation {#postgres-invalid-memalloc-bug}
There was a bug introduced in Postgres patch versions 17.5/16.9/15.13/14.18/13.21 due to which certain workloads can cause an exponential increase in memory usage, leading to a memory allocation request of more than 1GB, which Postgres considers invalid. This bug has been fixed and will be in the next Postgres patch series (17.6...). Please check with your Postgres provider when this patch version will be available for upgrade. If an upgrade isn't immediately possible, a resync of the pipe will be needed when it hits the error.
I need to maintain a complete historical record in ClickHouse, even when the data is deleted from the source Postgres database. Can I completely ignore DELETE and TRUNCATE operations from Postgres in ClickPipes? {#ignore-delete-truncate}
Yes! Before creating your Postgres ClickPipe, create a publication without DELETE operations. For example:
```sql
CREATE PUBLICATION <pub_name> FOR TABLES IN SCHEMA <schema_name> WITH (publish = 'insert,update');
```
Then, when setting up your Postgres ClickPipe, make sure this publication name is selected.
Note that TRUNCATE operations are ignored by ClickPipes and will not be replicated to ClickHouse.
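To double-check which operations a publication will emit, the pg_publication catalog can be queried; this is a read-only sketch where the publication name is a placeholder:

```sql
-- Each pub* flag shows whether that operation type is published
SELECT pubname, pubinsert, pubupdate, pubdelete, pubtruncate
FROM pg_publication
WHERE pubname = 'my_publication';
```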
Why can I not replicate my table which has a dot in it? {#replicate-table-dot}
PeerDB currently has a limitation where dots in source table identifiers - that is, in either the schema name or the table name - are not supported for replication, because PeerDB splits on the dot and cannot discern which part is the schema and which is the table.
Effort is being made to support input of schema and table separately to get around this limitation.
Initial load completed but there is no/missing data on ClickHouse. What could be the issue? {#initial-load-issue}
If your initial load has completed without error but your destination ClickHouse table is missing data, it might be that you have RLS (Row Level Security) policies enabled on your source Postgres tables.
Also worth checking:
- If the user has sufficient permissions to read the source tables.
- If there are any row policies on ClickHouse side which might be filtering out rows.
Can I have the ClickPipe create a replication slot with failover enabled? {#failover-slot}
Yes, for a Postgres ClickPipe with replication mode as CDC or Snapshot + CDC, you can have ClickPipes create a replication slot with failover enabled by toggling the switch below in the Advanced Settings section while creating the ClickPipe. Note that your Postgres version must be 17 or above to use this feature.
If the source is configured accordingly, the slot is preserved after failovers to a Postgres read replica, ensuring continuous data replication. Learn more here.
I am seeing errors like Internal error encountered during logical decoding of aborted sub-transaction {#transient-logical-decoding-errors}
This error suggests a transient issue with the logical decoding of an aborted sub-transaction, and is specific to custom implementations of Aurora Postgres. Given the error is coming from the ReorderBufferPreserveLastSpilledSnapshot routine, this suggests that logical decoding is not able to read the snapshot spilled to disk. It may be worth trying to increase logical_decoding_work_mem to a higher value.
I am seeing errors like error converting new tuple to map or error parsing logical message during CDC replication {#logical-message-processing-errors}
Postgres sends information about changes in the form of messages that have a fixed protocol. These errors arise when the ClickPipe receives a message that it is unable to parse, either due to corruption in transit or invalid messages being sent. While the exact issue tends to vary, we've seen several cases from Neon Postgres sources. If you are seeing this issue with Neon as well, please raise a support ticket with them. In other cases, please reach out to our support team for guidance.
---
title: 'Schema Changes Propagation Support'
slug: /integrations/clickpipes/postgres/schema-changes
description: 'Page describing schema change types detectable by ClickPipes in the source tables'
doc_type: 'reference'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
---
ClickPipes for Postgres can detect schema changes in the source tables and, in some cases, automatically propagate the changes to the destination tables. The way each DDL operation is handled is documented below:
| Schema Change Type | Behaviour |
| ------------------ | --------- |
| Adding a new column (`ALTER TABLE ADD COLUMN ...`) | Propagated automatically once the table gets an insert/update/delete. The new column(s) will be populated for all rows replicated after the schema change |
| Adding a new column with a default value (`ALTER TABLE ADD COLUMN ... DEFAULT ...`) | Propagated automatically once the table gets an insert/update/delete. The new column(s) will be populated for all rows replicated after the schema change, but existing rows will not show the default value without a full table refresh |
| Dropping an existing column (`ALTER TABLE DROP COLUMN ...`) | Detected, but not propagated. The dropped column(s) will be populated with `NULL` for all rows replicated after the schema change |
Note that column addition will be propagated at the end of a batch's sync, which could occur after the sync interval or pull batch size is reached. More information on controlling syncs can be found here.
---
sidebar_label: 'Deduplication strategies'
description: 'Handle duplicates and deleted rows.'
slug: /integrations/clickpipes/postgres/deduplication
title: 'Deduplication strategies (using CDC)'
keywords: ['deduplication', 'postgres', 'clickpipes', 'replacingmergetree', 'final']
doc_type: 'guide'
---
import clickpipes_initial_load from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/postgres-cdc-initial-load.png';
import Image from '@theme/IdealImage';
Updates and deletes replicated from Postgres to ClickHouse result in duplicated rows in ClickHouse due to its data storage structure and the replication process. This page covers why this happens and the strategies to use in ClickHouse to handle duplicates.
How does data get replicated? {#how-does-data-get-replicated}
PostgreSQL logical decoding {#PostgreSQL-logical-decoding}
ClickPipes uses Postgres Logical Decoding to consume changes as they happen in Postgres. The Logical Decoding process in Postgres enables clients like ClickPipes to receive changes in a human-readable format, i.e., a series of INSERTs, UPDATEs, and DELETEs.
ReplacingMergeTree {#replacingmergetree}
ClickPipes maps Postgres tables to ClickHouse using the ReplacingMergeTree engine. ClickHouse performs best with append-only workloads and does not recommend frequent UPDATEs. This is where ReplacingMergeTree is particularly powerful.
With ReplacingMergeTree, updates are modeled as inserts with a newer version (_peerdb_version) of the row, while deletes are inserts with a newer version and _peerdb_is_deleted marked as true. The ReplacingMergeTree engine deduplicates/merges data in the background, and retains the latest version of the row for a given primary key (id), enabling efficient handling of UPDATEs and DELETEs as versioned inserts.
Below is an example of a CREATE Table statement executed by ClickPipes to create the table in ClickHouse.
```sql
CREATE TABLE users
(
    `id` Int32,
    `reputation` String,
    `creationdate` DateTime64(6),
    `displayname` String,
    `lastaccessdate` DateTime64(6),
    `aboutme` String,
    `views` Int32,
    `upvotes` Int32,
    `downvotes` Int32,
    `websiteurl` String,
    `location` String,
    `accountid` Int32,
    `_peerdb_synced_at` DateTime64(9) DEFAULT now64(),
    `_peerdb_is_deleted` Int8,
    `_peerdb_version` Int64
)
ENGINE = ReplacingMergeTree(_peerdb_version)
PRIMARY KEY id
ORDER BY id;
```
Illustrative example {#illustrative-example}
The illustration below walks through a basic example of synchronization of a table users between PostgreSQL and ClickHouse using ClickPipes.
Step 1 shows the initial snapshot of the 2 rows in PostgreSQL and ClickPipes performing the initial load of those 2 rows to ClickHouse. As you can observe, both rows are copied as-is to ClickHouse.
Step 2 shows three operations on the users table: inserting a new row, updating an existing row, and deleting another row.
Step 3 shows how ClickPipes replicates the INSERT, UPDATE, and DELETE operations to ClickHouse as versioned inserts. The UPDATE appears as a new version of the row with ID 2, while the DELETE appears as a new version of ID 1 which is marked as deleted using _peerdb_is_deleted. Because of this, ClickHouse has three additional rows compared to PostgreSQL.
As a result, running a simple query like SELECT count(*) FROM users; may produce different results in ClickHouse and PostgreSQL. According to the ClickHouse merge documentation, outdated row versions are eventually discarded during the merge process. However, the timing of this merge is unpredictable, meaning queries in ClickHouse may return inconsistent results until it occurs.
How can we ensure identical query results in both ClickHouse and PostgreSQL?
Deduplicate using FINAL Keyword {#deduplicate-using-final-keyword}
The recommended way to deduplicate data in ClickHouse queries is to use the FINAL modifier. This ensures only the deduplicated rows are returned.
Let's look at how to apply it to three different queries.
Take note of the WHERE clause in the following queries, used to filter out deleted rows.
Simple count query: Count the number of posts.
This is the simplest query you can run to check if the synchronization went fine. The two queries should return the same count.
```sql
-- PostgreSQL
SELECT count(*) FROM posts;
-- ClickHouse
SELECT count(*) FROM posts FINAL WHERE _peerdb_is_deleted=0;
```
Simple aggregation with JOIN: Top 10 users who have accumulated the most views.
An example of an aggregation on a single table. Having duplicates here would greatly affect the result of the sum function.
```sql
-- PostgreSQL
SELECT
sum(p.viewcount) AS viewcount,
p.owneruserid AS user_id,
u.displayname AS display_name
FROM posts p
LEFT JOIN users u ON u.id = p.owneruserid
-- highlight-next-line
WHERE p.owneruserid > 0
GROUP BY user_id, display_name
ORDER BY viewcount DESC
LIMIT 10;
-- ClickHouse
SELECT
sum(p.viewcount) AS viewcount,
p.owneruserid AS user_id,
u.displayname AS display_name
FROM posts AS p
FINAL
LEFT JOIN users AS u
FINAL ON (u.id = p.owneruserid) AND (u._peerdb_is_deleted = 0)
-- highlight-next-line
WHERE (p.owneruserid > 0) AND (p._peerdb_is_deleted = 0)
GROUP BY
user_id,
display_name
ORDER BY viewcount DESC
LIMIT 10
```
FINAL setting {#final-setting}
Rather than adding the FINAL modifier to each table name in the query, you can use the FINAL setting to apply it automatically to all tables in the query.
This setting can be applied either per query or for an entire session.
```sql
-- Per query FINAL setting
SELECT count(*) FROM posts SETTINGS FINAL = 1;
-- Set FINAL for the session
SET final = 1;
SELECT count(*) FROM posts;
```
ROW policy {#row-policy}
An easy way to hide the redundant _peerdb_is_deleted = 0 filter is to use a ROW policy.
Below is an example that creates a row policy to exclude the deleted rows from all queries on the table votes.
```sql
-- Apply row policy to all users
CREATE ROW POLICY cdc_policy ON votes FOR SELECT USING _peerdb_is_deleted = 0 TO ALL;
```
Row policies are applied to a list of users and roles. In this example, it is applied to all users and roles. This can be adjusted to only specific users or roles.
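For instance, a policy scoped to a single role could look like the following sketch; cdc_policy_analysts and analyst_role are hypothetical names:

```sql
-- Only sessions under analyst_role get the deleted-rows filter applied
CREATE ROW POLICY cdc_policy_analysts ON votes
FOR SELECT USING _peerdb_is_deleted = 0 TO analyst_role;
```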
Query like with Postgres {#query-like-with-postgres}
Migrating an analytical dataset from PostgreSQL to ClickHouse often requires modifying application queries to account for differences in data handling and query execution.
This section will explore techniques for deduplicating data while keeping the original queries unchanged.
Views {#views}
Views are a great way to hide the FINAL keyword from the query, as they do not store any data and simply perform a read from another table on each access.
Below is an example of creating views for each table of our database in ClickHouse with the FINAL keyword and filter for the deleted rows.
```sql
CREATE VIEW posts_view AS SELECT * FROM posts FINAL WHERE _peerdb_is_deleted=0;
CREATE VIEW users_view AS SELECT * FROM users FINAL WHERE _peerdb_is_deleted=0;
CREATE VIEW votes_view AS SELECT * FROM votes FINAL WHERE _peerdb_is_deleted=0;
CREATE VIEW comments_view AS SELECT * FROM comments FINAL WHERE _peerdb_is_deleted=0;
```
Then, we can query the views using the same query we would use in PostgreSQL.
```sql
-- Most viewed posts
SELECT
    sum(viewcount) AS viewcount,
    owneruserid
FROM posts_view
WHERE owneruserid > 0
GROUP BY owneruserid
ORDER BY viewcount DESC
LIMIT 10
```
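The aggregation with a JOIN from earlier can likewise run unchanged against these views; a sketch reusing `posts_view` and `users_view`, since the FINAL keyword and soft-delete filters now live in the view definitions:

```sql
-- Top 10 users by accumulated views, using the deduplicating views
SELECT
    sum(p.viewcount) AS viewcount,
    p.owneruserid AS user_id,
    u.displayname AS display_name
FROM posts_view AS p
LEFT JOIN users_view AS u ON u.id = p.owneruserid
WHERE p.owneruserid > 0
GROUP BY user_id, display_name
ORDER BY viewcount DESC
LIMIT 10;
```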
Refreshable materialized view {#refreshable-material-view}
Another approach is to use a
refreshable materialized view
, which enables you to schedule query execution for deduplicating rows and storing the results in a destination table. With each scheduled refresh, the destination table is replaced with the latest query results.
The key advantage of this method is that the query using the FINAL keyword runs only once during the refresh, eliminating the need for subsequent queries on the destination table to use FINAL.
However, a drawback is that the data in the destination table is only as up-to-date as the most recent refresh. That said, for many use cases, refresh intervals ranging from several minutes to a few hours may be sufficient.
```sql
-- Create deduplicated posts table
CREATE TABLE deduplicated_posts AS posts;
-- Create the Materialized view and schedule to run every hour
CREATE MATERIALIZED VIEW deduplicated_posts_mv REFRESH EVERY 1 HOUR TO deduplicated_posts AS
SELECT * FROM posts FINAL WHERE _peerdb_is_deleted=0
```
Then, you can query the `deduplicated_posts` table normally.
```sql
SELECT
    sum(viewcount) AS viewcount,
    owneruserid
FROM deduplicated_posts
WHERE owneruserid > 0
GROUP BY owneruserid
ORDER BY viewcount DESC
LIMIT 10;
```
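If you need fresher results ahead of the next scheduled run, recent ClickHouse versions can also trigger a refresh on demand; a minimal sketch, assuming your version supports the `SYSTEM REFRESH VIEW` statement:

```sql
-- Force an immediate refresh instead of waiting for the hourly schedule
SYSTEM REFRESH VIEW deduplicated_posts_mv;
```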
sidebar_label: 'Ingesting data from Postgres to ClickHouse'
description: 'Seamlessly connect your Postgres to ClickHouse Cloud.'
slug: /integrations/clickpipes/postgres
title: 'Ingesting Data from Postgres to ClickHouse (using CDC)'
keywords: ['PostgreSQL', 'ClickPipes', 'CDC', 'change data capture', 'database replication']
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'clickpipes'
import BetaBadge from '@theme/badges/BetaBadge';
import cp_service from '@site/static/images/integrations/data-ingestion/clickpipes/cp_service.png';
import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
import postgres_tile from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/postgres-tile.png'
import postgres_connection_details from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/postgres-connection-details.jpg'
import ssh_tunnel from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/ssh-tunnel.jpg'
import select_replication_slot from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/select-replication-slot.jpg'
import select_destination_db from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/select-destination-db.jpg'
import ch_permissions from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/ch-permissions.jpg'
import Image from '@theme/IdealImage';
Ingesting data from Postgres to ClickHouse (using CDC)
You can use ClickPipes to ingest data from your source Postgres database into ClickHouse Cloud. The source Postgres database can be hosted on-premises or in the cloud including Amazon RDS, Google Cloud SQL, Azure Database for Postgres, Supabase and others.
Prerequisites {#prerequisites}
To get started, you first need to make sure that your Postgres database is set up correctly. Depending on your source Postgres instance, you may follow any of the following guides:
- Amazon RDS Postgres
- Amazon Aurora Postgres
- Supabase Postgres
- Google Cloud SQL Postgres
- Azure Flexible Server for Postgres
- Neon Postgres
- Crunchy Bridge Postgres
- Generic Postgres Source, if you are using any other Postgres provider or using a self-hosted instance.
- TimescaleDB, if you are using the TimescaleDB extension on a managed service or self-hosted instance.
:::warning
Postgres Proxies like PgBouncer, RDS Proxy, Supabase Pooler, etc., are not supported for CDC based replication. Please make sure to NOT use them for the ClickPipes setup and instead add connection details of the actual Postgres database.
:::
Once your source Postgres database is set up, you can continue creating your ClickPipe.
Creating your ClickPipe {#creating-your-clickpipe}
Make sure you are logged in to your ClickHouse Cloud account. If you don't have an account yet, you can sign up here.
In the ClickHouse Cloud console, navigate to your ClickHouse Cloud Service.
Select the **Data Sources** button on the left-side menu and click on "Set up a ClickPipe".
Select the **Postgres CDC** tile.
Adding your source Postgres database connection {#adding-your-source-postgres-database-connection}
Fill in the connection details for your source Postgres database which you configured in the prerequisites step.
:::info
Before you start adding your connection details, make sure that you have whitelisted ClickPipes IP addresses in your firewall rules. You can find the list of ClickPipes IP addresses here.
For more information, refer to the source Postgres setup guides linked at the top of this page.
:::
(Optional) Setting up AWS Private Link {#optional-setting-up-aws-private-link}
You can use AWS Private Link to connect to your source Postgres database if it is hosted on AWS. This is useful if you
want to keep your data transfer private.
You can follow the setup guide to set up the connection.
(Optional) Setting up SSH tunneling {#optional-setting-up-ssh-tunneling}
You can specify SSH tunneling details if your source Postgres database is not publicly accessible.
Enable the "Use SSH Tunnelling" toggle.
Fill in the SSH connection details.
To use key-based authentication, click on "Revoke and generate key pair" to generate a new key pair and copy the generated public key to your SSH server under `~/.ssh/authorized_keys`.
Click on "Verify Connection" to verify the connection.
:::note
Make sure to whitelist ClickPipes IP addresses in your firewall rules for the SSH bastion host so that ClickPipes can establish the SSH tunnel.
:::
Once the connection details are filled in, click on "Next".
Configuring the replication settings {#configuring-the-replication-settings}
Make sure to select the replication slot from the dropdown list you created in the prerequisites step.
Advanced settings {#advanced-settings}
You can configure the Advanced settings if needed. A brief description of each setting is provided below:
**Sync interval**: This is the interval at which ClickPipes will poll the source database for changes. This has implications for the destination ClickHouse service; for cost-sensitive users, we recommend keeping this at a higher value (over `3600` seconds).
**Parallel threads for initial load**: This is the number of parallel workers that will be used to fetch the initial snapshot. This is useful when you have a large number of tables and you want to control the number of parallel workers used to fetch the initial snapshot. This setting is per-table.
**Pull batch size**: The number of rows to fetch in a single batch. This is a best-effort setting and may not be respected in all cases.
**Snapshot number of rows per partition**: This is the number of rows that will be fetched in each partition during the initial snapshot. This is useful when you have a large number of rows in your tables and you want to control the number of rows fetched in each partition.
**Snapshot number of tables in parallel**: This is the number of tables that will be fetched in parallel during the initial snapshot. This is useful when you have a large number of tables and you want to control the number of tables fetched in parallel.
Configuring the tables {#configuring-the-tables}
Here you can select the destination database for your ClickPipe. You can either select an existing database or create a new one.
You can select the tables you want to replicate from the source Postgres database. While selecting the tables, you can also choose to rename the tables in the destination ClickHouse database as well as exclude specific columns.
:::warning
If you are defining an ordering key in ClickHouse differently from the primary key in Postgres, don't forget to read all the considerations around it.
:::
Review permissions and start the ClickPipe {#review-permissions-and-start-the-clickpipe}
Select the "Full access" role from the permissions dropdown and click "Complete Setup".
What's next? {#whats-next}
Once you've set up your ClickPipe to replicate data from PostgreSQL to ClickHouse Cloud, you can focus on how to query and model your data for optimal performance. See the migration guide to assess which strategy best suits your requirements, as well as the Deduplication strategies (using CDC) and Ordering Keys pages for best practices on CDC workloads.
For common questions around PostgreSQL CDC and troubleshooting, see the Postgres FAQs page.
title: 'Scaling DB ClickPipes via OpenAPI'
description: 'Doc for scaling DB ClickPipes via OpenAPI'
slug: /integrations/clickpipes/postgres/scaling
sidebar_label: 'Scaling'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
:::caution Most users won't need this API
The default configuration of DB ClickPipes is designed to handle the majority of workloads out of the box. If you think your workload requires scaling, open a support case and we'll guide you through the optimal settings for your use case.
:::
Scaling API may be useful for:
- Large initial loads (over 4 TB)
- Migrating a moderate amount of data as quickly as possible
- Supporting over 8 CDC ClickPipes under the same service
Before attempting to scale up, consider:
- Ensuring the source DB has sufficient available capacity
- First adjusting initial load parallelism and partitioning when creating a ClickPipe
- Checking for long-running transactions on the source that could be causing CDC delays
Increasing the scale will proportionally increase your ClickPipes compute costs.
If you're scaling up just for the initial loads, it's important to scale down after the snapshot is finished to avoid unexpected charges. For more details on pricing, see Postgres CDC Pricing.
Prerequisites for this process {#prerequisites}
Before you get started you will need:
- A ClickHouse API key with Admin permissions on the target ClickHouse Cloud service.
- A DB ClickPipe (Postgres, MySQL or MongoDB) provisioned in the service at some point in time. CDC infrastructure gets created along with the first ClickPipe, and the scaling endpoints become available from that point onwards.
Steps to scale DB ClickPipes {#cdc-scaling-steps}
Set the following environment variables before running any commands:
bash
ORG_ID=<Your ClickHouse organization ID>
SERVICE_ID=<Your ClickHouse service ID>
KEY_ID=<Your ClickHouse key ID>
KEY_SECRET=<Your ClickHouse key secret>
Fetch the current scaling configuration (optional):
```bash
curl --silent --user $KEY_ID:$KEY_SECRET \
    https://api.clickhouse.cloud/v1/organizations/$ORG_ID/services/$SERVICE_ID/clickpipesCdcScaling \
    | jq
```
Example result:
```response
{
  "result": {
    "replicaCpuMillicores": 2000,
    "replicaMemoryGb": 8
  },
  "requestId": "04310d9e-1126-4c03-9b05-2aa884dbecb7",
  "status": 200
}
```
Set the desired scaling. Supported configurations include 1-24 CPU cores with memory (GB) set to 4x the core count:
```bash
cat <<EOF | tee cdc_scaling.json
{
"replicaCpuMillicores": 24000,
"replicaMemoryGb": 96
}
EOF
curl --silent --user $KEY_ID:$KEY_SECRET \
-X PATCH -H "Content-Type: application/json" \
https://api.clickhouse.cloud/v1/organizations/$ORG_ID/services/$SERVICE_ID/clickpipesCdcScaling \
-d @cdc_scaling.json | jq
```
Wait for the configuration to propagate (typically 3-5 minutes). After the scaling is finished, the GET endpoint will reflect the new values:
```bash
curl --silent --user $KEY_ID:$KEY_SECRET \
    https://api.clickhouse.cloud/v1/organizations/$ORG_ID/services/$SERVICE_ID/clickpipesCdcScaling \
    | jq
```
Example result:
```response
{
  "result": {
    "replicaCpuMillicores": 24000,
    "replicaMemoryGb": 96
  },
  "requestId": "5a76d642-d29f-45af-a857-8c4d4b947bf0",
  "status": 200
}
```
title: 'Postgres Generated Columns: Gotchas and Best Practices'
slug: /integrations/clickpipes/postgres/generated_columns
description: 'Page describing important considerations to keep in mind when using PostgreSQL generated columns in tables that are being replicated'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
When using PostgreSQL's generated columns in tables that are being replicated, there are some important considerations to keep in mind. These gotchas can affect the replication process and data consistency in your destination systems.
The problem with generated columns {#the-problem-with-generated-columns}
**Not published via `pgoutput`:** Generated columns are not published through the `pgoutput` logical replication plugin. This means that when you're replicating data from PostgreSQL to another system, the values of generated columns are not included in the replication stream.
**Issues with primary keys:** If a generated column is part of your primary key, it can cause deduplication problems on the destination. Since the generated column values are not replicated, the destination system won't have the necessary information to properly identify and deduplicate rows.
**Issues with schema changes:** If you add a generated column to a table that is already being replicated, the new column will not be populated in the destination, as Postgres does not give us the RelationMessage for the new column. If you then add a new non-generated column to the same table, the ClickPipe, when trying to reconcile the schema, will not be able to find the generated column in the destination, leading to a failure in the replication process.
Best practices {#best-practices}
To work around these limitations, consider the following best practices:
**Recreate generated columns on the destination:** Instead of relying on the replication process to handle generated columns, it's recommended to recreate these columns on the destination using tools like dbt (data build tool) or other data transformation mechanisms.
**Avoid using generated columns in primary keys:** When designing tables that will be replicated, it's best to avoid including generated columns as part of the primary key.
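To illustrate the first practice, a generated column can often be rebuilt as a MATERIALIZED column on the ClickHouse side; a minimal sketch, assuming a hypothetical `orders` table whose Postgres definition computes `total_price` as `price * quantity`:

```sql
-- Recompute the generated column on the ClickHouse side instead of replicating it
ALTER TABLE orders
    ADD COLUMN total_price Float64 MATERIALIZED price * quantity;
```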
Upcoming improvements to UI {#upcoming-improvements-to-ui}
In upcoming versions, we are planning to add a UI to help users with the following:
Identify Tables with Generated Columns:
The UI will have a feature to identify tables that contain generated columns. This will help users understand which tables are affected by this issue.
Documentation and Best Practices:
The UI will include best practices for using generated columns in replicated tables, including guidance on how to avoid common pitfalls.
title: 'Controlling the Syncing of a Postgres ClickPipe'
description: 'Doc for controlling the sync a Postgres ClickPipe'
slug: /integrations/clickpipes/postgres/sync_control
sidebar_label: 'Controlling syncs'
keywords: ['sync control', 'postgres', 'clickpipes', 'batch size', 'sync interval']
doc_type: 'guide'
import edit_sync_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/edit_sync_button.png'
import create_sync_settings from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/create_sync_settings.png'
import edit_sync_settings from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/sync_settings_edit.png'
import cdc_syncs from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/cdc_syncs.png'
import Image from '@theme/IdealImage';
This document describes how to control the sync of a Postgres ClickPipe when the ClickPipe is in **CDC (Running) mode**.
Overview {#overview}
Database ClickPipes have an architecture that consists of two parallel processes - pulling from the source database and pushing to the target database. The pulling process is controlled by a sync configuration that defines how often the data should be pulled and how much data should be pulled at a time. By "at a time", we mean one batch - since the ClickPipe pulls and pushes data in batches.
There are two main ways to control the sync of a Postgres ClickPipe. The ClickPipe will start pushing when one of the below settings kicks in.
Sync interval {#interval}
The sync interval of the pipe is the amount of time (in seconds) for which the ClickPipe will pull records from the source database. The time to push what we have to ClickHouse is not included in this interval.
The default is **1 minute**. The sync interval can be set to any positive integer value, but it is recommended to keep it above 10 seconds.
Pull batch size {#batch-size}
The pull batch size is the number of records that the ClickPipe will pull from the source database in one batch. Records mean inserts, updates and deletes done on the tables that are part of the pipe.
The default is **100,000** records. A safe maximum is 10 million.
An exception: Long-running transactions on source {#transactions}
When a transaction is run on the source database, the ClickPipe waits until it receives the COMMIT of the transaction before it moves forward. This behavior overrides both the sync interval and the pull batch size.
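To check whether an open transaction is holding a sync back, you can inspect the source directly; a sketch, assuming you can query Postgres's `pg_stat_activity` view:

```sql
-- Longest-open transactions on the source Postgres database
SELECT pid, now() - xact_start AS open_for, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY open_for DESC
LIMIT 10;
```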
Configuring sync settings {#configuring}
You can set the sync interval and pull batch size when you create a ClickPipe or edit an existing one.
When creating a ClickPipe it will be seen in the second step of the creation wizard, as shown below:
When editing an existing ClickPipe, you can head over to the **Settings** tab of the pipe, pause the pipe, and then click on **Configure** here:
This will open a flyout with the sync settings, where you can change the sync interval and pull batch size:
Tweaking the sync settings to help with replication slot growth {#tweaking}
Let's talk about how to use these settings to handle a large replication slot of a CDC pipe.
The pushing time to ClickHouse does not scale linearly with the pulling time from the source database. This can be leveraged to reduce the size of a large replication slot.
By increasing both the sync interval and pull batch size, the ClickPipe will pull a whole lot of data from the source database in one go, and then push it to ClickHouse.
Monitoring sync control behaviour {#monitoring}
You can see how long each batch takes in the **CDC Syncs** table in the **Metrics** tab of the ClickPipe. Note that the duration here includes push time. If there are no rows incoming, the ClickPipe waits, and the wait time is also included in the duration.
sidebar_label: 'Maintenance windows'
description: 'Maintenance windows for ClickPipes for Postgres.'
slug: /integrations/clickpipes/postgres/maintenance
title: 'Maintenance windows for ClickPipes for Postgres'
doc_type: 'reference'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
Maintenance windows for ClickPipes for Postgres
There is an upcoming maintenance window for Postgres ClickPipes scheduled on:
- **Date:** 17 April 2025
- **Time:** 07:00 AM - 08:00 AM UTC
During this time, your Postgres Pipes will experience a brief downtime.
The ClickPipes will be available again after the maintenance window and will resume normal operations.
title: 'Parallel Snapshot In The Postgres ClickPipe'
description: 'Doc for explaining parallel snapshot in the Postgres ClickPipe'
slug: /integrations/clickpipes/postgres/parallel_initial_load
sidebar_label: 'How parallel snapshot works'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
import snapshot_params from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/snapshot_params.png'
import Image from '@theme/IdealImage';
This document explains how parallelized snapshot/initial load works in the Postgres ClickPipe and describes the snapshot parameters that can be used to control it.
Overview {#overview-pg-snapshot}
Initial load is the first phase of a CDC ClickPipe, where the ClickPipe syncs the historical data of the tables in the source database over to ClickHouse before starting CDC. Developers often do this in a single-threaded manner - such as using pg_dump or pg_restore, or using a single thread to read from the source database and write to ClickHouse.
However, the Postgres ClickPipe can parallelize this process, which can significantly speed up the initial load.
CTID column in Postgres {#ctid-pg-snapshot}
In Postgres, every row in a table has a unique identifier called the CTID. This is a system column that is not visible to users by default, but it can be used to uniquely identify rows in a table. The CTID is a combination of the block number and the offset within the block, which allows for efficient access to rows.
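You can inspect these identifiers yourself on the source; a quick sketch, with a hypothetical table name:

```sql
-- CTID is a (block number, offset) pair, e.g. (0,1)
SELECT ctid, * FROM my_table LIMIT 5;
```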
Logical partitioning {#logical-partitioning-pg-snapshot}
The Postgres ClickPipe uses the CTID column to logically partition source tables. It obtains the partitions by first performing a COUNT(*) on the source table, followed by a window function partitioning query to get the CTID ranges for each partition. This allows the ClickPipe to read the source table in parallel, with each partition being processed by a separate thread.
Let's talk about the below settings:
Snapshot number of rows per partition {#numrows-pg-snapshot}
This setting controls how many rows constitute a partition. The ClickPipe will read the source table in chunks of this size, and chunks are processed in parallel based on the initial load parallelism set. The default value is 100,000 rows per partition.
Initial load parallelism {#parallelism-pg-snapshot}
This setting controls how many partitions are processed in parallel. The default value is 4, which means that the ClickPipe will read 4 partitions of the source table in parallel. This can be increased to speed up the initial load, but it is recommended to keep it to a reasonable value depending on your source instance specs to avoid overwhelming the source database. The ClickPipe will automatically adjust the number of partitions based on the size of the source table and the number of rows per partition.
Snapshot number of tables in parallel {#tables-parallel-pg-snapshot}
Not strictly related to parallel snapshot, this setting controls how many tables are processed in parallel during the initial load. The default value is 1. Note that this is on top of the parallelism of the partitions, so if you have 4 partitions and 2 tables, the ClickPipe will read 8 partitions in parallel.
Monitoring parallel snapshot in Postgres {#monitoring-parallel-pg-snapshot}
You can analyze `pg_stat_activity` to see the parallel snapshot in action. The ClickPipe will create multiple connections to the source database, each reading a different partition of the source table. If you see `FETCH` queries with different CTID ranges, it means that the ClickPipe is reading the source tables. You can also see the COUNT(*) and the partitioning query here.
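A sketch of such an inspection, again assuming access to `pg_stat_activity` on the source:

```sql
-- Connections reading CTID-partitioned chunks during the initial load
SELECT pid, state, query
FROM pg_stat_activity
WHERE query ILIKE '%ctid%'
ORDER BY pid;
```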
Limitations {#limitations-parallel-pg-snapshot}
The snapshot parameters cannot be edited after pipe creation. If you want to change them, you will have to create a new ClickPipe.
When adding tables to an existing ClickPipe, you cannot change the snapshot parameters. The ClickPipe will use the existing parameters for the new tables.
The partition key column should not contain `NULL`s, as they are skipped by the partitioning logic.
sidebar_label: 'Reference'
description: 'Details supported formats, sources, delivery semantics, authentication and experimental features supported by Kafka ClickPipes'
slug: /integrations/clickpipes/kafka/reference
sidebar_position: 1
title: 'Reference'
doc_type: 'reference'
keywords: ['kafka reference', 'clickpipes', 'data sources', 'avro', 'virtual columns']
import Kafkasvg from '@site/static/images/integrations/logos/kafka.svg';
import Confluentsvg from '@site/static/images/integrations/logos/confluent.svg';
import Msksvg from '@site/static/images/integrations/logos/msk.svg';
import Azureeventhubssvg from '@site/static/images/integrations/logos/azure_event_hubs.svg';
import Warpstreamsvg from '@site/static/images/integrations/logos/warpstream.svg';
import redpanda_logo from '@site/static/images/integrations/logos/logo_redpanda.png';
import Image from '@theme/IdealImage';
import ExperimentalBadge from '@site/src/theme/badges/ExperimentalBadge';
Reference
Supported data sources {#supported-data-sources}
| Name | Type | Status | Description |
|------|------|--------|-------------|
| Apache Kafka | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from Apache Kafka into ClickHouse Cloud. |
| Confluent Cloud | Streaming | Stable | Unlock the combined power of Confluent and ClickHouse Cloud through our direct integration. |
| Redpanda | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from Redpanda into ClickHouse Cloud. |
| AWS MSK | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from AWS MSK into ClickHouse Cloud. |
| Azure Event Hubs | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from Azure Event Hubs into ClickHouse Cloud. |
| WarpStream | Streaming | Stable | Configure ClickPipes and start ingesting streaming data from WarpStream into ClickHouse Cloud. |
Supported data formats {#supported-data-formats}
The supported formats are:
- JSON
- AvroConfluent
Supported data types {#supported-data-types}
Standard {#standard-types-support}
The following standard ClickHouse data types are currently supported in ClickPipes:
- Base numeric types - [U]Int8/16/32/64, Float32/64, and BFloat16
- Large integer types - [U]Int128/256
- Decimal Types
- Boolean
- String
- FixedString
- Date, Date32
- DateTime, DateTime64 (UTC timezones only)
- Enum8/Enum16
- UUID
- IPv4
- IPv6
- all ClickHouse LowCardinality types
- Map with keys and values using any of the above types (including Nullables)
- Tuple and Array with elements using any of the above types (including Nullables, one level depth only)
- SimpleAggregateFunction types (for AggregatingMergeTree or SummingMergeTree destinations)
Avro {#avro}
Supported Avro Data Types {#supported-avro-data-types}
ClickPipes supports all Avro Primitive and Complex types, and all Avro Logical types except `time-millis`, `time-micros`, `local-timestamp-millis`, `local-timestamp-micros`, and `duration`. Avro `record` types are converted to Tuple, `array` types to Array, and `map` to Map (string keys only). In general, the conversions listed here are available. We recommend using exact type matching for Avro numeric types, as ClickPipes does not check for overflow or precision loss on type conversion.
Alternatively, all Avro types can be inserted into a `String` column, in which case they will be represented as a valid JSON string.
Nullable types and Avro unions {#nullable-types-and-avro-unions}
Nullable types in Avro are defined by using a Union schema of `(T, null)` or `(null, T)` where T is the base Avro type. During schema inference, such unions will be mapped to a ClickHouse "Nullable" column. Note that ClickHouse does not support `Nullable(Array)`, `Nullable(Map)`, or `Nullable(Tuple)` types. Avro null unions for these types will be mapped to non-nullable versions (Avro Record types are mapped to a ClickHouse named Tuple). Avro "nulls" for these types will be inserted as:
- An empty Array for a null Avro array
- An empty Map for a null Avro Map
- A named Tuple with all default/zero values for a null Avro Record
Variant type support {#variant-type-support}
ClickPipes supports the Variant type in the following circumstances:
- Avro Unions. If your Avro schema contains a union with multiple non-null types, ClickPipes will infer the appropriate variant type. Variant types are not otherwise supported for Avro data.
- JSON fields. You can manually specify a Variant type (such as `Variant(String, Int64, DateTime)`) for any JSON field in the source data stream. Because of the way ClickPipes determines the correct variant subtype to use, only one integer or datetime type can be used in the Variant definition - for example, `Variant(Int64, UInt32)` is not supported.
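For reference, declaring such a destination column might look like the following sketch; note that older ClickHouse versions gate the type behind the `allow_experimental_variant_type` setting:

```sql
-- On older versions, Variant is gated behind an experimental setting
SET allow_experimental_variant_type = 1;

-- A column that can hold a string, an integer, or a timestamp per row
CREATE TABLE events
(
    id UInt64,
    payload Variant(String, Int64, DateTime)
)
ENGINE = MergeTree
ORDER BY id;
```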
JSON type support {#json-type-support}
ClickPipes supports the JSON type in the following circumstances:
- Avro Record types can always be assigned to a JSON column.
- Avro String and Bytes types can be assigned to a JSON column if the column actually holds JSON String objects.
- JSON fields that are always a JSON object can be assigned to a JSON destination column.
Note that you will have to manually change the destination column to the desired JSON type, including any fixed or skipped paths.
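A minimal sketch of making that change manually, with a hypothetical `events` table and hypothetical paths, using the JSON type's typed-path and SKIP hints:

```sql
-- Pin one path to a fixed type and skip another entirely
ALTER TABLE events
    MODIFY COLUMN payload JSON(user.id UInt64, SKIP debug.trace);
```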
Kafka virtual columns {#kafka-virtual-columns}
The following virtual columns are supported for Kafka compatible streaming data sources. When creating a new destination table, virtual columns can be added by using the **Add Column** button.
| Name | Description | Recommended Data Type |
|------|-------------|------------------------|
| `_key` | Kafka Message Key | `String` |
| `_timestamp` | Kafka Timestamp (Millisecond precision) | `DateTime64(3)` |
| `_partition` | Kafka Partition | `Int32` |
| `_offset` | Kafka Offset | `Int64` |
| `_topic` | Kafka Topic | `String` |
| `_header_keys` | Parallel array of keys in the record Headers | `Array(String)` |
| `_header_values` | Parallel array of values in the record Headers | `Array(String)` |
| `_raw_message` | Full Kafka Message | `String` |
Note that the `_raw_message` column is only recommended for JSON data. For use cases where only the JSON string is required (such as using ClickHouse `JsonExtract*` functions to populate a downstream materialized view), it may improve ClickPipes performance to delete all the "non-virtual" columns.
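As an illustration of that pattern, a minimal sketch, with hypothetical table and field names, of a materialized view that parses `_raw_message` downstream:

```sql
-- Destination table holding only the raw Kafka message
CREATE TABLE kafka_raw (_raw_message String) ENGINE = MergeTree ORDER BY tuple();

-- Typed table populated by extracting fields from the raw JSON
CREATE TABLE events (user_id UInt64, action String) ENGINE = MergeTree ORDER BY user_id;

CREATE MATERIALIZED VIEW events_mv TO events AS
SELECT
    JSONExtractUInt(_raw_message, 'user_id') AS user_id,
    JSONExtractString(_raw_message, 'action') AS action
FROM kafka_raw;
```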
sidebar_label: 'Integrate with a schema registry'
description: 'How to integrate ClickPipes with a schema registry for schema management'
slug: /integrations/clickpipes/kafka/schema-registries
sidebar_position: 1
title: 'Schema registries for Kafka ClickPipe'
doc_type: 'guide'
keywords: ['schema registries', 'kafka', 'clickpipes', 'avro', 'confluent']
Schema registries {#schema-registries}
ClickPipes supports schema registries for Avro data streams.
Supported registries for Kafka ClickPipes {#supported-schema-registries}
Schema registries that are API-compatible with the Confluent Schema Registry are supported. This includes:
Confluent Schema Registry
Redpanda Schema Registry
ClickPipes does not support AWS Glue Schema Registry or Azure Schema Registry yet. If you require support for these schema registries, reach out to our team.
Configuration {#schema-registry-configuration}
ClickPipes with Avro data require a schema registry. This can be configured in one of three ways:
1. Providing a complete path to the schema subject (e.g. `https://registry.example.com/subjects/events`). Optionally, a specific version can be referenced by appending `/versions/[version]` to the URL (otherwise ClickPipes will retrieve the latest version).
2. Providing a complete path to the schema id (e.g. `https://registry.example.com/schemas/ids/1000`).
3. Providing the root schema registry URL (e.g. `https://registry.example.com`).
How it works {#how-schema-registries-work}
ClickPipes dynamically retrieves and applies the Avro schema from the configured schema registry.
- If there's a schema id embedded in the message, it will use that to retrieve the schema.
- If there's no schema id embedded in the message, it will use the schema id or subject name specified in the ClickPipe configuration to retrieve the schema.
- If the message is written without an embedded schema id, and no schema id or subject name is specified in the ClickPipe configuration, then the schema will not be retrieved and the message will be skipped with a `SOURCE_SCHEMA_ERROR` logged in the ClickPipes errors table.
- If the message does not conform to the schema, then the message will be skipped with a `DATA_PARSING_ERROR` logged in the ClickPipes errors table.
Schema mapping {#schema-mapping}
The following rules are applied to the mapping between the retrieved Avro schema and the ClickHouse destination table:
If the Avro schema contains a field that is not included in the ClickHouse destination mapping, that field is ignored.
If the Avro schema is missing a field defined in the ClickHouse destination mapping, the ClickHouse column will be populated with a "zero" value, such as 0 or an empty string. Note that DEFAULT expressions are not currently evaluated for ClickPipes inserts (this is a temporary limitation pending updates to the ClickHouse server default processing).
If the Avro schema field and the ClickHouse column are incompatible, inserts of that row/message will fail, and the failure will be recorded in the ClickPipes errors table. Note that several implicit conversions are supported (like between numeric types), but not all (for example, an Avro record field cannot be inserted into an Int32 ClickHouse column).
description: 'Landing page with table of contents for the Kafka ClickPipes section'
slug: /integrations/clickpipes/kafka
sidebar_position: 1
title: 'Kafka ClickPipes'
doc_type: 'landing-page'
integration:
- support_level: 'core'
- category: 'clickpipes'
keywords: ['Kafka ClickPipes', 'Apache Kafka', 'streaming ingestion', 'real-time data', 'message broker']
| Page | Description |
|------|-------------|
| Reference | Details supported formats, sources, delivery semantics, authentication and experimental features supported by Kafka ClickPipes |
| Schema registries for Kafka ClickPipe | How to integrate ClickPipes with a schema registry for schema management |
| Creating your first Kafka ClickPipe | Step-by-step guide to creating your first Kafka ClickPipe. |
| Kafka ClickPipes FAQ | Frequently asked questions about ClickPipes for Kafka |
| Best practices | Details best practices to follow when working with Kafka ClickPipes |
sidebar_label: 'Create your first Kafka ClickPipe'
description: 'Step-by-step guide to creating your first Kafka ClickPipe.'
slug: /integrations/clickpipes/kafka/create-your-first-kafka-clickpipe
sidebar_position: 1
title: 'Creating your first Kafka ClickPipe'
doc_type: 'guide'
keywords: ['create kafka clickpipe', 'kafka', 'clickpipes', 'data sources', 'setup guide']
import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
import cp_step1 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step1.png';
import cp_step2 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step2.png';
import cp_step3 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step3.png';
import cp_step4a from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a.png';
import cp_step5 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step5.png';
import cp_overview from '@site/static/images/integrations/data-ingestion/clickpipes/cp_overview.png';
import cp_table_settings from '@site/static/images/integrations/data-ingestion/clickpipes/cp_table_settings.png';
import Image from '@theme/IdealImage';
Creating your first Kafka ClickPipe {#creating-your-first-kafka-clickpipe}
In this guide, we will walk you through the process of creating your first Kafka ClickPipe.
Navigate to data sources {#1-load-sql-console}
Select the **Data Sources** button on the left-side menu and click on "Set up a ClickPipe".
Select a data source {#2-select-data-source}
Select your Kafka data source from the list.
Configure the data source {#3-configure-data-source}
Fill out the form by providing your ClickPipe with a name, a description (optional), your credentials, and other connection details.
Configure a schema registry (optional) {#4-configure-your-schema-registry}
A valid schema is required for Avro streams. See Schema registries for more details on how to configure a schema registry.
Configure a reverse private endpoint (optional) {#5-configure-reverse-private-endpoint}
Configure a Reverse Private Endpoint to allow ClickPipes to connect to your Kafka cluster using AWS PrivateLink.
See our AWS PrivateLink documentation for more information.
Select your topic {#6-select-your-topic}
Select your topic and the UI will display a sample document from the topic.
Configure your destination table {#7-configure-your-destination-table}
In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions in the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.
You can also customize the advanced settings using the controls provided.
Configure permissions {#8-configure-permissions}
ClickPipes will create a dedicated user for writing data into the destination table. You can select a role for this internal user using a custom role or one of the predefined roles:
- **Full access**: with full access to the cluster. This might be useful if you use a Materialized View or Dictionary with the destination table.
- **Only destination table**: with `INSERT` permissions to the destination table only.
Complete setup {#9-complete-setup}
Clicking on "Create ClickPipe" will create and run your ClickPipe. It will now be listed in the Data Sources section. | {"source_file": "01_create-kafka-clickpipe.md"} | [
sidebar_label: 'FAQ'
description: 'Frequently asked questions about ClickPipes for Kafka'
slug: /integrations/clickpipes/kafka/faq
sidebar_position: 1
title: 'Kafka ClickPipes FAQ'
doc_type: 'guide'
keywords: ['kafka faq', 'clickpipes', 'upstash', 'azure event hubs', 'private link']
Kafka ClickPipes FAQ {#faq}
General {#general}
How does ClickPipes for Kafka work?
ClickPipes uses a dedicated architecture running the Kafka Consumer API to read data from a specified topic and then inserts the data into a ClickHouse table on a specific ClickHouse Cloud service.
What's the difference between ClickPipes and the ClickHouse Kafka Table Engine?
The Kafka Table engine is a ClickHouse core capability that implements a "pull model" where the ClickHouse server itself connects to Kafka, pulls events then writes them locally.
ClickPipes is a separate cloud service that runs independently of the ClickHouse service. It connects to Kafka (or other data sources) and pushes events to an associated ClickHouse Cloud service. This decoupled architecture allows for superior operational flexibility, clear separation of concerns, scalable ingestion, graceful failure management, extensibility, and more.
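For contrast, a minimal sketch of the self-managed pull model using the Kafka table engine, assuming a reachable broker and a hypothetical topic:

```sql
-- The ClickHouse server itself connects to Kafka and pulls messages
CREATE TABLE kafka_queue (message String)
ENGINE = Kafka('broker:9092', 'my_topic', 'my_consumer_group', 'JSONEachRow');
```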
What are the requirements for using ClickPipes for Kafka?
In order to use ClickPipes for Kafka, you will need a running Kafka broker and a ClickHouse Cloud service with ClickPipes enabled. You will also need to ensure that ClickHouse Cloud can access your Kafka broker. This can be achieved by allowing remote connection on the Kafka side, whitelisting [ClickHouse Cloud Egress IP addresses](/manage/data-sources/cloud-endpoints-api) in your Kafka setup. Alternatively, you can use [AWS PrivateLink](/integrations/clickpipes/aws-privatelink) to connect ClickPipes for Kafka to your Kafka brokers.
Does ClickPipes for Kafka support AWS PrivateLink?
AWS PrivateLink is supported. See [the documentation](/integrations/clickpipes/aws-privatelink) for more information on how to set it up.
Can I use ClickPipes for Kafka to write data to a Kafka topic?
No, the ClickPipes for Kafka is designed for reading data from Kafka topics, not writing data to them. To write data to a Kafka topic, you will need to use a dedicated Kafka producer.
Does ClickPipes support multiple brokers?
Yes, if the brokers are part of the same quorum they can be configured together delimited with `,`.
Can ClickPipes replicas be scaled?
Yes, ClickPipes for streaming can be scaled both horizontally and vertically.
Horizontal scaling adds more replicas to increase throughput, while vertical scaling increases the resources (CPU and RAM) allocated to each replica to handle more intensive workloads.
This can be configured during ClickPipe creation, or at any other point under **Settings** -> **Advanced Settings** -> **Scaling**.
Azure Event Hubs {#azure-eventhubs}
Does the Azure Event Hubs ClickPipe work without the Kafka surface?
No. ClickPipes requires the Event Hubs namespace to have the Kafka surface enabled. This is only available in tiers above **basic**. See the [Azure Event Hubs documentation](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs?tabs=passwordless#create-an-azure-event-hubs-namespace) for more information.
Does Azure Schema Registry work with ClickPipes?
No. ClickPipes only supports schema registries that are API-compatible with the Confluent Schema Registry, which is not the case for Azure Schema Registry. If you require support for this schema registry, [reach out to our team](https://clickhouse.com/company/contact?loc=clickpipes).
What permissions does my policy need to consume from Azure Event Hubs?
To list topics and consume events, the shared access policy that is given to ClickPipes requires, at minimum, a 'Listen' claim.
Why is my Event Hubs not returning any data?
If your ClickHouse instance is in a different region or continent from your Event Hubs deployment, you may experience timeouts when onboarding your ClickPipes, and higher-latency when consuming data from the Event Hub. We recommend deploying ClickHouse Cloud and Azure Event Hubs in the same cloud region, or regions located close to each other, to avoid performance overhead.
Should I include the port number for Azure Event Hubs?
Yes. ClickPipes expects you to include the port number for the Kafka surface, which should be `:9093`.
Are ClickPipes IPs still relevant for Azure Event Hubs?
Yes. To restrict traffic to your Event Hubs instance, please add the [documented static NAT IPs](../index.md#list-of-static-ips).
Is the connection string for the Event Hub, or is it for the Event Hub namespace?
Both work. We strongly recommend using a shared access policy at the **namespace level** to retrieve samples from multiple Event Hubs.
sidebar_label: 'Best practices'
description: 'Details best practices to follow when working with Kafka ClickPipes'
slug: /integrations/clickpipes/kafka/best-practices
sidebar_position: 1
title: 'Best practices'
doc_type: 'guide'
keywords: ['kafka best practices', 'clickpipes', 'compression', 'authentication', 'scaling']
Best practices {#best-practices}
Message Compression {#compression}
We strongly recommend using compression for your Kafka topics. Compression can result in a significant saving in data transfer costs with virtually no performance hit.
To learn more about message compression in Kafka, we recommend starting with this guide.
Limitations {#limitations}
`DEFAULT` is not supported.
Delivery semantics {#delivery-semantics}
ClickPipes for Kafka provides at-least-once delivery semantics (as one of the most commonly used approaches). We'd love to hear your feedback on delivery semantics via our contact form. If you need exactly-once semantics, we recommend using our official clickhouse-kafka-connect sink.
Authentication {#authentication}
For Apache Kafka protocol data sources, ClickPipes supports SASL/PLAIN authentication with TLS encryption, as well as SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512. Depending on the streaming source (Redpanda, MSK, etc.), all or a subset of these auth mechanisms will be enabled based on compatibility. If your auth needs differ, please give us feedback.
IAM {#iam}
:::info
IAM Authentication for the MSK ClickPipe is a beta feature.
:::
ClickPipes supports the following AWS MSK authentication mechanisms:
- SASL/SCRAM-SHA-512 authentication
- IAM credentials or role-based access authentication
When using IAM authentication to connect to an MSK broker, the IAM role must have the necessary permissions.
Below is an example of the required IAM policy for Apache Kafka APIs for MSK:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kafka-cluster:Connect"
],
"Resource": [
"arn:aws:kafka:us-west-2:12345678912:cluster/clickpipes-testing-brokers/b194d5ae-5013-4b5b-ad27-3ca9f56299c9-10"
]
},
{
"Effect": "Allow",
"Action": [
"kafka-cluster:DescribeTopic",
"kafka-cluster:ReadData"
],
"Resource": [
"arn:aws:kafka:us-west-2:12345678912:topic/clickpipes-testing-brokers/*"
]
},
{
"Effect": "Allow",
"Action": [
"kafka-cluster:AlterGroup",
"kafka-cluster:DescribeGroup"
],
"Resource": [
"arn:aws:kafka:us-east-1:12345678912:group/clickpipes-testing-brokers/*"
]
}
]
}
```
Configuring a trusted relationship {#configuring-a-trusted-relationship}
If you are authenticating to MSK with an IAM role ARN, you will need to add a trusted relationship to the role so that your ClickHouse Cloud instance can assume it.
:::note
Role-based access only works for ClickHouse Cloud instances deployed to AWS.
:::
```json
{
"Version": "2012-10-17",
"Statement": [
...
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::12345678912:role/CH-S3-your-clickhouse-cloud-role"
},
"Action": "sts:AssumeRole"
},
]
}
```
Custom Certificates {#custom-certificates}
ClickPipes for Kafka supports the upload of custom certificates for Kafka brokers which use non-public server certificates.
Upload of client certificates and keys is also supported for mutual TLS (mTLS) based authentication.
Performance {#performance}
Batching {#batching}
ClickPipes inserts data into ClickHouse in batches. This is to avoid creating too many parts in the database which can lead to performance issues in the cluster.
Batches are inserted when one of the following criteria has been met:
- The batch size has reached the maximum size (100,000 rows or 32MB per 1GB of pod memory)
- The batch has been open for a maximum amount of time (5 seconds)
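You can observe the effect of batching on the destination yourself; a sketch, assuming a destination table named `events`:

```sql
-- Active data parts for the destination table; large batches keep this count low
SELECT count() AS active_parts
FROM system.parts
WHERE table = 'events' AND active;
```

A low, stable part count indicates that batching is keeping up with the incoming stream.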
Latency {#latency}
Latency (defined as the time between the Kafka message being produced and the message being available in ClickHouse) will be dependent on a number of factors (i.e. broker latency, network latency, message size/format). The batching described in the section above will also impact latency. We always recommend testing your specific use case with typical loads to determine the expected latency.
ClickPipes does not provide any guarantees concerning latency. If you have specific low-latency requirements, please contact us.
Scaling {#scaling}
ClickPipes for Kafka is designed to scale horizontally and vertically. By default, we create a consumer group with one consumer. This can be configured during ClickPipe creation, or at any other point under **Settings** -> **Advanced Settings** -> **Scaling**.
ClickPipes provides high availability with an availability-zone-distributed architecture. This requires scaling to at least two consumers. Regardless of the number of running consumers, fault tolerance is available by design. If a consumer or its underlying infrastructure fails, the ClickPipe will automatically restart the consumer and continue processing messages.
Benchmarks {#benchmarks}
Below are some informal benchmarks for ClickPipes for Kafka that can be used to get a general idea of the baseline performance. It's important to know that many factors can impact performance, including message size, data types, and data format. Your mileage may vary, and what we show here is not a guarantee of actual performance.
Benchmark details:
- We used production ClickHouse Cloud services with enough resources to ensure that throughput was not bottlenecked by the insert processing on the ClickHouse side.
- The ClickHouse Cloud service, the Kafka cluster (Confluent Cloud), and the ClickPipe were all running in the same region (`us-east-2`).
- The ClickPipe was configured with a single L-sized replica (4 GiB of RAM and 1 vCPU).
- The sample data included nested data with a mix of `UUID`, `String`, and `Int` datatypes. Other datatypes, such as `Float`, `Decimal`, and `DateTime`, may be less performant.
- There was no appreciable difference in performance using compressed and uncompressed data.
| Replica Size | Message Size | Data Format | Throughput |
|---------------|--------------|-------------|------------|
| Large (L) | 1.6 KB | JSON | 63 MB/s |
| Large (L) | 1.6 KB | Avro | 99 MB/s |
title: 'Pausing and Resuming a MySQL ClickPipe'
description: 'Pausing and Resuming a MySQL ClickPipe'
sidebar_label: 'Pause table'
slug: /integrations/clickpipes/mysql/pause_and_resume
doc_type: 'guide'
keywords: ['clickpipes', 'mysql', 'cdc', 'data ingestion', 'real-time sync']
import Image from '@theme/IdealImage';
import pause_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/pause_button.png'
import pause_dialog from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/pause_dialog.png'
import pause_status from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/pause_status.png'
import resume_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/resume_button.png'
import resume_dialog from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/resume_dialog.png'
There are scenarios where it would be useful to pause a MySQL ClickPipe. For example, you may want to run some analytics on existing data in a static state. Or, you might be performing upgrades on MySQL. Here is how you can pause and resume a MySQL ClickPipe.
Steps to pause a MySQL ClickPipe {#pause-clickpipe-steps}
1. In the Data Sources tab, click on the MySQL ClickPipe you wish to pause.
2. Head over to the **Settings** tab.
3. Click on the **Pause** button.
4. A dialog box should appear for confirmation. Click on Pause again.
5. Head over to the **Metrics** tab.
6. In around 5 seconds (and also on page refresh), the status of the pipe should be **Paused**.
Steps to resume a MySQL ClickPipe {#resume-clickpipe-steps}
1. In the Data Sources tab, click on the MySQL ClickPipe you wish to resume. The status of the mirror should be **Paused** initially.
2. Head over to the **Settings** tab.
3. Click on the **Resume** button.
4. A dialog box should appear for confirmation. Click on Resume again.
5. Head over to the **Metrics** tab.
6. In around 5 seconds (and also on page refresh), the status of the pipe should be **Running**.
sidebar_label: 'Lifecycle of a MySQL ClickPipe'
description: 'Various pipe statuses and their meanings'
slug: /integrations/clickpipes/mysql/lifecycle
title: 'Lifecycle of a MySQL ClickPipe'
doc_type: 'guide'
keywords: ['clickpipes', 'mysql', 'cdc', 'data ingestion', 'real-time sync']
Lifecycle of a MySQL ClickPipe {#lifecycle}
This is a document on the various phases of a MySQL ClickPipe, the different statuses it can have, and what they mean. Note that this applies to MariaDB as well.
Provisioning {#provisioning}
When you click on the Create ClickPipe button, the ClickPipe is created in a **Provisioning** state. The provisioning process is where we spin up the underlying infrastructure to run ClickPipes for the service, along with registering some initial metadata for the pipe. Since compute for ClickPipes within a service is shared, your second ClickPipe will be created much faster than the first one -- as the infrastructure is already in place.
Setup {#setup}
After a pipe is provisioned, it enters the **Setup** state. This state is where we create the destination ClickHouse tables. We also obtain and record the table definitions of your source tables here.
Snapshot {#snapshot}
Once setup is complete, we enter the **Snapshot** state (unless it's a CDC-only pipe, which would transition to **Running**). **Snapshot**, **Initial Snapshot**, and **Initial Load** (more common) are interchangeable terms. In this state, we take a snapshot of the source MySQL tables and load them into ClickHouse. The retention setting for binary logs should account for the initial load time. For more information on initial load, see the parallel initial load documentation. The pipe will also enter the **Snapshot** state when a resync is triggered or when new tables are added to an existing pipe.
Running {#running}
Once the initial load is complete, the pipe enters the **Running** state (unless it's a snapshot-only pipe, which would transition to **Completed**). This is where the pipe begins **Change-Data Capture**. In this state, we start reading binary logs from the source database and sync the data to ClickHouse in batches. For information on controlling CDC, see the doc on controlling CDC.
Paused {#paused}
Once the pipe is in the **Running** state, you can pause it. This will stop the CDC process and the pipe will enter the **Paused** state. In this state, no new data is pulled from the source database, but the existing data in ClickHouse remains intact. You can resume the pipe from this state.
Pausing {#pausing}
:::note
This state is coming soon. If you're using our OpenAPI, consider adding support for it now to ensure your integration continues working when it's released.
:::
When you click on the Pause button, the pipe enters the **Pausing** state. This is a transient state where we are in the process of stopping the CDC process. Once the CDC process is fully stopped, the pipe will enter the **Paused** state.
Modifying {#modifying}
:::note
This state is coming soon. If you're using our OpenAPI, consider adding support for it now to ensure your integration continues working when it's released.
:::
Currently, this indicates the pipe is in the process of removing tables.
Resync {#resync}
:::note
This state is coming soon. If you're using our OpenAPI, consider adding support for it now to ensure your integration continues working when it's released.
:::
This state indicates the pipe is in the phase of resync where it is performing an atomic swap of the `_resync` tables with the original tables. More information on resync can be found in the resync documentation.
Completed {#completed}
This state applies to snapshot-only pipes and indicates that the snapshot has been completed and there's no more work to do.
Failed {#failed}
If there is an irrecoverable error in the pipe, it will enter the **Failed** state. You can reach out to support or resync your pipe to recover from this state.
title: 'Resyncing Specific Tables'
description: 'Resyncing specific tables in a MySQL ClickPipe'
slug: /integrations/clickpipes/mysql/table_resync
sidebar_label: 'Resync table'
doc_type: 'guide'
keywords: ['clickpipes', 'mysql', 'cdc', 'data ingestion', 'real-time sync']
Resyncing specific tables {#resync-tables}
There are scenarios where it would be useful to have specific tables of a pipe re-synced. Some sample use-cases could be major schema changes on MySQL, or some data re-modelling in ClickHouse.
While resyncing individual tables with a button click is a work-in-progress, this guide will share steps on how you can achieve this today in the MySQL ClickPipe.
1. Remove the table from the pipe {#removing-table}
This can be done by following the table removal guide.
2. Truncate or drop the table on ClickHouse {#truncate-drop-table}
This step is to avoid data duplication when we add this table again in the next step. You can do this by heading over to the **SQL Console** tab in ClickHouse Cloud and running a query. Note that we have validation to block table addition if the table already exists in ClickHouse and is not empty.
3. Add the table to the ClickPipe again {#add-table-again}
This can be done by following the table addition guide.
title: 'Adding specific tables to a ClickPipe'
description: 'Describes the steps need to add specific tables to a ClickPipe.'
sidebar_label: 'Add table'
slug: /integrations/clickpipes/mysql/add_table
show_title: false
doc_type: 'guide'
keywords: ['clickpipes', 'mysql', 'cdc', 'data ingestion', 'real-time sync']
import Image from '@theme/IdealImage';
import add_table from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/add_table.png'
Adding specific tables to a ClickPipe
There are scenarios where it would be useful to add specific tables to a pipe. This becomes a common necessity as your transactional or analytical workload scales.
Steps to add specific tables to a ClickPipe {#add-tables-steps}
This can be done by the following steps:
1. **Pause** the pipe.
2. Click on Edit Table settings.
3. Locate your table - this can be done by searching it in the search bar.
4. Select the table by clicking on the checkbox.
5. Click update.
Upon successful update, the pipe will have statuses **Setup**, **Snapshot**, and **Running** in that order. The table's initial load can be tracked in the **Tables** tab.
:::info
CDC for existing tables resumes automatically after the new table's snapshot completes.
:::
title: 'Supported data types'
slug: /integrations/clickpipes/mysql/datatypes
description: 'Page describing MySQL ClickPipe datatype mapping from MySQL to ClickHouse'
doc_type: 'reference'
keywords: ['MySQL ClickPipe datatypes', 'MySQL to ClickHouse data types', 'ClickPipe datatype mapping', 'MySQL ClickHouse type conversion', 'database type compatibility']
Here is the supported data-type mapping for the MySQL ClickPipe:
| MySQL Type | ClickHouse type | Notes |
| --------------------------| -----------------------| -------------------------------------------------------------------------------------- |
| Enum | LowCardinality(String) ||
| Set | String ||
| Decimal | Decimal ||
| TinyInt | Int8 | Supports unsigned.|
| SmallInt | Int16 | Supports unsigned.|
| MediumInt, Int | Int32 | Supports unsigned.|
| BigInt | Int64 | Supports unsigned.|
| Year | Int16 ||
| TinyText, Text, MediumText, LongText | String ||
| TinyBlob, Blob, MediumBlob, LongBlob | String ||
| Char, Varchar | String ||
| Binary, VarBinary | String ||
| TinyInt(1) | Bool ||
| JSON | String | MySQL only; MariaDB `json` is just an alias for `text` with a constraint. |
| Geometry & Geometry Types | String | WKT (Well-Known Text). WKT may suffer from small precision loss. |
| Vector | Array(Float32) | MySQL only; MariaDB is adding support soon. |
| Float | Float32 | Precision on ClickHouse may differ from MySQL during initial load due to text protocol.|
| Double | Float64 | Precision on ClickHouse may differ from MySQL during initial load due to text protocol.|
| Date | Date32 | 00 day/month mapped to 01.|
| Time | DateTime64(6) | Time offset from unix epoch.|
| Datetime, Timestamp | DateTime64(6) | 00 day/month mapped to 01.|
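As an illustration of the mapping above, here is a hypothetical MySQL table annotated with the ClickHouse types each column would map to (a sketch; the exact DDL ClickPipes generates, e.g. nullability and ordering keys, may differ):

```sql
-- Hypothetical MySQL source table
CREATE TABLE orders (
    id         BIGINT UNSIGNED,     -- BigInt (unsigned) -> UInt64
    status     ENUM('new','paid'),  -- Enum              -> LowCardinality(String)
    amount     DECIMAL(10,2),       -- Decimal           -> Decimal(10, 2)
    note       TEXT,                -- Text              -> String
    created_at DATETIME(6),         -- Datetime          -> DateTime64(6)
    PRIMARY KEY (id)
);
```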
title: 'Resyncing a Database ClickPipe'
description: 'Doc for resyncing a database ClickPipe'
slug: /integrations/clickpipes/mysql/resync
sidebar_label: 'Resync ClickPipe'
doc_type: 'guide'
keywords: ['clickpipes', 'mysql', 'cdc', 'data ingestion', 'real-time sync']
import resync_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/resync_button.png'
import Image from '@theme/IdealImage';
What does Resync do? {#what-mysql-resync-do}
Resync involves the following operations in order:
1. The existing ClickPipe is dropped, and a new "resync" ClickPipe is kicked off. Thus, changes to source table structures will be picked up when you resync.
2. The resync ClickPipe creates (or replaces) a new set of destination tables which have the same names as the original tables except with a `_resync` suffix.
3. Initial load is performed on the `_resync` tables.
4. The `_resync` tables are then swapped with the original tables. Soft-deleted rows are transferred from the original tables to the `_resync` tables before the swap.
All the settings of the original ClickPipe are retained in the resync ClickPipe. The statistics of the original ClickPipe are cleared in the UI.
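The swap itself is atomic; conceptually it is similar to running ClickHouse's `EXCHANGE TABLES` statement (a sketch with hypothetical names, not the literal mechanism):

```sql
EXCHANGE TABLES db.orders AND db.orders_resync;
```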
Use cases for resyncing a ClickPipe {#use-cases-mysql-resync}
Here are a few scenarios:
- You may need to perform major schema changes on the source tables which would break the existing ClickPipe, requiring a restart. You can just click Resync after performing the changes.
- Specifically for ClickHouse, maybe you need to change the ORDER BY keys on the target tables. You can resync to re-populate data into the new table with the right sorting key.
:::note
You can resync multiple times; however, please account for the load on the source database when you resync.
:::
Resync ClickPipe Guide {#guide-mysql-resync}
1. In the Data Sources tab, click on the MySQL ClickPipe you wish to resync.
2. Head over to the **Settings** tab.
3. Click on the **Resync** button.
4. A dialog box should appear for confirmation. Click on Resync again.
5. Head over to the **Metrics** tab.
6. In around 5 seconds (and also on page refresh), the status of the pipe should be **Setup** or **Snapshot**.
The initial load of the resync can be monitored in the **Tables** tab - in the **Initial Load Stats** section. Once the initial load is complete, the pipe will atomically swap the `_resync` tables with the original tables. During the swap, the status will be **Resync**. Once the swap is complete, the pipe will enter the **Running** state and perform CDC if enabled.
title: 'Removing specific tables from a ClickPipe'
description: 'Removing specific tables from a ClickPipe'
sidebar_label: 'Remove table'
slug: /integrations/clickpipes/mysql/removing_tables
doc_type: 'guide'
keywords: ['clickpipes', 'mysql', 'cdc', 'data ingestion', 'real-time sync']
import Image from '@theme/IdealImage';
import remove_table from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/remove_table.png'
In some cases, it makes sense to exclude specific tables from a MySQL ClickPipe - for example, if a table isn't needed for your analytics workload, skipping it can reduce storage and replication costs in ClickHouse.
Steps to remove specific tables {#remove-tables-steps}
The first step is to remove the table from the pipe. This can be done by the following steps:
1. **Pause** the pipe.
2. Click on Edit Table Settings.
3. Locate your table - this can be done by searching it in the search bar.
4. Deselect the table by clicking on the selected checkbox.
5. Click update.
Upon successful update, in the **Metrics** tab the status will be **Running**. This table will no longer be replicated by this ClickPipe.
sidebar_label: 'FAQ'
description: 'Frequently asked questions about ClickPipes for MySQL.'
slug: /integrations/clickpipes/mysql/faq
sidebar_position: 2
title: 'ClickPipes for MySQL FAQ'
doc_type: 'reference'
keywords: ['MySQL ClickPipes FAQ', 'ClickPipes MySQL troubleshooting', 'MySQL ClickHouse replication', 'ClickPipes MySQL support', 'MySQL CDC ClickHouse']
ClickPipes for MySQL FAQ
Does the MySQL ClickPipe support MariaDB? {#does-the-clickpipe-support-mariadb}
Yes, the MySQL ClickPipe supports MariaDB 10.0 and above. The configuration for it is very similar to MySQL, using GTID replication by default.
Does the MySQL ClickPipe support PlanetScale, Vitess, or TiDB? {#does-the-clickpipe-support-planetscale-vitess}
No, these do not support MySQL's binlog API.
How is replication managed? {#how-is-replication-managed}
We support both GTID & FilePos replication. Unlike Postgres, there is no slot to manage the offset. Instead, you must configure your MySQL server with a sufficient binlog retention period. If our offset into the binlog becomes invalidated (e.g., the mirror was paused too long, or a database failover occurred while using FilePos replication), then you will need to resync the pipe. Make sure to optimize materialized views that depend on destination tables, as inefficient queries can slow ingestion enough to fall behind the retention period.
It's also possible for an inactive database to rotate the log file without allowing ClickPipes to progress to a more recent offset. You may need to set up a heartbeat table with regularly scheduled updates.
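A minimal sketch of such a heartbeat, assuming hypothetical names and that the MySQL event scheduler is enabled:

```sql
CREATE DATABASE IF NOT EXISTS heartbeat;
CREATE TABLE heartbeat.ticks (
    id TINYINT PRIMARY KEY,
    ts DATETIME(6) NOT NULL
);
INSERT INTO heartbeat.ticks VALUES (1, NOW(6));

-- Requires the event scheduler: SET GLOBAL event_scheduler = ON;
CREATE EVENT heartbeat.tick_every_minute
ON SCHEDULE EVERY 1 MINUTE
DO UPDATE heartbeat.ticks SET ts = NOW(6) WHERE id = 1;
```

The periodic update generates binlog activity even when the rest of the database is idle, so the log keeps advancing past the recorded offset.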
At the start of an initial load, we record the binlog offset to start at. This offset must still be valid when the initial load finishes in order for CDC to progress. If you are ingesting a large amount of data, be sure to configure an appropriate binlog retention period. While setting up tables, you can speed up the initial load by configuring **Use a custom partitioning key for initial load** for large tables under advanced settings, so that we can load a single table in parallel.
Why am I getting a TLS certificate validation error when connecting to MySQL? {#tls-certificate-validation-error}
When connecting to MySQL, you may encounter certificate errors like `x509: certificate is not valid for any names` or `x509: certificate signed by unknown authority`. These occur because ClickPipes enables TLS encryption by default.
You have several options to resolve these issues:
1. **Set the TLS Host field** - When the hostname in your connection differs from the certificate (common with AWS PrivateLink via Endpoint Service). Set "TLS Host (optional)" to match the certificate's Common Name (CN) or Subject Alternative Name (SAN).
2. **Upload your Root CA** - For MySQL servers using internal Certificate Authorities or Google Cloud SQL in the default per-instance CA configuration. For more information on how to access Google Cloud SQL certificates, see this section.
3. **Configure server certificate** - Update your server's SSL certificate to include all connection hostnames and use a trusted Certificate Authority.
4. **Skip certificate verification** - For self-hosted MySQL or MariaDB, whose default configurations provision a self-signed certificate we can't validate (MySQL, MariaDB). Relying on this certificate encrypts the data in transit but runs the risk of server impersonation. We recommend properly signed certificates for production environments, but this option is useful for testing on a one-off instance or connecting to legacy infrastructure.
Do you support schema changes? {#do-you-support-schema-changes}
Please refer to the **ClickPipes for MySQL: Schema Changes Propagation Support** page for more information.
title: 'Schema Changes Propagation Support'
slug: /integrations/clickpipes/mysql/schema-changes
description: 'Page describing schema change types detectable by ClickPipes in the source tables'
doc_type: 'reference'
keywords: ['clickpipes', 'mysql', 'cdc', 'data ingestion', 'real-time sync']
ClickPipes for MySQL can detect schema changes in the source tables and, in some cases, automatically propagate the changes to the destination tables. The way each DDL operation is handled is documented below:
| Schema Change Type | Behaviour |
| ------------------ | --------- |
| Adding a new column (`ALTER TABLE ADD COLUMN ...`) | Propagated automatically. The new column(s) will be populated for all rows replicated after the schema change. |
| Adding a new column with a default value (`ALTER TABLE ADD COLUMN ... DEFAULT ...`) | Propagated automatically. The new column(s) will be populated for all rows replicated after the schema change, but existing rows will not show the default value without a full table refresh. |
| Dropping an existing column (`ALTER TABLE DROP COLUMN ...`) | Detected, but **not** propagated. The dropped column(s) will be populated with `NULL` for all rows replicated after the schema change. |
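For example, a change of the first kind on a hypothetical `orders` table propagates automatically:

```sql
ALTER TABLE orders ADD COLUMN discount DECIMAL(10,2) DEFAULT 0;
-- Rows replicated after this change carry `discount`; existing rows
-- will not show the default value without a full table refresh.
```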
sidebar_label: 'Ingesting Data from MySQL to ClickHouse'
description: 'Describes how to seamlessly connect your MySQL to ClickHouse Cloud.'
slug: /integrations/clickpipes/mysql
title: 'Ingesting data from MySQL to ClickHouse (using CDC)'
doc_type: 'guide'
keywords: ['MySQL', 'ClickPipes', 'CDC', 'change data capture', 'database replication']
import BetaBadge from '@theme/badges/BetaBadge';
import cp_service from '@site/static/images/integrations/data-ingestion/clickpipes/cp_service.png';
import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
import mysql_tile from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/mysql-tile.png'
import mysql_connection_details from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/mysql-connection-details.png'
import ssh_tunnel from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/ssh-tunnel.jpg'
import select_destination_db from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/select-destination-db.png'
import ch_permissions from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/ch-permissions.jpg'
import Image from '@theme/IdealImage';
Ingesting data from MySQL to ClickHouse (using CDC)
:::info
Ingesting data from MySQL to ClickHouse Cloud via ClickPipes is in public beta.
:::
You can use ClickPipes to ingest data from your source MySQL database into ClickHouse Cloud. The source MySQL database can be hosted on-premises or in the cloud using services like Amazon RDS, Google Cloud SQL, and others.
Prerequisites {#prerequisites}
To get started, you first need to ensure that your MySQL database is correctly configured for binlog replication. The configuration steps depend on how you're deploying MySQL, so please follow the relevant guide below:
- Amazon RDS MySQL
- Amazon Aurora MySQL
- Cloud SQL for MySQL
- Generic MySQL
- Amazon RDS MariaDB
- Generic MariaDB
Once your source MySQL database is set up, you can continue creating your ClickPipe.
Create your ClickPipe {#create-your-clickpipe}
1. Make sure you are logged in to your ClickHouse Cloud account. If you don't have an account yet, you can sign up here.
2. In the ClickHouse Cloud console, navigate to your ClickHouse Cloud Service.
3. Select the **Data Sources** button on the left-side menu and click on "Set up a ClickPipe".
4. Select the **MySQL CDC** tile.
Add your source MySQL database connection {#add-your-source-mysql-database-connection}
Fill in the connection details for your source MySQL database which you configured in the prerequisites step.
:::info
Before you start adding your connection details, make sure that you have whitelisted the ClickPipes IP addresses in your firewall rules. On the following page you can find a list of ClickPipes IP addresses.
For more information refer to the source MySQL setup guides linked at the top of this page.
:::
(Optional) Set up SSH Tunneling {#optional-set-up-ssh-tunneling}
You can specify SSH tunneling details if your source MySQL database is not publicly accessible.
Enable the "Use SSH Tunnelling" toggle.
Fill in the SSH connection details.
To use Key-based authentication, click on "Revoke and generate key pair" to generate a new key pair and copy the generated public key to your SSH server under
~/.ssh/authorized_keys
.
Click on "Verify Connection" to verify the connection.
:::note
Make sure to whitelist
ClickPipes IP addresses
in your firewall rules for the SSH bastion host so that ClickPipes can establish the SSH tunnel.
:::
Once the connection details are filled in, click **Next**.
Configure advanced settings {#advanced-settings}
You can configure the advanced settings if needed. A brief description of each setting is provided below:
- **Sync interval**: The interval at which ClickPipes will poll the source database for changes. This has an impact on the destination ClickHouse service; for cost-sensitive users we recommend keeping this at a higher value (over `3600` seconds).
- **Parallel threads for initial load**: The number of parallel workers used to fetch the initial snapshot. Useful when you have a large number of tables and want to control the number of parallel workers. This setting is per-table.
- **Pull batch size**: The number of rows to fetch in a single batch. This is a best-effort setting and may not be respected in all cases.
- **Snapshot number of rows per partition**: The number of rows fetched in each partition during the initial snapshot. Useful when your tables contain a large number of rows and you want to control how many rows are fetched per partition.
- **Snapshot number of tables in parallel**: The number of tables fetched in parallel during the initial snapshot. Useful when you have a large number of tables and want to control how many are fetched in parallel.
Configure the tables {#configure-the-tables}
Here you can select the destination database for your ClickPipe. You can either select an existing database or create a new one.
You can select the tables you want to replicate from the source MySQL database. While selecting the tables, you can also choose to rename the tables in the destination ClickHouse database as well as exclude specific columns.
Review permissions and start the ClickPipe {#review-permissions-and-start-the-clickpipe}
Select the "Full access" role from the permissions dropdown and click "Complete Setup".
Finally, please refer to the "ClickPipes for MySQL FAQ" page for more information about common issues and how to resolve them.
What's next? {#whats-next}
Once you've set up your ClickPipe to replicate data from MySQL to ClickHouse Cloud, you can focus on how to query and model your data for optimal performance. For common questions around MySQL CDC and troubleshooting, see the MySQL FAQs page.
title: 'Controlling the Syncing of a MySQL ClickPipe'
description: 'Doc for controlling the sync of a MySQL ClickPipe'
slug: /integrations/clickpipes/mysql/sync_control
sidebar_label: 'Controlling syncs'
keywords: ['MySQL ClickPipe', 'ClickPipe sync control', 'MySQL CDC replication', 'ClickHouse MySQL connector', 'database synchronization ClickHouse']
doc_type: 'guide'
import edit_sync_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/edit_sync_button.png'
import create_sync_settings from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/create_sync_settings.png'
import edit_sync_settings from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/sync_settings_edit.png'
import cdc_syncs from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/cdc_syncs.png'
import Image from '@theme/IdealImage';
This document describes how to control the sync of a MySQL ClickPipe when the ClickPipe is in **CDC (Running) mode**.
Overview {#overview}
Database ClickPipes have an architecture that consists of two parallel processes - pulling from the source database and pushing to the target database. The pulling process is controlled by a sync configuration that defines how often the data should be pulled and how much data should be pulled at a time. By "at a time", we mean one batch - since the ClickPipe pulls and pushes data in batches.
There are two main ways to control the sync of a MySQL ClickPipe. The ClickPipe will start pushing when one of the below settings kicks in.
Sync interval {#interval}
The sync interval of the pipe is the amount of time (in seconds) for which the ClickPipe will pull records from the source database. The time to push what we have to ClickHouse is not included in this interval.
The default is **1 minute**. The sync interval can be set to any positive integer value, but it is recommended to keep it above 10 seconds.
Pull batch size {#batch-size}
The pull batch size is the number of records that the ClickPipe will pull from the source database in one batch. Records means inserts, updates, and deletes done on the tables that are part of the pipe.
The default is **100,000** records. A safe maximum is 10 million.
An exception: Long-running transactions on source {#transactions}
When a transaction is run on the source database, the ClickPipe waits until it receives the COMMIT of the transaction before it moves forward. This behavior overrides both the sync interval and the pull batch size.
Configuring sync settings {#configuring}
You can set the sync interval and pull batch size when you create a ClickPipe or edit an existing one.
When creating a ClickPipe it will be seen in the second step of the creation wizard, as shown below:
When editing an existing ClickPipe, you can head over to the **Settings** tab of the pipe, pause the pipe, and then click on **Configure** here:
This will open a flyout with the sync settings, where you can change the sync interval and pull batch size:
Monitoring sync control behaviour {#monitoring}
You can see how long each batch takes in the **CDC Syncs** table in the **Metrics** tab of the ClickPipe. Note that the duration here includes push time; also, if there are no rows incoming, the ClickPipe waits, and the wait time is included in the duration.
title: 'Parallel Snapshot In The MySQL ClickPipe'
description: 'Doc for explaining parallel snapshot in the MySQL ClickPipe'
slug: /integrations/clickpipes/mysql/parallel_initial_load
sidebar_label: 'How parallel snapshot works'
doc_type: 'guide'
keywords: ['clickpipes', 'mysql', 'cdc', 'data ingestion', 'real-time sync']
import snapshot_params from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/snapshot_params.png'
import partition_key from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/partition_key.png'
import Image from '@theme/IdealImage';
This document explains how parallelized snapshot/initial load works in the MySQL ClickPipe and describes the snapshot parameters that can be used to control it.
Overview {#overview-mysql-snapshot}
Initial load is the first phase of a CDC ClickPipe, where the ClickPipe syncs the historical data of the tables in the source database over to ClickHouse before starting CDC. Often, this is done in a single-threaded manner. However, the MySQL ClickPipe can parallelize this process, which can significantly speed up the initial load.
Partition key column {#key-mysql-snapshot}
Once we've enabled the feature flag, you should see the below setting in the ClickPipe table picker (both during creation and editing of a ClickPipe):
The MySQL ClickPipe uses a column on your source table to logically partition the source tables. This column is called the **partition key column**. It is used to divide the source table into partitions, which can then be processed in parallel by the ClickPipe.
:::warning
The partition key column must be indexed in the source table to see a good performance boost. This can be checked by running `SHOW INDEX FROM <table_name>` in MySQL.
:::
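For example, for a hypothetical `orders` table partitioned on `id`:

```sql
-- Check whether the candidate partition key column is already indexed
SHOW INDEX FROM orders;

-- Add an index if it is missing, so per-partition range reads stay efficient
CREATE INDEX idx_orders_id ON orders (id);
```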
Logical partitioning {#logical-partitioning-mysql-snapshot}
Let's talk about the settings below:
Snapshot number of rows per partition {#numrows-mysql-snapshot}
This setting controls how many rows constitute a partition. The ClickPipe will read the source table in chunks of this size, and chunks are processed in parallel based on the initial load parallelism set. The default value is 100,000 rows per partition.
Initial load parallelism {#parallelism-mysql-snapshot}
This setting controls how many partitions are processed in parallel. The default value is 4, which means that the ClickPipe will read 4 partitions of the source table in parallel. This can be increased to speed up the initial load, but it is recommended to keep it to a reasonable value depending on your source instance specs to avoid overwhelming the source database. The ClickPipe will automatically adjust the number of partitions based on the size of the source table and the number of rows per partition.
Snapshot number of tables in parallel {#tables-parallel-mysql-snapshot}
Not really related to parallel snapshot, but this setting controls how many tables are processed in parallel during the initial load. The default value is 1. Note that this is on top of the partition parallelism, so if you have 4 partitions and 2 tables, the ClickPipe will read 8 partitions in parallel.
Monitoring parallel snapshot in MySQL {#monitoring-parallel-mysql-snapshot}
You can run `SHOW PROCESSLIST` in MySQL to see the parallel snapshot in action. The ClickPipe will create multiple connections to the source database, each reading a different partition of the source table. If you see `SELECT` queries with different ranges, it means that the ClickPipe is reading the source tables. You can also see the `COUNT(*)` and the partitioning query here.
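The statements below sketch what those connections may look like; table names and ranges are hypothetical:

```sql
SHOW FULL PROCESSLIST;

-- Typical statements issued by the ClickPipe during a parallel snapshot:
-- SELECT COUNT(*) FROM orders;
-- SELECT ... FROM orders WHERE id >= 1      AND id <= 100000;
-- SELECT ... FROM orders WHERE id >= 100001 AND id <= 200000;
```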
Limitations {#limitations-parallel-mysql-snapshot}
- The snapshot parameters cannot be edited after pipe creation. If you want to change them, you will have to create a new ClickPipe.
- When adding tables to an existing ClickPipe, you cannot change the snapshot parameters. The ClickPipe will use the existing parameters for the new tables.
- The partition key column should not contain `NULL`s, as they are skipped by the partitioning logic.
sidebar_label: 'Reference'
description: 'Details supported formats, exactly-once semantics, view-support, scaling, limitations, authentication with object storage ClickPipes'
slug: /integrations/clickpipes/object-storage/reference
sidebar_position: 1
title: 'Reference'
doc_type: 'reference'
integration:
- support_level: 'core'
- category: 'clickpipes'
keywords: ['clickpipes', 'object storage', 's3', 'data ingestion', 'batch loading']
import S3svg from '@site/static/images/integrations/logos/amazon_s3_logo.svg';
import Gcssvg from '@site/static/images/integrations/logos/gcs.svg';
import DOsvg from '@site/static/images/integrations/logos/digitalocean.svg';
import ABSsvg from '@site/static/images/integrations/logos/azureblobstorage.svg';
import Image from '@theme/IdealImage';
Supported data sources {#supported-data-sources}
| Name | Logo | Type | Status | Description |
|------|------|------|--------|-------------|
| Amazon S3 | | Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
| Google Cloud Storage | | Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
| DigitalOcean Spaces | | Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
| Azure Blob Storage | | Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
More connectors will be added to ClickPipes; you can find out more by contacting us.
Supported data formats {#supported-data-formats}
The supported formats are:
- JSON
- CSV
- Parquet
- Avro
Exactly-once semantics {#exactly-once-semantics}
Various types of failures can occur when ingesting large datasets, which can result in partial inserts or duplicate data. Object Storage ClickPipes are resilient to insert failures and provide exactly-once semantics. This is accomplished by using temporary "staging" tables. Data is first inserted into the staging tables. If something goes wrong with this insert, the staging table can be truncated and the insert can be retried from a clean state. Only when an insert is completed and successful are the partitions in the staging table moved to the target table. To read more about this strategy, check out this blog post.
View support {#view-support}
Materialized views on the target table are also supported. ClickPipes will create staging tables not only for the target table, but also any dependent materialized view.
We do not create staging tables for non-materialized views. This means that if you have a target table with one or more downstream materialized views, those materialized views should avoid selecting data via a view from the target table. Otherwise, you may find that you are missing data in the materialized view.
Scaling {#scaling}
Object Storage ClickPipes are scaled based on the minimum ClickHouse service size determined by the configured vertical autoscaling settings. The size of the ClickPipe is determined when the pipe is created. Subsequent changes to the ClickHouse service settings will not affect the ClickPipe size.
To increase the throughput on large ingest jobs, we recommend scaling the ClickHouse service before creating the ClickPipe.
Limitations {#limitations}
- Any changes to the destination table, its materialized views (including cascading materialized views), or the materialized view's target tables can result in temporary errors that will be retried. For best results, we recommend stopping the pipe, making the necessary modifications, and then restarting the pipe for the changes to be picked up and to avoid errors.
- There are limitations on the types of views that are supported. Please read the sections on exactly-once semantics and view support for more information.
- Role authentication is not available for S3 ClickPipes for ClickHouse Cloud instances deployed into GCP or Azure. It is only supported for AWS ClickHouse Cloud instances.
- ClickPipes will only attempt to ingest objects 10GB or smaller in size. If a file is greater than 10GB, an error will be appended to the ClickPipes dedicated error table.
- Azure Blob Storage pipes with continuous ingest on containers with over 100k files will have a latency of around 10-15 seconds in detecting new files. Latency increases with file count.
- Object Storage ClickPipes **does not** share a listing syntax with the S3 Table Function, nor Azure with the AzureBlobStorage Table function:
- `?` - Substitutes any single character
- `*` - Substitutes any number of any characters except `/`, including the empty string
- `**` - Substitutes any number of any characters including `/`, including the empty string
:::note
This is a valid path (for S3):
https://datasets-documentation.s3.eu-west-3.amazonaws.com/http/**.ndjson.gz
This is not a valid path; `{N..M}` patterns are not supported in ClickPipes:
https://datasets-documentation.s3.eu-west-3.amazonaws.com/http/{documents-01,documents-02}.ndjson.gz
:::
Continuous Ingest {#continuous-ingest}
ClickPipes supports continuous ingestion from S3, GCS, Azure Blob Storage, and DigitalOcean Spaces. When enabled, ClickPipes continuously ingests data from the specified path and polls for new files once every 30 seconds. However, new files must be lexically greater than the last ingested file. This means that they must be named in a way that defines the ingestion order. For instance, files named `file1`, `file2`, `file3`, etc., will be ingested sequentially. If a new file is added with a name like `file0`, ClickPipes will not ingest it because it is not lexically greater than the last ingested file.
Tracking ingested files {#tracking-ingested-files}
To track which files have been ingested, include the `_file` virtual column in the field mappings. The `_file` virtual column contains the filename of the source object, making it easy to query and identify which files have been processed.
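For example, assuming `_file` is mapped to a hypothetical `source_file` column on the destination table, processed files can be audited with a query along these lines:

```sql
SELECT source_file, count() AS rows_ingested
FROM my_destination_table
GROUP BY source_file
ORDER BY source_file;
```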
Authentication {#authentication}
S3 {#s3}
Both publicly accessible and protected S3 buckets are supported.
Public buckets need to allow both the `s3:GetObject` and the `s3:ListBucket` actions in their policy.
Protected buckets can be accessed using either IAM credentials or an IAM Role. To use an IAM Role, you will need to create the IAM Role as specified in this guide. Copy the new IAM Role ARN after creation and paste it into the ClickPipe configuration as the "IAM ARN role".
GCS {#gcs}
Like S3, you can access public buckets with no configuration, and with protected buckets you can use HMAC Keys in place of the AWS IAM credentials. You can read this guide from Google Cloud on how to set up such keys.
Service Accounts for GCS aren't directly supported. HMAC (IAM) credentials must be used when authenticating with non-public buckets. The Service Account permissions attached to the HMAC credentials should be `storage.objects.list` and `storage.objects.get`.
DigitalOcean Spaces {#dospaces}
Currently only protected buckets are supported for DigitalOcean Spaces. You require an "Access Key" and a "Secret Key" to access the bucket and its files. You can read this guide on how to create access keys.
Azure Blob Storage {#azureblobstorage}
Currently only protected buckets are supported for Azure Blob Storage. Authentication is done via a connection string, which supports access keys and shared keys. For more information, read this guide.
sidebar_label: 'Create your first object storage ClickPipe'
description: 'Seamlessly connect your object storage to ClickHouse Cloud.'
slug: /integrations/clickpipes/object-storage
title: 'Creating your first object-storage ClickPipe'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'clickpipes'
import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
import cp_step1 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step1.png';
import cp_step2_object_storage from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step2_object_storage.png';
import cp_step3_object_storage from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step3_object_storage.png';
import cp_step4a from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a.png';
import cp_step4a3 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a3.png';
import cp_step4b from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4b.png';
import cp_step5 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step5.png';
import cp_success from '@site/static/images/integrations/data-ingestion/clickpipes/cp_success.png';
import cp_remove from '@site/static/images/integrations/data-ingestion/clickpipes/cp_remove.png';
import cp_destination from '@site/static/images/integrations/data-ingestion/clickpipes/cp_destination.png';
import cp_overview from '@site/static/images/integrations/data-ingestion/clickpipes/cp_overview.png';
import Image from '@theme/IdealImage';
Object Storage ClickPipes provide a simple and resilient way to ingest data from Amazon S3, Google Cloud Storage, Azure Blob Storage, and DigitalOcean Spaces into ClickHouse Cloud. Both one-time and continuous ingestion are supported with exactly-once semantics.
Creating your first object storage ClickPipe {#creating-your-first-clickpipe}
Prerequisite {#prerequisite}
You have familiarized yourself with the ClickPipes intro.
Navigate to data sources {#1-load-sql-console}
In the cloud console, select the **Data Sources** button on the left-side menu and click on "Set up a ClickPipe".
Select a data source {#2-select-data-source}
Select your data source.
Configure the ClickPipe {#3-configure-clickpipe}
Fill out the form by providing your ClickPipe with a name, a description (optional), your IAM role or credentials, and bucket URL.
You can specify multiple files using bash-like wildcards. For more information, see the documentation on using wildcards in path.
Select data format {#4-select-format}
The UI will display a list of files in the specified bucket.
Select your data format (we currently support a subset of ClickHouse formats) and whether you want to enable continuous ingestion (more details below).
Configure table, schema and settings {#5-configure-table-schema-settings}
In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one.
Follow the instructions on the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top. You can also customize the advanced settings using the controls provided.
Alternatively, you can decide to ingest your data into an existing ClickHouse table.
In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.
:::info
You can also map virtual columns, like `_path` or `_size`, to fields.
:::
Configure permissions {#6-configure-permissions}
Finally, you can configure permissions for the internal ClickPipes user.
**Permissions:** ClickPipes will create a dedicated user for writing data into the destination table. You can select a role for this internal user using a custom role or one of the predefined roles:
- **Full access**: with full access to the cluster. Required if you use a materialized view or Dictionary with the destination table.
- **Only destination table**: with `INSERT` permissions to the destination table only.
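With **Only destination table**, the grant given to the internal user is conceptually equivalent to the following (hypothetical names; ClickPipes manages this user for you):

```sql
GRANT INSERT ON my_database.my_destination_table TO clickpipes_internal_user;
```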
Complete setup {#7-complete-setup}
By clicking on "Complete Setup", the system will register your ClickPipe, and you'll be able to see it listed in the summary table.
The summary table provides controls to display sample data from the source or the destination table in ClickHouse
As well as controls to remove the ClickPipe and display a summary of the ingest job.
Congratulations!
you have successfully set up your first ClickPipe.
If this is a streaming ClickPipe, it will be continuously running, ingesting data in real-time from your remote data source.
Otherwise, it will ingest the batch and complete. | {"source_file": "01_create_clickpipe_for_object_storage.md"} | [
description: 'Landing page with table of contents for the object storage ClickPipes section'
slug: /integrations/clickpipes/object-storage/index
sidebar_position: 1
title: 'Object storage ClickPipes'
doc_type: 'landing-page'
| Page | Description |
|-----|-----|
| Reference | Details supported formats, exactly-once semantics, view-support, scaling, limitations, authentication with object storage ClickPipes |
| FAQ | FAQ for object storage ClickPipes |
| Creating your first object-storage ClickPipe | Seamlessly connect your object storage to ClickHouse Cloud. |
sidebar_label: 'FAQ'
description: 'FAQ for object storage ClickPipes'
slug: /integrations/clickpipes/object-storage/faq
sidebar_position: 1
title: 'FAQ'
doc_type: 'reference'
integration:
- support_level: 'core'
- category: 'clickpipes'
FAQ {#faq}
### Does ClickPipes support GCS buckets prefixed with `gs://`?

No. For interoperability reasons, we ask you to replace your `gs://` bucket prefix with `https://storage.googleapis.com/`.
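For example, a hypothetical bucket path would be rewritten as follows:

```text
gs://my-bucket/data/trips_*.csv.gz
https://storage.googleapis.com/my-bucket/data/trips_*.csv.gz
```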
### What permissions does a GCS public bucket require?

`allUsers` requires appropriate role assignment. The `roles/storage.objectViewer` role must be granted at the bucket level. This role provides the `storage.objects.list` permission, which allows ClickPipes to list all objects in the bucket, as required for onboarding and ingestion. It also includes the `storage.objects.get` permission, which is required to read or download individual objects in the bucket. See [Google Cloud Access Control](https://cloud.google.com/storage/docs/access-control/iam-roles) for further information.
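One way to grant this role is via the gcloud CLI. This is a sketch under the assumption that you administer the bucket yourself; the bucket name is hypothetical:

```bash
# grant read/list access on a hypothetical public bucket to all users
gcloud storage buckets add-iam-policy-binding gs://my-public-bucket \
  --member=allUsers \
  --role=roles/storage.objectViewer
```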
---
sidebar_label: 'MongoDB Atlas'
description: 'Step-by-step guide on how to set up MongoDB Atlas as a source for ClickPipes'
slug: /integrations/clickpipes/mongodb/source/atlas
title: 'MongoDB Atlas source setup guide'
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
---
import mongo_atlas_configuration from '@site/static/images/integrations/data-ingestion/clickpipes/mongodb/mongo-atlas-cluster-overview-configuration.png'
import mngo_atlas_additional_settings from '@site/static/images/integrations/data-ingestion/clickpipes/mongodb/mongo-atlas-expand-additional-settings.png'
import mongo_atlas_retention_hours from '@site/static/images/integrations/data-ingestion/clickpipes/mongodb/mongo-atlas-set-retention-hours.png'
import mongo_atlas_add_user from '@site/static/images/integrations/data-ingestion/clickpipes/mongodb/mongo-atlas-add-new-database-user.png'
import mongo_atlas_add_roles from '@site/static/images/integrations/data-ingestion/clickpipes/mongodb/mongo-atlas-database-user-privilege.png'
import mongo_atlas_restrict_access from '@site/static/images/integrations/data-ingestion/clickpipes/mongodb/mongo-atlas-restrict-access.png'
import Image from '@theme/IdealImage';
# MongoDB Atlas source setup guide

## Configure oplog retention {#enable-oplog-retention}
Minimum oplog retention of 24 hours is required for replication. We recommend setting the oplog retention to 72 hours or longer to ensure that the oplog is not truncated before the initial snapshot is completed. To set the oplog retention via the UI:

1. Navigate to your cluster's **Overview** tab in the MongoDB Atlas console and click on the **Configuration** tab.
2. Click **Additional Settings** and scroll down to **More Configuration Options**.
3. Click **More Configuration Options** and set the minimum oplog window to **72 hours** or longer.
4. Click **Review Changes** to review, and then **Apply Changes** to deploy the changes.
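To confirm the new retention window has taken effect, you can run the same check used in the generic MongoDB guide from a `mongosh` session (this requires the `clusterMonitor` role):

```javascript
db.getSiblingDB("admin").serverStatus().oplogTruncation.oplogMinRetentionHours
```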
## Configure a database user {#configure-database-user}
Once you are logged in to your MongoDB Atlas console, click **Database Access** under the Security tab in the left navigation bar, then click **Add New Database User**.

ClickPipes requires password authentication.

ClickPipes requires a user with the following roles:

- `readAnyDatabase`
- `clusterMonitor`

You can find them in the **Specific Privileges** section. You can further specify which cluster(s)/instance(s) the ClickPipes user is granted access to.
## What's next? {#whats-next}

You can now create your ClickPipe and start ingesting data from your MongoDB instance into ClickHouse Cloud. Make sure to note down the connection details you used while setting up your MongoDB instance, as you will need them during the ClickPipe creation process.
---
sidebar_label: 'Generic MongoDB'
description: 'Set up any MongoDB instance as a source for ClickPipes'
slug: /integrations/clickpipes/mongodb/source/generic
title: 'Generic MongoDB source setup guide'
doc_type: 'guide'
keywords: ['clickpipes', 'mongodb', 'cdc', 'data ingestion', 'real-time sync']
---

# Generic MongoDB source setup guide
:::info
If you use MongoDB Atlas, please refer to the specific guide here.
:::
## Enable oplog retention {#enable-oplog-retention}
Minimum oplog retention of 24 hours is required for replication. We recommend setting the oplog retention to 72 hours or longer to ensure that the oplog is not truncated before the initial snapshot is completed.
You can check your current oplog retention by running the following command in the MongoDB shell (you must have the `clusterMonitor` role to run this command):

```javascript
db.getSiblingDB("admin").serverStatus().oplogTruncation.oplogMinRetentionHours
```
To set the oplog retention to 72 hours, run the following command on each node in the replica set as an admin user:

```javascript
db.adminCommand({
  "replSetResizeOplog": 1,
  "minRetentionHours": 72
})
```
For more details on the `replSetResizeOplog` command and oplog retention, see the MongoDB documentation.
## Configure a database user {#configure-database-user}
Connect to your MongoDB instance as an admin user and execute the following command to create a user for MongoDB CDC ClickPipes:

```javascript
db.getSiblingDB("admin").createUser({
  user: "clickpipes_user",
  pwd: "some_secure_password",
  roles: ["readAnyDatabase", "clusterMonitor"],
})
```
:::note
Make sure to replace `clickpipes_user` and `some_secure_password` with your desired username and password.
:::
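Before creating the pipe, it can help to verify that the new user can authenticate and reach the server. A minimal sketch, assuming a hypothetical host and the credentials from above:

```bash
# ping the server as the new user; host and credentials are placeholders
mongosh "mongodb://clickpipes_user:some_secure_password@your-mongodb-host:27017/?authSource=admin" \
  --eval 'db.adminCommand({ ping: 1 })'
```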
## What's next? {#whats-next}

You can now create your ClickPipe and start ingesting data from your MongoDB instance into ClickHouse Cloud. Make sure to note down the connection details you used while setting up your MongoDB instance, as you will need them during the ClickPipe creation process.
---
sidebar_label: 'Amazon Aurora Postgres'
description: 'Set up Amazon Aurora Postgres as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/aurora
title: 'Aurora Postgres Source Setup Guide'
doc_type: 'guide'
keywords: ['Amazon Aurora', 'PostgreSQL', 'ClickPipes', 'AWS database', 'logical replication setup']
---
import parameter_group_in_blade from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/parameter_group_in_blade.png';
import change_rds_logical_replication from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/change_rds_logical_replication.png';
import change_wal_sender_timeout from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/change_wal_sender_timeout.png';
import modify_parameter_group from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/modify_parameter_group.png';
import reboot_rds from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/reboot_rds.png';
import security_group_in_rds_postgres from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/security_group_in_rds_postgres.png';
import edit_inbound_rules from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/edit_inbound_rules.png';
import Image from '@theme/IdealImage';
# Aurora Postgres source setup guide

## Supported Postgres versions {#supported-postgres-versions}
ClickPipes supports Aurora PostgreSQL-Compatible Edition version 12 and later.
## Enable logical replication {#enable-logical-replication}
You can skip this section if your Aurora instance already has the following settings configured:

- `rds.logical_replication = 1`
- `wal_sender_timeout = 0`

These settings are typically pre-configured if you previously used another data replication tool.
```text
postgres=> SHOW rds.logical_replication ;
 rds.logical_replication
-------------------------
 on
(1 row)

postgres=> SHOW wal_sender_timeout ;
 wal_sender_timeout
--------------------
 0
(1 row)
```
If not already configured, follow these steps (a CLI sketch follows the list):

1. Create a new parameter group for your Aurora PostgreSQL version with the required settings:
   - Set `rds.logical_replication` to 1
   - Set `wal_sender_timeout` to 0
2. Apply the new parameter group to your Aurora PostgreSQL cluster.
3. Reboot your Aurora cluster to apply the changes.
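The same steps can be scripted with the AWS CLI. This is a sketch under assumptions about your setup: the parameter group name is hypothetical, and the parameter group family must match your Aurora PostgreSQL version:

```bash
# create a cluster parameter group (family shown for Aurora PostgreSQL 15; adjust to your version)
aws rds create-db-cluster-parameter-group \
  --db-cluster-parameter-group-name clickpipes-aurora-pg \
  --db-parameter-group-family aurora-postgresql15 \
  --description "Logical replication settings for ClickPipes"

# set the two required parameters; they take effect after a reboot
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name clickpipes-aurora-pg \
  --parameters "ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot" \
               "ParameterName=wal_sender_timeout,ParameterValue=0,ApplyMethod=pending-reboot"
```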
## Configure database user {#configure-database-user}
Connect to your Aurora PostgreSQL writer instance as an admin user and execute the following commands:

Create a dedicated user for ClickPipes:

```sql
CREATE USER clickpipes_user PASSWORD 'some-password';
```
Grant schema permissions. The following example shows permissions for the `public` schema. Repeat these commands for each schema you want to replicate:

```sql
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
```
Grant replication privileges:

```sql
GRANT rds_replication TO clickpipes_user;
```
Create a publication for replication:

```sql
CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
```
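If you would rather not publish every table, standard Postgres also lets you scope a publication to an explicit list. A sketch with hypothetical table names; make sure every table you intend to replicate through the pipe is included:

```sql
-- hypothetical tables; scopes the publication instead of using FOR ALL TABLES
CREATE PUBLICATION clickpipes_publication FOR TABLE "public"."orders", "public"."customers";
```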
## Configure network access {#configure-network-access}

### IP-based access control {#ip-based-access-control}
If you want to restrict traffic to your Aurora cluster, please add the documented static NAT IPs to the **Inbound rules** of your Aurora security group.
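As a sketch, adding one such rule with the AWS CLI might look like the following; the security group ID and the IP address are hypothetical placeholders for your own values and the documented ClickPipes IPs:

```bash
# allow a single ClickPipes NAT IP to reach Postgres (port 5432)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 5432 \
  --cidr 203.0.113.10/32
```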
### Private access via AWS PrivateLink {#private-access-via-aws-privatelink}

To connect to your Aurora cluster through a private network, you can use AWS PrivateLink. Follow our AWS PrivateLink setup guide for ClickPipes to set up the connection.
## Aurora-specific considerations {#aurora-specific-considerations}

When setting up ClickPipes with Aurora PostgreSQL, keep these considerations in mind:

- **Connection Endpoint**: Always connect to the writer endpoint of your Aurora cluster, as logical replication requires write access to create replication slots and must connect to the primary instance.
- **Failover Handling**: In the event of a failover, Aurora will automatically promote a reader to be the new writer. ClickPipes will detect the disconnection and attempt to reconnect to the writer endpoint, which will now point to the new primary instance.
- **Global Database**: If you're using Aurora Global Database, you should connect to the primary region's writer endpoint, as cross-region replication already handles data movement between regions.
- **Storage Considerations**: Aurora's storage layer is shared across all instances in a cluster, which can provide better performance for logical replication compared to standard RDS.
### Dealing with dynamic cluster endpoints {#dealing-with-dynamic-cluster-endpoints}

While Aurora provides stable endpoints that automatically route to the appropriate instance, here are some additional approaches for ensuring consistent connectivity:

- For high-availability setups, configure your application to use the Aurora writer endpoint, which automatically points to the current primary instance.
- If using cross-region replication, consider setting up separate ClickPipes for each region to reduce latency and improve fault tolerance.
## What's next? {#whats-next}

You can now create your ClickPipe and start ingesting data from your Aurora PostgreSQL cluster into ClickHouse Cloud. Make sure to note down the connection details you used while setting up your Aurora PostgreSQL cluster, as you will need them during the ClickPipe creation process.
---
sidebar_label: 'Timescale'
description: 'Set up Postgres with the TimescaleDB extension as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/timescale
title: 'Postgres with TimescaleDB source setup guide'
keywords: ['TimescaleDB']
doc_type: 'guide'
---

import BetaBadge from '@theme/badges/BetaBadge';

# Postgres with TimescaleDB source setup guide
## Background {#background}

TimescaleDB is an open-source Postgres extension developed by Timescale Inc that aims to boost the performance of analytics queries without having to move away from Postgres. This is achieved by creating "hypertables", which are managed by the extension and support automatic partitioning into "chunks". Hypertables also support transparent compression and hybrid row-columnar storage (known as "hypercore"), although these features require a version of the extension with a proprietary license.

Timescale Inc also offers two managed services for TimescaleDB:

- Managed Service for Timescale
- Timescale Cloud

There are third-party vendors offering managed services that allow you to use the TimescaleDB extension, but due to licensing, these vendors only support the open-source version of the extension.

Timescale hypertables behave differently from regular Postgres tables in several ways. This poses some complications for the process of replicating them, which is why the ability to replicate Timescale hypertables should be considered **best effort**.
## Supported Postgres versions {#supported-postgres-versions}

ClickPipes supports Postgres version 12 and later.

## Enable logical replication {#enable-logical-replication}
The steps to follow depend on how your Postgres instance with TimescaleDB is deployed.

- If you're using a managed service and your provider is listed in the sidebar, please follow the guide for that provider.
- If you're deploying TimescaleDB yourself, follow the generic guide.
- For other managed services, please raise a support ticket with your provider to help enable logical replication if it isn't already enabled.
:::info
Timescale Cloud does not support enabling logical replication, which is needed for Postgres pipes in CDC mode. As a result, users of Timescale Cloud can only perform a one-time load of their data (Initial Load Only) with the Postgres ClickPipe.
:::
## Configuration {#configuration}

Timescale hypertables don't store any data inserted into them. Instead, the data is stored in multiple corresponding "chunk" tables that live in the `_timescaledb_internal` schema. For running queries on the hypertables, this is not an issue. But during logical replication, instead of detecting changes in the hypertable, we detect them in the chunk tables. The Postgres ClickPipe has logic to automatically remap changes from the chunk tables to the parent hypertable, but this requires additional steps.
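To see this layout for yourself, you can list the chunk tables backing a hypertable. A minimal sketch, assuming a hypothetical hypertable named `my_hypertable`:

```sql
-- list the chunks backing a hypertable
SELECT show_chunks('my_hypertable');

-- or inspect chunk metadata via TimescaleDB's information views
SELECT chunk_schema, chunk_name
FROM timescaledb_information.chunks
WHERE hypertable_name = 'my_hypertable';
```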