id stringlengths 36 36 | document stringlengths 3 3k | metadata stringlengths 23 69 | embeddings listlengths 384 384 |
|---|---|---|---|
0fd5d3da-cb45-4ace-8b31-010bde22deba | :::info
If you'd like to perform only a one-time load of your data (Initial Load Only), please skip steps 2 onward.
:::
Create a Postgres user for the pipe and grant it permissions to `SELECT` the tables you wish to replicate.
```sql
CREATE USER clickpipes_user PASSWORD 'clickpipes_password';
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
-- If desired, you can refine these GRANTs to individual tables alone, instead of the entire schema
-- But when adding new tables to the ClickPipe, you'll need to add them to the user as well.
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
```
:::note
Make sure to replace `clickpipes_user` and `clickpipes_password` with your desired username and password.
:::
As a Postgres superuser/admin user, create a publication on the source instance that has the tables and hypertables you want to replicate and also includes the entire `_timescaledb_internal` schema. While creating the ClickPipe, you need to select this publication.
```sql
-- When adding new tables to the ClickPipe, you'll need to add them to the publication as well manually.
CREATE PUBLICATION clickpipes_publication FOR TABLE <...>, <...>, TABLES IN SCHEMA _timescaledb_internal;
```
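To verify which tables made it into the publication, you can query the `pg_publication_tables` catalog view. This is a generic Postgres check, not a required step in this guide:

```sql
SELECT schemaname, tablename
FROM pg_publication_tables
WHERE pubname = 'clickpipes_publication';
```

The result should list each replicated table plus the tables in the `_timescaledb_internal` schema.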
:::tip
We don't recommend creating a publication `FOR ALL TABLES`, as this leads to more traffic from Postgres to ClickPipes (sending changes for tables not in the pipe) and reduces overall efficiency.
For manually created publications, please add any tables you want to the publication before adding them to the pipe.
:::
:::info
Some managed services don't give their admin users the required permissions to create a publication for an entire schema.
If this is the case, please raise a support ticket with your provider. Alternatively, you can skip this step and the following
steps and perform a one-time load of your data.
:::
Grant replication permissions to the user created earlier.
```sql
-- Give replication permission to the USER
ALTER USER clickpipes_user REPLICATION;
```
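To confirm the role now has replication permission, you can check `pg_roles`. Again, this is a generic Postgres check, not part of the official steps:

```sql
SELECT rolreplication FROM pg_roles
WHERE rolname = 'clickpipes_user';
-- should return 't' after the ALTER USER above
```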
After these steps, you should be able to proceed with creating a ClickPipe.
Configure network access {#configure-network-access}
If you want to restrict traffic to your Timescale instance, please allowlist the documented static NAT IPs.
Instructions to do this will vary across providers; please consult the sidebar if your provider is listed, or raise a ticket with them. | {"source_file": "timescale.md"} | [
0.04183976352214813,
-0.07861729711294174,
-0.06912253797054291,
0.001742762979120016,
-0.12620408833026886,
-0.022980129346251488,
0.01209074817597866,
-0.019236193969845772,
-0.07115473598241806,
0.05474397912621498,
-0.025275882333517075,
-0.09899386763572693,
0.013764080591499805,
-0.0... |
36d70c1c-5d69-43f8-9511-9fd79b2c201f | sidebar_label: 'Crunchy Bridge Postgres'
description: 'Set up Crunchy Bridge Postgres as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/crunchy-postgres
title: 'Crunchy Bridge Postgres Source Setup Guide'
keywords: ['crunchy bridge', 'postgres', 'clickpipes', 'logical replication', 'data ingestion']
doc_type: 'guide'
import firewall_rules_crunchy_bridge from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/crunchy-postgres/firewall_rules_crunchy_bridge.png'
import add_firewall_rules_crunchy_bridge from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/crunchy-postgres/add_firewall_rules_crunchy_bridge.png'
import Image from '@theme/IdealImage';
Crunchy Bridge Postgres source setup guide
ClickPipes supports Postgres version 12 and later.
Enable logical replication {#enable-logical-replication}
Crunchy Bridge comes with logical replication enabled by default. Ensure that the settings below are configured correctly. If not, adjust them accordingly.
```sql
SHOW wal_level; -- should be logical
SHOW max_wal_senders; -- should be 10
SHOW max_replication_slots; -- should be 10
```
Creating ClickPipes user and granting permissions {#creating-clickpipes-user-and-granting-permissions}
Connect to your Crunchy Bridge Postgres as the `postgres` user and run the commands below:
Create a Postgres user exclusively for ClickPipes.
```sql
CREATE USER clickpipes_user PASSWORD 'some-password';
```
Grant read-only access to the schema from which you are replicating tables to `clickpipes_user`. The example below shows granting permissions for the `public` schema. If you want to grant access to multiple schemas, run these three commands for each schema.
```sql
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
```
Grant replication access to this user:
```sql
ALTER ROLE clickpipes_user REPLICATION;
```
Create a publication that you'll use when creating the mirror (replication) in the future.
```sql
CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
```
Safelist ClickPipes IPs {#safe-list-clickpipes-ips}
Safelist the ClickPipes IPs by adding firewall rules in Crunchy Bridge.
What's next? {#whats-next}
You can now create your ClickPipe and start ingesting data from your Postgres instance into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your Postgres instance as you will need them during the ClickPipe creation process. | {"source_file": "crunchy-postgres.md"} | [
-0.016635829582810402,
-0.005774525925517082,
-0.08196104317903519,
0.014262145385146141,
-0.037336356937885284,
-0.09204836934804916,
-0.007007095031440258,
0.045967474579811096,
-0.08039411902427673,
-0.030855542048811913,
-0.014783996157348156,
-0.0188433974981308,
0.033051714301109314,
... |
6a978758-a877-44cd-8336-b83de4f138bd | sidebar_label: 'Planetscale for Postgres'
description: 'Set up Planetscale for Postgres as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/planetscale
title: 'PlanetScale for Postgres Source Setup Guide'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
import planetscale_wal_level_logical from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/planetscale/planetscale_wal_level_logical.png';
import planetscale_max_slot_wal_keep_size from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/planetscale/planetscale_max_slot_wal_keep_size.png';
import Image from '@theme/IdealImage';
PlanetScale for Postgres source setup guide
:::info
PlanetScale for Postgres is currently in early access.
:::
Supported Postgres versions {#supported-postgres-versions}
ClickPipes supports Postgres version 12 and later.
Enable logical replication {#enable-logical-replication}
To enable replication on your Postgres instance, make sure that the following setting is configured:
```sql
wal_level = logical
```
To check this, you can run the following SQL command:
```sql
SHOW wal_level;
```
The output should be `logical` by default. If not, please log into the PlanetScale console, go to Cluster configuration -> Parameters, and scroll down to Write-ahead log to change it.
:::warning
Changing this in the PlanetScale console WILL trigger a restart.
:::
Additionally, it is recommended to increase the `max_slot_wal_keep_size` setting from its default of 4GB. This is also done via the PlanetScale console: go to Cluster configuration -> Parameters and scroll down to Write-ahead log. To help determine the new value, please take a look here.
Creating a user with permissions and publication {#creating-a-user-with-permissions-and-publication}
Let's create a new user for ClickPipes with the necessary permissions suitable for CDC,
and also create a publication that we'll use for replication.
For this, you can connect to your PlanetScale Postgres instance using the default `postgres.<...>` user and run the following SQL commands:
```sql
CREATE USER clickpipes_user PASSWORD 'clickpipes_password';
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
-- You may need to grant these permissions on more schemas depending on the tables you're moving
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
-- Give replication permission to the USER
ALTER USER clickpipes_user REPLICATION;
-- Create a publication. We will use this when creating the pipe
-- When adding new tables to the ClickPipe, you'll need to manually add them to the publication as well.
CREATE PUBLICATION clickpipes_publication FOR TABLE <...>, <...>, <...>; | {"source_file": "planetscale.md"} | [
0.06421693414449692,
0.03674568608403206,
-0.014069648459553719,
0.025812741369009018,
0.012304732576012611,
-0.05073457956314087,
0.01948358491063118,
0.07183737307786942,
-0.02732977271080017,
0.032544512301683426,
-0.023229336366057396,
-0.029628152027726173,
0.02712993696331978,
-0.069... |
5f04dc8e-6563-42de-9e63-d98aa6c207f0 | ```
:::note
Make sure to replace `clickpipes_user` and `clickpipes_password` with your desired username and password.
:::
Caveats {#caveats}
To connect to PlanetScale Postgres, the current branch needs to be appended to the username created above. For example, if the created user was named `clickpipes_user`, the actual user provided during ClickPipe creation needs to be `clickpipes_user.branch`, where `branch` refers to the "id" of the current PlanetScale Postgres branch. To quickly determine this, look at the username of the `postgres` user you used to create the user earlier; the part after the period is the branch id.
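The rule above can be sketched as a small helper. This is an illustrative sketch only; the function name is hypothetical and not part of ClickPipes or PlanetScale tooling:

```python
def clickpipes_username(admin_user: str, pipe_user: str = "clickpipes_user") -> str:
    """Derive the branch-qualified ClickPipes username from the default
    PlanetScale admin username, which looks like "postgres.<branch-id>"."""
    # The part after the first period is the branch id.
    branch_id = admin_user.split(".", 1)[1]
    return f"{pipe_user}.{branch_id}"

# e.g. admin user "postgres.abc123" -> "clickpipes_user.abc123"
```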
Do not use the PSBouncer port (currently `6432`) for CDC pipes connecting to PlanetScale Postgres; the normal port `5432` must be used. Either port may be used for initial-load-only pipes.
Please ensure you're connecting only to the primary instance; connecting to replica instances is currently not supported.
What's next? {#whats-next}
You can now create your ClickPipe and start ingesting data from your Postgres instance into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your Postgres instance as you will need them during the ClickPipe creation process. | {"source_file": "planetscale.md"} | [
-0.00725737027823925,
-0.04226567968726158,
-0.0831858366727829,
-0.07711201906204224,
-0.11859395354986191,
-0.036080654710531235,
0.02130015939474106,
0.06141148880124092,
0.01296504307538271,
0.03222070634365082,
-0.04410049691796303,
-0.047523532062768936,
-0.02119429036974907,
-0.0358... |
0b39653e-194e-4ed1-aac0-4baadcfbb08d | sidebar_label: 'Azure Flexible Server for Postgres'
description: 'Set up Azure Flexible Server for Postgres as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/azure-flexible-server-postgres
title: 'Azure Flexible Server for Postgres Source Setup Guide'
keywords: ['azure', 'flexible server', 'postgres', 'clickpipes', 'wal level']
doc_type: 'guide'
import server_parameters from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/azure-flexible-server-postgres/server_parameters.png';
import wal_level from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/azure-flexible-server-postgres/wal_level.png';
import restart from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/azure-flexible-server-postgres/restart.png';
import firewall from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/azure-flexible-server-postgres/firewall.png';
import Image from '@theme/IdealImage';
Azure flexible server for Postgres source setup guide
ClickPipes supports Postgres version 12 and later.
Enable logical replication {#enable-logical-replication}
You don't need to follow the steps below if `wal_level` is already set to `logical`. This setting is typically pre-configured if you are migrating from another data replication tool.
Click on the Server parameters section.
Edit `wal_level` to `logical`.
This change requires a server restart, so restart when prompted.
Creating ClickPipes users and granting permissions {#creating-clickpipes-user-and-granting-permissions}
Connect to your Azure Flexible Server Postgres through the admin user and run the below commands:
Create a Postgres user exclusively for ClickPipes.
```sql
CREATE USER clickpipes_user PASSWORD 'some-password';
```
Grant read-only access to the schema from which you are replicating tables to `clickpipes_user`. The example below shows setting up permissions for the `public` schema. If you want to grant access to multiple schemas, run these three commands for each schema.
```sql
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
```
Grant replication access to this user:
```sql
ALTER ROLE clickpipes_user REPLICATION;
```
Create a publication that you'll use when creating the mirror (replication) in the future.
```sql
CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
```
Set `wal_sender_timeout` to 0 for `clickpipes_user`:
```sql
ALTER ROLE clickpipes_user SET wal_sender_timeout to 0;
```
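You can optionally confirm the per-role setting stuck by inspecting `pg_roles`. This is a generic Postgres check, not part of the official guide:

```sql
SELECT rolname, rolconfig FROM pg_roles
WHERE rolname = 'clickpipes_user';
-- rolconfig should contain "wal_sender_timeout=0"
```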
Add ClickPipes IPs to Firewall {#add-clickpipes-ips-to-firewall}
Please follow the below steps to add
ClickPipes IPs
to your network. | {"source_file": "azure-flexible-server-postgres.md"} | [
-0.01250460185110569,
-0.018152931705117226,
-0.08284691721200943,
0.027503537014126778,
-0.0551910400390625,
-0.013128858990967274,
0.041031233966350555,
0.022136155515909195,
-0.067954882979393,
0.0005494867800734937,
0.019246263429522514,
0.00879162922501564,
0.008145000785589218,
-0.04... |
69c28b21-9f53-4391-9e2c-15223488d61a | Add ClickPipes IPs to Firewall {#add-clickpipes-ips-to-firewall}
Please follow the below steps to add
ClickPipes IPs
to your network.
Go to the Networking tab and add the ClickPipes IPs to the Firewall of your Azure Flexible Server Postgres, or to the Jump Server/Bastion if you are using SSH tunneling.
What's next? {#whats-next}
You can now create your ClickPipe and start ingesting data from your Postgres instance into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your Postgres instance as you will need them during the ClickPipe creation process. | {"source_file": "azure-flexible-server-postgres.md"} | [
0.04492095857858658,
-0.026997016742825508,
-0.06623919308185577,
-0.038290057331323624,
-0.07270260155200958,
0.041237059980630875,
0.02775508537888527,
-0.061448849737644196,
-0.008848643861711025,
0.06436296552419662,
0.001925583346746862,
-0.00043783726869150996,
-0.0044991569593548775,
... |
d04c17fe-cba1-4d07-bd34-af9a78c0e027 | sidebar_label: 'Neon Postgres'
description: 'Set up Neon Postgres instance as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/neon-postgres
title: 'Neon Postgres Source Setup Guide'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
import neon_commands from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/neon-postgres/neon-commands.png'
import neon_enable_replication from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/neon-postgres/neon-enable-replication.png'
import neon_enabled_replication from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/neon-postgres/neon-enabled-replication.png'
import neon_ip_allow from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/neon-postgres/neon-ip-allow.png'
import neon_conn_details from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/neon-postgres/neon-conn-details.png'
import Image from '@theme/IdealImage';
Neon Postgres source setup guide
This is a guide on how to set up Neon Postgres, which you can use for replication in ClickPipes.
Make sure you're signed in to your Neon console for this setup.
Creating a user with permissions {#creating-a-user-with-permissions}
Let's create a new user for ClickPipes with the necessary permissions suitable for CDC,
and also create a publication that we'll use for replication.
For this, you can head over to the SQL Editor tab.
Here, we can run the following SQL commands:
```sql
CREATE USER clickpipes_user PASSWORD 'clickpipes_password';
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
-- Give replication permission to the USER
ALTER USER clickpipes_user REPLICATION;
-- Create a publication. We will use this when creating the mirror
CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
```
Click on Run to have a publication and a user ready.
Enable logical replication {#enable-logical-replication}
In Neon, you can enable logical replication through the UI. This is necessary for ClickPipes CDC to replicate data.
Head over to the Settings tab and then to the Logical Replication section.
Click on Enable to be all set here. You should see the success message below once you enable it.
Let's verify the following settings in your Neon Postgres instance:
```sql
SHOW wal_level; -- should be logical
SHOW max_wal_senders; -- should be 10
SHOW max_replication_slots; -- should be 10
```
IP whitelisting (for Neon enterprise plan) {#ip-whitelisting-for-neon-enterprise-plan} | {"source_file": "neon-postgres.md"} | [
-0.06731948256492615,
-0.003242001635953784,
-0.09401977807283401,
0.01362677849829197,
-0.019352471455931664,
-0.017429035156965256,
-0.010715479962527752,
-0.004405655898153782,
-0.06489337980747223,
-0.04862707853317261,
0.055784232914447784,
-0.029261112213134766,
-0.04294724762439728,
... |
0d0d6ab6-7f7c-4f97-b379-2cde672eb286 | IP whitelisting (for Neon enterprise plan) {#ip-whitelisting-for-neon-enterprise-plan}
If you are on the Neon Enterprise plan, you can whitelist the ClickPipes IPs to allow replication from ClickPipes to your Neon Postgres instance.
To do this, click on the Settings tab and go to the IP Allow section.
Copy connection details {#copy-connection-details}
Now that we have the user and publication ready and replication enabled, we can copy the connection details to create a new ClickPipe.
Head over to the Dashboard, and in the text box that shows the connection string, change the view to Parameters Only. We will need these parameters for our next step.
What's next? {#whats-next}
You can now create your ClickPipe and start ingesting data from your Postgres instance into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your Postgres instance as you will need them during the ClickPipe creation process. | {"source_file": "neon-postgres.md"} | [
-0.019455280154943466,
0.022824153304100037,
-0.05059142783284187,
-0.062445126473903656,
-0.06270851939916611,
-0.0023464399855583906,
-0.007240666542202234,
-0.0668555274605751,
-0.04610113799571991,
-0.0008349188137799501,
-0.018053509294986725,
0.02098502218723297,
-0.030809177085757256,... |
2d922e7a-8494-47c1-abb3-87d581deb1d9 | sidebar_label: 'Google Cloud SQL'
description: 'Set up Google Cloud SQL Postgres instance as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/google-cloudsql
title: 'Google Cloud SQL Postgres Source Setup Guide'
doc_type: 'guide'
keywords: ['google cloud sql', 'postgres', 'clickpipes', 'logical decoding', 'firewall']
import edit_button from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/google-cloudsql/edit.png';
import cloudsql_logical_decoding1 from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/google-cloudsql/cloudsql_logical_decoding1.png';
import cloudsql_logical_decoding2 from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/google-cloudsql/cloudsql_logical_decoding2.png';
import cloudsql_logical_decoding3 from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/google-cloudsql/cloudsql_logical_decoding3.png';
import connections from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/google-cloudsql/connections.png';
import connections_networking from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/google-cloudsql/connections_networking.png';
import firewall1 from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/google-cloudsql/firewall1.png';
import firewall2 from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/google-cloudsql/firewall2.png';
import Image from '@theme/IdealImage';
Google Cloud SQL Postgres source setup guide
:::info
If you use one of the supported providers (in the sidebar), please refer to the specific guide for that provider.
:::
Supported Postgres versions {#supported-postgres-versions}
Anything on or after Postgres 12
Enable logical replication {#enable-logical-replication}
You don't need to follow the steps below if `cloudsql.logical_decoding` is on and `wal_sender_timeout` is 0. These settings are typically pre-configured if you are migrating from another data replication tool.
Click on the Edit button on the Overview page.
Go to Flags and change `cloudsql.logical_decoding` to on and `wal_sender_timeout` to 0. These changes require restarting your Postgres server.
Creating ClickPipes user and granting permissions {#creating-clickpipes-user-and-granting-permissions}
Connect to your Cloud SQL Postgres through the admin user and run the below commands:
Create a Postgres user exclusively for ClickPipes.
```sql
CREATE USER clickpipes_user PASSWORD 'some-password';
```
Grant read-only access to the schema from which you are replicating tables to `clickpipes_user`. The example below shows setting up permissions for the `public` schema. If you want to grant access to multiple schemas, run these three commands for each schema. | {"source_file": "google-cloudsql.md"} | [
-0.01873905584216118,
-0.044704619795084,
-0.0472860150039196,
0.0024638096801936626,
-0.04348786175251007,
-0.0015895470278337598,
0.07014274597167969,
-0.043902456760406494,
-0.04138384386897087,
0.03119770437479019,
-0.012793156318366528,
-0.026620658114552498,
-0.007221279200166464,
-0... |
e4062d24-67e0-4eec-8ea9-ae3b30bf2f3f | ```sql
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
```
Grant replication access to this user:
```sql
ALTER ROLE clickpipes_user REPLICATION;
```
Create a publication that you'll use when creating the mirror (replication) in the future.
```sql
CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
```
Add ClickPipes IPs to Firewall {#add-clickpipes-ips-to-firewall}
Please follow the below steps to add ClickPipes IPs to your network.
:::note
If you are using SSH tunneling, you need to add the ClickPipes IPs to the firewall rules of the Jump Server/Bastion.
:::
Go to the Connections section
Go to the Networking subsection
Add the public IPs of ClickPipes
What's next? {#whats-next}
You can now create your ClickPipe and start ingesting data from your Postgres instance into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your Postgres instance as you will need them during the ClickPipe creation process. | {"source_file": "google-cloudsql.md"} | [
0.04280292987823486,
-0.08296395093202591,
-0.08830874413251877,
-0.02689485251903534,
-0.10099337249994278,
0.00048464263090863824,
-0.006296928506344557,
-0.056365251541137695,
-0.05038556829094887,
0.08758940547704697,
-0.022110989317297935,
-0.04109564051032066,
0.038568370044231415,
-... |
35996bbc-f64e-48a9-9e8c-9d1419aeee0e | sidebar_label: 'Supabase Postgres'
description: 'Set up Supabase instance as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/supabase
title: 'Supabase Source Setup Guide'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
import supabase_commands from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/supabase/supabase-commands.jpg'
import supabase_connection_details from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/setup/supabase/supabase-connection-details.jpg'
import Image from '@theme/IdealImage';
Supabase source setup guide
This is a guide on how to set up Supabase Postgres for usage in ClickPipes.
:::note
ClickPipes supports Supabase via IPv6 natively for seamless replication.
:::
Creating a user with permissions and replication slot {#creating-a-user-with-permissions-and-replication-slot}
Let's create a new user for ClickPipes with the necessary permissions suitable for CDC,
and also create a publication that we'll use for replication.
For this, you can head over to the SQL Editor for your Supabase Project.
Here, we can run the following SQL commands:
```sql
CREATE USER clickpipes_user PASSWORD 'clickpipes_password';
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
-- Give replication permission to the USER
ALTER USER clickpipes_user REPLICATION;
-- Create a publication. We will use this when creating the mirror
CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
```
Click on Run to have a publication and a user ready.
:::note
Make sure to replace `clickpipes_user` and `clickpipes_password` with your desired username and password.
Also, remember to use the same publication name when creating the mirror in ClickPipes.
:::
Increase `max_slot_wal_keep_size` {#increase-max_slot_wal_keep_size}
:::warning
This step will restart your Supabase database and may cause a brief downtime.
You can increase the `max_slot_wal_keep_size` parameter for your Supabase database to a higher value (at least 100GB, i.e. `102400`) by following the Supabase Docs.
For a better recommendation of this value, you can contact the ClickPipes team.
:::
Connection details to use for Supabase {#connection-details-to-use-for-supabase}
Head over to your Supabase Project's
Project Settings
->
Database
(under
Configuration
).
Important
: Disable
Display connection pooler
on this page and head over to the
Connection parameters
section and note/copy the parameters.
:::info
The connection pooler is not supported for CDC based replication, hence it needs to be disabled.
:::
Note on RLS {#note-on-rls} | {"source_file": "supabase.md"} | [
-0.01982814446091652,
-0.025524232536554337,
-0.07218369096517563,
-0.0007604499696753919,
-0.027084870263934135,
-0.003972540609538555,
0.00864090770483017,
0.06610257923603058,
-0.05727544054389,
-0.03967412933707237,
0.03610553592443466,
-0.002437090501189232,
-0.0006241680821403861,
-0... |
5bd4516e-46d7-4327-87d3-403515e0b602 | :::info
The connection pooler is not supported for CDC based replication, hence it needs to be disabled.
:::
Note on RLS {#note-on-rls}
The ClickPipes Postgres user must not be restricted by RLS policies, as that can lead to missing data. You can allow the user to bypass RLS policies by running the command below:
```sql
ALTER USER clickpipes_user BYPASSRLS;
```
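To confirm the bypass is in place, you can check `pg_roles`. This is a generic Postgres check, not part of the official guide:

```sql
SELECT rolbypassrls FROM pg_roles
WHERE rolname = 'clickpipes_user';
-- should return 't' after the ALTER USER above
```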
What's next? {#whats-next}
You can now create your ClickPipe and start ingesting data from your Postgres instance into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your Postgres instance as you will need them during the ClickPipe creation process. | {"source_file": "supabase.md"} | [
-0.0092789800837636,
-0.09629520028829575,
-0.057010941207408905,
-0.03752129524946213,
-0.07346461713314056,
-0.015807291492819786,
0.028634976595640182,
-0.03904126212000847,
0.028524944558739662,
0.06603497266769409,
0.0020450023002922535,
-0.02958621457219124,
0.005527286324650049,
-0.... |
ab278743-0fa5-475c-82c5-e18f5299235e | sidebar_label: 'Amazon RDS Postgres'
description: 'Set up Amazon RDS Postgres as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/rds
title: 'RDS Postgres Source Setup Guide'
doc_type: 'guide'
keywords: ['clickpipes', 'postgresql', 'cdc', 'data ingestion', 'real-time sync']
import parameter_group_in_blade from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/parameter_group_in_blade.png';
import change_rds_logical_replication from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/change_rds_logical_replication.png';
import change_wal_sender_timeout from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/change_wal_sender_timeout.png';
import modify_parameter_group from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/modify_parameter_group.png';
import reboot_rds from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/reboot_rds.png';
import security_group_in_rds_postgres from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/security_group_in_rds_postgres.png';
import edit_inbound_rules from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/edit_inbound_rules.png';
import Image from '@theme/IdealImage';
RDS Postgres source setup guide
Supported Postgres versions {#supported-postgres-versions}
ClickPipes supports Postgres version 12 and later.
Enable logical replication {#enable-logical-replication}
You can skip this section if your RDS instance already has the following settings configured:
- `rds.logical_replication = 1`
- `wal_sender_timeout = 0`
These settings are typically pre-configured if you previously used another data replication tool.
```text
postgres=> SHOW rds.logical_replication;
 rds.logical_replication
-------------------------
 on
(1 row)

postgres=> SHOW wal_sender_timeout;
 wal_sender_timeout
--------------------
 0
(1 row)
```
If not already configured, follow these steps:
Create a new parameter group for your Postgres version with the required settings:
- Set `rds.logical_replication` to 1
- Set `wal_sender_timeout` to 0
Apply the new parameter group to your RDS Postgres database
Reboot your RDS instance to apply the changes
Configure database user {#configure-database-user}
Connect to your RDS Postgres instance as an admin user and execute the following commands:
Create a dedicated user for ClickPipes:
```sql
CREATE USER clickpipes_user PASSWORD 'some-password';
```
Grant schema permissions. The following example shows permissions for the `public` schema. Repeat these commands for each schema you want to replicate:
```sql
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
```
Grant replication privileges: | {"source_file": "rds.md"} | [
0.0014135150704532862,
-0.020170586183667183,
-0.09214801341295242,
0.021414056420326233,
-0.0079637560993433,
0.007721757981926203,
0.0000050063217713613994,
-0.018800504505634308,
-0.009593832306563854,
0.04067324474453926,
0.07921086996793747,
-0.022557754069566727,
-0.018379388377070427,... |
7d73cfdc-aa6d-48b1-93d7-e4c2548555b5 | Grant replication privileges:
```sql
GRANT rds_replication TO clickpipes_user;
```
Create a publication for replication:
```sql
CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
```
Configure network access {#configure-network-access}
IP-based access control {#ip-based-access-control}
If you want to restrict traffic to your RDS instance, please add the documented static NAT IPs to the Inbound rules of your RDS security group.
Private Access via AWS PrivateLink {#private-access-via-aws-privatelink}
To connect to your RDS instance through a private network, you can use AWS PrivateLink. Follow our
AWS PrivateLink setup guide for ClickPipes
to set up the connection.
Workarounds for RDS Proxy {#workarounds-for-rds-proxy}
RDS Proxy does not support logical replication connections. If you have dynamic IP addresses in RDS and cannot use a DNS name or a Lambda, here are some alternatives:
- Using a cron job, resolve the RDS endpoint's IP periodically and update the NLB if it has changed.
- Using RDS Event Notifications with EventBridge/SNS: trigger updates automatically using AWS RDS event notifications.
- Stable EC2: deploy an EC2 instance to act as a polling service or IP-based proxy.
- Automate IP address management using tools like Terraform or CloudFormation.
What's next? {#whats-next}
You can now create your ClickPipe and start ingesting data from your Postgres instance into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your Postgres instance as you will need them during the ClickPipe creation process.
sidebar_label: 'Generic Postgres'
description: 'Set up any Postgres instance as a source for ClickPipes'
slug: /integrations/clickpipes/postgres/source/generic
title: 'Generic Postgres Source Setup Guide'
doc_type: 'guide'
keywords: ['postgres', 'clickpipes', 'logical replication', 'pg_hba.conf', 'wal level']
Generic Postgres source setup guide
:::info
If you use one of the supported providers (in the sidebar), please refer to the specific guide for that provider.
:::
ClickPipes supports Postgres version 12 and later.
Enable logical replication {#enable-logical-replication}
To enable replication on your Postgres instance, we need to make sure that the following setting is set:

```sql
wal_level = logical
```

To check this, you can run the following SQL command:

```sql
SHOW wal_level;
```

The output should be `logical`. If not, run:

```sql
ALTER SYSTEM SET wal_level = logical;
```

Additionally, the following settings are recommended on the Postgres instance:

```sql
max_wal_senders > 1
max_replication_slots >= 4
```

To check these, you can run the following SQL commands:

```sql
SHOW max_wal_senders;
SHOW max_replication_slots;
```

If the values do not match the recommended values, you can run the following SQL commands to set them:

```sql
ALTER SYSTEM SET max_wal_senders = 10;
ALTER SYSTEM SET max_replication_slots = 10;
```

If you have made any changes to the configuration above, you NEED to RESTART the Postgres instance for the changes to take effect.
Creating a user with permissions and publication {#creating-a-user-with-permissions-and-publication}
Let's create a new user for ClickPipes with the necessary permissions suitable for CDC,
and also create a publication that we'll use for replication.
For this, you can connect to your Postgres instance and run the following SQL commands:
```sql
CREATE USER clickpipes_user PASSWORD 'clickpipes_password';
GRANT USAGE ON SCHEMA "public" TO clickpipes_user;
GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO clickpipes_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO clickpipes_user;
-- Give replication permission to the USER
ALTER USER clickpipes_user REPLICATION;
-- Create a publication. We will use this when creating the pipe
CREATE PUBLICATION clickpipes_publication FOR ALL TABLES;
```
:::note
Make sure to replace
clickpipes_user
and
clickpipes_password
with your desired username and password.
:::
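Once the user and publication exist, it can be worth confirming what the publication actually covers before creating the pipe. These queries use standard Postgres catalogs (nothing ClickPipes-specific):

```sql
SELECT pubname, puballtables FROM pg_publication;
SELECT * FROM pg_publication_tables WHERE pubname = 'clickpipes_publication';
```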
Enabling connections in pg_hba.conf to the ClickPipes User {#enabling-connections-in-pg_hbaconf-to-the-clickpipes-user}
If you are self-hosting, you need to allow connections to the ClickPipes user from the ClickPipes IP addresses by following the steps below. If you are using a managed service, you can do the same by following the provider's documentation.
Make necessary changes to the
pg_hba.conf file to allow connections to the ClickPipes user from the ClickPipes IP addresses. An example entry in the pg_hba.conf file would look like:

```text
host all clickpipes_user 0.0.0.0/0 scram-sha-256
```

Reload the PostgreSQL instance for the changes to take effect:

```sql
SELECT pg_reload_conf();
```
Increase max_slot_wal_keep_size {#increase-max_slot_wal_keep_size}
This is a recommended configuration change to ensure that large transactions/commits do not cause the replication slot to be dropped.
You can increase the `max_slot_wal_keep_size` parameter for your PostgreSQL instance to a higher value (at least 100GB, i.e. `102400` in the parameter's default unit of megabytes) by updating the postgresql.conf file:

```text
max_slot_wal_keep_size = 102400
```

You can reload the Postgres instance for the changes to take effect:

```sql
SELECT pg_reload_conf();
```
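If you prefer not to edit postgresql.conf by hand, the same change can usually be applied with ALTER SYSTEM; the unit-suffixed value below is equivalent to 102400, since the parameter's default unit is megabytes:

```sql
ALTER SYSTEM SET max_slot_wal_keep_size = '100GB';
SELECT pg_reload_conf();
```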
:::note
For a recommendation tailored to your workload, you can contact the ClickPipes team.
:::
What's next? {#whats-next}
You can now create your ClickPipe and start ingesting data from your Postgres instance into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your Postgres instance as you will need them during the ClickPipe creation process.
sidebar_label: 'Amazon Aurora MySQL'
description: 'Step-by-step guide on how to set up Amazon Aurora MySQL as a source for ClickPipes'
slug: /integrations/clickpipes/mysql/source/aurora
title: 'Aurora MySQL source setup guide'
doc_type: 'guide'
keywords: ['aurora mysql', 'clickpipes', 'binlog retention', 'gtid mode', 'aws']
import rds_backups from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/rds/rds-backups.png';
import parameter_group_in_blade from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/parameter_group_in_blade.png';
import security_group_in_rds_mysql from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/rds/security-group-in-rds-mysql.png';
import edit_inbound_rules from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/edit_inbound_rules.png';
import aurora_config from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/aurora_config.png';
import binlog_format from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/binlog_format.png';
import binlog_row_image from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/binlog_row_image.png';
import binlog_row_metadata from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/binlog_row_metadata.png';
import edit_button from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/edit_button.png';
import enable_gtid from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/enable_gtid.png';
import Image from '@theme/IdealImage';
Aurora MySQL source setup guide
This step-by-step guide shows you how to configure Amazon Aurora MySQL to replicate data into ClickHouse Cloud using the
MySQL ClickPipe
. For common questions around MySQL CDC, see the
MySQL FAQs page
.
Enable binary log retention {#enable-binlog-retention-aurora}
The binary log is a set of log files that contain information about data modifications made to a MySQL server instance, and binary log files are required for replication. To configure binary log retention in Aurora MySQL, you must
enable binary logging
and
increase the binlog retention interval
.
1. Enable binary logging via automated backup {#enable-binlog-logging}
The automated backups feature determines whether binary logging is turned on or off for MySQL. Automated backups can be configured for your instance in the RDS Console by navigating to
Modify
>
Additional configuration
>
Backup
and selecting the
Enable automated backups
checkbox (if not selected already).
We recommend setting the
Backup retention period
to a reasonably long value, depending on the replication use case.
2. Increase the binlog retention interval {#binlog-retention-interval}
:::warning
If ClickPipes tries to resume replication and the required binlog files have been purged due to the configured binlog retention value, the ClickPipe will enter an errored state and a resync is required.
:::
By default, Aurora MySQL purges the binary log as soon as possible (i.e., lazy purging). We recommend increasing the binlog retention interval to at least 72 hours to ensure availability of binary log files for replication under failure scenarios. To set an interval for binary log retention (`binlog retention hours`), use the `mysql.rds_set_configuration` procedure:

```text
mysql=> call mysql.rds_set_configuration('binlog retention hours', 72);
```

If this configuration isn't set or is set to a low interval, it can lead to gaps in the binary logs, compromising ClickPipes' ability to resume replication.
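To double-check the value currently in effect, RDS/Aurora provides a companion procedure that prints the stored configuration:

```text
mysql=> call mysql.rds_show_configuration;
```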
Configure binlog settings {#binlog-settings}
The parameter group can be found when you click on your MySQL instance in the RDS Console, and then navigate to the
Configuration
tab.
:::tip
If you have a MySQL cluster, the parameters below can be found in the
DB cluster
parameter group instead of the DB instance group.
:::
Click the parameter group link, which will take you to its dedicated page. You should see an
Edit
button in the top right.
The following parameters need to be set as follows:
- `binlog_format` to `ROW`
- `binlog_row_metadata` to `FULL`
- `binlog_row_image` to `FULL`
Then, click on Save Changes in the top right corner. You may need to reboot your instance for the changes to take effect; a way of knowing this is if you see Pending reboot next to the parameter group link in the Configuration tab of the Aurora instance.
Enable GTID mode (recommended) {#gtid-mode}
:::tip
The MySQL ClickPipe also supports replication without GTID mode. However, enabling GTID mode is recommended for better performance and easier troubleshooting.
:::
Global Transaction Identifiers (GTIDs)
are unique IDs assigned to each committed transaction in MySQL. They simplify binlog replication and make troubleshooting more straightforward. We
recommend
enabling GTID mode, so that the MySQL ClickPipe can use GTID-based replication.
GTID-based replication is supported for Amazon Aurora MySQL v2 (MySQL 5.7) and v3 (MySQL 8.0), as well as Aurora Serverless v2. To enable GTID mode for your Aurora MySQL instance, follow these steps:
1. In the RDS Console, click on your MySQL instance.
2. Click on the Configuration tab.
3. Click on the parameter group link.
4. Click on the Edit button in the top right corner.
5. Set `enforce_gtid_consistency` to `ON`.
6. Set `gtid-mode` to `ON`.
7. Click on Save Changes in the top right corner.
Reboot your instance for the changes to take effect.
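After the reboot, you can confirm from any client session that GTID mode is active:

```sql
SELECT @@global.gtid_mode, @@global.enforce_gtid_consistency;
```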
Configure a database user {#configure-database-user}
Connect to your Aurora MySQL instance as an admin user and execute the following commands:
Create a dedicated user for ClickPipes:

```sql
CREATE USER 'clickpipes_user'@'%' IDENTIFIED BY 'some-password';
```

Grant schema permissions. The following example shows permissions for the `mysql` database. Repeat these commands for each database you want to replicate:

```sql
GRANT SELECT ON `mysql`.* TO 'clickpipes_user'@'%';
```

Grant replication permissions to the user:

```sql
GRANT REPLICATION CLIENT ON *.* TO 'clickpipes_user'@'%';
GRANT REPLICATION SLAVE ON *.* TO 'clickpipes_user'@'%';
```
Configure network access {#configure-network-access}
IP-based access control {#ip-based-access-control}
To restrict traffic to your Aurora MySQL instance, add the
documented static NAT IPs
to the
Inbound rules
of your Aurora security group.
Private access via AWS PrivateLink {#private-access-via-aws-privatelink}
To connect to your Aurora MySQL instance through a private network, you can use AWS PrivateLink. Follow the
AWS PrivateLink setup guide for ClickPipes
to set up the connection.
What's next? {#whats-next}
Now that your Amazon Aurora MySQL instance is configured for binlog replication and securely connecting to ClickHouse Cloud, you can create your first MySQL ClickPipe
. For common questions around MySQL CDC, see the MySQL FAQs page.
sidebar_label: 'Cloud SQL for MySQL'
description: 'Step-by-step guide on how to set up Cloud SQL for MySQL as a source for ClickPipes'
slug: /integrations/clickpipes/mysql/source/gcp
title: 'Cloud SQL for MySQL source setup guide'
keywords: ['google cloud sql', 'mysql', 'clickpipes', 'pitr', 'root ca certificate']
doc_type: 'guide'
import gcp_pitr from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/gcp/gcp-mysql-pitr.png';
import gcp_mysql_flags from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/gcp/gcp-mysql-flags.png';
import gcp_mysql_ip from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/gcp/gcp-mysql-ip.png';
import gcp_mysql_edit_button from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/gcp/gcp-mysql-edit-button.png';
import gcp_mysql_cert from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/gcp/gcp-mysql-cert.png';
import rootca from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/gcp/rootca.png';
import Image from '@theme/IdealImage';
Cloud SQL for MySQL source setup guide
This is a step-by-step guide on how to configure your Cloud SQL for MySQL instance for replicating its data via the MySQL ClickPipe.
Enable binary log retention {#enable-binlog-retention-gcp}
The binary log is a set of log files that contain information about data modifications made to a MySQL server instance, and binary log files are required for replication.
Enable binary logging via PITR {#enable-binlog-logging-gcp}
The PITR feature determines whether binary logging is turned on or off for MySQL in Google Cloud. It can be set in the Cloud console, by editing your Cloud SQL instance and scrolling down to the below section.
Setting the value to a reasonably long value depending on the replication use-case is advisable.
If not already configured, make sure to set the following in the database flags section by editing the Cloud SQL instance:
1. `binlog_expire_logs_seconds` to a value >= `86400` (1 day)
2. `binlog_row_metadata` to `FULL`
3. `binlog_row_image` to `FULL`
To do this, click on the Edit button in the top right corner of the instance overview page.
Then scroll down to the Flags section and add the above flags.
Configure a database user {#configure-database-user-gcp}
Connect to your Cloud SQL MySQL instance as the root user and execute the following commands:
Create a dedicated user for ClickPipes:

```sql
CREATE USER 'clickpipes_user'@'host' IDENTIFIED BY 'some-password';
```

Grant schema permissions. The following example shows permissions for the `clickpipes` database. Repeat these commands for each database and host you want to replicate:

```sql
GRANT SELECT ON `clickpipes`.* TO 'clickpipes_user'@'host';
```

Grant replication permissions to the user:

```sql
GRANT REPLICATION CLIENT ON *.* TO 'clickpipes_user'@'host';
GRANT REPLICATION SLAVE ON *.* TO 'clickpipes_user'@'host';
```
Configure network access {#configure-network-access-gcp-mysql}
If you want to restrict traffic to your Cloud SQL instance, please add the documented static NAT IPs to the allowlisted IPs of your Cloud SQL MySQL instance.
This can be done either by editing the instance or by heading over to the
Connections
tab in the sidebar in Cloud console.
Download and use root CA certificate {#download-root-ca-certificate-gcp-mysql}
To connect to your Cloud SQL instance, you need to download the root CA certificate.
1. Go to your Cloud SQL instance in the Cloud console.
2. Click on Connections in the sidebar.
3. Click on the Security tab.
4. In the Manage server CA certificates section, click on the DOWNLOAD CERTIFICATES button at the bottom.
In the ClickPipes UI, upload the downloaded certificate when creating a new MySQL ClickPipe.
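Before creating the pipe, you can sanity-check the certificate by connecting with the standard mysql client. The host placeholder and certificate filename below are illustrative; use your instance's IP and whatever name you saved the download under:

```bash
mysql --host=<INSTANCE_IP> --user=clickpipes_user --password \
      --ssl-ca=server-ca.pem --ssl-mode=VERIFY_CA
```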
sidebar_label: 'Amazon RDS MariaDB'
description: 'Step-by-step guide on how to set up Amazon RDS MariaDB as a source for ClickPipes'
slug: /integrations/clickpipes/mysql/source/rds_maria
title: 'RDS MariaDB source setup guide'
doc_type: 'guide'
keywords: ['clickpipes', 'mysql', 'cdc', 'data ingestion', 'real-time sync']
import rds_backups from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/rds/rds-backups.png';
import rds_config from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/rds_config.png';
import edit_button from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/edit_button.png';
import binlog_format from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/binlog_format.png';
import binlog_row_image from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/binlog_row_image.png';
import binlog_row_metadata from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/binlog_row_metadata.png';
import security_group_in_rds_mysql from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/rds/security-group-in-rds-mysql.png';
import edit_inbound_rules from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/edit_inbound_rules.png';
import Image from '@theme/IdealImage';
RDS MariaDB source setup guide
This is a step-by-step guide on how to configure your RDS MariaDB instance for replicating its data via the MySQL ClickPipe.
:::info
We also recommend going through the MySQL FAQs
here
. The FAQs page is being actively updated.
:::
Enable binary log retention {#enable-binlog-retention-rds}
The binary log is a set of log files that contain information about data modifications made to a MySQL server instance. Binary log files are required for replication. Both of the steps below must be followed:
1. Enable binary logging via automated backup {#enable-binlog-logging-rds}
The automated backups feature determines whether binary logging is turned on or off for MySQL. It can be set in the AWS console:
Setting backup retention to a reasonably long value depending on the replication use-case is advisable.
2. Binlog retention hours {#binlog-retention-hours-rds}
Amazon RDS for MariaDB has a different method of setting binlog retention duration, which is the amount of time a binlog file containing changes is kept. If some changes are not read before the binlog file is removed, replication will be unable to continue. The default value of binlog retention hours is NULL, which means binary logs aren't retained.
To specify the number of hours to retain binary logs on a DB instance, use the `mysql.rds_set_configuration` procedure with a binlog retention period long enough for replication to occur. 24 hours is the recommended minimum.
```text
mysql=> call mysql.rds_set_configuration('binlog retention hours', 24);
```
Configure binlog settings in the parameter group {#binlog-parameter-group-rds}
The parameter group can be found when you click on your MariaDB instance in the RDS Console, and then navigate to the
Configurations
tab.
Upon clicking on the parameter group link, you will be taken to the parameter group link page. You will see an Edit button in the top-right:
Settings `binlog_format`, `binlog_row_metadata` and `binlog_row_image` need to be set as follows:
- `binlog_format` to `ROW`
- `binlog_row_metadata` to `FULL`
- `binlog_row_image` to `FULL`
Next, click on
Save Changes
in the top-right. You may need to reboot your instance for the changes to take effect. If you see
Pending reboot
next to the parameter group link in the Configurations tab of the RDS instance, this is a good indication that a reboot of your instance is needed.
:::tip
If you have a MariaDB cluster, the above parameters would be found in a
DB Cluster
parameter group and not the DB instance group.
:::
Enabling GTID Mode {#gtid-mode-rds}
Global Transaction Identifiers (GTIDs) are unique IDs assigned to each committed transaction in MySQL/MariaDB. They simplify binlog replication and make troubleshooting more straightforward. MariaDB enables GTID mode by default, so no user action is needed to use it.
Configure a database user {#configure-database-user-rds}
Connect to your RDS MariaDB instance as an admin user and execute the following commands:
Create a dedicated user for ClickPipes:

```sql
CREATE USER 'clickpipes_user'@'host' IDENTIFIED BY 'some-password';
```

Grant schema permissions. The following example shows permissions for the `mysql` database. Repeat these commands for each database and host that you want to replicate:

```sql
GRANT SELECT ON `mysql`.* TO 'clickpipes_user'@'host';
```
Grant replication permissions to the user:
```sql
GRANT REPLICATION CLIENT ON *.* TO 'clickpipes_user'@'host';
GRANT REPLICATION SLAVE ON *.* TO 'clickpipes_user'@'host';
```
Configure network access {#configure-network-access}
IP-based access control {#ip-based-access-control}
If you want to restrict traffic to your RDS instance, please add the documented static NAT IPs to the Inbound rules of your RDS security group.
Private access via AWS PrivateLink {#private-access-via-aws-privatelink}
To connect to your RDS instance through a private network, you can use AWS PrivateLink. Follow our
AWS PrivateLink setup guide for ClickPipes
to set up the connection.
sidebar_label: 'Amazon RDS MySQL'
description: 'Step-by-step guide on how to set up Amazon RDS MySQL as a source for ClickPipes'
slug: /integrations/clickpipes/mysql/source/rds
title: 'RDS MySQL source setup guide'
doc_type: 'guide'
keywords: ['clickpipes', 'mysql', 'cdc', 'data ingestion', 'real-time sync']
import rds_backups from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/rds/rds-backups.png';
import parameter_group_in_blade from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/parameter_group_in_blade.png';
import security_group_in_rds_mysql from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/source/rds/security-group-in-rds-mysql.png';
import edit_inbound_rules from '@site/static/images/integrations/data-ingestion/clickpipes/postgres/source/rds/edit_inbound_rules.png';
import rds_config from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/rds_config.png';
import binlog_format from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/binlog_format.png';
import binlog_row_image from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/binlog_row_image.png';
import binlog_row_metadata from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/binlog_row_metadata.png';
import edit_button from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/parameter_group/edit_button.png';
import enable_gtid from '@site/static/images/integrations/data-ingestion/clickpipes/mysql/enable_gtid.png';
import Image from '@theme/IdealImage';
RDS MySQL source setup guide
This step-by-step guide shows you how to configure Amazon RDS MySQL to replicate data into ClickHouse Cloud using the
MySQL ClickPipe
. For common questions around MySQL CDC, see the
MySQL FAQs page
.
Enable binary log retention {#enable-binlog-retention-rds}
The binary log is a set of log files that contain information about data modifications made to a MySQL server instance, and binary log files are required for replication. To configure binary log retention in RDS MySQL, you must
enable binary logging
and
increase the binlog retention interval
.
1. Enable binary logging via automated backup {#enable-binlog-logging}
The automated backups feature determines whether binary logging is turned on or off for MySQL. Automated backups can be configured for your instance in the RDS Console by navigating to
Modify
>
Additional configuration
>
Backup
and selecting the
Enable automated backups
checkbox (if not selected already).
We recommend setting the
Backup retention period
to a reasonably long value, depending on the replication use case.
2. Increase the binlog retention interval {#binlog-retention-interval}
:::warning
If ClickPipes tries to resume replication and the required binlog files have been purged due to the configured binlog retention value, the ClickPipe will enter an errored state and a resync is required.
:::
By default, Amazon RDS purges the binary log as soon as possible (i.e., lazy purging). We recommend increasing the binlog retention interval to at least 72 hours to ensure availability of binary log files for replication under failure scenarios. To set an interval for binary log retention (`binlog retention hours`), use the `mysql.rds_set_configuration` procedure:

```text
mysql=> call mysql.rds_set_configuration('binlog retention hours', 72);
```

If this configuration isn't set or is set to a low interval, it can lead to gaps in the binary logs, compromising ClickPipes' ability to resume replication.
Configure binlog settings {#binlog-settings}
The parameter group can be found when you click on your MySQL instance in the RDS Console, and then navigate to the
Configuration
tab.
:::tip
If you have a MySQL cluster, the parameters below can be found in the
DB cluster
parameter group instead of the DB instance group.
:::
Click the parameter group link, which will take you to its dedicated page. You should see an
Edit
button in the top right.
The following parameters need to be set as follows:
- `binlog_format` to `ROW`
- `binlog_row_metadata` to `FULL`
- `binlog_row_image` to `FULL`
Then, click on Save Changes in the top right corner. You may need to reboot your instance for the changes to take effect; a way of knowing this is if you see Pending reboot next to the parameter group link in the Configuration tab of the RDS instance.
Enable GTID Mode {#gtid-mode}
:::tip
The MySQL ClickPipe also supports replication without GTID mode. However, enabling GTID mode is recommended for better performance and easier troubleshooting.
:::
Global Transaction Identifiers (GTIDs)
are unique IDs assigned to each committed transaction in MySQL. They simplify binlog replication and make troubleshooting more straightforward. We
recommend
enabling GTID mode, so that the MySQL ClickPipe can use GTID-based replication.
GTID-based replication is supported for Amazon RDS for MySQL versions 5.7, 8.0 and 8.4. To enable GTID mode for your RDS MySQL instance, follow these steps:
1. In the RDS Console, click on your MySQL instance.
2. Click on the Configuration tab.
3. Click on the parameter group link.
4. Click on the Edit button in the top right corner.
5. Set `enforce_gtid_consistency` to `ON`.
6. Set `gtid-mode` to `ON`.
7. Click on Save Changes in the top right corner.
Reboot your instance for the changes to take effect.
Configure a database user {#configure-database-user}
Connect to your RDS MySQL instance as an admin user and execute the following commands:
Create a dedicated user for ClickPipes:
```sql
CREATE USER 'clickpipes_user'@'host' IDENTIFIED BY 'some-password';
```
Grant schema permissions. The following example shows permissions for the `mysql` database. Repeat these commands for each database and host you want to replicate:
```sql
GRANT SELECT ON `mysql`.* TO 'clickpipes_user'@'host';
```
Grant replication permissions to the user:
```sql
GRANT REPLICATION CLIENT ON *.* TO 'clickpipes_user'@'%';
GRANT REPLICATION SLAVE ON *.* TO 'clickpipes_user'@'%';
```
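Before creating the pipe, you can sanity-check the privileges that were granted (an optional verification step):

```sql
SHOW GRANTS FOR 'clickpipes_user'@'%';
```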
Configure network access {#configure-network-access}
IP-based access control {#ip-based-access-control}
To restrict traffic to your RDS MySQL instance, add the documented static NAT IPs to the **Inbound rules** of your RDS security group.
Private access via AWS PrivateLink {#private-access-via-aws-privatelink}
To connect to your RDS instance through a private network, you can use AWS PrivateLink. Follow the AWS PrivateLink setup guide for ClickPipes to set up the connection.
Next steps {#next-steps}
Now that your Amazon RDS MySQL instance is configured for binlog replication and securely connecting to ClickHouse Cloud, you can create your first MySQL ClickPipe. For common questions around MySQL CDC, see the MySQL FAQs page.
---
sidebar_label: 'Generic MariaDB'
description: 'Set up any MariaDB instance as a source for ClickPipes'
slug: /integrations/clickpipes/mysql/source/generic_maria
title: 'Generic MariaDB source setup guide'
doc_type: 'guide'
keywords: ['generic mariadb', 'clickpipes', 'binary logging', 'ssl tls', 'self hosted']
---
Generic MariaDB source setup guide
:::info
If you use one of the supported providers (in the sidebar), please refer to the specific guide for that provider.
:::
Enable binary log retention {#enable-binlog-retention}
Binary logs contain information about data modifications made to a MariaDB server instance and are required for replication.
To enable binary logging on your MariaDB instance, ensure that the following settings are configured:
```sql
server_id = 1                -- or greater; anything but 0
log_bin = ON
binlog_format = ROW
binlog_row_image = FULL
binlog_row_metadata = FULL   -- introduced in 10.5.0
expire_logs_days = 1         -- or higher; 0 would mean logs are preserved forever
```
To check these settings, run the following SQL commands:
```sql
SHOW VARIABLES LIKE 'server_id';
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'binlog_row_image';
SHOW VARIABLES LIKE 'binlog_row_metadata';
SHOW VARIABLES LIKE 'expire_logs_days';
```
If the values don't match, you can set them in the config file (typically at `/etc/my.cnf` or `/etc/my.cnf.d/mariadb-server.cnf`):
```ini
[mysqld]
server_id = 1
log_bin = ON
binlog_format = ROW
binlog_row_image = FULL
binlog_row_metadata = FULL ; only in 10.5.0 and newer
expire_logs_days = 1
```
If the source database is a replica, make sure to also turn on `log_slave_updates`.
You NEED to RESTART the MariaDB instance for the changes to take effect.
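For replicas, you can confirm that setting directly on the server (an optional check):

```sql
SHOW VARIABLES LIKE 'log_slave_updates';  -- should be ON when a replica is used as a source
```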
:::note
Column exclusion is not supported for MariaDB \<= 10.4 because the `binlog_row_metadata` setting wasn't yet introduced.
:::
Configure a database user {#configure-database-user}
Connect to your MariaDB instance as the root user and execute the following commands:
Create a dedicated user for ClickPipes:
```sql
CREATE USER 'clickpipes_user'@'%' IDENTIFIED BY 'some_secure_password';
```
Grant schema permissions. The following example shows permissions for the `clickpipes` database. Repeat these commands for each database and host you want to replicate:
```sql
GRANT SELECT ON `clickpipes`.* TO 'clickpipes_user'@'%';
```
Grant replication permissions to the user:
```sql
GRANT REPLICATION CLIENT ON *.* TO 'clickpipes_user'@'%';
GRANT REPLICATION SLAVE ON *.* TO 'clickpipes_user'@'%';
```
:::note
Make sure to replace `clickpipes_user` and `some_secure_password` with your desired username and password.
:::
SSL/TLS configuration (recommended) {#ssl-tls-configuration}
SSL certificates ensure secure connections to your MariaDB database. Configuration depends on your certificate type:
**Trusted Certificate Authority (DigiCert, Let's Encrypt, etc.)** - no additional configuration needed.
**Internal Certificate Authority** - Obtain the root CA certificate file from your IT team. In the ClickPipes UI, upload it when creating a new MariaDB ClickPipe.
**Self-hosted MariaDB** - Copy the CA certificate from your MariaDB server (look up the path via the `ssl_ca` setting in your `my.cnf`). In the ClickPipes UI, upload it when creating a new MariaDB ClickPipe. Use the IP address of the server as the host.
**Self-hosted MariaDB starting with 11.4** - If your server has `ssl_ca` set up, follow the option above. Otherwise, consult with your IT team to provision a proper certificate. As a last resort, use the "Skip Certificate Verification" toggle in the ClickPipes UI (not recommended for security reasons).
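If you are unsure where the CA certificate lives on a self-hosted server, you can ask the server itself (an optional check):

```sql
SHOW VARIABLES LIKE 'ssl_ca';    -- path to the CA certificate file, if configured
SHOW VARIABLES LIKE 'have_ssl';  -- whether TLS is enabled on the server
```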
For more information on SSL/TLS options, check out our FAQ.
What's next? {#whats-next}
You can now create your ClickPipe and start ingesting data from your MariaDB instance into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your MariaDB instance, as you will need them during the ClickPipe creation process.
---
sidebar_label: 'Generic MySQL'
description: 'Set up any MySQL instance as a source for ClickPipes'
slug: /integrations/clickpipes/mysql/source/generic
title: 'Generic MySQL source setup guide'
doc_type: 'guide'
keywords: ['generic mysql', 'clickpipes', 'binary logging', 'ssl tls', 'mysql 8.x']
---
Generic MySQL source setup guide
:::info
If you use one of the supported providers (in the sidebar), please refer to the specific guide for that provider.
:::
Enable binary log retention {#enable-binlog-retention}
Binary logs contain information about data modifications made to a MySQL server instance and are required for replication.
MySQL 8.x and newer {#binlog-v8-x}
To enable binary logging on your MySQL instance, ensure that the following settings are configured:
```sql
log_bin = ON                         -- default value
binlog_format = ROW                  -- default value
binlog_row_image = FULL              -- default value
binlog_row_metadata = FULL
binlog_expire_logs_seconds = 86400   -- 1 day or higher; default is 30 days
```
To check these settings, run the following SQL commands:
```sql
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'binlog_row_image';
SHOW VARIABLES LIKE 'binlog_row_metadata';
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
```
If the values don't match, you can run the following SQL commands to set them:
```sql
SET PERSIST log_bin = ON;
SET PERSIST binlog_format = ROW;
SET PERSIST binlog_row_image = FULL;
SET PERSIST binlog_row_metadata = FULL;
SET PERSIST binlog_expire_logs_seconds = 86400;
```
If you have changed the `log_bin` setting, you NEED to RESTART the MySQL instance for the changes to take effect.
After changing the settings, continue on with configuring a database user.
MySQL 5.7 {#binlog-v5-x}
To enable binary logging on your MySQL 5.7 instance, ensure that the following settings are configured:
```sql
server_id = 1            -- or greater; anything but 0
log_bin = ON
binlog_format = ROW      -- default value
binlog_row_image = FULL  -- default value
expire_logs_days = 1     -- or higher; 0 would mean logs are preserved forever
```
To check these settings, run the following SQL commands:
```sql
SHOW VARIABLES LIKE 'server_id';
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'binlog_row_image';
SHOW VARIABLES LIKE 'expire_logs_days';
```
If the values don't match, you can set them in the config file (typically at `/etc/my.cnf` or `/etc/mysql/my.cnf`):
```ini
[mysqld]
server_id = 1
log_bin = ON
binlog_format = ROW
binlog_row_image = FULL
expire_logs_days = 1
```
You NEED to RESTART the MySQL instance for the changes to take effect.
:::note
Column exclusion is not supported for MySQL 5.7 because the `binlog_row_metadata` setting wasn't yet introduced.
:::
Configure a database user {#configure-database-user}
Connect to your MySQL instance as the root user and execute the following commands:
Create a dedicated user for ClickPipes:
```sql
CREATE USER 'clickpipes_user'@'%' IDENTIFIED BY 'some_secure_password';
```
Grant schema permissions. The following example shows permissions for the `clickpipes` database. Repeat these commands for each database and host you want to replicate:
```sql
GRANT SELECT ON `clickpipes`.* TO 'clickpipes_user'@'%';
```
Grant replication permissions to the user:
```sql
GRANT REPLICATION CLIENT ON *.* TO 'clickpipes_user'@'%';
GRANT REPLICATION SLAVE ON *.* TO 'clickpipes_user'@'%';
```
:::note
Make sure to replace `clickpipes_user` and `some_secure_password` with your desired username and password.
:::
SSL/TLS configuration (recommended) {#ssl-tls-configuration}
SSL certificates ensure secure connections to your MySQL database. Configuration depends on your certificate type:
**Trusted Certificate Authority (DigiCert, Let's Encrypt, etc.)** - no additional configuration needed.
**Internal Certificate Authority** - Obtain the root CA certificate file from your IT team. In the ClickPipes UI, upload it when creating a new MySQL ClickPipe.
**Self-hosted MySQL** - Copy the CA certificate from your MySQL server (typically at `/var/lib/mysql/ca.pem`) and upload it in the UI when creating a new MySQL ClickPipe. Use the IP address of the server as the host.
**Self-hosted MySQL without server access** - Contact your IT team for the certificate. As a last resort, use the "Skip Certificate Verification" toggle in the ClickPipes UI (not recommended for security reasons).
For more information on SSL/TLS options, check out our FAQ.
What's next? {#whats-next}
You can now create your ClickPipe and start ingesting data from your MySQL instance into ClickHouse Cloud.
Make sure to note down the connection details you used while setting up your MySQL instance, as you will need them during the ClickPipe creation process.
---
sidebar_label: 'Fivetran'
slug: /integrations/fivetran
sidebar_position: 2
description: 'Load data from various sources into ClickHouse Cloud using Fivetran'
title: 'Fivetran and ClickHouse Cloud'
doc_type: 'guide'
integration:
  - support_level: 'core'
  - category: 'data_ingestion'
keywords: ['fivetran', 'data movement', 'etl', 'clickhouse destination', 'automated data platform']
---
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Fivetran and ClickHouse Cloud
Overview {#overview}
Fivetran is the automated data movement platform moving data out of, into and across your cloud data platforms. ClickHouse Cloud is supported as a Fivetran destination, allowing users to load data from various sources into ClickHouse.
:::note
The ClickHouse Cloud destination is currently in private preview; please contact ClickHouse support in the case of any problems.
:::
ClickHouse Cloud destination {#clickhouse-cloud-destination}
See the official documentation on the Fivetran website:
- ClickHouse destination overview
- ClickHouse destination setup guide
Contact us {#contact-us}
If you have any questions, or if you have a feature request, please open a support ticket.
---
sidebar_label: 'Features and configurations'
slug: /integrations/dbt/features-and-configurations
sidebar_position: 2
description: 'Features for using dbt with ClickHouse'
keywords: ['clickhouse', 'dbt', 'features']
title: 'Features and Configurations'
doc_type: 'guide'
---
import TOCInline from '@theme/TOCInline';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Features and Configurations
In this section, we provide documentation about some of the features available for dbt with ClickHouse.
Profile.yml configurations {#profile-yml-configurations}
To connect to ClickHouse from dbt, you'll need to add a profile to your `profiles.yml` file. A ClickHouse profile conforms to the following syntax:
```yaml
your_profile_name:
target: dev
outputs:
dev:
type: clickhouse
# Optional
schema: [default] # ClickHouse database for dbt models
driver: [http] # http or native. If not set this will be autodetermined based on port setting
host: [localhost]
port: [8123] # If not set, defaults to 8123, 8443, 9000, 9440 depending on the secure and driver settings
user: [default] # User for all database operations
password: [<empty string>] # Password for the user
cluster: [<empty string>] # If set, certain DDL/table operations will be executed with the `ON CLUSTER` clause using this cluster. Distributed materializations require this setting to work. See the following ClickHouse Cluster section for more details.
verify: [True] # Validate TLS certificate if using TLS/SSL
secure: [False] # Use TLS (native protocol) or HTTPS (http protocol)
client_cert: [null] # Path to a TLS client certificate in .pem format
client_cert_key: [null] # Path to the private key for the TLS client certificate
retries: [1] # Number of times to retry a "retriable" database exception (such as a 503 'Service Unavailable' error)
compression: [<empty string>] # Use gzip compression if truthy (http), or compression type for a native connection
connect_timeout: [10] # Timeout in seconds to establish a connection to ClickHouse
send_receive_timeout: [300] # Timeout in seconds to receive data from the ClickHouse server
cluster_mode: [False] # Use specific settings designed to improve operation on Replicated databases (recommended for ClickHouse Cloud)
use_lw_deletes: [False] # Use the strategy `delete+insert` as the default incremental strategy.
check_exchange: [True] # Validate that clickhouse support the atomic EXCHANGE TABLES command. (Not needed for most ClickHouse versions)
local_suffix: [_local] # Table suffix of local tables on shards for distributed materializations.
local_db_prefix: [<empty string>] # Database prefix of local tables on shards for distributed materializations. If empty, it uses the same database as the distributed table.
allow_automatic_deduplication: [False] # Enable ClickHouse automatic deduplication for Replicated tables
tcp_keepalive: [False] # Native client only, specify TCP keepalive configuration. Specify custom keepalive settings as [idle_time_sec, interval_sec, probes].
custom_settings: [{}] # A dictionary/mapping of custom ClickHouse settings for the connection - default is empty.
database_engine: '' # Database engine to use when creating new ClickHouse schemas (databases). If not set (the default), new databases will use the default ClickHouse database engine (usually Atomic).
threads: [1] # Number of threads to use when running queries. Before setting it to a number higher than 1, make sure to read the [read-after-write consistency](#read-after-write-consistency) section.
# Native (clickhouse-driver) connection settings
sync_request_timeout: [5] # Timeout for server ping
compress_block_size: [1048576] # Compression block size if compression is enabled
```
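For reference, a minimal profile for a ClickHouse Cloud service might look like the following sketch (the hostname, password, and schema are placeholders to replace with your own values):

```yaml
my_project:
  target: prod
  outputs:
    prod:
      type: clickhouse
      driver: http
      host: your-service.clickhouse.cloud
      port: 8443
      secure: True
      user: default
      password: your_password
      schema: dbt
      threads: 1
```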
Schema vs Database {#schema-vs-database}
The dbt model relation identifier `database.schema.table` is not compatible with ClickHouse because ClickHouse does not support a `schema`. So we use a simplified approach `schema.table`, where `schema` is the ClickHouse database. Using the `default` database is not recommended.
SET Statement Warning {#set-statement-warning}
In many environments, using the SET statement to persist a ClickHouse setting across all dbt queries is not reliable and can cause unexpected failures. This is particularly true when using HTTP connections through a load balancer that distributes queries across multiple nodes (such as ClickHouse Cloud), although in some circumstances this can also happen with native ClickHouse connections. Accordingly, we recommend configuring any required ClickHouse settings in the `custom_settings` property of the dbt profile as a best practice, instead of relying on a pre-hook "SET" statement as has been occasionally suggested.
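For example, rather than running `SET join_use_nulls = 1` in a pre-hook, the setting can be pinned on the connection itself (`join_use_nulls` is just an illustrative setting here):

```yaml
outputs:
  dev:
    type: clickhouse
    # ...connection fields...
    custom_settings:
      join_use_nulls: 1  # applied to every query dbt runs on this connection
```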
Setting `quote_columns` {#setting-quote_columns}
To prevent a warning, make sure to explicitly set a value for `quote_columns` in your `dbt_project.yml`. See the doc on quote_columns for more information.
```yaml
seeds:
  +quote_columns: false # or `true` if you have CSV column headers with spaces
```
About the ClickHouse Cluster {#about-the-clickhouse-cluster}
When using a ClickHouse cluster, you need to consider two things:
- Setting the `cluster` setting.
- Ensuring read-after-write consistency, especially if you are using more than one thread (`threads` > 1).
Cluster Setting {#cluster-setting}
The `cluster` setting in the profile enables dbt-clickhouse to run against a ClickHouse cluster. If `cluster` is set in the profile, all models will be created with the `ON CLUSTER` clause by default, except for those using a `Replicated` engine. This includes:
- Database creation
- View materializations
- Table and incremental materializations
- Distributed materializations
Replicated engines will **not** include the `ON CLUSTER` clause, as they are designed to manage replication internally.
To **opt out** of cluster-based creation for a specific model, add the `disable_on_cluster` config:
```sql
{{ config(
    engine='MergeTree',
    materialized='table',
    disable_on_cluster='true'
) }}
```
Table and incremental materializations with a non-replicated engine will not be affected by the `cluster` setting (the model would be created on the connected node only).
**Compatibility**: If a model has been created without a `cluster` setting, dbt-clickhouse will detect the situation and run all DDL/DML without the `ON CLUSTER` clause for this model.
Read-after-write Consistency {#read-after-write-consistency}
dbt relies on a read-after-insert consistency model. This is not compatible with ClickHouse clusters that have more than one replica if you cannot guarantee that all operations will go to the same replica. You may not encounter problems in your day-to-day usage of dbt, but there are some strategies, depending on your cluster, to have this guarantee in place:
- If you are using a ClickHouse Cloud cluster, you only need to set `select_sequential_consistency: 1` in your profile's `custom_settings` property. You can find more information about this setting here.
- If you are using a self-hosted cluster, make sure all dbt requests are sent to the same ClickHouse replica. If you have a load balancer on top of it, try using some replica-aware routing / sticky sessions mechanism to be able to always reach the same replica. Adding the setting `select_sequential_consistency = 1` in clusters outside ClickHouse Cloud is **not recommended**.
General information about features {#general-information-about-features}
General table configurations {#general-table-configurations}
| Option | Description | Default if any |
|--------|-------------|----------------|
| engine | The table engine (type of table) to use when creating tables | `MergeTree()` |
| order_by | A tuple of column names or arbitrary expressions. This allows you to create a small sparse index that helps find data faster. | `tuple()` |
| partition_by | A partition is a logical combination of records in a table by a specified criterion. The partition key can be any expression from the table columns. | |
| sharding_key | Sharding key determines the destination server when inserting into distributed engine table. The sharding key can be random or as an output of a hash function. | `rand()` |
| primary_key | Like order_by, a ClickHouse primary key expression. If not specified, ClickHouse will use the order by expression as the primary key. | |
| unique_key | A tuple of column names that uniquely identify rows. Used with incremental models for updates. | |
| settings | A map/dictionary of "TABLE" settings to be used in DDL statements like 'CREATE TABLE' with this model | |
| query_settings | A map/dictionary of ClickHouse user level settings to be used with `INSERT` or `DELETE` statements in conjunction with this model | |
| ttl | A TTL expression to be used with the table. The TTL expression is a string that can be used to specify the TTL for the table. | |
| indexes | A list of data skipping indexes to create. Check below for more information. | |
| sql_security | Allows you to specify which ClickHouse user to use when executing the view's underlying query. `SQL SECURITY` has two legal values: `definer` and `invoker`. | |
| definer | If `sql_security` was set to `definer`, you have to specify any existing user or `CURRENT_USER` in the `definer` clause. | |
| projections | A list of projections to be created. Check "About projections" for details. | |
About data skipping indexes {#data-skipping-indexes}
Data skipping indexes are only available for the `table` materialization. To add a list of data skipping indexes to a table, use the `indexes` configuration:
```sql
{{ config(
    materialized='table',
    indexes=[{
        'name': 'your_index_name',
        'definition': 'your_column TYPE minmax GRANULARITY 2'
    }]
) }}
```
About projections {#projections}
You can add projections to `table` and `distributed_table` materializations using the `projections` configuration:
```sql
{{ config(
    materialized='table',
    projections=[
        {
            'name': 'your_projection_name',
            'query': 'SELECT department, avg(age) AS avg_age GROUP BY department'
        }
    ]
) }}
```
**Note**: For distributed tables, the projection is applied to the `_local` tables, not to the distributed proxy table.
Supported table engines {#supported-table-engines}
| Type | Details |
|------------------------|-------------------------------------------------------------------------------------------|
| MergeTree (default) | https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree/. |
| HDFS | https://clickhouse.com/docs/en/engines/table-engines/integrations/hdfs |
| MaterializedPostgreSQL | https://clickhouse.com/docs/en/engines/table-engines/integrations/materialized-postgresql |
| S3 | https://clickhouse.com/docs/en/engines/table-engines/integrations/s3 |
| EmbeddedRocksDB | https://clickhouse.com/docs/en/engines/table-engines/integrations/embedded-rocksdb |
| Hive | https://clickhouse.com/docs/en/engines/table-engines/integrations/hive |
Experimental supported table engines {#experimental-supported-table-engines}
| Type | Details |
|-------------------|---------------------------------------------------------------------------|
| Distributed Table | https://clickhouse.com/docs/en/engines/table-engines/special/distributed. |
| Dictionary | https://clickhouse.com/docs/en/engines/table-engines/special/dictionary |
If you encounter issues connecting to ClickHouse from dbt with one of the above engines, please report an issue here.
A note on model settings {#a-note-on-model-settings}
ClickHouse has several types/levels of "settings". In the model configuration above, two types of these are configurable. `settings` means the `SETTINGS` clause used in `CREATE TABLE/VIEW` types of DDL statements, so this is generally settings that are specific to the specific ClickHouse table engine. The new `query_settings` is used to add a `SETTINGS` clause to the `INSERT` and `DELETE` queries used for model materialization (including incremental materializations).
There are hundreds of ClickHouse settings, and it's not always clear which is a "table" setting and which is a "user" setting (although the latter are generally available in the `system.settings` table). In general the defaults are recommended, and any use of these properties should be carefully researched and tested.
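As an illustration, a single model config can carry both kinds (the specific setting names below are examples only, not recommendations):

```sql
{{ config(
    materialized='table',
    settings={'allow_nullable_key': 1},
    query_settings={'insert_deduplicate': 0}
) }}
```

Here `allow_nullable_key` lands in the `SETTINGS` clause of the generated `CREATE TABLE`, while `insert_deduplicate` is applied to the `INSERT` statements that populate the model.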
Column Configuration {#column-configuration}
**NOTE:** The column configuration options below require model contracts to be enforced.
| Option | Description | Default if any |
|--------|-------------|----------------|
| codec | A string consisting of arguments passed to `CODEC()` in the column's DDL. For example: `codec: "Delta, ZSTD"` will be compiled as `CODEC(Delta, ZSTD)`. | |
| ttl | A string consisting of a TTL (time-to-live) expression that defines a TTL rule in the column's DDL. For example: `ttl: ts + INTERVAL 1 DAY` will be compiled as `TTL ts + INTERVAL 1 DAY`. | |
Example of schema configuration {#example-of-schema-configuration}
```yaml
models:
  - name: table_column_configs
    description: 'Testing column-level configurations'
    config:
      contract:
        enforced: true
    columns:
      - name: ts
        data_type: timestamp
        codec: ZSTD
      - name: x
        data_type: UInt8
        ttl: ts + INTERVAL 1 DAY
```
Adding complex types {#adding-complex-types}
dbt automatically determines the data type of each column by analyzing the SQL used to create the model. However, in some cases this process may not accurately determine the data type, leading to conflicts with the types specified in the contract's `data_type` property. To address this, we recommend using the `CAST()` function in the model SQL to explicitly define the desired type. For example:
```sql
{{
config(
materialized="materialized_view",
engine="AggregatingMergeTree",
order_by=["event_type"],
)
}}
select
    -- event_type may be inferred as a String but we may prefer LowCardinality(String):
    CAST(event_type, 'LowCardinality(String)') as event_type,
    -- countState() may be inferred as AggregateFunction(count) but we may prefer to change the type of the argument used:
    CAST(countState(), 'AggregateFunction(count, UInt32)') as response_count,
    -- maxSimpleState() may be inferred as SimpleAggregateFunction(max, String) but we may prefer to also change the type of the argument used:
    CAST(maxSimpleState(event_type), 'SimpleAggregateFunction(max, LowCardinality(String))') as max_event_type
from {{ ref('user_events') }}
group by event_type
```
Features {#features}
Materialization: view {#materialization-view}
A dbt model can be created as a ClickHouse view and configured using the following syntax:
Project File (`dbt_project.yml`):
```yaml
models:
  <resource-path>:
    +materialized: view
```
Or config block (`models/<model_name>.sql`):
```python
{{ config(materialized = "view") }}
```
Materialization: table {#materialization-table}
A dbt model can be created as a ClickHouse table and configured using the following syntax:

Project File (`dbt_project.yml`):

```yaml
models:
  <resource-path>:
    +materialized: table
    +order_by: [ <column-name>, ... ]
    +engine: <engine-type>
    +partition_by: [ <column-name>, ... ]
```

Or config block (`models/<model_name>.sql`):

```python
{{ config(
    materialized = "table",
    engine = "<engine-type>",
    order_by = [ "<column-name>", ... ],
    partition_by = [ "<column-name>", ... ],
    ...
) }}
```
Materialization: incremental {#materialization-incremental}
A table model is reconstructed for each dbt execution. This can be infeasible and extremely costly for larger result sets or complex transformations. To address this challenge and reduce the build time, a dbt model can be created as an incremental ClickHouse table, configured using the following syntax:

Model definition in `dbt_project.yml`:

```yaml
models:
  <resource-path>:
    +materialized: incremental
    +order_by: [ <column-name>, ... ]
    +engine: <engine-type>
    +partition_by: [ <column-name>, ... ]
    +unique_key: [ <column-name>, ... ]
    +inserts_only: [ True|False ]
```

Or config block in `models/<model_name>.sql`:

```python
{{ config(
    materialized = "incremental",
    engine = "<engine-type>",
    order_by = [ "<column-name>", ... ],
    partition_by = [ "<column-name>", ... ],
    unique_key = [ "<column-name>", ... ],
    inserts_only = [ True|False ],
    ...
) }}
```
Configurations {#configurations}
Configurations that are specific to this materialization type are listed below:
| Option | Description | Required? |
|--------|-------------|-----------|
| `unique_key` | A tuple of column names that uniquely identify rows. For more details on uniqueness constraints, see here. | Required. If not provided, altered rows will be added twice to the incremental table. |
| `inserts_only` | Deprecated in favor of the `append` incremental strategy, which operates in the same way. If set to `True` for an incremental model, incremental updates will be inserted directly into the target table without creating an intermediate table. If `inserts_only` is set, `incremental_strategy` is ignored. | Optional (default: `False`) |
| `incremental_strategy` | The strategy to use for incremental materialization. `delete+insert`, `append`, `insert_overwrite`, or `microbatch` are supported. For additional details on strategies, see here. | Optional (default: `default`) |
| `incremental_predicates` | Additional conditions to be applied to the incremental materialization (only applied to the `delete+insert` strategy). | Optional |
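As a quick illustration, a hypothetical incremental model combining these options might look like the following sketch (the table and column names are invented for the example; `is_incremental()` is standard dbt):

```python
{{ config(
    materialized='incremental',
    engine='MergeTree',
    order_by=['id'],
    unique_key='id',
    incremental_strategy='delete+insert'
) }}

select id, updated_at, status
from {{ source('app', 'orders') }}
{% if is_incremental() %}
-- On subsequent runs, only pick up rows newer than what is already in the target.
where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```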
Incremental Model Strategies {#incremental-model-strategies}
`dbt-clickhouse` supports three incremental model strategies.

The Default (Legacy) Strategy {#default-legacy-strategy}
Historically ClickHouse has had only limited support for updates and deletes, in the form of asynchronous "mutations."
To emulate expected dbt behavior,
dbt-clickhouse by default creates a new temporary table containing all unaffected (not deleted, not changed) "old"
records, plus any new or updated records,
and then swaps or exchanges this temporary table with the existing incremental model relation. This is the only strategy
that preserves the original relation if something
goes wrong before the operation completes; however, since it involves a full copy of the original table, it can be quite
expensive and slow to execute.
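The statements below sketch the rough shape of this approach; the table names are illustrative, not the adapter's actual generated DDL:

```sql
-- Build a replacement table with the same structure as the model.
CREATE TABLE my_model__dbt_new AS my_model;

-- Copy the unaffected "old" rows (those not deleted or changed)...
INSERT INTO my_model__dbt_new
SELECT * FROM my_model
WHERE id NOT IN (SELECT id FROM my_model__dbt_incoming);

-- ...plus the new and updated rows produced by the model's SELECT.
INSERT INTO my_model__dbt_new
SELECT * FROM my_model__dbt_incoming;

-- Atomically swap the rebuilt table into place.
EXCHANGE TABLES my_model__dbt_new AND my_model;
```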
The Delete+Insert Strategy {#delete-insert-strategy}
ClickHouse added "lightweight deletes" as an experimental feature in version 22.8. Lightweight deletes are significantly faster than `ALTER TABLE ... DELETE` operations, because they don't require rewriting ClickHouse data parts. The incremental strategy `delete+insert` utilizes lightweight deletes to implement incremental materializations that perform significantly better than the "legacy" strategy. However, there are important caveats to using this strategy:

- Lightweight deletes must be enabled on your ClickHouse server using the setting `allow_experimental_lightweight_delete=1`, or you must set `use_lw_deletes=true` in your profile (which will enable that setting for your dbt sessions).
- Lightweight deletes are now production ready, but there may be performance and other problems on ClickHouse versions earlier than 23.3.
- This strategy operates directly on the affected table/relation (without creating any intermediate or temporary tables), so if there is an issue during the operation, the data in the incremental model is likely to be in an invalid state.
- When using lightweight deletes, dbt-clickhouse enables the setting `allow_nondeterministic_mutations`. In some very rare cases using non-deterministic `incremental_predicates`, this could result in a race condition for the updated/deleted items (and related log messages in the ClickHouse logs). To ensure consistent results, the incremental predicates should only include sub-queries on data that will not be modified during the incremental materialization.
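Conceptually, the strategy reduces to a lightweight delete followed by an insert; the table names here are illustrative rather than the adapter's exact statements:

```sql
-- Lightweight delete of the rows that will be replaced (keyed on unique_key).
DELETE FROM my_model
WHERE id IN (SELECT id FROM my_model__dbt_incoming);

-- Insert the new and updated rows produced by the model's SELECT.
INSERT INTO my_model
SELECT * FROM my_model__dbt_incoming;
```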
The Microbatch Strategy (Requires dbt-core >= 1.9) {#microbatch-strategy}
The incremental strategy `microbatch` has been a dbt-core feature since version 1.9, designed to handle large time-series data transformations efficiently. In dbt-clickhouse, it builds on top of the existing `delete+insert` incremental strategy by splitting the increment into predefined time-series batches based on the `event_time` and `batch_size` model configurations.
Beyond handling large transformations, microbatch provides the ability to:

- Reprocess failed batches.
- Auto-detect parallel batch execution.
- Eliminate the need for complex conditional logic in backfilling.

For detailed microbatch usage, refer to the official documentation.
Available Microbatch Configurations {#available-microbatch-configurations}
| Option | Description | Default if any |
|--------|-------------|----------------|
| `event_time` | The column indicating "at what time did the row occur." Required for your microbatch model and any direct parents that should be filtered. | |
| `begin` | The "beginning of time" for the microbatch model. This is the starting point for any initial or full-refresh builds. For example, a daily-grain microbatch model run on 2024-10-01 with `begin = '2023-10-01'` will process 366 batches (it's a leap year!) plus the batch for "today." | |
| `batch_size` | The granularity of your batches. Supported values are `hour`, `day`, `month`, and `year`. | |
| `lookback` | Process X batches prior to the latest bookmark to capture late-arriving records. | 1 |
| `concurrent_batches` | Overrides dbt's auto-detection for running batches concurrently (at the same time). Read more about configuring concurrent batches. Setting to `true` runs batches concurrently (in parallel); `false` runs batches sequentially (one after the other). | |
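Putting these options together, a microbatch model config might look like the following sketch (the `created_at` column, date, and source name are invented for the example):

```python
{{ config(
    materialized='incremental',
    incremental_strategy='microbatch',
    event_time='created_at',
    begin='2024-01-01',
    batch_size='day',
    lookback=1
) }}

select id, created_at, payload
from {{ ref('raw_events') }}
```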
The Append Strategy {#append-strategy}
This strategy replaces the `inserts_only` setting in previous versions of dbt-clickhouse. This approach simply appends new rows to the existing relation. As a result, duplicate rows are not eliminated, and there is no temporary or intermediate table. It is the fastest approach if duplicates are either permitted in the data or excluded by the incremental query `WHERE` clause/filter.
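A minimal append-strategy model might look like this sketch (table and column names are invented for the example; `is_incremental()` is standard dbt):

```python
{{ config(
    materialized='incremental',
    incremental_strategy='append',
    engine='MergeTree',
    order_by=['inserted_at']
) }}

select id, inserted_at, message
from {{ source('app', 'log_lines') }}
{% if is_incremental() %}
-- Filter out already-loaded rows so the append does not produce duplicates.
where inserted_at > (select max(inserted_at) from {{ this }})
{% endif %}
```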
The insert_overwrite Strategy (Experimental) {#insert-overwrite-strategy}
[IMPORTANT]
Currently, the insert_overwrite strategy is not fully functional with distributed materializations.
Performs the following steps:

1. Create a staging (temporary) table with the same structure as the incremental model relation: `CREATE TABLE <staging> AS <target>`.
2. Insert only new records (produced by `SELECT`) into the staging table.
3. Replace only new partitions (present in the staging table) into the target table.

This approach has the following advantages:

- It is faster than the default strategy because it doesn't copy the entire table.
- It is safer than other strategies because it doesn't modify the original table until the `INSERT` operation completes successfully: in case of intermediate failure, the original table is not modified.
- It implements the "partitions immutability" data engineering best practice, which simplifies incremental and parallel data processing, rollbacks, etc.

The strategy requires `partition_by` to be set in the model configuration. All other strategy-specific parameters of the model config are ignored.
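Since `partition_by` is mandatory here, a sketch of an insert_overwrite model could look like this (the partition expression and names are illustrative):

```python
{{ config(
    materialized='incremental',
    incremental_strategy='insert_overwrite',
    engine='MergeTree',
    partition_by='toYYYYMM(created_at)',
    order_by=['id']
) }}

select id, created_at, amount
from {{ source('app', 'payments') }}
```

Only the partitions present in the staging table are swapped into the target, so a run that produces no rows for a partition leaves that partition untouched.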
Materialization: materialized_view (Experimental) {#materialized-view}
A `materialized_view` materialization should be a `SELECT` from an existing (source) table. The adapter will create a target table with the model name and a ClickHouse MATERIALIZED VIEW with the name `<model_name>_mv`. Unlike PostgreSQL, a ClickHouse materialized view is not "static" (and has no corresponding REFRESH operation). Instead, it acts as an "insert trigger", and will insert new rows into the target table using the defined `SELECT` "transformation" in the view definition on rows inserted into the source table. See the test file for an introductory example of how to use this functionality.

ClickHouse provides the ability for more than one materialized view to write records to the same target table. To support this in dbt-clickhouse, you can construct a `UNION` in your model file, such that the SQL for each of your materialized views is wrapped with comments of the form `--my_mv_name:begin` and `--my_mv_name:end`.

For example, the following will build two materialized views, both writing data to the same destination table of the model. The names of the materialized views will take the form `<model_name>_mv1` and `<model_name>_mv2`:
```sql
--mv1:begin
select a,b,c from {{ source('raw', 'table_1') }}
--mv1:end
union all
--mv2:begin
select a,b,c from {{ source('raw', 'table_2') }}
--mv2:end
```
IMPORTANT!

When updating a model with multiple materialized views (MVs), especially when renaming one of the MV names, dbt-clickhouse does not automatically drop the old MV. Instead, you will encounter the following warning:

`Warning - Table <previous table name> was detected with the same pattern as model name <your model name> but was not found in this run. In case it is a renamed mv that was previously part of this model, drop it manually (!!!)`
Data catch-up {#data-catch-up}
Currently, when creating a materialized view (MV), the target table is first populated with historical data before the MV itself is created.
In other words, dbt-clickhouse initially creates the target table and preloads it with historical data based on the query defined for the MV. Only after this step is the MV created.
If you prefer not to preload historical data during MV creation, you can disable this behavior by setting the catch-up config to False:
```python
{{ config(
    materialized='materialized_view',
    engine='MergeTree()',
    order_by='(id)',
    catchup=False
) }}
```
Refreshable Materialized Views {#refreshable-materialized-views}
To use a Refreshable Materialized View, please adjust the following configs as needed in your MV model (all these configs are supposed to be set inside a `refreshable` config object):
| Option | Description | Required | Default Value |
|--------|-------------|----------|---------------|
| `refresh_interval` | The interval clause (required). | Yes | |
| `randomize` | The randomization clause, will appear after `RANDOMIZE FOR`. | | |
| `append` | If set to `True`, each refresh inserts rows into the table without deleting existing rows. The insert is not atomic, just like a regular `INSERT SELECT`. | | `False` |
| `depends_on` | A dependencies list for the refreshable MV. Please provide the dependencies in the following format: `{schema}.{view_name}`. | | |
| `depends_on_validation` | Whether to validate the existence of the dependencies provided in `depends_on`. In case a dependency doesn't contain a schema, the validation occurs on the schema `default`. | | `False` |
A config example for a refreshable materialized view:

```python
{{
    config(
        materialized='materialized_view',
        refreshable={
            "interval": "EVERY 5 MINUTE",
            "randomize": "1 MINUTE",
            "append": True,
            "depends_on": ['schema.depend_on_model'],
            "depends_on_validation": True
        }
    )
}}
```
Limitations {#limitations}
When creating a refreshable materialized view (MV) in ClickHouse that has a dependency, ClickHouse does not throw an
error if the specified dependency does not exist at the time of creation. Instead, the refreshable MV remains in an
inactive state, waiting for the dependency to be satisfied before it starts processing updates or refreshing.
This behavior is by design, but it may lead to delays in data availability if the required dependency is not addressed
promptly. Users are advised to ensure all dependencies are correctly defined and exist before creating a refreshable
materialized view.
As of today, there is no actual "dbt linkage" between the MV and its dependencies, therefore the creation order is not guaranteed.

The refreshable feature has not been tested with multiple MVs directing to the same target model.
Materialization: dictionary (experimental) {#materialization-dictionary}
See the tests in https://github.com/ClickHouse/dbt-clickhouse/blob/main/tests/integration/adapter/dictionary/test_dictionary.py for examples of how to implement materializations for ClickHouse dictionaries.
Materialization: distributed_table (experimental) {#materialization-distributed-table}
A distributed table is created with the following steps:

1. Creates a temp view with the SQL query to get the right structure.
2. Creates empty local tables based on the view.
3. Creates a distributed table based on the local tables.
4. Inserts data into the distributed table, so it is distributed across shards without duplicating.

Notes:

- dbt-clickhouse queries now automatically include the setting `insert_distributed_sync = 1` in order to ensure that downstream incremental materialization operations execute correctly. This could cause some distributed table inserts to run more slowly than expected.
Distributed table model example {#distributed-table-model-example}
```sql
{{
config(
materialized='distributed_table',
order_by='id, created_at',
sharding_key='cityHash64(id)',
engine='ReplacingMergeTree'
)
}}
select id, created_at, item
from {{ source('db', 'table') }}
```
Generated migrations {#distributed-table-generated-migrations}
```sql
CREATE TABLE db.table_local on cluster cluster (
    `id` UInt64,
    `created_at` DateTime,
    `item` String
)
ENGINE = ReplacingMergeTree
ORDER BY (id, created_at)
SETTINGS index_granularity = 8192;

CREATE TABLE db.table on cluster cluster (
    `id` UInt64,
    `created_at` DateTime,
    `item` String
)
ENGINE = Distributed ('cluster', 'db', 'table_local', cityHash64(id));
```
Materialization: distributed_incremental (experimental) {#materialization-distributed-incremental}
An incremental model based on the same idea as the distributed table; the main difficulty is to process all incremental strategies correctly.

- The Append Strategy just inserts data into the distributed table.
- The Delete+Insert Strategy creates a distributed temp table to work with all data on every shard.
- The Default (Legacy) Strategy creates distributed temp and intermediate tables for the same reason.

Only shard tables are replaced, because the distributed table does not keep data. The distributed table is reloaded only when the full_refresh mode is enabled or the table structure may have changed.
Distributed incremental model example {#distributed-incremental-model-example}
```sql
{{
config(
materialized='distributed_incremental',
engine='MergeTree',
incremental_strategy='append',
unique_key='id,created_at'
)
}}
select id, created_at, item
from {{ source('db', 'table') }}
```
Generated migrations {#distributed-incremental-generated-migrations}
```sql
CREATE TABLE db.table_local on cluster cluster (
    `id` UInt64,
    `created_at` DateTime,
    `item` String
)
ENGINE = MergeTree
SETTINGS index_granularity = 8192;

CREATE TABLE db.table on cluster cluster (
    `id` UInt64,
    `created_at` DateTime,
    `item` String
)
ENGINE = Distributed ('cluster', 'db', 'table_local', cityHash64(id));
```
Snapshot {#snapshot}
dbt snapshots allow a record to be made of changes to a mutable model over time. This in turn allows point-in-time queries on models, where analysts can "look back in time" at the previous state of a model. This functionality is supported by the ClickHouse connector and is configured using the following syntax:

Config block in `snapshots/<model_name>.sql`:

```python
{{
    config(
        schema = "<schema-name>",
        unique_key = "<column-name>",
        strategy = "<strategy>",
        updated_at = "<updated-at-column-name>",
    )
}}
```
For more information on configuration, check out the snapshot configs reference page.
Contracts and Constraints {#contracts-and-constraints}
Only exact column type contracts are supported. For example, a contract with a UInt32 column type will fail if the model
returns a UInt64 or other integer type.
ClickHouse also supports only `CHECK` constraints on the entire table/model. Primary key, foreign key, unique, and column-level `CHECK` constraints are not supported. (See the ClickHouse documentation on primary/order by keys.)
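For example, a contract that pins exact ClickHouse types might be declared in the model's schema file like this (model and column names are invented; the `data_type` values must match the model's output types exactly, or the contract check fails):

```yaml
models:
  - name: events
    config:
      contract:
        enforced: true
    columns:
      - name: event_id
        data_type: UInt32
      - name: event_type
        data_type: LowCardinality(String)
```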
Additional ClickHouse Macros {#additional-clickhouse-macros}
Model Materialization Utility Macros {#model-materialization-utility-macros}
The following macros are included to facilitate creating ClickHouse specific tables and views:

- `engine_clause` -- Uses the `engine` model configuration property to assign a ClickHouse table engine. dbt-clickhouse uses the `MergeTree` engine by default.
- `partition_cols` -- Uses the `partition_by` model configuration property to assign a ClickHouse partition key. No partition key is assigned by default.
- `order_cols` -- Uses the `order_by` model configuration to assign a ClickHouse order by/sorting key. If not specified, ClickHouse will use an empty `tuple()` and the table will be unsorted.
- `primary_key_clause` -- Uses the `primary_key` model configuration property to assign a ClickHouse primary key. By default, the primary key is set and ClickHouse will use the order by clause as the primary key.
- `on_cluster_clause` -- Uses the `cluster` profile property to add an `ON CLUSTER` clause to certain dbt operations: distributed materializations, views creation, database creation.
- `ttl_config` -- Uses the `ttl` model configuration property to assign a ClickHouse table TTL expression. No TTL is assigned by default.
s3Source Helper Macro {#s3source-helper-macro}
The `s3source` macro simplifies the process of selecting ClickHouse data directly from S3 using the ClickHouse S3 table function. It works by populating the S3 table function parameters from a named configuration dictionary (the name of the dictionary must end in `s3`). The macro first looks for the dictionary in the profile `vars`, and then in the model configuration. The dictionary can contain any of the following keys used to populate the parameters of the S3 table function:
| Argument Name | Description |
|---------------|-------------|
| `bucket` | The bucket base URL, such as `https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi`. `https://` is assumed if no protocol is provided. |
| `path` | The S3 path to use for the table query, such as `/trips_4.gz`. S3 wildcards are supported. |
| `fmt` | The expected ClickHouse input format (such as `TSV` or `CSVWithNames`) of the referenced S3 objects. |
| `structure` | The column structure of the data in the bucket, as a list of name/datatype pairs, such as `['id UInt32', 'date DateTime', 'value String']`. If not provided, ClickHouse will infer the structure. |
| `aws_access_key_id` | The S3 access key id. |
| `aws_secret_access_key` | The S3 secret key. |
| `role_arn` | The ARN of a ClickhouseAccess IAM role to use to securely access the S3 objects. See this documentation for more information. |
| `compression` | The compression method used with the S3 objects. If not provided, ClickHouse will attempt to determine compression based on the file name. |
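For instance, a named configuration dictionary for the macro could be declared in the profile `vars` like this (the dictionary name, bucket, path, and format values are illustrative; note the dictionary name ends in `s3` as required):

```yaml
vars:
  taxi_s3:
    bucket: "datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi"
    path: "/trips_4.gz"
    fmt: "TSV"
```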
See the S3 test file for examples of how to use this macro.

Cross database macro support {#cross-database-macro-support}
dbt-clickhouse supports most of the cross database macros now included in dbt Core, with the following exceptions:

- The `split_part` SQL function is implemented in ClickHouse using the `splitByChar` function. This function requires using a constant string for the "split" delimiter, so the `delimiter` parameter used for this macro will be interpreted as a string, not a column name.
- Similarly, the `replace` SQL function in ClickHouse requires constant strings for the `old_chars` and `new_chars` parameters, so those parameters will be interpreted as strings rather than column names when invoking this macro.
---
sidebar_label: 'Overview'
slug: /integrations/dbt
sidebar_position: 1
description: 'Users can transform and model their data in ClickHouse using dbt'
title: 'Integrating dbt and ClickHouse'
keywords: ['dbt', 'data transformation', 'analytics engineering', 'SQL modeling', 'ELT pipeline']
doc_type: 'guide'
integration:
  - support_level: 'core'
  - category: 'data_integration'
  - website: 'https://github.com/ClickHouse/dbt-clickhouse'
---
import TOCInline from '@theme/TOCInline';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Integrating dbt and ClickHouse {#integrate-dbt-clickhouse}
The dbt-clickhouse Adapter {#dbt-clickhouse-adapter}
dbt (data build tool) enables analytics engineers to transform data in their warehouses by simply writing select statements. dbt handles materializing these select statements into objects in the database in the form of tables and views - performing the T of Extract, Load and Transform (ELT). Users can create a model defined by a SELECT statement.

Within dbt, these models can be cross-referenced and layered to allow the construction of higher-level concepts. The boilerplate SQL required to connect models is automatically generated. Furthermore, dbt identifies dependencies between models and ensures they are created in the appropriate order using a directed acyclic graph (DAG).

dbt is compatible with ClickHouse through a ClickHouse-supported adapter.
Supported features {#supported-features}
List of supported features:
- [x] Table materialization
- [x] View materialization
- [x] Incremental materialization
- [x] Microbatch incremental materialization
- [x] Materialized View materializations (uses the `TO` form of MATERIALIZED VIEW, experimental)
- [x] Seeds
- [x] Sources
- [x] Docs generate
- [x] Tests
- [x] Snapshots
- [x] Most dbt-utils macros (now included in dbt-core)
- [x] Ephemeral materialization
- [x] Distributed table materialization (experimental)
- [x] Distributed incremental materialization (experimental)
- [x] Contracts
- [x] ClickHouse-specific column configurations (Codec, TTL...)
- [x] ClickHouse-specific table settings (indexes, projections...)
All features up to dbt-core 1.9 are supported. We will soon add the features added in dbt-core 1.10.
This adapter is not yet available for use inside dbt Cloud, but we expect to make it available soon. Please reach out to support to get more information on this.
Concepts {#concepts}
dbt introduces the concept of a model. This is defined as a SQL statement, potentially joining many tables. A model can be "materialized" in a number of ways. A materialization represents a build strategy for the model's select query. The code behind a materialization is boilerplate SQL that wraps your SELECT query in a statement in order to create a new or update an existing relation.
dbt provides 4 types of materialization:

- `view` (default): The model is built as a view in the database.
- `table`: The model is built as a table in the database.
- `ephemeral`: The model is not directly built in the database but is instead pulled into dependent models as common table expressions.
- `incremental`: The model is initially materialized as a table, and in subsequent runs, dbt inserts new rows and updates changed rows in the table.
Additional syntax and clauses define how these models should be updated if their underlying data changes. dbt generally recommends starting with the view materialization until performance becomes a concern. The table materialization provides a query time performance improvement by capturing the results of the model's query as a table at the expense of increased storage. The incremental approach builds on this further to allow subsequent updates to the underlying data to be captured in the target table.
The current adapter for ClickHouse also supports `materialized view`, `dictionary`, `distributed table` and `distributed incremental` materializations. The adapter also supports dbt snapshots and seeds.
Details about supported materializations {#details-about-supported-materializations}
| Type                        | Supported? | Details |
|-----------------------------|------------|---------|
| view materialization        | YES        | Creates a view. |
| table materialization       | YES        | Creates a table. See below for the list of supported engines. |
| incremental materialization | YES        | Creates a table if it doesn't exist, and then writes only updates to it. |
| ephemeral materialization   | YES        | Creates an ephemeral/CTE materialization. The model is internal to dbt and does not create any database objects. |
The following are experimental features in ClickHouse:
| Type                                    | Supported?        | Details |
|-----------------------------------------|-------------------|---------|
| Materialized view materialization       | YES, Experimental | Creates a materialized view. |
| Distributed table materialization       | YES, Experimental | Creates a distributed table. |
| Distributed incremental materialization | YES, Experimental | Incremental model based on the same idea as the distributed table. Note that not all strategies are supported; see the adapter documentation for more info. |
| Dictionary materialization              | YES, Experimental | Creates a dictionary. |
## Setup of dbt and the ClickHouse adapter {#setup-of-dbt-and-the-clickhouse-adapter}
### Install dbt-core and dbt-clickhouse {#install-dbt-core-and-dbt-clickhouse}
dbt provides several options for installing the command-line interface (CLI), which are detailed in the dbt documentation. We recommend using `pip` to install both dbt and dbt-clickhouse:

```sh
pip install dbt-core dbt-clickhouse
```
### Provide dbt with the connection details for our ClickHouse instance {#provide-dbt-with-the-connection-details-for-our-clickhouse-instance}
Configure the `clickhouse-service` profile in the `~/.dbt/profiles.yml` file and provide the schema, host, port, user, and password properties. The full list of connection configuration options is available in the Features and configurations page:
```yaml
clickhouse-service:
  target: dev
  outputs:
    dev:
      type: clickhouse
      schema: [ default ] # ClickHouse database for dbt models
      # Optional
      host: [ localhost ]
      port: [ 8123 ]  # Defaults to 8123, 8443, 9000, 9440 depending on the secure and driver settings
      user: [ default ] # User for all database operations
      password: [ <empty string> ] # Password for the user
      secure: True # Use TLS (native protocol) or HTTPS (http protocol)
```
### Create a dbt project {#create-a-dbt-project}
You can now use this profile in one of your existing projects or create a new one using:
```sh
dbt init project_name
```
Inside the `project_name` directory, update your `dbt_project.yml` file to specify a profile name to connect to the ClickHouse server:

```yaml
profile: 'clickhouse-service'
```
### Test connection {#test-connection}
Execute `dbt debug` with the CLI tool to confirm whether dbt is able to connect to ClickHouse. Confirm the response includes `Connection test: [OK connection ok]`, indicating a successful connection.
Go to the guides page to learn more about how to use dbt with ClickHouse.
## Testing and Deploying your models (CI/CD) {#testing-and-deploying-your-models-ci-cd}
There are many ways to test and deploy your dbt project. dbt has some suggestions for best practice workflows and CI jobs. We discuss several strategies below, but keep in mind that they may need to be adjusted significantly to fit your specific use case.
### CI/CD with simple data tests and unit tests {#ci-with-simple-data-tests-and-unit-tests}
One simple way to kick-start your CI pipeline is to run a ClickHouse cluster inside your job and then run your models against it. You can insert demo data into this cluster before running your models; a seed is a simple way to populate the staging environment with a subset of your production data.

Once the data is inserted, you can then run your data tests and your unit tests.
Your CD step can be as simple as running `dbt build` against your production ClickHouse cluster.
### More complete CI/CD stage: Use recent data, only test affected models {#more-complete-ci-stage}
One common strategy is to use Slim CI jobs, where only the modified models (and their up- and downstream dependencies) are re-deployed. This approach uses artifacts from your production runs (i.e., the dbt manifest) to reduce the run time of your project and ensure there is no schema drift across environments.
To keep your development environments in sync and avoid running your models against stale deployments, you can use clone or even defer.
We recommend using a dedicated ClickHouse cluster or service for the testing environment (i.e., a staging environment) to avoid impacting the operation of your production environment. To ensure the testing environment is representative, it's important that you use a subset of your production data, as well as run dbt in a way that prevents schema drift between environments.
If you don't need fresh data to test against, you can restore a backup of your production data into the staging environment. | {"source_file": "index.md"} | [
If you need fresh data to test against, you can use a combination of the `remoteSecure()` table function and refreshable materialized views to insert data at the desired frequency. Another option is to use object storage as an intermediate store: periodically write data from your production service, then import it into the staging environment using the object storage table functions or ClickPipes (for continuous ingestion).
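As a sketch of the refreshable materialized view approach, run something like the following on the staging service. The hostname, credentials, table names, schedule, and filter window are placeholders to adapt to your setup:

```sql
-- Sketch: re-pull the last day of production data into staging every hour.
CREATE MATERIALIZED VIEW staging.events_sync
REFRESH EVERY 1 HOUR TO staging.events
AS
SELECT *
FROM remoteSecure('prod-host:9440', 'default', 'events', 'readonly_user', '<password>')
WHERE event_time > now() - INTERVAL 1 DAY;
```

Using a read-only user for the `remoteSecure()` call limits the blast radius of staging jobs on the production service.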
Using a dedicated environment for CI testing also allows you to perform manual testing without impacting your production environment. For example, you may want to point a BI tool to this environment for testing.
For deployment (i.e., the CD step), we recommend using the artifacts from your production deployments to only update the models that have changed. This requires setting up object storage (e.g., S3) as intermediate storage for your dbt artifacts. Once that is set up, you can run a command like `dbt build --select state:modified+ --state path/to/last/deploy/state.json` to selectively rebuild the minimum number of models needed based on what changed since the last run in production.
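A CD job along these lines could look like the following hypothetical GitHub Actions step. The bucket name and paths are placeholders, and note that dbt's `--state` flag points at a directory containing the previous run's artifacts:

```yaml
- name: Deploy modified models only
  run: |
    # Fetch the manifest produced by the last production deployment
    aws s3 cp s3://my-dbt-artifacts/manifest.json ./last-deploy-state/manifest.json
    # Build only changed models and their dependents
    dbt build --select state:modified+ --state ./last-deploy-state
    # Publish the new manifest for the next run
    aws s3 cp ./target/manifest.json s3://my-dbt-artifacts/manifest.json
```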
## Troubleshooting common issues {#troubleshooting-common-issues}
### Connections {#troubleshooting-connections}
If you encounter issues connecting to ClickHouse from dbt, make sure the following criteria are met:
- The engine must be one of the supported engines.
- You must have adequate permissions to access the database.
- If you're not using the default table engine for the database, you must specify a table engine in your model configuration.
### Understanding long-running operations {#understanding-long-running-operations}
Some operations may take longer than expected due to specific ClickHouse queries. To gain more insight into which queries are taking longer, increase the log level to `debug`; this will print the time used by each query. For example, this can be achieved by appending `--log-level debug` to dbt commands.
## Limitations {#limitations}
The current ClickHouse adapter for dbt has several limitations users should be aware of:
- The plugin uses syntax that requires ClickHouse version 25.3 or newer. We do not test older versions of ClickHouse. We also do not currently test Replicated tables.
- Different runs of the `dbt-adapter` may collide if they are run at the same time, as internally they can use the same table names for the same operations. For more information, check issue #420.
- The adapter currently materializes models as tables using an `INSERT INTO SELECT`. This effectively means data duplication if the run is executed again. Very large datasets (PB) can result in extremely long run times, making some models unviable. To improve performance, use ClickHouse materialized views by setting the model's materialization to `materialized_view`. Additionally, aim to minimize the number of rows returned by any query by utilizing `GROUP BY` where possible. Prefer models that summarize data over those that simply transform while maintaining the row counts of the source.
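For example, a model can opt into the adapter's materialized view materialization through its config block. This is a minimal sketch with a hypothetical source; check the adapter documentation for the exact options supported by your version:

```sql
{{ config(materialized='materialized_view') }}

-- The adapter creates a ClickHouse materialized view plus a backing target
-- table, so new inserts into the source are aggregated as they arrive rather
-- than the full result being rebuilt on every run.
SELECT user_id,
       count() AS num_events
FROM {{ source('analytics', 'events') }}
GROUP BY user_id
```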
- To use Distributed tables to represent a model, users must create the underlying replicated tables on each node manually. The Distributed table can, in turn, be created on top of these. The adapter does not manage cluster creation.
- When dbt creates a relation (table/view) in a database, it usually creates it as `{{ database }}.{{ schema }}.{{ table/view id }}`. ClickHouse has no notion of schemas. The adapter therefore uses `{{ schema }}.{{ table/view id }}`, where `schema` is the ClickHouse database.
- Ephemeral models/CTEs don't work if placed before the `INSERT INTO` in a ClickHouse insert statement; see https://github.com/ClickHouse/ClickHouse/issues/30323. This should not affect most models, but care should be taken where an ephemeral model is placed in model definitions and other SQL statements.
## Fivetran {#fivetran}
The `dbt-clickhouse` connector is also available for use in Fivetran transformations, allowing seamless integration and transformation capabilities directly within the Fivetran platform using dbt.
---
sidebar_label: 'Guides'
slug: /integrations/dbt/guides
sidebar_position: 2
description: 'Guides for using dbt with ClickHouse'
keywords: ['clickhouse', 'dbt', 'guides']
title: 'Guides'
doc_type: 'guide'
---
import TOCInline from '@theme/TOCInline';
import Image from '@theme/IdealImage';
import dbt_01 from '@site/static/images/integrations/data-ingestion/etl-tools/dbt/dbt_01.png';
import dbt_02 from '@site/static/images/integrations/data-ingestion/etl-tools/dbt/dbt_02.png';
import dbt_03 from '@site/static/images/integrations/data-ingestion/etl-tools/dbt/dbt_03.png';
import dbt_04 from '@site/static/images/integrations/data-ingestion/etl-tools/dbt/dbt_04.png';
import dbt_05 from '@site/static/images/integrations/data-ingestion/etl-tools/dbt/dbt_05.png';
import dbt_06 from '@site/static/images/integrations/data-ingestion/etl-tools/dbt/dbt_06.png';
import dbt_07 from '@site/static/images/integrations/data-ingestion/etl-tools/dbt/dbt_07.png';
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
Guides
This section provides guides on setting up dbt and the ClickHouse adapter, as well as an example of using dbt with ClickHouse using a publicly available IMDB dataset. The example covers the following steps:
1. Creating a dbt project and setting up the ClickHouse adapter.
2. Defining a model.
3. Updating a model.
4. Creating an incremental model.
5. Creating a snapshot model.
6. Using materialized views.
These guides are designed to be used in conjunction with the rest of the documentation and the Features and configurations page.
## Setup {#setup}
Follow the instructions in the Setup of dbt and the ClickHouse adapter section to prepare your environment.

Important: the following is tested under Python 3.9.
### Prepare ClickHouse {#prepare-clickhouse}
dbt excels when modeling highly relational data. For the purposes of example, we provide a small IMDB dataset with the following relational schema. This dataset originates from the relational dataset repository. This is trivial relative to common schemas used with dbt, but represents a manageable sample:
We use a subset of these tables as shown.
Create the following tables:
```sql
CREATE DATABASE imdb;
CREATE TABLE imdb.actors
(
id UInt32,
first_name String,
last_name String,
gender FixedString(1)
) ENGINE = MergeTree ORDER BY (id, first_name, last_name, gender);
CREATE TABLE imdb.directors
(
id UInt32,
first_name String,
last_name String
) ENGINE = MergeTree ORDER BY (id, first_name, last_name);
CREATE TABLE imdb.genres
(
movie_id UInt32,
genre String
) ENGINE = MergeTree ORDER BY (movie_id, genre);
CREATE TABLE imdb.movie_directors
(
director_id UInt32,
movie_id UInt64
) ENGINE = MergeTree ORDER BY (director_id, movie_id);
CREATE TABLE imdb.movies
(
id UInt32,
name String,
year UInt32,
rank Float32 DEFAULT 0
) ENGINE = MergeTree ORDER BY (id, name, year);
CREATE TABLE imdb.roles
(
actor_id UInt32,
movie_id UInt32,
role String,
created_at DateTime DEFAULT now()
) ENGINE = MergeTree ORDER BY (actor_id, movie_id);
```
:::note
Note the column `created_at` for the table `roles`, which defaults to a value of `now()`. We use this later to identify incremental updates to our models - see Incremental Models.
:::
We use the `s3` function to read the source data from public endpoints and insert it. Run the following commands to populate the tables:
```sql
INSERT INTO imdb.actors
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/imdb/imdb_ijs_actors.tsv.gz',
'TSVWithNames');
INSERT INTO imdb.directors
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/imdb/imdb_ijs_directors.tsv.gz',
'TSVWithNames');
INSERT INTO imdb.genres
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/imdb/imdb_ijs_movies_genres.tsv.gz',
'TSVWithNames');
INSERT INTO imdb.movie_directors
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/imdb/imdb_ijs_movies_directors.tsv.gz',
'TSVWithNames');
INSERT INTO imdb.movies
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/imdb/imdb_ijs_movies.tsv.gz',
'TSVWithNames');
INSERT INTO imdb.roles(actor_id, movie_id, role)
SELECT actor_id, movie_id, role
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/imdb/imdb_ijs_roles.tsv.gz',
'TSVWithNames');
```
The execution time of these commands may vary depending on your bandwidth, but each should only take a few seconds to complete. Execute the following query to compute a summary of each actor, ordered by the most movie appearances, and to confirm the data was loaded successfully:
```sql
SELECT id,
       any(actor_name)          AS name,
       uniqExact(movie_id)      AS num_movies,
       avg(rank)                AS avg_rank,
       uniqExact(genre)         AS unique_genres,
       uniqExact(director_name) AS uniq_directors,
       max(created_at)          AS updated_at
FROM (
         SELECT imdb.actors.id AS id,
                concat(imdb.actors.first_name, ' ', imdb.actors.last_name) AS actor_name,
                imdb.movies.id AS movie_id,
                imdb.movies.rank AS rank,
                genre,
                concat(imdb.directors.first_name, ' ', imdb.directors.last_name) AS director_name,
                created_at
         FROM imdb.actors
                  JOIN imdb.roles ON imdb.roles.actor_id = imdb.actors.id
                  LEFT OUTER JOIN imdb.movies ON imdb.movies.id = imdb.roles.movie_id
                  LEFT OUTER JOIN imdb.genres ON imdb.genres.movie_id = imdb.movies.id
                  LEFT OUTER JOIN imdb.movie_directors ON imdb.movie_directors.movie_id = imdb.movies.id
                  LEFT OUTER JOIN imdb.directors ON imdb.directors.id = imdb.movie_directors.director_id
         )
GROUP BY id
ORDER BY num_movies DESC
LIMIT 5;
```

The response should look like:

```response
+------+------------+----------+------------------+-------------+--------------+-------------------+
|id    |name        |num_movies|avg_rank          |unique_genres|uniq_directors|updated_at         |
+------+------------+----------+------------------+-------------+--------------+-------------------+
|45332 |Mel Blanc   |832       |6.175853582979779 |18           |84            |2022-04-26 14:01:45|
|621468|Bess Flowers|659       |5.57727638854796  |19           |293           |2022-04-26 14:01:46|
|372839|Lee Phelps  |527       |5.032976449684617 |18           |261           |2022-04-26 14:01:46|
|283127|Tom London  |525       |2.8721716524875673|17           |203           |2022-04-26 14:01:46|
|356804|Bud Osborne |515       |2.0389507108727773|15           |149           |2022-04-26 14:01:46|
+------+------------+----------+------------------+-------------+--------------+-------------------+
```
In the later guides, we will convert this query into a model - materializing it in ClickHouse as a dbt view and table.
## Connecting to ClickHouse {#connecting-to-clickhouse}
Create a dbt project. In this case, we name this after our `imdb` source. When prompted, select `clickhouse` as the database source.
```bash
clickhouse-user@clickhouse:~$ dbt init imdb
16:52:40 Running with dbt=1.1.0
Which database would you like to use?
[1] clickhouse
(Don't see the one you want? https://docs.getdbt.com/docs/available-adapters)
Enter a number: 1
16:53:21 No sample profile found for clickhouse.
16:53:21
Your new dbt project "imdb" was created!
For more information on how to configure the profiles.yml file,
please consult the dbt documentation here:
https://docs.getdbt.com/docs/configure-your-profile
``` | {"source_file": "guides.md"} | [
`cd` into your project folder:

```bash
cd imdb
```
At this point, you will need the text editor of your choice. In the examples below, we use the popular VS Code. Opening the IMDB directory, you should see a collection of yml and sql files:
Update your `dbt_project.yml` file to specify our first model - `actor_summary` - and set the profile to `clickhouse_imdb`.
We next need to provide dbt with the connection details for our ClickHouse instance. Add the following to your `~/.dbt/profiles.yml`:
```yml
clickhouse_imdb:
  target: dev
  outputs:
    dev:
      type: clickhouse
      schema: imdb_dbt
      host: localhost
      port: 8123
      user: default
      password: ''
      secure: False
```
Note the need to modify the user and password. There are additional available settings documented here.
From the IMDB directory, execute the `dbt debug` command to confirm whether dbt is able to connect to ClickHouse.
```bash
clickhouse-user@clickhouse:~/imdb$ dbt debug
17:33:53 Running with dbt=1.1.0
dbt version: 1.1.0
python version: 3.10.1
python path: /home/dale/.pyenv/versions/3.10.1/bin/python3.10
os info: Linux-5.13.0-10039-tuxedo-x86_64-with-glibc2.31
Using profiles.yml file at /home/dale/.dbt/profiles.yml
Using dbt_project.yml file at /opt/dbt/imdb/dbt_project.yml
Configuration:
profiles.yml file [OK found and valid]
dbt_project.yml file [OK found and valid]
Required dependencies:
- git [OK found]
Connection:
host: localhost
port: 8123
user: default
schema: imdb_dbt
secure: False
verify: False
Connection test: [OK connection ok]
All checks passed!
```
Confirm the response includes `Connection test: [OK connection ok]`, indicating a successful connection.
## Creating a simple view materialization {#creating-a-simple-view-materialization}
When using the view materialization, a model is rebuilt as a view on each run via a `CREATE VIEW AS` statement in ClickHouse. This doesn't require any additional storage of data, but it will be slower to query than table materializations.
From the `imdb` folder, delete the directory `models/example`:

```bash
clickhouse-user@clickhouse:~/imdb$ rm -rf models/example
```
Create a new `actors` directory within the `models` folder. Here we create files that each represent an actor model:

```bash
clickhouse-user@clickhouse:~/imdb$ mkdir models/actors
```
Create the files `schema.yml` and `actor_summary.sql` in the `models/actors` folder:

```bash
clickhouse-user@clickhouse:~/imdb$ touch models/actors/actor_summary.sql
clickhouse-user@clickhouse:~/imdb$ touch models/actors/schema.yml
```
The file `schema.yml` defines our tables. These will subsequently be available for use in macros. Edit `models/actors/schema.yml` to contain this content:
```yml
version: 2 | {"source_file": "guides.md"} | [
sources:
  - name: imdb
    tables:
      - name: directors
      - name: actors
      - name: roles
      - name: movies
      - name: genres
      - name: movie_directors
```

The `actor_summary.sql` file defines our actual model. Note that in the config function we also request the model be materialized as a view in ClickHouse. Our tables are referenced from the `schema.yml` file via the function `source`, e.g. `source('imdb', 'movies')` refers to the `movies` table in the `imdb` database. Edit `models/actors/actor_summary.sql` to contain this content:
```sql
{{ config(materialized='view') }}

with actor_summary as (
    SELECT id,
           any(actor_name)          as name,
           uniqExact(movie_id)      as num_movies,
           avg(rank)                as avg_rank,
           uniqExact(genre)         as genres,
           uniqExact(director_name) as directors,
           max(created_at)          as updated_at
    FROM (
            SELECT {{ source('imdb', 'actors') }}.id as id,
                   concat({{ source('imdb', 'actors') }}.first_name, ' ', {{ source('imdb', 'actors') }}.last_name) as actor_name,
                   {{ source('imdb', 'movies') }}.id as movie_id,
                   {{ source('imdb', 'movies') }}.rank as rank,
                   genre,
                   concat({{ source('imdb', 'directors') }}.first_name, ' ', {{ source('imdb', 'directors') }}.last_name) as director_name,
                   created_at
            FROM {{ source('imdb', 'actors') }}
                     JOIN {{ source('imdb', 'roles') }} ON {{ source('imdb', 'roles') }}.actor_id = {{ source('imdb', 'actors') }}.id
                     LEFT OUTER JOIN {{ source('imdb', 'movies') }} ON {{ source('imdb', 'movies') }}.id = {{ source('imdb', 'roles') }}.movie_id
                     LEFT OUTER JOIN {{ source('imdb', 'genres') }} ON {{ source('imdb', 'genres') }}.movie_id = {{ source('imdb', 'movies') }}.id
                     LEFT OUTER JOIN {{ source('imdb', 'movie_directors') }} ON {{ source('imdb', 'movie_directors') }}.movie_id = {{ source('imdb', 'movies') }}.id
                     LEFT OUTER JOIN {{ source('imdb', 'directors') }} ON {{ source('imdb', 'directors') }}.id = {{ source('imdb', 'movie_directors') }}.director_id
            )
    GROUP BY id
)

select *
from actor_summary
```

Note how we include the column `updated_at` in our final actor_summary. We use this later for incremental materializations.
From the `imdb` directory, execute the command `dbt run`.
```bash
clickhouse-user@clickhouse:~/imdb$ dbt run
15:05:35  Running with dbt=1.1.0
15:05:35  Found 1 model, 0 tests, 1 snapshot, 0 analyses, 181 macros, 0 operations, 0 seed files, 6 sources, 0 exposures, 0 metrics
15:05:35
15:05:36  Concurrency: 1 threads (target='dev')
15:05:36
15:05:36  1 of 1 START view model imdb_dbt.actor_summary.................................. [RUN]
15:05:37  1 of 1 OK created view model imdb_dbt.actor_summary............................. [OK in 1.00s]
15:05:37
15:05:37  Finished running 1 view model in 1.97s.
15:05:37
15:05:37  Completed successfully
15:05:37
15:05:37  Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
```
dbt will represent the model as a view in ClickHouse as requested. We can now query this view directly. This view will have been created in the `imdb_dbt` database - this is determined by the schema parameter in the file `~/.dbt/profiles.yml` under the `clickhouse_imdb` profile.
```sql
SHOW DATABASES;
```

```response
+------------------+
|name              |
+------------------+
|INFORMATION_SCHEMA|
|default           |
|imdb              |
|imdb_dbt          |  <---created by dbt!
|information_schema|
|system            |
+------------------+
```
Querying this view, we can replicate the results of our earlier query with a simpler syntax:
```sql
SELECT * FROM imdb_dbt.actor_summary ORDER BY num_movies DESC LIMIT 5;
```

```response
+------+------------+----------+------------------+------+---------+-------------------+
|id    |name        |num_movies|avg_rank          |genres|directors|updated_at         |
+------+------------+----------+------------------+------+---------+-------------------+
|45332 |Mel Blanc   |832       |6.175853582979779 |18    |84       |2022-04-26 15:26:55|
|621468|Bess Flowers|659       |5.57727638854796  |19    |293      |2022-04-26 15:26:57|
|372839|Lee Phelps  |527       |5.032976449684617 |18    |261      |2022-04-26 15:26:56|
|283127|Tom London  |525       |2.8721716524875673|17    |203      |2022-04-26 15:26:56|
|356804|Bud Osborne |515       |2.0389507108727773|15    |149      |2022-04-26 15:26:56|
+------+------------+----------+------------------+------+---------+-------------------+
```
## Creating a table materialization {#creating-a-table-materialization}
In the previous example, our model was materialized as a view. While this might offer sufficient performance for some queries, more complex SELECTs or frequently executed queries may be better materialized as a table. This materialization is useful for models that will be queried by BI tools to ensure users have a faster experience. This effectively causes the query results to be stored as a new table, with the associated storage overheads - effectively, an `INSERT INTO SELECT` is executed. Note that this table will be reconstructed each time, i.e., it is not incremental. Large result sets may therefore result in long execution times - see dbt Limitations.
Modify the file `actor_summary.sql` such that the `materialized` parameter is set to `table`. Notice how `ORDER BY` is defined, and notice we use the `MergeTree` table engine:

```sql
{{ config(order_by='(updated_at, id, name)', engine='MergeTree()', materialized='table') }}
```
From the `imdb` directory, execute the command `dbt run`. This execution may take a little longer - around 10s on most machines.
```bash
clickhouse-user@clickhouse:~/imdb$ dbt run
15:13:27  Running with dbt=1.1.0
15:13:27  Found 1 model, 0 tests, 1 snapshot, 0 analyses, 181 macros, 0 operations, 0 seed files, 6 sources, 0 exposures, 0 metrics
15:13:27
15:13:28  Concurrency: 1 threads (target='dev')
15:13:28
15:13:28  1 of 1 START table model imdb_dbt.actor_summary................................. [RUN]
15:13:37  1 of 1 OK created table model imdb_dbt.actor_summary............................ [OK in 9.22s]
15:13:37
15:13:37  Finished running 1 table model in 10.20s.
15:13:37
15:13:37  Completed successfully
15:13:37
15:13:37  Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
```
Confirm the creation of the table `imdb_dbt.actor_summary`:

```sql
SHOW CREATE TABLE imdb_dbt.actor_summary;
```
You should see the table with the appropriate data types:
```response
+----------------------------------------
|statement
+----------------------------------------
|CREATE TABLE imdb_dbt.actor_summary
|(
|    `id` UInt32,
|    `first_name` String,
|    `last_name` String,
|    `num_movies` UInt64,
|    `updated_at` DateTime
|)
|ENGINE = MergeTree
|ORDER BY (id, first_name, last_name)
+----------------------------------------
```
Confirm the results from this table are consistent with previous responses. Notice an appreciable improvement in the response time now that the model is a table:
```sql
SELECT * FROM imdb_dbt.actor_summary ORDER BY num_movies DESC LIMIT 5;
```
```response
+------+------------+----------+------------------+------+---------+-------------------+
|id    |name        |num_movies|avg_rank          |genres|directors|updated_at         |
+------+------------+----------+------------------+------+---------+-------------------+
|45332 |Mel Blanc   |832       |6.175853582979779 |18    |84       |2022-04-26 15:26:55|
|621468|Bess Flowers|659       |5.57727638854796  |19    |293      |2022-04-26 15:26:57|
|372839|Lee Phelps  |527       |5.032976449684617 |18    |261      |2022-04-26 15:26:56|
|283127|Tom London  |525       |2.8721716524875673|17    |203      |2022-04-26 15:26:56|
|356804|Bud Osborne |515       |2.0389507108727773|15    |149      |2022-04-26 15:26:56|
+------+------------+----------+------------------+------+---------+-------------------+
```
Feel free to issue other queries against this model. For example, which actors have the highest-ranking movies with more than 5 appearances?

```sql
SELECT * FROM imdb_dbt.actor_summary WHERE num_movies > 5 ORDER BY avg_rank DESC LIMIT 10;
```
## Creating an Incremental Materialization {#creating-an-incremental-materialization}
The previous example created a table to materialize the model. This table will be reconstructed for each dbt execution. This may be infeasible and extremely costly for larger result sets or complex transformations. To address this challenge and reduce the build time, dbt offers Incremental materializations. This allows dbt to insert or update records into a table since the last execution, making it appropriate for event-style data. Under the hood a temporary table is created with all the updated records and then all the untouched records, as well as the updated records, are inserted into a new target table. This results in similar limitations for large result sets as for the table model.

To overcome these limitations for large sets, the adapter supports 'inserts_only' mode, where all the updates are inserted into the target table without creating a temporary table (more about it below).
To illustrate this example, we will add the actor "Clicky McClickHouse", who will appear in an incredible 910 movies - ensuring he has appeared in more films than even Mel Blanc.
First, we modify our model to be of type incremental. This addition requires:

* `unique_key` - To ensure the adapter can uniquely identify rows, we must provide a unique_key - in this case, the `id` field from our query will suffice. This ensures we will have no row duplicates in our materialized table. For more details on uniqueness constraints, see here.
* Incremental filter - We also need to tell dbt how it should identify which rows have changed on an incremental run. This is achieved by providing a delta expression. Typically this involves a timestamp for event data; hence our `updated_at` timestamp field. This column, which defaults to the value of now() when rows are inserted, allows new roles to be identified. Additionally, we need to identify the alternative case where new actors are added. Using the `{{ this }}` variable, to denote the existing materialized table, this gives us the expression `where id > (select max(id) from {{ this }}) or updated_at > (select max(updated_at) from {{ this }})`. We embed this inside the `{% if is_incremental() %}` condition, ensuring it is only used on incremental runs and not when the table is first constructed. For more details on filtering rows for incremental models, see this discussion in the dbt docs.
Update the file `actor_summary.sql` as follows:
```sql
{{ config(order_by='(updated_at, id, name)', engine='MergeTree()', materialized='incremental', unique_key='id') }}
with actor_summary as (
SELECT id,
any(actor_name) as name,
uniqExact(movie_id) as num_movies,
avg(rank) as avg_rank,
uniqExact(genre) as genres,
uniqExact(director_name) as directors,
max(created_at) as updated_at
FROM (
SELECT {{ source('imdb', 'actors') }}.id as id,
concat({{ source('imdb', 'actors') }}.first_name, ' ', {{ source('imdb', 'actors') }}.last_name) as actor_name,
{{ source('imdb', 'movies') }}.id as movie_id,
{{ source('imdb', 'movies') }}.rank as rank,
genre,
concat({{ source('imdb', 'directors') }}.first_name, ' ', {{ source('imdb', 'directors') }}.last_name) as director_name,
created_at
FROM {{ source('imdb', 'actors') }}
JOIN {{ source('imdb', 'roles') }} ON {{ source('imdb', 'roles') }}.actor_id = {{ source('imdb', 'actors') }}.id
LEFT OUTER JOIN {{ source('imdb', 'movies') }} ON {{ source('imdb', 'movies') }}.id = {{ source('imdb', 'roles') }}.movie_id
LEFT OUTER JOIN {{ source('imdb', 'genres') }} ON {{ source('imdb', 'genres') }}.movie_id = {{ source('imdb', 'movies') }}.id
LEFT OUTER JOIN {{ source('imdb', 'movie_directors') }} ON {{ source('imdb', 'movie_directors') }}.movie_id = {{ source('imdb', 'movies') }}.id
LEFT OUTER JOIN {{ source('imdb', 'directors') }} ON {{ source('imdb', 'directors') }}.id = {{ source('imdb', 'movie_directors') }}.director_id
)
GROUP BY id
)
select *
from actor_summary
{% if is_incremental() %}
-- this filter will only be applied on an incremental run
where id > (select max(id) from {{ this }}) or updated_at > (select max(updated_at) from {{this}})
{% endif %}
```
Note that our model will only respond to updates and additions to the `roles` and `actors` tables. To respond to all tables, users are encouraged to split this model into multiple sub-models - each with their own incremental criteria. These models can in turn be referenced and connected. For further details on cross-referencing models see here.
Execute a `dbt run` and confirm the results of the resulting table:
```response
clickhouse-user@clickhouse:~/imdb$ dbt run
15:33:34 Running with dbt=1.1.0
15:33:34 Found 1 model, 0 tests, 1 snapshot, 0 analyses, 181 macros, 0 operations, 0 seed files, 6 sources, 0 exposures, 0 metrics
15:33:34
15:33:35 Concurrency: 1 threads (target='dev')
15:33:35
15:33:35 1 of 1 START incremental model imdb_dbt.actor_summary........................... [RUN]
15:33:41 1 of 1 OK created incremental model imdb_dbt.actor_summary...................... [OK in 6.33s]
15:33:41
15:33:41 Finished running 1 incremental model in 7.30s.
15:33:41
15:33:41 Completed successfully
15:33:41
15:33:41 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
```
```sql
SELECT * FROM imdb_dbt.actor_summary ORDER BY num_movies DESC LIMIT 5;
```
```response
+------+------------+----------+------------------+------+---------+-------------------+
|id    |name        |num_movies|avg_rank          |genres|directors|updated_at         |
+------+------------+----------+------------------+------+---------+-------------------+
|45332 |Mel Blanc   |832       |6.175853582979779 |18    |84       |2022-04-26 15:26:55|
|621468|Bess Flowers|659       |5.57727638854796  |19    |293      |2022-04-26 15:26:57|
|372839|Lee Phelps  |527       |5.032976449684617 |18    |261      |2022-04-26 15:26:56|
|283127|Tom London  |525       |2.8721716524875673|17    |203      |2022-04-26 15:26:56|
|356804|Bud Osborne |515       |2.0389507108727773|15    |149      |2022-04-26 15:26:56|
+------+------------+----------+------------------+------+---------+-------------------+
```
We will now add data to our model to illustrate an incremental update. Add our actor "Clicky McClickHouse" to the `actors` table:
```sql
INSERT INTO imdb.actors VALUES (845466, 'Clicky', 'McClickHouse', 'M');
```
Let's have "Clicky" star in 910 random movies:
```sql
INSERT INTO imdb.roles
SELECT now() as created_at, 845466 as actor_id, id as movie_id, 'Himself' as role
FROM imdb.movies
LIMIT 910 OFFSET 10000;
```
Confirm he is indeed now the actor with the most appearances by querying the underlying source table and bypassing any dbt models:
```sql
SELECT id,
       any(actor_name) as name,
       uniqExact(movie_id) as num_movies,
       avg(rank) as avg_rank,
       uniqExact(genre) as unique_genres,
       uniqExact(director_name) as uniq_directors,
       max(created_at) as updated_at
FROM (
        SELECT imdb.actors.id as id,
               concat(imdb.actors.first_name, ' ', imdb.actors.last_name) as actor_name,
               imdb.movies.id as movie_id,
               imdb.movies.rank as rank,
               genre,
               concat(imdb.directors.first_name, ' ', imdb.directors.last_name) as director_name,
               created_at
        FROM imdb.actors
                 JOIN imdb.roles ON imdb.roles.actor_id = imdb.actors.id
                 LEFT OUTER JOIN imdb.movies ON imdb.movies.id = imdb.roles.movie_id
                 LEFT OUTER JOIN imdb.genres ON imdb.genres.movie_id = imdb.movies.id
                 LEFT OUTER JOIN imdb.movie_directors ON imdb.movie_directors.movie_id = imdb.movies.id
                 LEFT OUTER JOIN imdb.directors ON imdb.directors.id = imdb.movie_directors.director_id
        )
GROUP BY id
ORDER BY num_movies DESC
LIMIT 2;
```
```response
+------+-------------------+----------+------------------+------+---------+-------------------+
|id    |name               |num_movies|avg_rank          |genres|directors|updated_at         |
+------+-------------------+----------+------------------+------+---------+-------------------+
|845466|Clicky McClickHouse|910       |1.4687938697032283|21    |662      |2022-04-26 16:20:36|
|45332 |Mel Blanc          |909       |5.7884792542982515|19    |148      |2022-04-26 16:17:42|
+------+-------------------+----------+------------------+------+---------+-------------------+
```
Execute a `dbt run` and confirm our model has been updated and matches the above results:
```response
clickhouse-user@clickhouse:~/imdb$ dbt run
16:12:16 Running with dbt=1.1.0
16:12:16 Found 1 model, 0 tests, 1 snapshot, 0 analyses, 181 macros, 0 operations, 0 seed files, 6 sources, 0 exposures, 0 metrics
16:12:16
16:12:17 Concurrency: 1 threads (target='dev')
16:12:17
16:12:17 1 of 1 START incremental model imdb_dbt.actor_summary........................... [RUN]
16:12:24 1 of 1 OK created incremental model imdb_dbt.actor_summary...................... [OK in 6.82s]
16:12:24
16:12:24 Finished running 1 incremental model in 7.79s.
16:12:24
16:12:24 Completed successfully
16:12:24
16:12:24 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
```
```sql
SELECT * FROM imdb_dbt.actor_summary ORDER BY num_movies DESC LIMIT 2;
```
```response
+------+-------------------+----------+------------------+------+---------+-------------------+
|id    |name               |num_movies|avg_rank          |genres|directors|updated_at         |
+------+-------------------+----------+------------------+------+---------+-------------------+
|845466|Clicky McClickHouse|910       |1.4687938697032283|21    |662      |2022-04-26 16:20:36|
|45332 |Mel Blanc          |909       |5.7884792542982515|19    |148      |2022-04-26 16:17:42|
+------+-------------------+----------+------------------+------+---------+-------------------+
```
### Internals {#internals}
We can identify the statements executed to achieve the above incremental update by querying ClickHouse's query log.
```sql
SELECT event_time, query FROM system.query_log WHERE type='QueryStart' AND query LIKE '%dbt%'
AND event_time > subtractMinutes(now(), 15) ORDER BY event_time LIMIT 100;
```
Adjust the above query to the period of execution. We leave result inspection to the user but highlight the general strategy used by the adapter to perform incremental updates:
1. The adapter creates a temporary table `actor_summary__dbt_tmp`. Rows that have changed are streamed into this table.
2. A new table, `actor_summary_new`, is created. The rows from the old table are, in turn, streamed from the old to new, with a check to make sure row ids do not exist in the temporary table. This effectively handles updates and duplicates.
3. The results from the temporary table are streamed into the new `actor_summary` table.
4. Finally, the new table is exchanged atomically with the old version via an `EXCHANGE TABLES` statement. The old and temporary tables are in turn dropped.
This is visualized below:
This strategy may encounter challenges on very large models. For further details see Limitations.
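The swap described above can be sketched in plain SQL. This is a hypothetical illustration, not the adapter's exact statements - the real temporary and staging table names are generated by the adapter:

```sql
-- Hypothetical sketch of the adapter's rebuild-and-swap sequence
CREATE TABLE actor_summary_new AS actor_summary;
-- keep old rows that were not superseded by the changed set
INSERT INTO actor_summary_new
SELECT * FROM actor_summary
WHERE id NOT IN (SELECT id FROM actor_summary__dbt_tmp);
-- add the changed rows themselves
INSERT INTO actor_summary_new
SELECT * FROM actor_summary__dbt_tmp;
-- atomic swap (requires the Atomic database engine)
EXCHANGE TABLES actor_summary_new AND actor_summary;
-- after the exchange, actor_summary_new holds the old data
DROP TABLE actor_summary_new;
DROP TABLE actor_summary__dbt_tmp;
```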
### Append Strategy (inserts-only mode) {#append-strategy-inserts-only-mode}
To overcome the limitations of large datasets in incremental models, the adapter uses the dbt configuration parameter `incremental_strategy`. This can be set to the value `append`. When set, updated rows are inserted directly into the target table (i.e. `imdb_dbt.actor_summary`) and no temporary table is created.

Note: Append only mode requires your data to be immutable or for duplicates to be acceptable. If you want an incremental table model that supports altered rows, don't use this mode!

To illustrate this mode, we will add another new actor and re-execute dbt run with `incremental_strategy='append'`.
Configure append only mode in actor_summary.sql:
```sql
{{ config(order_by='(updated_at, id, name)', engine='MergeTree()', materialized='incremental', unique_key='id', incremental_strategy='append') }}
```
Let's add another famous actor - Danny DeBito:

```sql
INSERT INTO imdb.actors VALUES (845467, 'Danny', 'DeBito', 'M');
```

Let's star Danny in 920 random movies:
```sql
INSERT INTO imdb.roles
SELECT now() as created_at, 845467 as actor_id, id as movie_id, 'Himself' as role
FROM imdb.movies
LIMIT 920 OFFSET 10000;
```
Execute a `dbt run` and confirm that Danny was added to the `actor_summary` table:
```response
clickhouse-user@clickhouse:~/imdb$ dbt run
16:12:16 Running with dbt=1.1.0
16:12:16 Found 1 model, 0 tests, 1 snapshot, 0 analyses, 186 macros, 0 operations, 0 seed files, 6 sources, 0 exposures, 0 metrics
16:12:16
16:12:17 Concurrency: 1 threads (target='dev')
16:12:17
16:12:17 1 of 1 START incremental model imdb_dbt.actor_summary........................... [RUN]
16:12:24 1 of 1 OK created incremental model imdb_dbt.actor_summary...................... [OK in 0.17s]
16:12:24
16:12:24 Finished running 1 incremental model in 0.19s.
16:12:24
16:12:24 Completed successfully
16:12:24
16:12:24 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
```
```sql
SELECT * FROM imdb_dbt.actor_summary ORDER BY num_movies DESC LIMIT 3;
```
```response
+------+-------------------+----------+------------------+------+---------+-------------------+
|id    |name               |num_movies|avg_rank          |genres|directors|updated_at         |
+------+-------------------+----------+------------------+------+---------+-------------------+
|845467|Danny DeBito       |920       |1.4768987303293204|21    |670      |2022-04-26 16:22:06|
|845466|Clicky McClickHouse|910       |1.4687938697032283|21    |662      |2022-04-26 16:20:36|
|45332 |Mel Blanc          |909       |5.7884792542982515|19    |148      |2022-04-26 16:17:42|
+------+-------------------+----------+------------------+------+---------+-------------------+
```
Note how much faster this incremental run was compared to the insertion of "Clicky". Checking the query_log table again reveals the differences between the two incremental runs:
```sql
INSERT INTO imdb_dbt.actor_summary ("id", "name", "num_movies", "avg_rank", "genres", "directors", "updated_at")
WITH actor_summary AS (
SELECT id,
any(actor_name) AS name,
uniqExact(movie_id) AS num_movies,
avg(rank) AS avg_rank,
uniqExact(genre) AS genres,
uniqExact(director_name) AS directors,
max(created_at) AS updated_at
FROM (
SELECT imdb.actors.id AS id,
concat(imdb.actors.first_name, ' ', imdb.actors.last_name) AS actor_name,
imdb.movies.id AS movie_id,
imdb.movies.rank AS rank,
genre,
concat(imdb.directors.first_name, ' ', imdb.directors.last_name) AS director_name,
created_at
FROM imdb.actors
JOIN imdb.roles ON imdb.roles.actor_id = imdb.actors.id
LEFT OUTER JOIN imdb.movies ON imdb.movies.id = imdb.roles.movie_id
LEFT OUTER JOIN imdb.genres ON imdb.genres.movie_id = imdb.movies.id
LEFT OUTER JOIN imdb.movie_directors ON imdb.movie_directors.movie_id = imdb.movies.id
LEFT OUTER JOIN imdb.directors ON imdb.directors.id = imdb.movie_directors.director_id
)
GROUP BY id
)
SELECT *
FROM actor_summary
-- this filter will only be applied on an incremental run
WHERE id > (SELECT max(id) FROM imdb_dbt.actor_summary) OR updated_at > (SELECT max(updated_at) FROM imdb_dbt.actor_summary)
```
In this run, only the new rows are added directly to the `imdb_dbt.actor_summary` table and there is no table creation involved.
### Delete and insert mode (experimental) {#deleteinsert-mode-experimental}
Historically ClickHouse has had only limited support for updates and deletes, in the form of asynchronous Mutations. These can be extremely IO-intensive and should generally be avoided.

ClickHouse 22.8 introduced lightweight deletes and ClickHouse 25.7 introduced lightweight updates. With the introduction of these features, modifications from single update queries, even when being materialized asynchronously, will occur instantly from the user's perspective.
This mode can be configured for a model via the `incremental_strategy` parameter, i.e.

```sql
{{ config(order_by='(updated_at, id, name)', engine='MergeTree()', materialized='incremental', unique_key='id', incremental_strategy='delete+insert') }}
```
This strategy operates directly on the target model's table, so if there is an issue during the operation, the data in the incremental model is likely to be in an invalid state - there is no atomic update.
In summary, this approach:
1. The adapter creates a temporary table `actor_summary__dbt_tmp`. Rows that have changed are streamed into this table.
2. A `DELETE` is issued against the current `actor_summary` table. Rows are deleted by id from `actor_summary__dbt_tmp`.
3. The rows from `actor_summary__dbt_tmp` are inserted into `actor_summary` using an `INSERT INTO actor_summary SELECT * FROM actor_summary__dbt_tmp`.
This process is shown below:
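The core of this sequence can be sketched with plain SQL, assuming a staging table of changed rows already exists. Table names here are illustrative, not the adapter's exact statements:

```sql
-- Hypothetical sketch of the delete+insert sequence
-- Lightweight delete (ClickHouse 22.8+): rows are marked deleted and purged asynchronously
DELETE FROM actor_summary
WHERE id IN (SELECT id FROM actor_summary__dbt_tmp);
-- Re-insert the fresh versions of those rows
INSERT INTO actor_summary
SELECT * FROM actor_summary__dbt_tmp;
```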
### insert_overwrite mode (experimental) {#insert_overwrite-mode-experimental}
Performs the following steps:
1. Create a staging (temporary) table with the same structure as the incremental model relation: `CREATE TABLE {staging} AS {target}`.
2. Insert only new records (produced by SELECT) into the staging table.
3. Replace only new partitions (present in the staging table) into the target table.
This approach has the following advantages:
* It is faster than the default strategy because it doesn't copy the entire table.
* It is safer than other strategies because it doesn't modify the original table until the INSERT operation completes successfully: in case of intermediate failure, the original table is not modified.
* It implements the "partition immutability" data-engineering best practice, which simplifies incremental and parallel data processing, rollbacks, etc.
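Because this strategy replaces whole partitions, the model needs a partition key. A sketch of enabling it, assuming the adapter's `partition_by` config (here partitioning by month of `updated_at` - an illustrative choice, not taken from the examples above):

```sql
{{ config(
    materialized='incremental',
    engine='MergeTree()',
    order_by='(updated_at, id, name)',
    partition_by='toYYYYMM(updated_at)',
    incremental_strategy='insert_overwrite'
) }}
```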
## Creating a snapshot {#creating-a-snapshot}
dbt snapshots allow a record to be made of changes to a mutable model over time. This in turn allows point-in-time queries on models, where analysts can "look back in time" at the previous state of a model. This is achieved using type-2 Slowly Changing Dimensions, where from and to date columns record when a row was valid. This functionality is supported by the ClickHouse adapter and is demonstrated below.

This example assumes you have completed Creating an Incremental Table Model. Make sure your actor_summary.sql doesn't set inserts_only=True. Your models/actor_summary.sql should look like this:
```sql
{{ config(order_by='(updated_at, id, name)', engine='MergeTree()', materialized='incremental', unique_key='id') }}
with actor_summary as (
SELECT id,
any(actor_name) as name,
uniqExact(movie_id) as num_movies,
avg(rank) as avg_rank,
uniqExact(genre) as genres,
uniqExact(director_name) as directors,
max(created_at) as updated_at
FROM (
SELECT {{ source('imdb', 'actors') }}.id as id,
concat({{ source('imdb', 'actors') }}.first_name, ' ', {{ source('imdb', 'actors') }}.last_name) as actor_name,
{{ source('imdb', 'movies') }}.id as movie_id,
{{ source('imdb', 'movies') }}.rank as rank,
genre,
concat({{ source('imdb', 'directors') }}.first_name, ' ', {{ source('imdb', 'directors') }}.last_name) as director_name,
created_at
FROM {{ source('imdb', 'actors') }}
JOIN {{ source('imdb', 'roles') }} ON {{ source('imdb', 'roles') }}.actor_id = {{ source('imdb', 'actors') }}.id
LEFT OUTER JOIN {{ source('imdb', 'movies') }} ON {{ source('imdb', 'movies') }}.id = {{ source('imdb', 'roles') }}.movie_id
LEFT OUTER JOIN {{ source('imdb', 'genres') }} ON {{ source('imdb', 'genres') }}.movie_id = {{ source('imdb', 'movies') }}.id
LEFT OUTER JOIN {{ source('imdb', 'movie_directors') }} ON {{ source('imdb', 'movie_directors') }}.movie_id = {{ source('imdb', 'movies') }}.id
LEFT OUTER JOIN {{ source('imdb', 'directors') }} ON {{ source('imdb', 'directors') }}.id = {{ source('imdb', 'movie_directors') }}.director_id
)
GROUP BY id
)
select *
from actor_summary
{% if is_incremental() %}
-- this filter will only be applied on an incremental run
where id > (select max(id) from {{ this }}) or updated_at > (select max(updated_at) from {{this}})
{% endif %}
```
Create a file `actor_summary.sql` in the snapshots directory:

```bash
touch snapshots/actor_summary.sql
```
Update the contents of the actor_summary.sql file with the following content:
```sql
{% snapshot actor_summary_snapshot %}
{{
config(
target_schema='snapshots',
unique_key='id',
strategy='timestamp',
updated_at='updated_at',
)
}}
select * from {{ref('actor_summary')}}
{% endsnapshot %}
```
A few observations regarding this content:
* The select query defines the results you wish to snapshot over time. The function ref is used to reference our previously created actor_summary model.
* We require a timestamp column to indicate record changes. Our updated_at column (see Creating an Incremental Table Model) can be used here. The parameter strategy indicates our use of a timestamp to denote updates, with the parameter updated_at specifying the column to use. If this is not present in your model you can alternatively use the check strategy. This is significantly less efficient and requires the user to specify a list of columns to compare. dbt compares the current and historical values of these columns, recording any changes (or doing nothing if identical).
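For reference, a snapshot using the check strategy might be sketched as follows - the snapshot name and `check_cols` list are illustrative; name the columns whose changes you care about:

```sql
{% snapshot actor_summary_check_snapshot %}
{{
    config(
        target_schema='snapshots',
        unique_key='id',
        strategy='check',
        check_cols=['num_movies', 'avg_rank'],
    )
}}
select * from {{ ref('actor_summary') }}
{% endsnapshot %}
```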
Run the command `dbt snapshot`:
```response
clickhouse-user@clickhouse:~/imdb$ dbt snapshot
13:26:23 Running with dbt=1.1.0
13:26:23 Found 1 model, 0 tests, 1 snapshot, 0 analyses, 181 macros, 0 operations, 0 seed files, 3 sources, 0 exposures, 0 metrics
13:26:23
13:26:25 Concurrency: 1 threads (target='dev')
13:26:25
13:26:25 1 of 1 START snapshot snapshots.actor_summary_snapshot...................... [RUN]
13:26:25 1 of 1 OK snapshotted snapshots.actor_summary_snapshot...................... [OK in 0.79s]
13:26:25
13:26:25 Finished running 1 snapshot in 2.11s.
13:26:25
13:26:25 Completed successfully
13:26:25
13:26:25 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
```
Note how a table `actor_summary_snapshot` has been created in the snapshots db (determined by the `target_schema` parameter).

Sampling this data you will see how dbt has included the columns `dbt_valid_from` and `dbt_valid_to`. The latter has values set to null. Subsequent runs will update this.
```sql
SELECT id, name, num_movies, dbt_valid_from, dbt_valid_to FROM snapshots.actor_summary_snapshot ORDER BY num_movies DESC LIMIT 5;
```
```response
+------+----------+------------+----------+-------------------+------------+
|id    |first_name|last_name   |num_movies|dbt_valid_from     |dbt_valid_to|
+------+----------+------------+----------+-------------------+------------+
|845467|Danny     |DeBito      |920       |2022-05-25 19:33:32|NULL        |
|845466|Clicky    |McClickHouse|910       |2022-05-25 19:32:34|NULL        |
|45332 |Mel       |Blanc       |909       |2022-05-25 19:31:47|NULL        |
|621468|Bess      |Flowers     |672       |2022-05-25 19:31:47|NULL        |
|283127|Tom       |London      |549       |2022-05-25 19:31:47|NULL        |
+------+----------+------------+----------+-------------------+------------+
```
Make our favorite actor Clicky McClickHouse appear in another 10 films:
```sql
INSERT INTO imdb.roles
SELECT now() as created_at, 845466 as actor_id, rand(number) % 412320 as movie_id, 'Himself' as role
FROM system.numbers
LIMIT 10;
```
Re-run the `dbt run` command from the `imdb` directory. This will update the incremental model. Once this is complete, run `dbt snapshot` to capture the changes.
```response
clickhouse-user@clickhouse:~/imdb$ dbt run
13:46:14 Running with dbt=1.1.0
13:46:14 Found 1 model, 0 tests, 1 snapshot, 0 analyses, 181 macros, 0 operations, 0 seed files, 3 sources, 0 exposures, 0 metrics
13:46:14
13:46:15 Concurrency: 1 threads (target='dev')
13:46:15
13:46:15 1 of 1 START incremental model imdb_dbt.actor_summary....................... [RUN]
13:46:18 1 of 1 OK created incremental model imdb_dbt.actor_summary.................. [OK in 2.76s]
13:46:18
13:46:18 Finished running 1 incremental model in 3.73s.
13:46:18
13:46:18 Completed successfully
13:46:18
13:46:18 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
clickhouse-user@clickhouse:~/imdb$ dbt snapshot
13:46:26 Running with dbt=1.1.0
13:46:26 Found 1 model, 0 tests, 1 snapshot, 0 analyses, 181 macros, 0 operations, 0 seed files, 3 sources, 0 exposures, 0 metrics
13:46:26
13:46:27 Concurrency: 1 threads (target='dev')
13:46:27
13:46:27 1 of 1 START snapshot snapshots.actor_summary_snapshot...................... [RUN]
13:46:31 1 of 1 OK snapshotted snapshots.actor_summary_snapshot...................... [OK in 4.05s]
13:46:31
13:46:31 Finished running 1 snapshot in 5.02s.
13:46:31
13:46:31 Completed successfully
13:46:31
13:46:31 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
```
If we now query our snapshot, notice we have 2 rows for Clicky McClickHouse. Our previous entry now has a dbt_valid_to value. Our new value is recorded with the same value in the dbt_valid_from column, and a dbt_valid_to value of null. If we did have new rows, these would also be appended to the snapshot.
```sql
SELECT id, name, num_movies, dbt_valid_from, dbt_valid_to FROM snapshots.actor_summary_snapshot ORDER BY num_movies DESC LIMIT 5;
```
```response
+------+----------+------------+----------+-------------------+-------------------+
|id    |first_name|last_name   |num_movies|dbt_valid_from     |dbt_valid_to       |
+------+----------+------------+----------+-------------------+-------------------+
|845467|Danny     |DeBito      |920       |2022-05-25 19:33:32|NULL               |
|845466|Clicky    |McClickHouse|920       |2022-05-25 19:34:37|NULL               |
|845466|Clicky    |McClickHouse|910       |2022-05-25 19:32:34|2022-05-25 19:34:37|
|45332 |Mel       |Blanc       |909       |2022-05-25 19:31:47|NULL               |
|621468|Bess      |Flowers     |672       |2022-05-25 19:31:47|NULL               |
+------+----------+------------+----------+-------------------+-------------------+
```
For further details on dbt snapshots see here.
## Using seeds {#using-seeds}
dbt provides the ability to load data from CSV files. This capability is not suited to loading large exports of a database and is more designed for small files typically used for code tables and dictionaries, e.g. mapping country codes to country names. For a simple example, we generate and then upload a list of genre codes using the seed functionality.

We generate a list of genre codes from our existing dataset. From the dbt directory, use the `clickhouse-client` to create a file `seeds/genre_codes.csv`:
```bash
clickhouse-user@clickhouse:~/imdb$ clickhouse-client --password <password> --query
"SELECT genre, ucase(substring(genre, 1, 3)) as code FROM imdb.genres GROUP BY genre
LIMIT 100 FORMAT CSVWithNames" > seeds/genre_codes.csv
```
Execute the `dbt seed` command. This will create a new table `genre_codes` in our database `imdb_dbt` (as defined by our schema configuration) with the rows from our csv file:
```bash
clickhouse-user@clickhouse:~/imdb$ dbt seed
17:03:23 Running with dbt=1.1.0
17:03:23 Found 1 model, 0 tests, 1 snapshot, 0 analyses, 181 macros, 0 operations, 1 seed file, 6 sources, 0 exposures, 0 metrics
17:03:23
17:03:24 Concurrency: 1 threads (target='dev')
17:03:24
17:03:24 1 of 1 START seed file imdb_dbt.genre_codes..................................... [RUN]
17:03:24 1 of 1 OK loaded seed file imdb_dbt.genre_codes................................. [INSERT 21 in 0.65s]
17:03:24
17:03:24 Finished running 1 seed in 1.62s.
17:03:24
17:03:24 Completed successfully
17:03:24
17:03:24 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
```
Confirm these have been loaded:
```sql
SELECT * FROM imdb_dbt.genre_codes LIMIT 10;
```
```response
+-------+----+
|genre |code|
+-------+----+
|Drama |DRA |
|Romance|ROM |
|Short |SHO |
|Mystery|MYS |
|Adult |ADU |
|Family |FAM |
|Action |ACT |
|Sci-Fi |SCI |
|Horror |HOR |
|War |WAR |
+-------+----+
```
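Seeds behave like any other relation in dbt, so models can reference them with `ref`. As a hypothetical model snippet (not part of the guide's project), enriching genres with their codes:

```sql
SELECT g.genre, c.code
FROM {{ source('imdb', 'genres') }} AS g
LEFT JOIN {{ ref('genre_codes') }} AS c ON g.genre = c.genre
```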
## Further Information {#further-information}
The previous guides only touch the surface of dbt functionality. Users are recommended to read the excellent dbt documentation.
-0.033233609050512314,
-0.04789420962333679,
-0.11919335275888443,
0.022183429449796677,
0.01481756940484047,
-0.01255329605191946,
0.023430563509464264,
0.02490510232746601,
-0.017514197155833244,
0.03202403709292412,
0.02093886397778988,
-0.03490927442908287,
0.10051579773426056,
-0.1196... |
1123ab9f-ec16-48af-bab4-1ef2f4bcead3 | title: 'Handling other JSON formats'
slug: /integrations/data-formats/json/other-formats
description: 'Handling other JSON formats'
sidebar_label: 'Handling other formats'
keywords: ['json', 'formats', 'json formats']
doc_type: 'guide'
Handling other JSON formats
Earlier examples of loading JSON data assume the use of
JSONEachRow
(
NDJSON
). This format reads the keys in each JSON line as columns. For example:
```sql
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/json/*.json.gz', JSONEachRow)
LIMIT 5
ββββββββdateββ¬βcountry_codeββ¬βprojectβββββββββββββ¬βtypeβββββββββ¬βinstallerβββββ¬βpython_minorββ¬βsystemββ¬βversionββ
β 2022-11-15 β CN β clickhouse-connect β bdist_wheel β bandersnatch β β β 0.2.8 β
β 2022-11-15 β CN β clickhouse-connect β bdist_wheel β bandersnatch β β β 0.2.8 β
β 2022-11-15 β CN β clickhouse-connect β bdist_wheel β bandersnatch β β β 0.2.8 β
β 2022-11-15 β CN β clickhouse-connect β bdist_wheel β bandersnatch β β β 0.2.8 β
β 2022-11-15 β CN β clickhouse-connect β bdist_wheel β bandersnatch β β β 0.2.8 β
ββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ΄βββββββββ΄ββββββββββ
5 rows in set. Elapsed: 0.449 sec.
```
While this is generally the most commonly used format for JSON, users will encounter other formats or need to read the JSON as a single object.
We provide examples of reading and loading JSON in other common formats below.
Reading JSON as an object {#reading-json-as-an-object}
Our previous examples show how
JSONEachRow
reads newline-delimited JSON, with each line read as a separate object mapped to a table row and each key to a column. This is ideal for cases where the JSON is predictable with single types for each column.
In contrast,
JSONAsObject
treats each line as a single
JSON
object and stores it in a single column, of type
JSON
, making it better suited for nested JSON payloads and cases where the keys are dynamic and have potentially more than one type.
Use
JSONEachRow
for row-wise inserts, and
JSONAsObject
when storing flexible or dynamic JSON data.
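The distinction between the two formats can be sketched in plain Python (illustrative only — ClickHouse parses these formats natively; the sample row mirrors the PyPI data above):

```python
import json

ndjson = (
    '{"project": "clickhouse-connect", "version": "0.2.8"}\n'
    '{"project": "clickhouse-connect", "version": "0.2.8"}\n'
)

# JSONEachRow semantics: each line becomes a row, each key a column.
rows = [json.loads(line) for line in ndjson.splitlines()]

# JSONAsObject semantics: each line is stored whole in a single
# JSON-typed column (here named "json").
objects = [{"json": json.loads(line)} for line in ndjson.splitlines()]

print(rows[0]["project"])       # per-key column access
print(list(objects[0].keys()))  # a single column holding the object
```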
Contrast the above example, with the following query which reads the same data as a JSON object per line:
```sql
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/json/*.json.gz', JSONAsObject)
LIMIT 5 | {"source_file": "formats.md"} | [
-0.0527598075568676,
-0.02531786449253559,
-0.0890008956193924,
-0.018530787900090218,
0.022221390157938004,
-0.017821863293647766,
-0.04815046861767769,
-0.010855845175683498,
-0.01688406988978386,
-0.03005724586546421,
0.01971530355513096,
0.011721987277269363,
0.0038382818456739187,
-0.... |
d9cf3636-a7c6-4238-a0e4-4f7698c70642 | ```sql
SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/json/*.json.gz', JSONAsObject)
LIMIT 5
ββjsonββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"country_code":"CN","date":"2022-11-15","installer":"bandersnatch","project":"clickhouse-connect","python_minor":"","system":"","type":"bdist_wheel","version":"0.2.8"} β
β {"country_code":"CN","date":"2022-11-15","installer":"bandersnatch","project":"clickhouse-connect","python_minor":"","system":"","type":"bdist_wheel","version":"0.2.8"} β
β {"country_code":"CN","date":"2022-11-15","installer":"bandersnatch","project":"clickhouse-connect","python_minor":"","system":"","type":"bdist_wheel","version":"0.2.8"} β
β {"country_code":"CN","date":"2022-11-15","installer":"bandersnatch","project":"clickhouse-connect","python_minor":"","system":"","type":"bdist_wheel","version":"0.2.8"} β
β {"country_code":"CN","date":"2022-11-15","installer":"bandersnatch","project":"clickhouse-connect","python_minor":"","system":"","type":"bdist_wheel","version":"0.2.8"} β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
5 rows in set. Elapsed: 0.338 sec.
```
JSONAsObject
is useful for inserting rows into a table using a single JSON object column e.g.
```sql
CREATE TABLE pypi
(
    `json` JSON
)
ENGINE = MergeTree
ORDER BY tuple();
INSERT INTO pypi SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/json/*.json.gz', JSONAsObject)
LIMIT 5;
SELECT *
FROM pypi
LIMIT 2;
ββjsonββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β {"country_code":"CN","date":"2022-11-15","installer":"bandersnatch","project":"clickhouse-connect","python_minor":"","system":"","type":"bdist_wheel","version":"0.2.8"} β
β {"country_code":"CN","date":"2022-11-15","installer":"bandersnatch","project":"clickhouse-connect","python_minor":"","system":"","type":"bdist_wheel","version":"0.2.8"} β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
2 rows in set. Elapsed: 0.003 sec.
```
The
JSONAsObject
format may also be useful for reading newline-delimited JSON in cases where the structure of the objects is inconsistent. For example, if a key varies in type across rows (it may sometimes be a string, but other times an object). In such cases, ClickHouse cannot infer a stable schema using
JSONEachRow
, and
JSONAsObject
allows the data to be ingested without strict type enforcement, storing each JSON row as a whole in a single column. For example, notice how
JSONEachRow
fails on the following example: | {"source_file": "formats.md"} | [
-0.002091890899464488,
-0.05439535155892372,
-0.03445452079176903,
0.004844633862376213,
0.027336129918694496,
-0.017753416672348976,
0.044396646320819855,
-0.05955984443426132,
-0.017123671248555183,
0.040316835045814514,
0.05030498653650284,
-0.07970684766769409,
0.03295881301164627,
-0.... |
f85ba4eb-74c8-4dc4-a97c-b76687635087 | ```sql
SELECT count()
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/bluesky/file_0001.json.gz', 'JSONEachRow')
Elapsed: 1.198 sec.
Received exception from server (version 24.12.1):
Code: 636. DB::Exception: Received from sql-clickhouse.clickhouse.com:9440. DB::Exception: The table structure cannot be extracted from a JSONEachRow format file. Error:
Code: 117. DB::Exception: JSON objects have ambiguous data: in some objects path 'record.subject' has type 'String' and in some - 'Tuple(`$type` String, cid String, uri String)'. You can enable setting input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects to use String type for path 'record.subject'. (INCORRECT_DATA) (version 24.12.1.18239 (official build))
To increase the maximum number of rows/bytes to read for structure determination, use setting input_format_max_rows_to_read_for_schema_inference/input_format_max_bytes_to_read_for_schema_inference.
You can specify the structure manually: (in file/uri bluesky/file_0001.json.gz). (CANNOT_EXTRACT_TABLE_STRUCTURE)
```
Conversely,
JSONAsObject
can be used in this case as the
JSON
type supports multiple types for the same subcolumn.
```sql
SELECT count()
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/bluesky/file_0001.json.gz', 'JSONAsObject')
ββcount()ββ
β 1000000 β
βββββββββββ
1 row in set. Elapsed: 0.480 sec. Processed 1.00 million rows, 256.00 B (2.08 million rows/s., 533.76 B/s.)
```
Array of JSON objects {#array-of-json-objects}
One of the most popular forms of JSON data is having a list of JSON objects in a JSON array, like in
this example
:
```bash
cat list.json
[
{
"path": "Akiba_Hebrew_Academy",
"month": "2017-08-01",
"hits": 241
},
{
"path": "Aegithina_tiphia",
"month": "2018-02-01",
"hits": 34
},
...
]
```
Let's create a table for this kind of data:
sql
CREATE TABLE sometable
(
`path` String,
`month` Date,
`hits` UInt32
)
ENGINE = MergeTree
ORDER BY tuple(month, path)
To import a list of JSON objects, we can use a
JSONEachRow
format (inserting data from
list.json
file):
sql
INSERT INTO sometable
FROM INFILE 'list.json'
FORMAT JSONEachRow
We have used a
FROM INFILE
clause to load data from the local file, and we can see the import was successful:
sql
SELECT *
FROM sometable
response
ββpathβββββββββββββββββββββββ¬ββββββmonthββ¬βhitsββ
β 1971-72_Utah_Stars_season β 2016-10-01 β 1 β
β Akiba_Hebrew_Academy β 2017-08-01 β 241 β
β Aegithina_tiphia β 2018-02-01 β 34 β
βββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββ
JSON object keys {#json-object-keys}
In some cases, the list of JSON objects can be encoded as object properties instead of array elements (see
objects.json
for example):
bash
cat objects.json | {"source_file": "formats.md"} | [
0.002286441158503294,
-0.05663676932454109,
-0.08700163662433624,
0.027394957840442657,
-0.0021730605512857437,
0.012875129468739033,
-0.0561492033302784,
-0.04505090415477753,
0.000906792061869055,
0.02084658481180668,
0.05156717821955681,
-0.02683599293231964,
0.032462023198604584,
-0.03... |
328d0b39-9d2e-4c66-a907-900625e64599 | In some cases, the list of JSON objects can be encoded as object properties instead of array elements (see
objects.json
for example):
bash
cat objects.json
response
{
"a": {
"path":"April_25,_2017",
"month":"2018-01-01",
"hits":2
},
"b": {
"path":"Akahori_Station",
"month":"2016-06-01",
"hits":11
},
...
}
ClickHouse can load data from this kind of data using the
JSONObjectEachRow
format:
sql
INSERT INTO sometable FROM INFILE 'objects.json' FORMAT JSONObjectEachRow;
SELECT * FROM sometable;
response
ββpathβββββββββββββ¬ββββββmonthββ¬βhitsββ
β Abducens_palsy β 2016-05-01 β 28 β
β Akahori_Station β 2016-06-01 β 11 β
β April_25,_2017 β 2018-01-01 β 2 β
βββββββββββββββββββ΄βββββββββββββ΄βββββββ
Specifying parent object key values {#specifying-parent-object-key-values}
Let's say we also want to save values in parent object keys to the table. In this case, we can use the
following option
to define the name of the column we want key values to be saved to:
sql
SET format_json_object_each_row_column_for_object_name = 'id'
Now, we can check which data is going to be loaded from the original JSON file using the
file()
function:
sql
SELECT * FROM file('objects.json', JSONObjectEachRow)
response
ββidββ¬βpathβββββββββββββ¬ββββββmonthββ¬βhitsββ
β a β April_25,_2017 β 2018-01-01 β 2 β
β b β Akahori_Station β 2016-06-01 β 11 β
β c β Abducens_palsy β 2016-05-01 β 28 β
ββββββ΄ββββββββββββββββββ΄βββββββββββββ΄βββββββ
Note how the
id
column has been populated by key values correctly.
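The way `JSONObjectEachRow` maps object properties to rows — and, with the setting above, property names to an `id` column — can be sketched in Python (illustrative only):

```python
import json

doc = json.loads("""{
  "a": {"path": "April_25,_2017", "month": "2018-01-01", "hits": 2},
  "b": {"path": "Akahori_Station", "month": "2016-06-01", "hits": 11}
}""")

# Each top-level property becomes a row; with
# format_json_object_each_row_column_for_object_name = 'id',
# the property name is stored in an extra 'id' column.
rows = [{"id": key, **value} for key, value in doc.items()]

print(rows[0]["id"], rows[0]["path"])  # a April_25,_2017
```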
JSON arrays {#json-arrays}
Sometimes, for the sake of saving space, JSON files are encoded in arrays instead of objects. In this case, we deal with a
list of JSON arrays
:
bash
cat arrays.json
response
["Akiba_Hebrew_Academy", "2017-08-01", 241],
["Aegithina_tiphia", "2018-02-01", 34],
["1971-72_Utah_Stars_season", "2016-10-01", 1]
In this case, ClickHouse will load this data and attribute each value to the corresponding column based on its order in the array. We use
JSONCompactEachRow
format for this:
sql
SELECT * FROM file('arrays.json', JSONCompactEachRow)
response
ββc1βββββββββββββββββββββββββ¬βββββββββc2ββ¬ββc3ββ
β Akiba_Hebrew_Academy β 2017-08-01 β 241 β
β Aegithina_tiphia β 2018-02-01 β 34 β
β 1971-72_Utah_Stars_season β 2016-10-01 β 1 β
βββββββββββββββββββββββββββββ΄βββββββββββββ΄ββββββ
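The positional mapping that `JSONCompactEachRow` performs can be sketched in Python (illustrative only; the trailing commas mirror the `arrays.json` sample above):

```python
import json

arrays = (
    '["Akiba_Hebrew_Academy", "2017-08-01", 241],\n'
    '["Aegithina_tiphia", "2018-02-01", 34],\n'
    '["1971-72_Utah_Stars_season", "2016-10-01", 1]\n'
)

# Values are matched to columns strictly by position, so the
# column order must follow the table definition.
columns = ["path", "month", "hits"]
rows = [
    dict(zip(columns, json.loads(line.rstrip(","))))
    for line in arrays.splitlines()
]

print(rows[0])  # {'path': 'Akiba_Hebrew_Academy', 'month': '2017-08-01', 'hits': 241}
```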
Importing individual columns from JSON arrays {#importing-individual-columns-from-json-arrays}
In some cases, data can be encoded column-wise instead of row-wise. In this case, a parent JSON object contains columns with values. Take a look at the
following file
:
bash
cat columns.json
response
{
"path": ["2007_Copa_America", "Car_dealerships_in_the_USA", "Dihydromyricetin_reductase"],
"month": ["2016-07-01", "2015-07-01", "2015-07-01"],
"hits": [178, 11, 1]
}
ClickHouse uses the
JSONColumns
format to parse data formatted like that:
sql
SELECT * FROM file('columns.json', JSONColumns) | {"source_file": "formats.md"} | [
0.0019304052693769336,
-0.03412023186683655,
-0.06191221997141838,
0.09916286170482635,
-0.11427527666091919,
0.005901035387068987,
-0.008605200797319412,
0.01456056535243988,
-0.019758742302656174,
0.02168957144021988,
-0.024846967309713364,
-0.008582697249948978,
0.01852305792272091,
-0.... |
c468f9e7-0fdf-4363-8fca-b1301afcbb64 | ClickHouse uses the
JSONColumns
format to parse data formatted like that:
sql
SELECT * FROM file('columns.json', JSONColumns)
response
ββpathββββββββββββββββββββββββ¬ββββββmonthββ¬βhitsββ
β 2007_Copa_America β 2016-07-01 β 178 β
β Car_dealerships_in_the_USA β 2015-07-01 β 11 β
β Dihydromyricetin_reductase β 2015-07-01 β 1 β
ββββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββ
A more compact format is also supported when dealing with an
array of columns
instead of an object using
JSONCompactColumns
format:
sql
SELECT * FROM file('columns-array.json', JSONCompactColumns)
response
ββc1βββββββββββββββ¬βββββββββc2ββ¬βc3ββ
β Heidenrod β 2017-01-01 β 10 β
β Arthur_Henrique β 2016-11-01 β 12 β
β Alan_Ebnother β 2015-11-01 β 66 β
βββββββββββββββββββ΄βββββββββββββ΄βββββ
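The column-oriented layouts above pivot back into rows straightforwardly; a Python sketch of what `JSONColumns` parsing amounts to (illustrative only):

```python
# One object whose keys are column names and whose values are
# parallel arrays of column values, as in columns.json above.
columns = {
    "path": ["2007_Copa_America", "Car_dealerships_in_the_USA"],
    "month": ["2016-07-01", "2015-07-01"],
    "hits": [178, 11],
}

# Zip the parallel arrays back into one dict per row.
rows = [dict(zip(columns, values)) for values in zip(*columns.values())]

print(rows[0]["path"], rows[0]["hits"])  # 2007_Copa_America 178
```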
Saving JSON objects instead of parsing {#saving-json-objects-instead-of-parsing}
There are cases where you might want to save JSON objects to a single
String
(or
JSON
) column instead of parsing it. This can be useful when dealing with a list of JSON objects of different structures. Let's take
this file
for example, where we have multiple different JSON objects inside a parent list:
bash
cat custom.json
response
[
{"name": "Joe", "age": 99, "type": "person"},
{"url": "/my.post.MD", "hits": 1263, "type": "post"},
{"message": "Warning on disk usage", "type": "log"}
]
We want to save original JSON objects into the following table:
sql
CREATE TABLE events
(
`data` String
)
ENGINE = MergeTree
ORDER BY ()
Now we can load data from the file into this table using
JSONAsString
format to keep JSON objects instead of parsing them:
sql
INSERT INTO events (data)
FROM INFILE 'custom.json'
FORMAT JSONAsString
And we can use
JSON functions
to query saved objects:
sql
SELECT
JSONExtractString(data, 'type') AS type,
data
FROM events
response
ββtypeββββ¬βdataββββββββββββββββββββββββββββββββββββββββββββββββββ
β person β {"name": "Joe", "age": 99, "type": "person"} β
β post β {"url": "/my.post.MD", "hits": 1263, "type": "post"} β
β log β {"message": "Warning on disk usage", "type": "log"} β
ββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Note that
JSONAsString
works perfectly fine in cases where we have JSON object-per-line formatted files (usually used with
JSONEachRow
format).
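Querying raw stored JSON, as `JSONExtractString` does above, amounts to parsing on read; a Python sketch of the idea (illustrative only):

```python
import json

# Rows stored as raw JSON strings in a single 'data' column,
# as produced by the JSONAsString format above.
events = [
    '{"name": "Joe", "age": 99, "type": "person"}',
    '{"url": "/my.post.MD", "hits": 1263, "type": "post"}',
    '{"message": "Warning on disk usage", "type": "log"}',
]

# The rough equivalent of JSONExtractString(data, 'type').
types = [json.loads(data).get("type", "") for data in events]

print(types)  # ['person', 'post', 'log']
```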
Schema for nested objects {#schema-for-nested-objects}
In cases when we're dealing with
nested JSON objects
, we can additionally define an explicit schema and use complex types (
Array
,
JSON
or
Tuple
) to load data:
sql
SELECT *
FROM file('list-nested.json', JSONEachRow, 'page Tuple(path String, title String, owner_id UInt16), month Date, hits UInt32')
LIMIT 1
response
ββpageββββββββββββββββββββββββββββββββββββββββββββββββ¬ββββββmonthββ¬βhitsββ
β ('Akiba_Hebrew_Academy','Akiba Hebrew Academy',12) β 2017-08-01 β 241 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββ | {"source_file": "formats.md"} | [
-0.004212064202874899,
-0.04199429228901863,
-0.045783232897520065,
0.050862688571214676,
-0.03947319835424423,
-0.020649198442697525,
-0.01342195924371481,
0.017875056713819504,
-0.03700365498661995,
-0.003105613635852933,
0.0506904236972332,
0.027294069528579712,
-0.01831665076315403,
-0... |
a808e5ee-bbb5-49d1-838d-9329ddf3c488 | Accessing nested JSON objects {#accessing-nested-json-objects}
We can refer to
nested JSON keys
by enabling the
following settings option
:
sql
SET input_format_import_nested_json = 1
This allows us to refer to nested JSON object keys using dot notation (remember to wrap them in backticks):
sql
SELECT *
FROM file('list-nested.json', JSONEachRow, '`page.owner_id` UInt32, `page.title` String, month Date, hits UInt32')
LIMIT 1
results
ββpage.owner_idββ¬βpage.titleββββββββββββ¬ββββββmonthββ¬βhitsββ
β 12 β Akiba Hebrew Academy β 2017-08-01 β 241 β
βββββββββββββββββ΄βββββββββββββββββββββββ΄βββββββββββββ΄βββββββ
This way we can flatten nested JSON objects or use some nested values to save them as separate columns.
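The flattening that dot-notation columns imply can be sketched with a small recursive helper (illustrative only — ClickHouse performs this natively):

```python
def flatten(obj, prefix=""):
    """Flatten nested dicts into dot-separated keys, mirroring how
    `page.owner_id`-style columns address nested JSON keys."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

row = {"page": {"owner_id": 12, "title": "Akiba Hebrew Academy"},
       "month": "2017-08-01", "hits": 241}
print(flatten(row)["page.owner_id"])  # 12
```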
Skipping unknown columns {#skipping-unknown-columns}
By default, ClickHouse will ignore unknown columns when importing JSON data. Let's try to import the original file into the table without the
month
column:
sql
CREATE TABLE shorttable
(
`path` String,
`hits` UInt32
)
ENGINE = MergeTree
ORDER BY path
We can still insert the
original JSON data
with 3 columns into this table:
sql
INSERT INTO shorttable FROM INFILE 'list.json' FORMAT JSONEachRow;
SELECT * FROM shorttable
response
ββpathβββββββββββββββββββββββ¬βhitsββ
β 1971-72_Utah_Stars_season β 1 β
β Aegithina_tiphia β 34 β
β Akiba_Hebrew_Academy β 241 β
βββββββββββββββββββββββββββββ΄βββββββ
ClickHouse will ignore unknown columns while importing. This can be disabled with the
input_format_skip_unknown_fields
settings option:
sql
SET input_format_skip_unknown_fields = 0;
INSERT INTO shorttable FROM INFILE 'list.json' FORMAT JSONEachRow;
response
Ok.
Exception on client:
Code: 117. DB::Exception: Unknown field found while parsing JSONEachRow format: month: (in file/uri /data/clickhouse/user_files/list.json): (at row 1)
ClickHouse will throw an exception when the JSON structure is inconsistent with the table columns.
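The two behaviours — silently dropping unknown keys, or raising when `input_format_skip_unknown_fields = 0` — can be sketched as (illustrative only):

```python
TABLE_COLUMNS = {"path", "hits"}

def project(row, skip_unknown_fields=True):
    # Mirrors input_format_skip_unknown_fields: drop extra keys
    # when enabled, raise an error when disabled.
    unknown = set(row) - TABLE_COLUMNS
    if unknown and not skip_unknown_fields:
        raise ValueError(f"Unknown field(s): {sorted(unknown)}")
    return {k: v for k, v in row.items() if k in TABLE_COLUMNS}

row = {"path": "Akiba_Hebrew_Academy", "month": "2017-08-01", "hits": 241}
print(project(row))  # {'path': 'Akiba_Hebrew_Academy', 'hits': 241}
```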
BSON {#bson}
ClickHouse allows exporting to and importing data from
BSON
encoded files. This format is used by some DBMSs, e.g.
MongoDB
database.
To import BSON data, we use the
BSONEachRow
format. Let's import data from
this BSON file
:
sql
SELECT * FROM file('data.bson', BSONEachRow)
response
ββpathβββββββββββββββββββββββ¬βmonthββ¬βhitsββ
β Bob_Dolman β 17106 β 245 β
β 1-krona β 17167 β 4 β
β Ahmadabad-e_Kalij-e_Sofla β 17167 β 3 β
βββββββββββββββββββββββββββββ΄ββββββββ΄βββββββ
We can also export to BSON files using the same format:
sql
SELECT *
FROM sometable
INTO OUTFILE 'out.bson'
FORMAT BSONEachRow
After that, we'll have our data exported to the
out.bson
file. | {"source_file": "formats.md"} | [
-0.045817434787750244,
0.011283244006335735,
-0.04140792414546013,
0.07549341768026352,
-0.015084123238921165,
-0.015828516334295273,
-0.026409262791275978,
-0.001003777259029448,
-0.11422248184680939,
0.052470311522483826,
0.08487416058778763,
0.028845377266407013,
0.0605715848505497,
-0.... |
ad89c4bc-4406-44d5-b41b-5f7f821f07cf | title: 'JSON schema inference'
slug: /integrations/data-formats/json/inference
description: 'How to use JSON schema inference'
keywords: ['json', 'schema', 'inference', 'schema inference']
doc_type: 'guide'
ClickHouse can automatically determine the structure of JSON data. This can be used to query JSON data directly e.g. on disk with
clickhouse-local
or S3 buckets, and/or automatically create schemas prior to loading the data into ClickHouse.
When to use type inference {#when-to-use-type-inference}
Consistent structure
- The data from which you are going to infer types contains all the keys that you are interested in. Type inference is based on sampling the data up to a
maximum number of rows
or
bytes
. Columns that appear only after the sampled data will be ignored and can't be queried.
Consistent types
- Data types for specific keys need to be compatible i.e. it must be possible to coerce one type to the other automatically.
If you have more dynamic JSON, to which new keys are added and multiple types are possible for the same path, see
"Working with semi-structured and dynamic data"
.
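To make the idea concrete, here is a deliberately tiny sketch of sampling-based type inference in Python — not ClickHouse's actual algorithm, just the principle of picking the narrowest type compatible with every sampled value:

```python
from datetime import date

def infer_type(values):
    """Pick the narrowest type that fits every non-null sample."""
    seen = set()
    for v in values:
        if v is None:
            continue
        if isinstance(v, bool):
            seen.add("Bool")
        elif isinstance(v, int):
            seen.add("Int64")
        elif isinstance(v, float):
            seen.add("Float64")
        else:
            try:
                date.fromisoformat(v)
                seen.add("Date")
            except (TypeError, ValueError):
                seen.add("String")
    if seen == {"Int64", "Float64"}:
        return "Float64"  # integers coerce to floats
    if len(seen) == 1:
        return seen.pop()
    return "String"  # incompatible types fall back to String

print(infer_type(["2022-11-07", "2007-05-23"]))  # Date
print(infer_type([178, 11, 1.5]))                # Float64
```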
Detecting types {#detecting-types}
The following assumes the JSON is consistently structured and has a single type for each path.
Our previous examples used a simple version of the
Python PyPI dataset
in
NDJSON
format. In this section, we explore a more complex dataset with nested structures - the
arXiv dataset
containing 2.5m scholarly papers. Each row in this dataset, distributed as
NDJSON
, represents a published academic paper. An example row is shown below:
json
{
"id": "2101.11408",
"submitter": "Daniel Lemire",
"authors": "Daniel Lemire",
"title": "Number Parsing at a Gigabyte per Second",
"comments": "Software at https://github.com/fastfloat/fast_float and\n https://github.com/lemire/simple_fastfloat_benchmark/",
"journal-ref": "Software: Practice and Experience 51 (8), 2021",
"doi": "10.1002/spe.2984",
"report-no": null,
"categories": "cs.DS cs.MS",
"license": "http://creativecommons.org/licenses/by/4.0/",
"abstract": "With disks and networks providing gigabytes per second ....\n",
"versions": [
{
"created": "Mon, 11 Jan 2021 20:31:27 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Jan 2021 23:57:29 GMT",
"version": "v2"
}
],
"update_date": "2022-11-07",
"authors_parsed": [
[
"Lemire",
"Daniel",
""
]
]
}
This data requires a far more complex schema than previous examples. We outline the process of defining this schema below, introducing complex types such as
Tuple
and
Array
.
This dataset is stored in a public S3 bucket at
s3://datasets-documentation/arxiv/arxiv.json.gz
. | {"source_file": "inference.md"} | [
-0.030417386442422867,
-0.06538518518209457,
0.000052936291467631236,
0.011287105269730091,
-0.00961039587855339,
-0.017023297026753426,
-0.031498510390520096,
0.012870402075350285,
-0.01758614182472229,
-0.018804216757416725,
0.005173585843294859,
-0.02407781593501568,
-0.003674391889944672... |
88420fc2-e4d5-4cf4-b555-7e99018ad248 | This dataset is stored in a public S3 bucket at
s3://datasets-documentation/arxiv/arxiv.json.gz
.
You can see that the dataset above contains nested JSON objects. While users should draft and version their schemas, inference allows types to be inferred from the data. This allows the schema DDL to be auto-generated, avoiding the need to build it manually and accelerating the development process.
:::note Auto format detection
As well as detecting the schema, JSON schema inference will automatically infer the format of the data from the file extension and contents. The above file is detected as being NDJSON automatically as a result.
:::
Using the
s3 function
with the
DESCRIBE
command shows the types that will be inferred.
sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/arxiv/arxiv.json.gz')
SETTINGS describe_compact_output = 1
response
ββnameββββββββββββ¬βtypeβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β id β Nullable(String) β
β submitter β Nullable(String) β
β authors β Nullable(String) β
β title β Nullable(String) β
β comments β Nullable(String) β
β journal-ref β Nullable(String) β
β doi β Nullable(String) β
β report-no β Nullable(String) β
β categories β Nullable(String) β
β license β Nullable(String) β
β abstract β Nullable(String) β
β versions β Array(Tuple(created Nullable(String),version Nullable(String))) β
β update_date β Nullable(Date) β
β authors_parsed β Array(Array(Nullable(String))) β
ββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
:::note Avoid nulls
You can see a lot of the columns are detected as Nullable. We
do not recommend using the Nullable
type when not absolutely needed. You can use
schema_inference_make_columns_nullable
to control the behavior of when Nullable is applied.
:::
We can see that most columns have automatically been detected as
String
, with
update_date
column correctly detected as a
Date
. The
versions
column has been created as an
Array(Tuple(created String, version String))
to store a list of objects, with
authors_parsed
being defined as
Array(Array(String))
for nested arrays. | {"source_file": "inference.md"} | [
-0.08407368510961533,
-0.03241639584302902,
-0.07839138060808182,
0.0016806666972115636,
0.04338701814413071,
-0.03801577165722847,
-0.05081552267074585,
-0.05702805519104004,
0.013466068543493748,
-0.0022885578218847513,
0.01445853617042303,
0.02147507853806019,
-0.020382672548294067,
0.0... |
51b8812f-6952-491a-a807-853290d87634 | :::note Controlling type detection
The auto-detection of dates and datetimes can be controlled through the settings
input_format_try_infer_dates
and
input_format_try_infer_datetimes
respectively (both enabled by default). The inference of objects as tuples is controlled by the setting
input_format_json_try_infer_named_tuples_from_objects
. Other settings which control schema inference for JSON, such as the auto-detection of numbers, can be found
here
.
:::
Querying JSON {#querying-json}
The following assumes the JSON is consistently structured and has a single type for each path.
We can rely on schema inference to query JSON data in place. Below, we find the top authors for each year, exploiting the fact that dates and arrays are automatically detected.
```sql
SELECT
toYear(update_date) AS year,
authors,
count() AS c
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/arxiv/arxiv.json.gz')
GROUP BY
year,
authors
ORDER BY
year ASC,
c DESC
LIMIT 1 BY year
ββyearββ¬βauthorsβββββββββββββββββββββββββββββββββββββ¬βββcββ
β 2007 β The BABAR Collaboration, B. Aubert, et al β 98 β
β 2008 β The OPAL collaboration, G. Abbiendi, et al β 59 β
β 2009 β Ashoke Sen β 77 β
β 2010 β The BABAR Collaboration, B. Aubert, et al β 117 β
β 2011 β Amelia Carolina Sparavigna β 21 β
β 2012 β ZEUS Collaboration β 140 β
β 2013 β CMS Collaboration β 125 β
β 2014 β CMS Collaboration β 87 β
β 2015 β ATLAS Collaboration β 118 β
β 2016 β ATLAS Collaboration β 126 β
β 2017 β CMS Collaboration β 122 β
β 2018 β CMS Collaboration β 138 β
β 2019 β CMS Collaboration β 113 β
β 2020 β CMS Collaboration β 94 β
β 2021 β CMS Collaboration β 69 β
β 2022 β CMS Collaboration β 62 β
β 2023 β ATLAS Collaboration β 128 β
β 2024 β ATLAS Collaboration β 120 β
ββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββ΄ββββββ
18 rows in set. Elapsed: 20.172 sec. Processed 2.52 million rows, 1.39 GB (124.72 thousand rows/s., 68.76 MB/s.)
```
Schema inference allows us to query JSON files without needing to specify the schema, accelerating ad-hoc data analysis tasks.
Creating tables {#creating-tables}
We can rely on schema inference to create the schema for a table. The following
CREATE AS EMPTY
command causes the DDL for the table to be inferred and the table to be created. This does not load any data:
sql
CREATE TABLE arxiv
ENGINE = MergeTree
ORDER BY update_date EMPTY
AS SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/arxiv/arxiv.json.gz')
SETTINGS schema_inference_make_columns_nullable = 0 | {"source_file": "inference.md"} | [
-0.01753535307943821,
-0.022249985486268997,
-0.03269154950976372,
0.048947062343358994,
-0.024818433448672295,
0.0106349540874362,
-0.03058277629315853,
-0.05956033617258072,
0.021011872217059135,
-0.031223643571138382,
-0.01924831047654152,
-0.02139125019311905,
0.021988768130540848,
0.0... |
f9821876-ebb7-44c4-b344-7f699b384264 | To confirm the table schema, we use the
SHOW CREATE TABLE
command:
```sql
SHOW CREATE TABLE arxiv
CREATE TABLE arxiv
(
    `id` String,
    `submitter` String,
    `authors` String,
    `title` String,
    `comments` String,
    `journal-ref` String,
    `doi` String,
    `report-no` String,
    `categories` String,
    `license` String,
    `abstract` String,
    `versions` Array(Tuple(created String, version String)),
    `update_date` Date,
    `authors_parsed` Array(Array(String))
)
ENGINE = MergeTree
ORDER BY update_date
```
The above is the correct schema for this data. Schema inference is based on sampling the data and reading the data row by row. Column values are extracted according to the format, with recursive parsers and heuristics used to determine the type for each value. The maximum number of rows and bytes read from the data in schema inference is controlled by the settings
input_format_max_rows_to_read_for_schema_inference
(25000 by default) and
input_format_max_bytes_to_read_for_schema_inference
(32MB by default). In the event detection is not correct, users can provide hints as described
here
.
Creating tables from snippets {#creating-tables-from-snippets}
The above example uses a file on S3 to create the table schema. Users may wish to create a schema from a single-row snippet. This can be achieved using the
format
function as shown below:
```sql
CREATE TABLE arxiv
ENGINE = MergeTree
ORDER BY update_date EMPTY
AS SELECT *
FROM format(JSONEachRow, '{"id":"2101.11408","submitter":"Daniel Lemire","authors":"Daniel Lemire","title":"Number Parsing at a Gigabyte per Second","comments":"Software at https://github.com/fastfloat/fast_float and","doi":"10.1002/spe.2984","report-no":null,"categories":"cs.DS cs.MS","license":"http://creativecommons.org/licenses/by/4.0/","abstract":"With disks and networks providing gigabytes per second ","versions":[{"created":"Mon, 11 Jan 2021 20:31:27 GMT","version":"v1"},{"created":"Sat, 30 Jan 2021 23:57:29 GMT","version":"v2"}],"update_date":"2022-11-07","authors_parsed":[["Lemire","Daniel",""]]}') SETTINGS schema_inference_make_columns_nullable = 0
SHOW CREATE TABLE arxiv
CREATE TABLE arxiv
(
    `id` String,
    `submitter` String,
    `authors` String,
    `title` String,
    `comments` String,
    `doi` String,
    `report-no` String,
    `categories` String,
    `license` String,
    `abstract` String,
    `versions` Array(Tuple(created String, version String)),
    `update_date` Date,
    `authors_parsed` Array(Array(String))
)
ENGINE = MergeTree
ORDER BY update_date
```
Loading JSON data {#loading-json-data}
The following assumes the JSON is consistently structured and has a single type for each path.
The previous commands created a table to which data can be loaded. You can now insert the data into your table using the following
INSERT INTO SELECT
: | {"source_file": "inference.md"} | [
0.054981693625450134,
-0.044468577951192856,
-0.027722764760255814,
-0.013744063675403595,
-0.037857141345739365,
-0.058372076600790024,
-0.05244205519556999,
0.0011792370351031423,
-0.03187568113207817,
0.08231606334447861,
0.024259284138679504,
-0.025173602625727654,
0.04053442180156708,
... |
46d40555-2290-4880-933d-3c9c9b0d2a3f | The previous commands created a table to which data can be loaded. You can now insert the data into your table using the following
INSERT INTO SELECT
:
```sql
INSERT INTO arxiv SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/arxiv/arxiv.json.gz')
0 rows in set. Elapsed: 38.498 sec. Processed 2.52 million rows, 1.39 GB (65.35 thousand rows/s., 36.03 MB/s.)
Peak memory usage: 870.67 MiB.
```
For examples of loading data from other sources e.g. file, see
here
.
Once loaded, we can query our data, optionally using the format
PrettyJSONEachRow
to show the rows in their original structure:
```sql
SELECT *
FROM arxiv
LIMIT 1
FORMAT PrettyJSONEachRow
{
"id": "0704.0004",
"submitter": "David Callan",
"authors": "David Callan",
"title": "A determinant of Stirling cycle numbers counts unlabeled acyclic",
"comments": "11 pages",
"journal-ref": "",
"doi": "",
"report-no": "",
"categories": "math.CO",
"license": "",
"abstract": " We show that a determinant of Stirling cycle numbers counts unlabeled acyclic\nsingle-source automata.",
"versions": [
{
"created": "Sat, 31 Mar 2007 03:16:14 GMT",
"version": "v1"
}
],
"update_date": "2007-05-23",
"authors_parsed": [
[
"Callan",
"David"
]
]
}
1 row in set. Elapsed: 0.009 sec.
```
Handling errors {#handling-errors}
Sometimes, you might have bad data. For example, specific columns that do not have the right type or an improperly formatted JSON object. For this, you can use the settings
input_format_allow_errors_num
and
input_format_allow_errors_ratio
to allow a certain number of rows to be ignored if the data is triggering insert errors. Additionally,
hints
can be provided to assist inference.
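The effect of `input_format_allow_errors_num` can be sketched as a loader that tolerates up to N malformed rows before giving up (illustrative only):

```python
import json

def load(lines, allow_errors_num=0):
    # Mirrors input_format_allow_errors_num: skip malformed rows
    # until the error budget is exhausted, then re-raise.
    rows, errors = [], 0
    for line in lines:
        try:
            rows.append(json.loads(line))
        except json.JSONDecodeError:
            errors += 1
            if errors > allow_errors_num:
                raise
    return rows

lines = ['{"hits": 1}', "{broken", '{"hits": 2}']
print(len(load(lines, allow_errors_num=1)))  # 2
```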
## Working with semi-structured and dynamic data {#working-with-semi-structured-data}
Our previous example used JSON which was static, with well-known key names and types. This is often not the case: keys can be added, or their types can change. This is common in use cases such as observability data.
ClickHouse handles this through a dedicated JSON type.
If you know your JSON is highly dynamic, with many unique keys and multiple types for the same keys, we recommend not using schema inference with JSONEachRow to try and infer a column for each key, even if the data is in newline-delimited JSON format.
Consider the following example from an extended version of the Python PyPI dataset. Here we have added an arbitrary tags column with random key-value pairs.
```json
{
  "date": "2022-09-22",
  "country_code": "IN",
  "project": "clickhouse-connect",
  "type": "bdist_wheel",
  "installer": "bandersnatch",
  "python_minor": "",
  "system": "",
  "version": "0.2.8",
  "tags": {
    "5gTux": "f3to*PMvaTYZsz!*rtzX1",
    "nD8CV": "value"
  }
}
```
A sample of this data is publicly available in newline-delimited JSON format. If we attempt schema inference on this file, performance is poor, with an extremely verbose response:
```sql
DESCRIBE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/pypi_with_tags/sample_rows.json.gz')
-- result omitted for brevity
9 rows in set. Elapsed: 127.066 sec.
```
The primary issue here is that the JSONEachRow format is used for inference. This attempts to infer a column type per key in the JSON, effectively trying to apply a static schema to the data without using the JSON type.
With thousands of unique columns, this approach to inference is slow. As an alternative, users can use the JSONAsObject format. JSONAsObject treats the entire input as a single JSON object and stores it in a single column of type JSON, making it better suited for highly dynamic or nested JSON payloads.
```sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/pypi/pypi_with_tags/sample_rows.json.gz', 'JSONAsObject')
SETTINGS describe_compact_output = 1
ββnameββ¬βtypeββ
β json β JSON β
ββββββββ΄βββββββ
1 row in set. Elapsed: 0.005 sec.
```
This format is also essential in cases where columns have multiple types that cannot be reconciled. For example, consider a sample.json file with the following newline-delimited JSON:
```json
{"a":1}
{"a":"22"}
```
In this case, ClickHouse is able to coerce the type collision and resolve the column a as a Nullable(String).
```sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/json/sample.json')
SETTINGS describe_compact_output = 1
ββnameββ¬βtypeββββββββββββββ
β a β Nullable(String) β
ββββββββ΄βββββββββββββββββββ
1 row in set. Elapsed: 0.081 sec.
```
:::note Type coercion
This type coercion can be controlled through a number of settings. The above example depends on the setting input_format_json_read_numbers_as_strings.
:::
However, some types are incompatible. Consider the following example:
```json
{"a":1}
{"a":{"b":2}}
```
In this case, no form of type conversion is possible. A DESCRIBE command thus fails:
```sql
DESCRIBE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/json/conflict_sample.json')
Elapsed: 0.755 sec.
Received exception from server (version 24.12.1):
Code: 636. DB::Exception: Received from sql-clickhouse.clickhouse.com:9440. DB::Exception: The table structure cannot be extracted from a JSON format file. Error:
Code: 53. DB::Exception: Automatically defined type Tuple(b Int64) for column 'a' in row 1 differs from type defined by previous rows: Int64. You can specify the type for this column using setting schema_inference_hints.
```
In this case, JSONAsObject considers each row as a single JSON type (which supports the same column having multiple types). This is essential:
```sql
DESCRIBE TABLE s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/json/conflict_sample.json', JSONAsObject)
SETTINGS enable_json_type = 1, describe_compact_output = 1
ββnameββ¬βtypeββ
β json β JSON β
ββββββββ΄βββββββ
1 row in set. Elapsed: 0.010 sec.
```
## Further reading {#further-reading}
To learn more about data type inference, you can refer to this documentation page.
title: 'Exporting JSON'
slug: /integrations/data-formats/json/exporting
description: 'How to export JSON data from ClickHouse'
keywords: ['json', 'clickhouse', 'formats', 'exporting']
doc_type: 'guide'
# Exporting JSON
Almost any JSON format used for import can be used for export as well. The most popular is JSONEachRow:
```sql
SELECT * FROM sometable FORMAT JSONEachRow
```
```response
{"path":"Bob_Dolman","month":"2016-11-01","hits":245}
{"path":"1-krona","month":"2017-01-01","hits":4}
{"path":"Ahmadabad-e_Kalij-e_Sofla","month":"2017-01-01","hits":3}
```
Or we can use JSONCompactEachRow to save disk space by skipping column names:
```sql
SELECT * FROM sometable FORMAT JSONCompactEachRow
```
```response
["Bob_Dolman", "2016-11-01", 245]
["1-krona", "2017-01-01", 4]
["Ahmadabad-e_Kalij-e_Sofla", "2017-01-01", 3]
```
## Overriding data types as strings {#overriding-data-types-as-strings}
ClickHouse respects data types and will export JSON according to the standard. But in cases where we need all values encoded as strings, we can use the JSONStringsEachRow format:
```sql
SELECT * FROM sometable FORMAT JSONStringsEachRow
```
```response
{"path":"Bob_Dolman","month":"2016-11-01","hits":"245"}
{"path":"1-krona","month":"2017-01-01","hits":"4"}
{"path":"Ahmadabad-e_Kalij-e_Sofla","month":"2017-01-01","hits":"3"}
```
Now, the hits numeric column is encoded as a string. Exporting as strings is supported for all JSON* formats; just explore the JSONStrings\* and JSONCompactStrings\* formats:
```sql
SELECT * FROM sometable FORMAT JSONCompactStringsEachRow
```
```response
["Bob_Dolman", "2016-11-01", "245"]
["1-krona", "2017-01-01", "4"]
["Ahmadabad-e_Kalij-e_Sofla", "2017-01-01", "3"]
```
## Exporting metadata together with data {#exporting-metadata-together-with-data}
The general JSON format, which is popular in apps, will export not only the resulting data but also column types and query statistics:
```sql
SELECT * FROM sometable FORMAT JSON
```
```response
{
"meta":
[
{
"name": "path",
"type": "String"
},
...
],
"data":
[
{
"path": "Bob_Dolman",
"month": "2016-11-01",
"hits": 245
},
...
],
"rows": 3,
"statistics":
{
"elapsed": 0.000497457,
"rows_read": 3,
"bytes_read": 87
}
}
```
The JSONCompact format will print the same metadata but use a compacted form for the data itself:
```sql
SELECT * FROM sometable FORMAT JSONCompact
```
```response
{
"meta":
[
{
"name": "path",
"type": "String"
},
...
],
"data":
[
["Bob_Dolman", "2016-11-01", 245],
["1-krona", "2017-01-01", 4],
["Ahmadabad-e_Kalij-e_Sofla", "2017-01-01", 3]
],
  "rows": 3,
"statistics":
{
"elapsed": 0.00074981,
"rows_read": 3,
"bytes_read": 87
}
}
```
Consider the JSONStrings or JSONCompactStrings variants to encode all values as strings.
## Compact way to export JSON data and structure {#compact-way-to-export-json-data-and-structure}
A more efficient way to export data, as well as its structure, is to use the JSONCompactEachRowWithNamesAndTypes format:
```sql
SELECT * FROM sometable FORMAT JSONCompactEachRowWithNamesAndTypes
```
```response
["path", "month", "hits"]
["String", "Date", "UInt32"]
["Bob_Dolman", "2016-11-01", 245]
["1-krona", "2017-01-01", 4]
["Ahmadabad-e_Kalij-e_Sofla", "2017-01-01", 3]
```
This will use a compact JSON format prepended by two header rows with column names and types. This format can then be used to ingest data into another ClickHouse instance (or other apps).
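For example, a minimal round-trip between two instances might look like the following sketch. The hostname is illustrative, and the target table is assumed to already exist with the same schema:

```bash
# Export with names and types, then replay the stream into another instance
clickhouse-client --query "SELECT * FROM sometable FORMAT JSONCompactEachRowWithNamesAndTypes" > dump.json
clickhouse-client --host target-host --query "INSERT INTO sometable FORMAT JSONCompactEachRowWithNamesAndTypes" < dump.json
```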
## Exporting JSON to a file {#exporting-json-to-a-file}
To save exported JSON data to a file, we can use an INTO OUTFILE clause:
```sql
SELECT * FROM sometable INTO OUTFILE 'out.json' FORMAT JSONEachRow
```
```response
36838935 rows in set. Elapsed: 2.220 sec. Processed 36.84 million rows, 1.27 GB (16.60 million rows/s., 572.47 MB/s.)
```
It took ClickHouse only 2 seconds to export almost 37 million records to a JSON file. We can also export using a COMPRESSION clause to enable compression on the fly:
```sql
SELECT * FROM sometable INTO OUTFILE 'out.json.gz' COMPRESSION 'gzip' FORMAT JSONEachRow
```
```response
36838935 rows in set. Elapsed: 22.680 sec. Processed 36.84 million rows, 1.27 GB (1.62 million rows/s., 56.02 MB/s.)
```
It takes more time to accomplish, but generates a much smaller compressed file:
```bash
2.2G out.json
576M out.json.gz
```
title: 'Designing JSON schema'
slug: /integrations/data-formats/json/schema
description: 'How to optimally design JSON schemas'
keywords: ['json', 'clickhouse', 'inserting', 'loading', 'formats', 'schema', 'structured', 'semi-structured']
score: 20
doc_type: 'guide'
import Image from '@theme/IdealImage';
import json_column_per_type from '@site/static/images/integrations/data-ingestion/data-formats/json_column_per_type.png';
import json_offsets from '@site/static/images/integrations/data-ingestion/data-formats/json_offsets.png';
import shared_json_column from '@site/static/images/integrations/data-ingestion/data-formats/json_shared_column.png';
# Designing your schema
While schema inference can be used to establish an initial schema for JSON data and to query JSON data files in place, e.g. in S3, users should aim to establish an optimized, versioned schema for their data. We discuss the recommended approach for modeling JSON structures below.
## Static vs dynamic JSON {#static-vs-dynamic-json}
The principal task in defining a schema for JSON is to determine the appropriate type for each key's value. We recommend users apply the following rules recursively to each key in the JSON hierarchy.
- Primitive types - If the key's value is a primitive type, irrespective of whether it is part of a sub-object or on the root, ensure you select its type according to general schema design best practices and type optimization rules. Arrays of primitives, such as phone_numbers below, can be modeled as Array(&lt;type&gt;), e.g. Array(String).
- Static vs dynamic - If the key's value is a complex object, i.e. either an object or an array of objects, establish whether it is subject to change. Objects that rarely have new keys, where the addition of a new key can be predicted and handled with a schema change via ALTER TABLE ADD COLUMN, can be considered static. This includes objects where only a subset of the keys may be provided in some JSON documents. Objects where new keys are added frequently and/or are not predictable should be considered dynamic. The exception here is structures with hundreds or thousands of sub-keys, which can be considered dynamic for convenience.
To establish whether a value is static or dynamic, see the relevant sections Handling static objects and Handling dynamic objects below.
Important: The above rules should be applied recursively. If a key's value is determined to be dynamic, no further evaluation is required, and the guidelines in Handling dynamic objects can be followed. If the object is static, continue to assess the subkeys until either the key values are primitive or dynamic keys are encountered.
To illustrate these rules, we use the following JSON example representing a person:
json
{
"id": 1,
"name": "Clicky McCliickHouse",
"username": "Clicky",
"email": "clicky@clickhouse.com",
"address": [
{
"street": "Victor Plains",
"suite": "Suite 879",
"city": "Wisokyburgh",
"zipcode": "90566-7771",
"geo": {
"lat": -43.9509,
"lng": -34.4618
}
}
],
"phone_numbers": [
"010-692-6593",
"020-192-3333"
],
"website": "clickhouse.com",
"company": {
"name": "ClickHouse",
"catchPhrase": "The real-time data warehouse for analytics",
"labels": {
"type": "database systems",
"founded": "2021"
}
},
"dob": "2007-03-31",
"tags": {
"hobby": "Databases",
"holidays": [
{
"year": 2024,
"location": "Azores, Portugal"
}
],
"car": {
"model": "Tesla",
"year": 2023
}
}
}
Applying these rules:
- The root keys name, username, email, and website can be represented as type String. The column phone_numbers is an array primitive of type Array(String), with dob and id of type Date and UInt32 respectively.
- New keys will not be added to the address object (only new address objects), and it can thus be considered static. If we recurse, all of the sub-columns can be considered primitives (of type String) except geo. This is also a static structure with two Float32 columns, lat and lng.
- The tags column is dynamic. We assume new arbitrary tags can be added to this object, of any type and structure.
- The company object is static and will always contain at most the 3 keys specified. The subkeys name and catchPhrase are of type String. The key labels is dynamic. We assume new arbitrary tags can be added to this object. Values will always be key-value pairs of type string.
:::note
Structures with hundreds or thousands of static keys can be considered dynamic, as it is rarely realistic to statically declare the columns for these. However, where possible, skip paths which are not needed, to save both storage and inference overhead.
:::
## Handling static structures {#handling-static-structures}
We recommend static structures are handled using named tuples, i.e. Tuple. Arrays of objects can be held using arrays of tuples, i.e. Array(Tuple). Within tuples themselves, columns and their respective types should be defined using the same rules. This can result in nested Tuples to represent nested objects, as shown below.
To illustrate this, we use the earlier JSON person example, omitting the dynamic objects:
json
{
"id": 1,
"name": "Clicky McCliickHouse",
"username": "Clicky",
"email": "clicky@clickhouse.com",
"address": [
{
"street": "Victor Plains",
"suite": "Suite 879",
"city": "Wisokyburgh",
"zipcode": "90566-7771",
"geo": {
"lat": -43.9509,
"lng": -34.4618
}
}
],
"phone_numbers": [
"010-692-6593",
"020-192-3333"
],
"website": "clickhouse.com",
"company": {
"name": "ClickHouse",
"catchPhrase": "The real-time data warehouse for analytics"
},
"dob": "2007-03-31"
}
The schema for this table is shown below:
```sql
CREATE TABLE people
(
    `id` Int64,
    `name` String,
    `username` String,
    `email` String,
    `address` Array(Tuple(city String, geo Tuple(lat Float32, lng Float32), street String, suite String, zipcode String)),
    `phone_numbers` Array(String),
    `website` String,
    `company` Tuple(catchPhrase String, name String),
    `dob` Date
)
ENGINE = MergeTree
ORDER BY username
```
Note how the company column is defined as a Tuple(catchPhrase String, name String). The address key uses an Array(Tuple), with a nested Tuple to represent the geo column.
JSON can be inserted into this table in its current structure:
```sql
INSERT INTO people FORMAT JSONEachRow
{"id":1,"name":"Clicky McCliickHouse","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","suite":"Suite 879","city":"Wisokyburgh","zipcode":"90566-7771","geo":{"lat":-43.9509,"lng":-34.4618}}],"phone_numbers":["010-692-6593","020-192-3333"],"website":"clickhouse.com","company":{"name":"ClickHouse","catchPhrase":"The real-time data warehouse for analytics"},"dob":"2007-03-31"}
```
In our example above, we have minimal data, but as shown below, we can query the tuple columns by their period-delimited names.
```sql
SELECT
address.street,
company.name
FROM people
ββaddress.streetβββββ¬βcompany.nameββ
β ['Victor Plains'] β ClickHouse β
βββββββββββββββββββββ΄βββββββββββββββ
```
Note how the address.street column is returned as an Array. To query a specific object inside an array by position, the array offset should be specified after the column name. For example, to access the street from the first address:
```sql
SELECT address.street[1] AS street
FROM people
ββstreetβββββββββ
β Victor Plains β
βββββββββββββββββ
1 row in set. Elapsed: 0.001 sec.
```
Sub-columns can also be used in ordering keys from 24.12:
```sql
CREATE TABLE people
(
    `id` Int64,
    `name` String,
    `username` String,
    `email` String,
    `address` Array(Tuple(city String, geo Tuple(lat Float32, lng Float32), street String, suite String, zipcode String)),
    `phone_numbers` Array(String),
    `website` String,
    `company` Tuple(catchPhrase String, name String),
    `dob` Date
)
ENGINE = MergeTree
ORDER BY company.name
```
## Handling default values {#handling-default-values}
Even if JSON objects are structured, they are often sparse, with only a subset of the known keys provided. Fortunately, the Tuple type does not require all columns in the JSON payload; if not provided, default values will be used.
Consider our earlier people table and the following sparse JSON, missing the keys suite, geo, phone_numbers, and catchPhrase:
```json
{
  "id": 1,
  "name": "Clicky McCliickHouse",
  "username": "Clicky",
  "email": "clicky@clickhouse.com",
  "address": [
    {
      "street": "Victor Plains",
      "city": "Wisokyburgh",
      "zipcode": "90566-7771"
    }
  ],
  "website": "clickhouse.com",
  "company": {
    "name": "ClickHouse"
  },
  "dob": "2007-03-31"
}
```
We can see below that this row can be successfully inserted:
```sql
INSERT INTO people FORMAT JSONEachRow
{"id":1,"name":"Clicky McCliickHouse","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","city":"Wisokyburgh","zipcode":"90566-7771"}],"website":"clickhouse.com","company":{"name":"ClickHouse"},"dob":"2007-03-31"}
Ok.
1 row in set. Elapsed: 0.002 sec.
```
Querying this single row, we can see that default values are used for the columns (including sub-objects) that were omitted:
```sql
SELECT *
FROM people
FORMAT PrettyJSONEachRow
{
"id": "1",
"name": "Clicky McCliickHouse",
"username": "Clicky",
"email": "clicky@clickhouse.com",
"address": [
{
"city": "Wisokyburgh",
"geo": {
"lat": 0,
"lng": 0
},
"street": "Victor Plains",
"suite": "",
"zipcode": "90566-7771"
}
],
"phone_numbers": [],
"website": "clickhouse.com",
"company": {
"catchPhrase": "",
"name": "ClickHouse"
},
"dob": "2007-03-31"
}
1 row in set. Elapsed: 0.001 sec.
```
:::note Differentiating empty and null
If users need to differentiate between a value being empty and not provided, the Nullable type can be used. This should be avoided unless absolutely required, as it will negatively impact storage and query performance on these columns.
:::
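As a hedged sketch of the trade-off (table and column names here are illustrative, not from the original schema), a Tuple sub-column can be declared Nullable where an absent value genuinely needs to differ from an empty one:

```sql
-- Nullable(String) stores NULL when the key is absent, at the cost of an extra
-- null map per column; prefer plain String unless the distinction matters
CREATE TABLE people_nullable
(
    `id` Int64,
    `address` Array(Tuple(street String, suite Nullable(String)))
)
ENGINE = MergeTree
ORDER BY id
```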
## Handling new columns {#handling-new-columns}
While a structured approach is simplest when the JSON keys are static, this approach can still be used if the changes to the schema can be planned, i.e., new keys are known in advance, and the schema can be modified accordingly.
Note that ClickHouse will, by default, ignore JSON keys that are provided in the payload but are not present in the schema. Consider the following modified JSON payload with the addition of a nickname key:
json
{
"id": 1,
"name": "Clicky McCliickHouse",
"nickname": "Clicky",
"username": "Clicky",
"email": "clicky@clickhouse.com",
"address": [
{
"street": "Victor Plains",
"suite": "Suite 879",
"city": "Wisokyburgh",
"zipcode": "90566-7771",
"geo": {
"lat": -43.9509,
"lng": -34.4618
}
}
],
"phone_numbers": [
"010-692-6593",
"020-192-3333"
],
"website": "clickhouse.com",
"company": {
"name": "ClickHouse",
"catchPhrase": "The real-time data warehouse for analytics"
},
"dob": "2007-03-31"
}
This JSON can be successfully inserted, with the nickname key ignored:
```sql
INSERT INTO people FORMAT JSONEachRow
{"id":1,"name":"Clicky McCliickHouse","nickname":"Clicky","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","suite":"Suite 879","city":"Wisokyburgh","zipcode":"90566-7771","geo":{"lat":-43.9509,"lng":-34.4618}}],"phone_numbers":["010-692-6593","020-192-3333"],"website":"clickhouse.com","company":{"name":"ClickHouse","catchPhrase":"The real-time data warehouse for analytics"},"dob":"2007-03-31"}
Ok.
1 row in set. Elapsed: 0.002 sec.
```
Columns can be added to a schema using the ALTER TABLE ADD COLUMN command. A default can be specified via the DEFAULT clause, which will be used if the column is not provided during subsequent inserts. Rows for which this value is not present (as they were inserted prior to its creation) will also return this default value. If no DEFAULT value is specified, the default value for the type will be used.
For example:
For example:
```sql
-- insert initial row (nickname will be ignored)
INSERT INTO people FORMAT JSONEachRow
{"id":1,"name":"Clicky McCliickHouse","nickname":"Clicky","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","suite":"Suite 879","city":"Wisokyburgh","zipcode":"90566-7771","geo":{"lat":-43.9509,"lng":-34.4618}}],"phone_numbers":["010-692-6593","020-192-3333"],"website":"clickhouse.com","company":{"name":"ClickHouse","catchPhrase":"The real-time data warehouse for analytics"},"dob":"2007-03-31"}
-- add column
ALTER TABLE people
    ADD COLUMN nickname String DEFAULT 'no_nickname'
-- insert new row (same data different id)
INSERT INTO people FORMAT JSONEachRow
{"id":2,"name":"Clicky McCliickHouse","nickname":"Clicky","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","suite":"Suite 879","city":"Wisokyburgh","zipcode":"90566-7771","geo":{"lat":-43.9509,"lng":-34.4618}}],"phone_numbers":["010-692-6593","020-192-3333"],"website":"clickhouse.com","company":{"name":"ClickHouse","catchPhrase":"The real-time data warehouse for analytics"},"dob":"2007-03-31"}
-- select 2 rows
SELECT id, nickname FROM people
ββidββ¬βnicknameβββββ
β 2 β Clicky β
β 1 β no_nickname β
ββββββ΄ββββββββββββββ
2 rows in set. Elapsed: 0.001 sec.
``` | {"source_file": "schema.md"} | [
## Handling semi-structured/dynamic structures {#handling-semi-structured-dynamic-structures}
If JSON data is semi-structured, where keys can be dynamically added and/or have multiple types, the JSON type is recommended.
More specifically, use the JSON type when your data:
- Has unpredictable keys that can change over time.
- Contains values with varying types (e.g., a path might sometimes contain a string, sometimes a number).
- Requires schema flexibility where strict typing isn't viable.
- Has hundreds or even thousands of paths which are static but simply not realistic to declare explicitly. This tends to be rare.
Consider our earlier person JSON, where the company.labels object was determined to be dynamic. Let's suppose that company.labels contains arbitrary keys. Additionally, the type for any key in this structure may not be consistent between rows. For example:
json
{
"id": 1,
"name": "Clicky McCliickHouse",
"username": "Clicky",
"email": "clicky@clickhouse.com",
"address": [
{
"street": "Victor Plains",
"suite": "Suite 879",
"city": "Wisokyburgh",
"zipcode": "90566-7771",
"geo": {
"lat": -43.9509,
"lng": -34.4618
}
}
],
"phone_numbers": [
"010-692-6593",
"020-192-3333"
],
"website": "clickhouse.com",
"company": {
"name": "ClickHouse",
"catchPhrase": "The real-time data warehouse for analytics",
"labels": {
"type": "database systems",
"founded": "2021",
"employees": 250
}
},
"dob": "2007-03-31",
"tags": {
"hobby": "Databases",
"holidays": [
{
"year": 2024,
"location": "Azores, Portugal"
}
],
"car": {
"model": "Tesla",
"year": 2023
}
}
}
json
{
"id": 2,
"name": "Analytica Rowe",
"username": "Analytica",
"address": [
{
"street": "Maple Avenue",
"suite": "Apt. 402",
"city": "Dataford",
"zipcode": "11223-4567",
"geo": {
"lat": 40.7128,
"lng": -74.006
}
}
],
"phone_numbers": [
"123-456-7890",
"555-867-5309"
],
"website": "fastdata.io",
"company": {
"name": "FastData Inc.",
"catchPhrase": "Streamlined analytics at scale",
"labels": {
"type": [
"real-time processing"
],
"founded": 2019,
"dissolved": 2023,
"employees": 10
}
},
"dob": "1992-07-15",
"tags": {
"hobby": "Running simulations",
"holidays": [
{
"year": 2023,
"location": "Kyoto, Japan"
}
],
"car": {
"model": "Audi e-tron",
"year": 2022
}
}
}
Given the dynamic nature of the company.labels column between objects, with respect to both keys and types, we have several options to model this data:
- Single JSON column - represents the entire schema as a single JSON column, allowing all structures to be dynamic beneath this.
- Targeted JSON column - only uses the JSON type for the company.labels column, retaining the structured schema used above for all other columns.
While the first approach does not align with the previous methodology, a single JSON column approach is useful for prototyping and data engineering tasks. For production deployments of ClickHouse at scale, we recommend being specific with structure and using the JSON type for targeted dynamic sub-structures where possible.
A strict schema has a number of benefits:
- Data validation - enforcing a strict schema avoids the risk of column explosion outside of specific structures.
- Avoids risk of column explosion - although the JSON type scales to potentially thousands of columns, where subcolumns are stored as dedicated columns, this can lead to a column file explosion, where an excessive number of column files are created that impacts performance. To mitigate this, the underlying Dynamic type used by JSON offers a max_dynamic_paths parameter, which limits the number of unique paths stored as separate column files. Once the threshold is reached, additional paths are stored in a shared column file using a compact encoded format, maintaining performance and storage efficiency while supporting flexible data ingestion. Accessing this shared column file is, however, not as performant. Note, however, that the JSON column can be used with type hints. "Hinted" columns will deliver the same performance as dedicated columns.
- Simpler introspection of paths and types - although the JSON type supports introspection functions to determine the types and paths that have been inferred, static structures can be simpler to explore, e.g. with DESCRIBE.
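To make the limit and the introspection functions concrete, a hedged sketch follows. The table name and the limit of 128 are illustrative, and defaults may differ between versions:

```sql
SET enable_json_type = 1;

-- Cap the number of JSON paths stored as dedicated column files; paths beyond
-- the limit fall back to the shared, more compact but slower representation
CREATE TABLE events
(
    `json` JSON(max_dynamic_paths = 128)
)
ENGINE = MergeTree
ORDER BY tuple();

-- Introspect which paths and types have been inferred so far, and which paths
-- have spilled into the shared column
SELECT JSONAllPathsWithTypes(json) FROM events;
SELECT JSONDynamicPaths(json), JSONSharedDataPaths(json) FROM events;
```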
## Single JSON column {#single-json-column}
This approach is useful for prototyping and data engineering tasks. For production, try to use JSON only for dynamic sub-structures where necessary.
:::note Performance considerations
A single JSON column can be optimized by skipping (not storing) JSON paths that are not required and by using type hints. Type hints allow the user to explicitly define the type for a sub-column, thereby skipping inference and indirection processing at query time. This can be used to deliver the same performance as if an explicit schema was used. See "Using type hints and skipping paths" for further details.
:::
The schema for a single JSON column here is simple:
```sql
SET enable_json_type = 1;
CREATE TABLE people
(
    json JSON(username String)
)
ENGINE = MergeTree
ORDER BY json.username;
``` | {"source_file": "schema.md"} | [
:::note
We provide a type hint for the username column in the JSON definition, as we use it in the ordering/primary key. This helps ClickHouse know this column won't be null and ensures it knows which username sub-column to use (there may be multiple for each type, so this is otherwise ambiguous).
:::
Inserting rows into the above table can be achieved using the JSONAsObject format:
```sql
INSERT INTO people FORMAT JSONAsObject
{"id":1,"name":"Clicky McCliickHouse","username":"Clicky","email":"clicky@clickhouse.com","address":[{"street":"Victor Plains","suite":"Suite 879","city":"Wisokyburgh","zipcode":"90566-7771","geo":{"lat":-43.9509,"lng":-34.4618}}],"phone_numbers":["010-692-6593","020-192-3333"],"website":"clickhouse.com","company":{"name":"ClickHouse","catchPhrase":"The real-time data warehouse for analytics","labels":{"type":"database systems","founded":"2021","employees":250}},"dob":"2007-03-31","tags":{"hobby":"Databases","holidays":[{"year":2024,"location":"Azores, Portugal"}],"car":{"model":"Tesla","year":2023}}}
1 row in set. Elapsed: 0.028 sec.
INSERT INTO people FORMAT JSONAsObject
{"id":2,"name":"Analytica Rowe","username":"Analytica","address":[{"street":"Maple Avenue","suite":"Apt. 402","city":"Dataford","zipcode":"11223-4567","geo":{"lat":40.7128,"lng":-74.006}}],"phone_numbers":["123-456-7890","555-867-5309"],"website":"fastdata.io","company":{"name":"FastData Inc.","catchPhrase":"Streamlined analytics at scale","labels":{"type":["real-time processing"],"founded":2019,"dissolved":2023,"employees":10}},"dob":"1992-07-15","tags":{"hobby":"Running simulations","holidays":[{"year":2023,"location":"Kyoto, Japan"}],"car":{"model":"Audi e-tron","year":2022}}}
1 row in set. Elapsed: 0.004 sec.
```
```sql
SELECT *
FROM people
FORMAT Vertical
Row 1:
──────
json: {"address":[{"city":"Dataford","geo":{"lat":40.7128,"lng":-74.006},"street":"Maple Avenue","suite":"Apt. 402","zipcode":"11223-4567"}],"company":{"catchPhrase":"Streamlined analytics at scale","labels":{"dissolved":"2023","employees":"10","founded":"2019","type":["real-time processing"]},"name":"FastData Inc."},"dob":"1992-07-15","id":"2","name":"Analytica Rowe","phone_numbers":["123-456-7890","555-867-5309"],"tags":{"car":{"model":"Audi e-tron","year":"2022"},"hobby":"Running simulations","holidays":[{"location":"Kyoto, Japan","year":"2023"}]},"username":"Analytica","website":"fastdata.io"}
Row 2:
──────
json: {"address":[{"city":"Wisokyburgh","geo":{"lat":-43.9509,"lng":-34.4618},"street":"Victor Plains","suite":"Suite 879","zipcode":"90566-7771"}],"company":{"catchPhrase":"The real-time data warehouse for analytics","labels":{"employees":"250","founded":"2021","type":"database systems"},"name":"ClickHouse"},"dob":"2007-03-31","email":"clicky@clickhouse.com","id":"1","name":"Clicky McCliickHouse","phone_numbers":["010-692-6593","020-192-3333"],"tags":{"car":{"model":"Tesla","year":"2023"},"hobby":"Databases","holidays":[{"location":"Azores, Portugal","year":"2024"}]},"username":"Clicky","website":"clickhouse.com"}
2 rows in set. Elapsed: 0.005 sec.
```
We can determine the inferred sub-columns and their types using introspection functions. For example:
```sql
SELECT JSONDynamicPathsWithTypes(json) AS paths
FROM people
FORMAT PrettyJsonEachRow
{
"paths": {
"address": "Array(JSON(max_dynamic_types=16, max_dynamic_paths=256))",
"company.catchPhrase": "String",
"company.labels.employees": "Int64",
"company.labels.founded": "String",
"company.labels.type": "String",
"company.name": "String",
"dob": "Date",
"email": "String",
"id": "Int64",
"name": "String",
"phone_numbers": "Array(Nullable(String))",
"tags.car.model": "String",
"tags.car.year": "Int64",
"tags.hobby": "String",
"tags.holidays": "Array(JSON(max_dynamic_types=16, max_dynamic_paths=256))",
"website": "String"
}
}
{
"paths": {
"address": "Array(JSON(max_dynamic_types=16, max_dynamic_paths=256))",
"company.catchPhrase": "String",
"company.labels.dissolved": "Int64",
"company.labels.employees": "Int64",
"company.labels.founded": "Int64",
"company.labels.type": "Array(Nullable(String))",
"company.name": "String",
"dob": "Date",
"id": "Int64",
"name": "String",
"phone_numbers": "Array(Nullable(String))",
"tags.car.model": "String",
"tags.car.year": "Int64",
"tags.hobby": "String",
"tags.holidays": "Array(JSON(max_dynamic_types=16, max_dynamic_paths=256))",
"website": "String"
}
}
2 rows in set. Elapsed: 0.009 sec.
```
For a complete list of introspection functions, see "Introspection functions". Sub-paths can be accessed using `.` notation, e.g.
```sql
SELECT json.name, json.email FROM people
┌─json.name────────────┬─json.email────────────┐
│ Analytica Rowe       │ ᴺᵁᴸᴸ                  │
│ Clicky McCliickHouse │ clicky@clickhouse.com │
└──────────────────────┴───────────────────────┘
2 rows in set. Elapsed: 0.006 sec.
```
Note how columns missing in rows are returned as `NULL`.
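If a path's type is known, the Dynamic indirection can also be avoided at read time with the JSON sub-column accessors. A minimal sketch, where the chosen paths are just examples from the data above:

```sql
-- Read a sub-column as a concrete type rather than Dynamic (.:Type suffix)
SELECT json.company.name.:String FROM people;

-- Read an entire nested subtree back as a JSON object (^ prefix)
SELECT json.^company FROM people;
```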