| Feature | Description | Self-managed | ClickHouse Cloud |
|---|---|---|---|
| Compliance | ClickHouse Cloud compliance includes CCPA, EU-US DPF, GDPR, HIPAA, ISO 27001, ISO 27001 SoA, PCI DSS, SOC2. ClickHouse Cloud's security, availability, processing integrity, and confidentiality processes are all independently audited. Details: trust.clickhouse.com. | ❌ | ✅ |
| Enterprise-grade security | Support for advanced security features such as SSO, multi-factor authentication, role-based access control (RBAC), private and secure connections with support for Private Link and Private Service Connect, IP filtering, customer-managed encryption keys (CMEK), and more. | ❌ | ✅ |
| Scaling and optimization | Seamlessly scales up or down based on workload, supporting both horizontal and vertical scaling. With automated backups, replication, and high availability, ClickHouse Cloud provides users with optimal resource allocation. | ❌ | ✅ |
| Support services | Our best-in-class support services and open-source community resources provide coverage for whichever deployment model you choose. | ❌ | ✅ |
| Database upgrades | Regular database upgrades are essential to establish a strong security posture and access the latest features and performance improvements. | ❌ | ✅ |
| Backups | Backup and restore functionality ensures data durability and supports graceful recovery in the event of outages or other disruptions. | ❌ | ✅ |
| Compute-compute separation | Users can scale compute resources independently of storage, so teams and workloads can share the same storage and maintain dedicated compute resources. This ensures that the performance of one workload doesn't interfere with another, enhancing flexibility, performance, and cost-efficiency. | ❌ | ✅ |
| Managed services | With a cloud-managed service, teams can focus on business outcomes and accelerate time-to-market without having to worry about the operational overhead of sizing, setup, and maintenance of ClickHouse. | ❌ | ✅ |
sidebar_label: 'Overview'
sidebar_position: 1
slug: /integrations/migration/overview
keywords: ['clickhouse', 'migrate', 'migration', 'migrating', 'data']
title: 'Migrating Data into ClickHouse'
description: 'Page describing the options available for migrating data into ClickHouse'
doc_type: 'guide'
Migrating data into ClickHouse
There are several options for migrating data into ClickHouse Cloud, depending on where your data resides now:
Self-managed to Cloud: use the remoteSecure function to transfer data
Another DBMS: use the clickhouse-local ETL tool along with the appropriate ClickHouse table function for your current DBMS
Anywhere!: use one of the many popular ETL/ELT tools that connect to all kinds of different data sources
Object Storage: easily insert data from S3 into ClickHouse

In the example Migrate from Redshift, we present three different ways to migrate data to ClickHouse.
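For the self-managed-to-Cloud path, a minimal sketch of a remoteSecure-based transfer might look like the following; the hostname, database, table, and credentials are placeholders, and the statement is run on the destination (Cloud) service:

```sql
-- Hypothetical names throughout; run against the destination service.
-- remoteSecure connects to the source server over the TLS-secured native port (9440).
INSERT INTO target_db.target_table
SELECT *
FROM remoteSecure('source-host.example.com:9440', 'source_db', 'source_table', 'default', '<PASSWORD>');
```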
title: 'BigQuery vs ClickHouse Cloud'
slug: /migrations/bigquery/biquery-vs-clickhouse-cloud
description: 'How BigQuery differs from ClickHouse Cloud'
keywords: ['BigQuery']
show_related_blogs: true
sidebar_label: 'Overview'
doc_type: 'guide'
import bigquery_1 from '@site/static/images/migrations/bigquery-1.png';
import Image from '@theme/IdealImage';
Comparing ClickHouse Cloud and BigQuery
Resource organization {#resource-organization}
The way resources are organized in ClickHouse Cloud is similar to BigQuery's resource hierarchy. We describe specific differences below based on the following diagram showing the ClickHouse Cloud resource hierarchy:
Organizations {#organizations}
Similar to BigQuery, organizations are the root nodes in the ClickHouse cloud resource hierarchy. The first user you set up in your ClickHouse Cloud account is automatically assigned to an organization owned by the user. The user may invite additional users to the organization.
BigQuery Projects vs ClickHouse Cloud Services {#bigquery-projects-vs-clickhouse-cloud-services}
Within organizations, you can create services loosely equivalent to BigQuery projects because stored data in ClickHouse Cloud is associated with a service. There are several service types available in ClickHouse Cloud. Each ClickHouse Cloud service is deployed in a specific region and includes:

A group of compute nodes (currently, 2 nodes for a Development tier service and 3 for a Production tier service). For these nodes, ClickHouse Cloud supports vertical and horizontal scaling, both manually and automatically.
An object storage folder where the service stores all the data.
An endpoint (or multiple endpoints created via ClickHouse Cloud UI console) - a service URL that you use to connect to the service (for example, https://dv2fzne24g.us-east-1.aws.clickhouse.cloud:8443)
BigQuery Datasets vs ClickHouse Cloud Databases {#bigquery-datasets-vs-clickhouse-cloud-databases}
ClickHouse logically groups tables into databases. Like BigQuery datasets, ClickHouse databases are logical containers that organize and control access to table data.
BigQuery Folders {#bigquery-folders}
ClickHouse Cloud currently has no concept equivalent to BigQuery folders.
BigQuery Slot reservations and Quotas {#bigquery-slot-reservations-and-quotas}
Like BigQuery slot reservations, you can configure vertical and horizontal autoscaling in ClickHouse Cloud. For vertical autoscaling, you can set the minimum and maximum size for the memory and CPU cores of the compute nodes for a service. The service will then scale as needed within those bounds. These settings are also available during the initial service creation flow. Each compute node in the service has the same size. You can change the number of compute nodes within a service with horizontal scaling.
Furthermore, similar to BigQuery quotas, ClickHouse Cloud offers concurrency control, memory usage limits, and I/O scheduling, enabling users to isolate queries into workload classes. By setting limits on shared resources (CPU cores, DRAM, disk and network I/O) for specific workload classes, it ensures these queries do not affect other critical business queries. Concurrency control prevents thread oversubscription in scenarios with a high number of concurrent queries.
ClickHouse tracks byte sizes of memory allocations at the server, user, and query level, allowing flexible memory usage limits. Memory overcommit enables queries to use additional free memory beyond the guaranteed memory, while assuring memory limits for other queries. Additionally, memory usage for aggregation, sort, and join clauses can be limited, allowing fallback to external algorithms when the memory limit is exceeded.
Lastly, I/O scheduling allows users to restrict local and remote disk accesses for workload classes based on maximum bandwidth, in-flight requests, and policy.
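As an illustration of per-query memory limits with fallback to external algorithms, here is a sketch using standard ClickHouse settings; the byte values are arbitrary examples, not recommendations:

```sql
SET max_memory_usage = 10000000000;                   -- hard per-query memory cap, in bytes
SET max_bytes_before_external_sort = 5000000000;      -- spill sorts to disk beyond this point
SET max_bytes_before_external_group_by = 5000000000;  -- spill aggregation state to disk
```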
Permissions {#permissions}
ClickHouse Cloud controls user access in two places, via the cloud console and via the database. Console access is managed via the clickhouse.cloud user interface. Database access is managed via database user accounts and roles. Additionally, console users can be granted roles within the database that enable the console user to interact with the database via our SQL console.
Data types {#data-types}
ClickHouse offers more granular precision with respect to numerics. For example, BigQuery offers the numeric types INT64, NUMERIC, BIGNUMERIC and FLOAT64. Contrast these with ClickHouse, which offers multiple precision types for decimals, floats, and integers. With these data types, ClickHouse users can optimize storage and memory overhead, resulting in faster queries and lower resource consumption. Below we map the equivalent ClickHouse type for each BigQuery type:
| BigQuery | ClickHouse |
|----------|------------|
| ARRAY | Array(t) |
| NUMERIC | Decimal(P, S), Decimal32(S), Decimal64(S), Decimal128(S) |
| BIGNUMERIC | Decimal256(S) |
| BOOL | Bool |
| BYTES | FixedString |
| DATE | Date32 (with narrower range) |
| DATETIME | DateTime, DateTime64 (narrow range, higher precision) |
| FLOAT64 | Float64 |
| GEOGRAPHY | Geo Data Types |
| INT64 | UInt8, UInt16, UInt32, UInt64, UInt128, UInt256, Int8, Int16, Int32, Int64, Int128, Int256 |
| INTERVAL | NA - supported as expression or through functions |
| JSON | JSON |
| STRING | String (bytes) |
| STRUCT | Tuple, Nested |
| TIME | DateTime64 |
| TIMESTAMP | DateTime64 |
When presented with multiple options for ClickHouse types, consider the actual range of the data and pick the lowest required. Also, consider utilizing appropriate codecs for further compression.
Query acceleration techniques {#query-acceleration-techniques}
Primary and Foreign keys and Primary index {#primary-and-foreign-keys-and-primary-index}
In BigQuery, a table can have primary key and foreign key constraints. Typically, primary and foreign keys are used in relational databases to ensure data integrity. A primary key value is normally unique for each row and is not NULL. Each foreign key value in a row must be present in the primary key column of the primary key table or be NULL. In BigQuery, these constraints are not enforced, but the query optimizer may use this information to optimize queries better.

In ClickHouse, a table can also have a primary key. Like BigQuery, ClickHouse doesn't enforce uniqueness for a table's primary key column values. Unlike BigQuery, a table's data is stored on disk ordered by the primary key column(s). The query optimizer utilizes this sort order to prevent resorting, to minimize memory usage for joins, and to enable short-circuiting for limit clauses. Unlike BigQuery, ClickHouse automatically creates a (sparse) primary index based on the primary key column values. This index is used to speed up all queries that contain filters on the primary key columns. ClickHouse currently doesn't support foreign key constraints.
Secondary indexes (Only available in ClickHouse) {#secondary-indexes-only-available-in-clickhouse}
In addition to the primary index created from the values of a table's primary key columns, ClickHouse allows you to create secondary indexes on columns other than those in the primary key. ClickHouse offers several types of secondary indexes, each suited to different types of queries:
Bloom Filter Index:
- Used to speed up queries with equality conditions (e.g., =, IN).
- Uses probabilistic data structures to determine whether a value exists in a data block.

Token Bloom Filter Index:
- Similar to a Bloom Filter Index but used for tokenized strings and suitable for full-text search queries.

Min-Max Index:
- Maintains the minimum and maximum values of a column for each data part.
- Helps to skip reading data parts that do not fall within the specified range.
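A sketch of adding a secondary (data-skipping) index to an existing table; the table, column, and index names are hypothetical, and the false-positive rate and granularity are example values:

```sql
ALTER TABLE events ADD INDEX url_bloom url TYPE bloom_filter(0.01) GRANULARITY 4;
ALTER TABLE events MATERIALIZE INDEX url_bloom;  -- build the index for already-existing data parts
```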
Search indexes {#search-indexes}
Similar to search indexes in BigQuery, full-text indexes can be created for ClickHouse tables on columns with string values.
Vector indexes {#vector-indexes}
BigQuery recently introduced vector indexes as a Pre-GA feature. Likewise, ClickHouse has experimental support for indexes to speed up vector search use cases.

Partitioning {#partitioning}
Like BigQuery, ClickHouse uses table partitioning to enhance the performance and manageability of large tables by dividing tables into smaller, more manageable pieces called partitions. We describe ClickHouse partitioning in detail here.
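A sketch of a monthly-partitioned table (names are hypothetical); each distinct toYYYYMM value becomes its own partition that can be dropped or moved independently:

```sql
CREATE TABLE events_by_month
(
    event_date Date,
    user_id    UInt64
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_date)  -- one partition per calendar month
ORDER BY (user_id, event_date);
```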
Clustering {#clustering}
With clustering, BigQuery automatically sorts table data based on the values of a few specified columns and colocates them in optimally sized blocks. Clustering improves query performance, allowing BigQuery to better estimate the cost of running the query. With clustered columns, queries also eliminate scans of unnecessary data.
In ClickHouse, data is automatically clustered on disk based on a table's primary key columns and logically organized in blocks that can be quickly located or pruned by queries utilizing the primary index data structure.
Materialized views {#materialized-views}
Both BigQuery and ClickHouse support materialized views – precomputed results based on a transformation query's result against a base table for increased performance and efficiency.
Querying materialized views {#querying-materialized-views}
BigQuery materialized views can be queried directly or used by the optimizer to process queries to the base tables. If changes to base tables might invalidate the materialized view, data is read directly from the base tables. If the changes to the base tables don't invalidate the materialized view, then the rest of the data is read from the materialized view, and only the changes are read from the base tables.
In ClickHouse, materialized views can be queried directly only. However, compared to BigQuery (in which materialized views are automatically refreshed within 5 minutes of a change to the base tables, but no more frequently than every 30 minutes), materialized views are always in sync with the base table.
Updating materialized views
BigQuery periodically fully refreshes materialized views by running the view's transformation query against the base table. Between refreshes, BigQuery combines the materialized view's data with new base table data to provide consistent query results while still using the materialized view.
In ClickHouse, materialized views are incrementally updated. This incremental update mechanism provides high scalability and low computing costs: incrementally updated materialized views are engineered especially for scenarios where base tables contain billions or trillions of rows. Instead of querying the ever-growing base table repeatedly to refresh the materialized view, ClickHouse simply calculates a partial result from (only) the values of the newly inserted base table rows. This partial result is incrementally merged with the previously calculated partial result in the background. This results in dramatically lower computing costs compared to refreshing the materialized view repeatedly from the whole base table.
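A sketch of an incrementally updated materialized view (the table and view names are hypothetical): each insert into the base table triggers a partial aggregation of only the newly inserted rows:

```sql
CREATE MATERIALIZED VIEW daily_hits_mv
ENGINE = SummingMergeTree
ORDER BY day
AS
SELECT
    toDate(event_time) AS day,
    count() AS hits
FROM events        -- hypothetical base table
GROUP BY day;
```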
Transactions {#transactions}
In contrast to ClickHouse, BigQuery supports multi-statement transactions inside a single query, or across multiple queries when using sessions. A multi-statement transaction lets you perform mutating operations, such as inserting or deleting rows on one or more tables, and either commit or rollback the changes atomically. Multi-statement transactions are on ClickHouse's roadmap for 2024.
Aggregate functions {#aggregate-functions}
Compared to BigQuery, ClickHouse comes with significantly more built-in aggregate functions:

BigQuery comes with 18 aggregate functions, and 4 approximate aggregate functions.
ClickHouse has over 150 pre-built aggregation functions, plus powerful aggregation combinators for extending the behavior of pre-built aggregation functions. As an example, you can apply the over 150 pre-built aggregate functions to arrays instead of table rows simply by calling them with a -Array suffix. With a -Map suffix you can apply any aggregate function to maps. And with a -ForEach suffix, you can apply any aggregate function to nested arrays.
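For example, the -Array combinator turns an ordinary aggregate into one that consumes array elements (a sketch; the subquery just supplies sample data):

```sql
SELECT
    sumArray(nums) AS total,   -- sum applied to all array elements across rows
    avgArray(nums) AS average  -- avg with the same -Array combinator
FROM
(
    SELECT [1, 2, 3] AS nums
    UNION ALL
    SELECT [4, 5] AS nums
);
```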
Data sources and file formats {#data-sources-and-file-formats}
Compared to BigQuery, ClickHouse supports significantly more file formats and data sources:
ClickHouse has native support for loading data in 90+ file formats from virtually any data source
BigQuery supports 5 file formats and 19 data sources
SQL language features {#sql-language-features}
ClickHouse provides standard SQL with many extensions and improvements that make it more friendly for analytical tasks. E.g. ClickHouse SQL supports lambda functions and higher-order functions, so you don't have to unnest/explode arrays when applying transformations. This is a big advantage over other systems like BigQuery.
Arrays {#arrays}
Compared to BigQuery's 8 array functions, ClickHouse has over 80 built-in array functions for modeling and solving a wide range of problems elegantly and simply.
A typical design pattern in ClickHouse is to use the groupArray aggregate function to (temporarily) transform specific row values of a table into an array. This can then be conveniently processed via array functions, and the result can be converted back into individual table rows via the arrayJoin function.

Because ClickHouse SQL supports higher-order lambda functions, many advanced array operations can be achieved by simply calling one of the higher-order built-in array functions, instead of temporarily converting arrays back to tables, as is often required in BigQuery, e.g. for filtering or zipping arrays. In ClickHouse these operations are just a simple function call of the higher-order functions arrayFilter and arrayZip, respectively.

In the following, we provide a mapping of array operations from BigQuery to ClickHouse:
| BigQuery | ClickHouse |
|----------|------------|
| ARRAY_CONCAT | arrayConcat |
| ARRAY_LENGTH | length |
| ARRAY_REVERSE | arrayReverse |
| ARRAY_TO_STRING | arrayStringConcat |
| GENERATE_ARRAY | range |
Create an array with one element for each row in a subquery
BigQuery
ARRAY function
```sql
SELECT ARRAY
  (SELECT 1 UNION ALL
   SELECT 2 UNION ALL
   SELECT 3) AS new_array;

/*-----------*
 | new_array |
 +-----------+
 | [1, 2, 3] |
 *-----------*/
```
ClickHouse
groupArray
aggregate function
```sql
SELECT groupArray(*) AS new_array
FROM
(
    SELECT 1
    UNION ALL
    SELECT 2
    UNION ALL
    SELECT 3
)

┌─new_array─┐
│ [1,2,3]   │
└───────────┘
```
Convert an array into a set of rows
BigQuery
UNNEST
operator
```sql
SELECT *
FROM UNNEST(['foo', 'bar', 'baz', 'qux', 'corge', 'garply', 'waldo', 'fred'])
AS element
WITH OFFSET AS offset
ORDER BY offset;
/*----------+--------*
 | element  | offset |
 +----------+--------+
 | foo      | 0      |
 | bar      | 1      |
 | baz      | 2      |
 | qux      | 3      |
 | corge    | 4      |
 | garply   | 5      |
 | waldo    | 6      |
 | fred     | 7      |
 *----------+--------*/
```
ClickHouse
ARRAY JOIN
clause
```sql
WITH ['foo', 'bar', 'baz', 'qux', 'corge', 'garply', 'waldo', 'fred'] AS values
SELECT element, num-1 AS offset
FROM (SELECT values AS element) AS subquery
ARRAY JOIN element, arrayEnumerate(element) AS num;
/*----------+--------*
 | element  | offset |
 +----------+--------+
 | foo      | 0      |
 | bar      | 1      |
 | baz      | 2      |
 | qux      | 3      |
 | corge    | 4      |
 | garply   | 5      |
 | waldo    | 6      |
 | fred     | 7      |
 *----------+--------*/
```
Return an array of dates
BigQuery
GENERATE_DATE_ARRAY
function
```sql
SELECT GENERATE_DATE_ARRAY('2016-10-05', '2016-10-08') AS example;
/*--------------------------------------------------*
 | example                                          |
 +--------------------------------------------------+
 | [2016-10-05, 2016-10-06, 2016-10-07, 2016-10-08] |
 *--------------------------------------------------*/
```
ClickHouse
range + arrayMap functions
```sql
SELECT arrayMap(x -> (toDate('2016-10-05') + x), range(toUInt32((toDate('2016-10-08') - toDate('2016-10-05')) + 1))) AS example
┌─example───────────────────────────────────────────────┐
1. │ ['2016-10-05','2016-10-06','2016-10-07','2016-10-08'] │
└───────────────────────────────────────────────────────┘
```
Return an array of timestamps
BigQuery
GENERATE_TIMESTAMP_ARRAY
function
```sql
SELECT GENERATE_TIMESTAMP_ARRAY('2016-10-05 00:00:00', '2016-10-07 00:00:00',
INTERVAL 1 DAY) AS timestamp_array;
/*--------------------------------------------------------------------------*
 | timestamp_array                                                          |
 +--------------------------------------------------------------------------+
 | [2016-10-05 00:00:00+00, 2016-10-06 00:00:00+00, 2016-10-07 00:00:00+00] |
 *--------------------------------------------------------------------------*/
```
ClickHouse
range + arrayMap functions
```sql
SELECT arrayMap(x -> (toDateTime('2016-10-05 00:00:00') + toIntervalDay(x)), range(dateDiff('day', toDateTime('2016-10-05 00:00:00'), toDateTime('2016-10-07 00:00:00')) + 1)) AS timestamp_array
┌─timestamp_array─────────────────────────────────────────────────────┐
1. │ ['2016-10-05 00:00:00','2016-10-06 00:00:00','2016-10-07 00:00:00'] │
└─────────────────────────────────────────────────────────────────────┘
```
Filtering arrays
BigQuery
Requires temporarily converting arrays back to tables via
UNNEST
operator
```sql
WITH Sequences AS
(SELECT [0, 1, 1, 2, 3, 5] AS some_numbers
UNION ALL SELECT [2, 4, 8, 16, 32] AS some_numbers
UNION ALL SELECT [5, 10] AS some_numbers)
SELECT
ARRAY(SELECT x * 2
FROM UNNEST(some_numbers) AS x
WHERE x < 5) AS doubled_less_than_five
FROM Sequences;
/*------------------------*
 | doubled_less_than_five |
 +------------------------+
 | [0, 2, 2, 4, 6]        |
 | [4, 8]                 |
 | []                     |
 *------------------------*/
```
ClickHouse
arrayFilter function
```sql
WITH Sequences AS
(
    SELECT [0, 1, 1, 2, 3, 5] AS some_numbers
    UNION ALL
    SELECT [2, 4, 8, 16, 32] AS some_numbers
    UNION ALL
    SELECT [5, 10] AS some_numbers
)
SELECT arrayMap(x -> (x * 2), arrayFilter(x -> (x < 5), some_numbers)) AS doubled_less_than_five
FROM Sequences;

┌─doubled_less_than_five─┐
│ [0,2,2,4,6]            │
└────────────────────────┘
┌─doubled_less_than_five─┐
│ []                     │
└────────────────────────┘
┌─doubled_less_than_five─┐
│ [4,8]                  │
└────────────────────────┘
```
Zipping arrays
BigQuery
Requires temporarily converting arrays back to tables via
UNNEST
operator
```sql
WITH
Combinations AS (
SELECT
['a', 'b'] AS letters,
[1, 2, 3] AS numbers
)
SELECT
ARRAY(
SELECT AS STRUCT
letters[SAFE_OFFSET(index)] AS letter,
numbers[SAFE_OFFSET(index)] AS number
FROM Combinations
CROSS JOIN
UNNEST(
GENERATE_ARRAY(
0,
LEAST(ARRAY_LENGTH(letters), ARRAY_LENGTH(numbers)) - 1)) AS index
ORDER BY index
);
/*------------------------------*
 | pairs                        |
 +------------------------------+
 | [{ letter: "a", number: 1 }, |
 |  { letter: "b", number: 2 }] |
 *------------------------------*/
```
ClickHouse
arrayZip
function
```sql
WITH Combinations AS
(
    SELECT
        ['a', 'b'] AS letters,
        [1, 2, 3] AS numbers
)
SELECT arrayZip(letters, arrayResize(numbers, length(letters))) AS pairs
FROM Combinations;

┌─pairs─────────────┐
│ [('a',1),('b',2)] │
└───────────────────┘
```
Aggregating arrays
BigQuery
Requires converting arrays back to tables via
UNNEST
operator
```sql
WITH Sequences AS
(SELECT [0, 1, 1, 2, 3, 5] AS some_numbers
UNION ALL SELECT [2, 4, 8, 16, 32] AS some_numbers
UNION ALL SELECT [5, 10] AS some_numbers)
SELECT some_numbers,
(SELECT SUM(x)
FROM UNNEST(s.some_numbers) AS x) AS sums
FROM Sequences AS s;
/*--------------------+------*
 | some_numbers       | sums |
 +--------------------+------+
 | [0, 1, 1, 2, 3, 5] | 12   |
 | [2, 4, 8, 16, 32]  | 62   |
 | [5, 10]            | 15   |
 *--------------------+------*/
```
ClickHouse
arraySum
,
arrayAvg
, ... function, or any of the over 90 existing aggregate function names as argument for the
arrayReduce
function
```sql
WITH Sequences AS
(
    SELECT [0, 1, 1, 2, 3, 5] AS some_numbers
    UNION ALL
    SELECT [2, 4, 8, 16, 32] AS some_numbers
    UNION ALL
    SELECT [5, 10] AS some_numbers
)
SELECT
    some_numbers,
    arraySum(some_numbers) AS sums
FROM Sequences;

┌─some_numbers──┬─sums─┐
│ [0,1,1,2,3,5] │   12 │
└───────────────┴──────┘
┌─some_numbers──┬─sums─┐
│ [2,4,8,16,32] │   62 │
└───────────────┴──────┘
┌─some_numbers─┬─sums─┐
│ [5,10]       │   15 │
└──────────────┴──────┘
```
sidebar_label: 'Loading data'
title: 'Loading data from BigQuery to ClickHouse'
slug: /migrations/bigquery/loading-data
description: 'How to load data from BigQuery to ClickHouse'
keywords: ['migrate', 'migration', 'migrating', 'data', 'etl', 'elt', 'BigQuery']
doc_type: 'guide'
This guide is compatible with ClickHouse Cloud and with self-managed ClickHouse v23.5+.
This guide shows how to migrate data from BigQuery to ClickHouse. We first export a table to Google's object store (GCS) and then import that data into ClickHouse Cloud. These steps need to be repeated for each table you wish to export from BigQuery to ClickHouse.
How long will exporting data to ClickHouse take? {#how-long-will-exporting-data-to-clickhouse-take}
Exporting data from BigQuery to ClickHouse is dependent on the size of your dataset. As a comparison, it takes about an hour to export the 4TB public Ethereum dataset from BigQuery to ClickHouse using this guide.
| Table | Rows | Files Exported | Data Size | BigQuery Export | Slot Time | ClickHouse Import |
|---|---|---|---|---|---|---|
| blocks | 16,569,489 | 73 | 14.53GB | 23 secs | 37 min | 15.4 secs |
| transactions | 1,864,514,414 | 5169 | 957GB | 1 min 38 sec | 1 day 8hrs | 18 mins 5 secs |
| traces | 6,325,819,306 | 17,985 | 2.896TB | 5 min 46 sec | 5 days 19 hr | 34 mins 55 secs |
| contracts | 57,225,837 | 350 | 45.35GB | 16 sec | 1 hr 51 min | 39.4 secs |
| Total | 8.26 billion | 23,577 | 3.982TB | 8 min 3 sec | > 6 days 5 hrs | 53 mins 45 secs |
Export table data to GCS {#1-export-table-data-to-gcs}
In this step, we utilize the BigQuery SQL workspace to execute our SQL commands. Below, we export a BigQuery table named mytable to a GCS bucket using the EXPORT DATA statement.
```sql
DECLARE export_path STRING;
DECLARE n INT64;
DECLARE i INT64;
SET i = 0;
-- We recommend setting n to correspond to x billion rows. So 5 billion rows, n = 5
SET n = 100;
WHILE i < n DO
SET export_path = CONCAT('gs://mybucket/mytable/', i,'-*.parquet');
EXPORT DATA
OPTIONS (
uri = export_path,
format = 'PARQUET',
overwrite = true
)
AS (
SELECT * FROM mytable WHERE export_id = i
);
SET i = i + 1;
END WHILE;
``` | {"source_file": "03_loading-data.md"} | [
0.04607338830828667,
-0.036184053868055344,
0.04071073979139328,
0.02040959894657135,
0.033230241388082504,
-0.05595133453607559,
-0.09849239885807037,
-0.0515974685549736,
-0.0478275828063488,
0.03475554659962654,
0.034209392964839935,
-0.05255492031574249,
-0.01708471029996872,
-0.145242... |
In the above query, we export our BigQuery table to the Parquet data format. We also have a * character in our uri parameter. This ensures the output is sharded into multiple files, with a numerically increasing suffix, should the export exceed 1GB of data.

This approach has a number of advantages:

- Google allows up to 50TB per day to be exported to GCS for free. Users only pay for GCS storage.
- Exports produce multiple files automatically, limiting each to a maximum of 1GB of table data. This is beneficial to ClickHouse since it allows imports to be parallelized.
- Parquet, as a column-oriented format, represents a better interchange format since it is inherently compressed and faster for BigQuery to export and ClickHouse to query.
Importing data into ClickHouse from GCS {#2-importing-data-into-clickhouse-from-gcs}
Once the export is complete, we can import this data into a ClickHouse table. You can use the ClickHouse SQL console or clickhouse-client to execute the commands below.

You must first create your table in ClickHouse:
```sql
-- If your BigQuery table contains a column of type STRUCT, you must enable this setting
-- to map that column to a ClickHouse column of type Nested
SET input_format_parquet_import_nested = 1;

CREATE TABLE default.mytable
(
    `timestamp` DateTime64(6),
    `some_text` String
)
ENGINE = MergeTree
ORDER BY (timestamp);
```
After creating the table, enable the setting `parallel_distributed_insert_select` if you have multiple ClickHouse replicas in your cluster to speed up the import. If you only have one ClickHouse node, you can skip this step:

```sql
SET parallel_distributed_insert_select = 1;
```
Finally, we can insert the data from GCS into our ClickHouse table using the `INSERT INTO SELECT` command, which inserts data into a table based on the results from a `SELECT` query.

To retrieve the data to `INSERT`, we can use the s3Cluster function to retrieve data from our GCS bucket since GCS is interoperable with Amazon S3. If you only have one ClickHouse node, you can use the s3 table function instead of the `s3Cluster` function.
```sql
INSERT INTO mytable
SELECT
    timestamp,
    ifNull(some_text, '') AS some_text
FROM s3Cluster(
    'default',
    'https://storage.googleapis.com/mybucket/mytable/*.parquet.gz',
    '<ACCESS_ID>',
    '<SECRET>'
);
```
The `ACCESS_ID` and `SECRET` used in the above query are the HMAC key and secret associated with your GCS bucket.
:::note Use `ifNull` when exporting nullable columns
In the above query, we use the `ifNull` function with the `some_text` column to insert data into our ClickHouse table with a default value. You can also make your columns in ClickHouse `Nullable`, but this is not recommended as it may negatively affect performance.

Alternatively, you can `SET input_format_null_as_default=1` and any missing or NULL values will be replaced by default values for their respective columns, if those defaults are specified.
::: | {"source_file": "03_loading-data.md"} | [
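The effect of `input_format_null_as_default` can be pictured as a per-type default substitution. A hedged sketch in Python (the `DEFAULTS` table below is an illustrative subset, not the complete ClickHouse default set):

```python
# Illustrative per-type defaults: what replaces a missing/NULL value when
# input_format_null_as_default is enabled (0 for integers, '' for strings).
DEFAULTS = {"Int32": 0, "UInt32": 0, "String": "", "Float64": 0.0}

def null_as_default(value, ch_type):
    # Equivalent in spirit to ifNull(value, default_for_type).
    return DEFAULTS[ch_type] if value is None else value
```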
fa1d7350-4657-4d3a-af54-f857b6a30036 | Alternatively, you can
SET input_format_null_as_default=1
and any missing or NULL values will be replaced by default values for their respective columns, if those defaults are specified.
:::
Testing successful data export {#3-testing-successful-data-export}
To test whether your data was properly inserted, simply run a `SELECT` query on your new table:

```sql
SELECT * FROM mytable LIMIT 10;
```

To export more BigQuery tables, simply redo the steps above for each additional table.
Further reading and support {#further-reading-and-support}
In addition to this guide, we also recommend reading our blog post that shows how to use ClickHouse to speed up BigQuery and how to handle incremental imports.
If you are having issues transferring data from BigQuery to ClickHouse, please feel free to contact us at support@clickhouse.com. | {"source_file": "03_loading-data.md"} | [
title: 'Migrating from BigQuery to ClickHouse Cloud'
slug: /migrations/bigquery/migrating-to-clickhouse-cloud
description: 'How to migrate your data from BigQuery to ClickHouse Cloud'
keywords: ['BigQuery']
show_related_blogs: true
sidebar_label: 'Migration guide'
doc_type: 'guide'
import bigquery_2 from '@site/static/images/migrations/bigquery-2.png';
import bigquery_3 from '@site/static/images/migrations/bigquery-3.png';
import bigquery_4 from '@site/static/images/migrations/bigquery-4.png';
import bigquery_5 from '@site/static/images/migrations/bigquery-5.png';
import bigquery_6 from '@site/static/images/migrations/bigquery-6.png';
import bigquery_7 from '@site/static/images/migrations/bigquery-7.png';
import bigquery_8 from '@site/static/images/migrations/bigquery-8.png';
import bigquery_9 from '@site/static/images/migrations/bigquery-9.png';
import bigquery_10 from '@site/static/images/migrations/bigquery-10.png';
import bigquery_11 from '@site/static/images/migrations/bigquery-11.png';
import bigquery_12 from '@site/static/images/migrations/bigquery-12.png';
import Image from '@theme/IdealImage';
Why use ClickHouse Cloud over BigQuery? {#why-use-clickhouse-cloud-over-bigquery}
TLDR: Because ClickHouse is faster, cheaper, and more powerful than BigQuery for modern data analytics:
Loading data from BigQuery to ClickHouse Cloud {#loading-data-from-bigquery-to-clickhouse-cloud}
Dataset {#dataset}
As an example dataset to show a typical migration from BigQuery to ClickHouse Cloud, we use the Stack Overflow dataset documented here. This contains every post, vote, user, comment, and badge that has occurred on Stack Overflow from 2008 to Apr 2024. The BigQuery schema for this data is shown below:

For users who wish to populate this dataset into a BigQuery instance to test migration steps, we have provided data for these tables in Parquet format in a GCS bucket, and the DDL commands to create and load the tables in BigQuery are available here.
Migrating data {#migrating-data}
Migrating data between BigQuery and ClickHouse Cloud falls into two primary workload types:

- **Initial bulk load with periodic updates** - An initial dataset must be migrated, along with periodic updates at set intervals, e.g. daily. Updates are handled by resending rows that have changed, identified by a column that can be used for comparisons (e.g. a date). Deletes are handled with a complete periodic reload of the dataset.
- **Real-time replication or CDC** - An initial dataset must be migrated. Changes to this dataset must be reflected in ClickHouse in near-real time, with only a delay of several seconds acceptable. This is effectively a Change Data Capture (CDC) process, where tables in BigQuery must be synchronized with ClickHouse, i.e. inserts, updates, and deletes in the BigQuery table must be applied to an equivalent table in ClickHouse.
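For the first workload type, the periodic update boils down to selecting rows whose comparison column has advanced past the last sync point. A hedged sketch of building such a query string (the table and column names here are hypothetical examples, not part of the dataset schema):

```python
def incremental_export_sql(table: str, watermark_col: str, last_synced: str) -> str:
    # Select only rows changed since the previous sync. Note that deletes
    # still require a periodic full reload, as described above.
    return (
        f"SELECT * FROM {table} "
        f"WHERE {watermark_col} > TIMESTAMP('{last_synced}')"
    )

query = incremental_export_sql("mydataset.mytable", "last_modified", "2024-04-01 00:00:00")
```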
Bulk loading via Google Cloud Storage (GCS) {#bulk-loading-via-google-cloud-storage-gcs}
BigQuery supports exporting data to Google's object store (GCS). For our example data set:

1. Export the 7 tables to GCS. Commands for that are available here.
2. Import the data into ClickHouse Cloud. For that we can use the gcs table function. The DDL and import queries are available here. Note that because a ClickHouse Cloud instance consists of multiple compute nodes, we use the s3Cluster table function rather than the gcs table function. This function also works with GCS buckets and utilizes all nodes of a ClickHouse Cloud service to load the data in parallel.
This approach has a number of advantages:

- BigQuery export functionality supports a filter for exporting a subset of data.
- BigQuery supports exporting to Parquet, Avro, JSON, and CSV formats and several compression types - all supported by ClickHouse.
- GCS supports object life cycle management, allowing data that has been exported and imported into ClickHouse to be deleted after a specified period.
- Google allows up to 50TB per day to be exported to GCS for free. Users only pay for GCS storage.
- Exports produce multiple files automatically, limiting each to a maximum of 1GB of table data. This is beneficial to ClickHouse since it allows imports to be parallelized.
Before trying the following examples, we recommend users review the permissions required for export and the locality recommendations to maximize export and import performance.
Real-time replication or CDC via scheduled queries {#real-time-replication-or-cdc-via-scheduled-queries}
Change Data Capture (CDC) is the process by which tables are kept in sync between two databases. This is significantly more complex if updates and deletes are to be handled in near real-time. One approach is to simply schedule a periodic export using BigQuery's scheduled query functionality. Provided you can accept some delay in the data being inserted into ClickHouse, this approach is easy to implement and maintain. An example is given in this blog post.
Designing schemas {#designing-schemas}
The Stack Overflow dataset contains a number of related tables. We recommend focusing on migrating the primary table first. This may not necessarily be the largest table, but rather the one on which you expect to receive the most analytical queries. This will allow you to familiarize yourself with the main ClickHouse concepts. This table may require remodeling as additional tables are added to fully exploit ClickHouse features and obtain optimal performance. We explore this modeling process in our Data Modeling docs.

Adhering to this principle, we focus on the main posts table. The BigQuery schema for this is shown below:
```sql
CREATE TABLE stackoverflow.posts (
    id INTEGER,
    posttypeid INTEGER,
    acceptedanswerid STRING,
    creationdate TIMESTAMP,
    score INTEGER,
    viewcount INTEGER,
    body STRING,
    owneruserid INTEGER,
    ownerdisplayname STRING,
    lasteditoruserid STRING,
    lasteditordisplayname STRING,
    lasteditdate TIMESTAMP,
    lastactivitydate TIMESTAMP,
    title STRING,
    tags STRING,
    answercount INTEGER,
    commentcount INTEGER,
    favoritecount INTEGER,
    contentlicense STRING,
    parentid STRING,
    communityowneddate TIMESTAMP,
    closeddate TIMESTAMP
);
```
Optimizing types {#optimizing-types}
Applying the process described here results in the following schema:

```sql
CREATE TABLE stackoverflow.posts
(
    `Id` Int32,
    `PostTypeId` Enum('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime,
    `Score` Int32,
    `ViewCount` UInt32,
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime,
    `LastActivityDate` DateTime,
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16,
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime,
    `ClosedDate` DateTime
)
ENGINE = MergeTree
ORDER BY tuple()
COMMENT 'Optimized types'
```
We can populate this table with a simple `INSERT INTO SELECT`, reading the exported data from GCS using the gcs table function. Note that on ClickHouse Cloud you can also use the GCS-compatible s3Cluster table function to parallelize the loading over multiple nodes:

```sql
INSERT INTO stackoverflow.posts SELECT * FROM gcs('gs://clickhouse-public-datasets/stackoverflow/parquet/posts/*.parquet', NOSIGN);
```
We don't retain any nulls in our new schema. The above insert converts these implicitly to default values for their respective types - 0 for integers and an empty value for strings. ClickHouse also automatically converts any numerics to their target precision.
How are ClickHouse Primary keys different? {#how-are-clickhouse-primary-keys-different}
As described here, like in BigQuery, ClickHouse doesn't enforce uniqueness for a table's primary key column values.
Similar to clustering in BigQuery, a ClickHouse table's data is stored on disk ordered by the primary key column(s). This sort order is utilized by the query optimizer to prevent resorting, minimize memory usage for joins, and enable short-circuiting for limit clauses.
In contrast to BigQuery, ClickHouse automatically creates a (sparse) primary index based on the primary key column values. This index is used to speed up all queries that contain filters on the primary key columns. Specifically:

- Memory and disk efficiency are paramount to the scale at which ClickHouse is often used. Data is written to ClickHouse tables in chunks known as parts, with rules applied for merging the parts in the background. In ClickHouse, each part has its own primary index. When parts are merged, the merged part's primary indexes are also merged. Note that these indexes are not built for each row. Instead, the primary index for a part has one index entry per group of rows - this technique is called sparse indexing.
- Sparse indexing is possible because ClickHouse stores the rows for a part on disk ordered by a specified key. Instead of directly locating single rows (like a B-Tree-based index), the sparse primary index allows it to quickly (via a binary search over index entries) identify groups of rows that could possibly match the query. The located groups of potentially matching rows are then, in parallel, streamed into the ClickHouse engine in order to find the matches. This index design allows the primary index to be small (it completely fits into main memory) while still significantly speeding up query execution times, especially for range queries that are typical in data analytics use cases. For more details, we recommend this in-depth guide.
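The "binary search over index entries" step can be illustrated with a toy model. A hedged sketch, assuming one index mark per granule of rows (the mark values below are invented for illustration):

```python
import bisect

# Hypothetical sparse index: one mark (first primary-key value, granule number)
# per granule of rows, rather than one entry per row.
marks = [(0, 0), (8192, 1), (16384, 2), (24576, 3)]
keys = [k for k, _ in marks]

def candidate_granules(lo, hi):
    # Binary search over the marks to find granules that may hold keys in [lo, hi].
    start = max(bisect.bisect_right(keys, lo) - 1, 0)
    stop = bisect.bisect_right(keys, hi)
    return [g for _, g in marks[start:stop]]
```

Only the candidate granules are then scanned, which is why a range filter on the primary key touches a small fraction of the data.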
The selected primary key in ClickHouse will determine not only the index but also the order in which data is written on disk. Because of this, it can dramatically impact compression levels, which can, in turn, affect query performance. An ordering key that causes the values of most columns to be written in a contiguous order will allow the selected compression algorithm (and codecs) to compress the data more effectively.
All columns in a table will be sorted based on the value of the specified ordering key, regardless of whether they are included in the key itself. For instance, if `CreationDate` is used as the key, the order of values in all other columns will correspond to the order of values in the `CreationDate` column. Multiple ordering keys can be specified - this will order with the same semantics as an `ORDER BY` clause in a `SELECT` query.
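The "same semantics as an `ORDER BY` clause" means keys are compared left to right. A small sketch using Python's tuple sorting (rows invented for illustration):

```python
rows = [
    {"PostTypeId": 2, "CreationDate": "2010-01-01"},
    {"PostTypeId": 1, "CreationDate": "2009-06-01"},
    {"PostTypeId": 1, "CreationDate": "2008-03-01"},
]

# Multiple ordering keys compare key by key, like a SQL ORDER BY clause:
# first by PostTypeId, then by CreationDate within equal PostTypeId values.
ordered = sorted(rows, key=lambda r: (r["PostTypeId"], r["CreationDate"]))
```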
Choosing an ordering key {#choosing-an-ordering-key}
For the considerations and steps in choosing an ordering key, using the posts table as an example, see here.
Data modeling techniques {#data-modeling-techniques} | {"source_file": "02_migrating-to-clickhouse-cloud.md"} | [
We recommend users migrating from BigQuery read the guide for modeling data in ClickHouse. This guide uses the same Stack Overflow dataset and explores multiple approaches using ClickHouse features.
Partitions {#partitions}
BigQuery users will be familiar with the concept of table partitioning for enhancing performance and manageability for large databases by dividing tables into smaller, more manageable pieces called partitions. This partitioning can be achieved using either a range on a specified column (e.g., dates), defined lists, or via hash on a key. This allows administrators to organize data based on specific criteria like date ranges or geographical locations.
Partitioning helps with improving query performance by enabling faster data access through partition pruning and more efficient indexing. It also helps maintenance tasks such as backups and data purges by allowing operations on individual partitions rather than the entire table. Additionally, partitioning can significantly improve the scalability of BigQuery databases by distributing the load across multiple partitions.
In ClickHouse, partitioning is specified on a table when it is initially defined via the `PARTITION BY` clause. This clause can contain a SQL expression on any column(s), the results of which will define which partition a row is sent to.

The data parts are logically associated with each partition on disk and can be queried in isolation. For the example below, we partition the posts table by year using the expression `toYear(CreationDate)`. As rows are inserted into ClickHouse, this expression will be evaluated against each row - rows are then routed to the resulting partition in the form of new data parts belonging to that partition.
```sql
CREATE TABLE posts
(
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
    `PostTypeId` Enum8('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime64(3, 'UTC'),
    ...
    `ClosedDate` DateTime64(3, 'UTC')
)
ENGINE = MergeTree
ORDER BY (PostTypeId, toDate(CreationDate), CreationDate)
PARTITION BY toYear(CreationDate)
```
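The routing performed by `PARTITION BY toYear(CreationDate)` can be sketched as evaluating the expression per row. A minimal illustration:

```python
from datetime import datetime

def partition_for(creation_date: str) -> int:
    # Mirrors PARTITION BY toYear(CreationDate): the expression is evaluated
    # against each row, and the row lands in a part belonging to that partition.
    return datetime.fromisoformat(creation_date).year
```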
Applications {#applications}
Partitioning in ClickHouse has similar applications as in BigQuery but with some subtle differences. More specifically: | {"source_file": "02_migrating-to-clickhouse-cloud.md"} | [
e2a5c90d-2997-41c5-bfbb-828589df6d5a | Applications {#applications}
Partitioning in ClickHouse has similar applications as in BigQuery but with some subtle differences. More specifically:
**Data management** - In ClickHouse, users should principally consider partitioning to be a data management feature, not a query optimization technique. By separating data logically based on a key, each partition can be operated on independently, e.g. deleted. This allows users to move partitions, and thus subsets of the data, between storage tiers efficiently over time, or to expire data and efficiently delete it from the cluster. For example, below we remove posts from 2008:

```sql
SELECT DISTINCT partition
FROM system.parts
WHERE `table` = 'posts'

┌─partition─┐
│ 2008      │
│ 2009      │
│ 2010      │
│ 2011      │
│ 2012      │
│ 2013      │
│ 2014      │
│ 2015      │
│ 2016      │
│ 2017      │
│ 2018      │
│ 2019      │
│ 2020      │
│ 2021      │
│ 2022      │
│ 2023      │
│ 2024      │
└───────────┘

17 rows in set. Elapsed: 0.002 sec.

ALTER TABLE posts
(DROP PARTITION '2008')

Ok.

0 rows in set. Elapsed: 0.103 sec.
```
**Query optimization** - While partitions can assist with query performance, this depends heavily on the access patterns. If queries target only a few partitions (ideally one), performance can potentially improve. This is typically only useful if the partitioning key is not in the primary key and you are filtering by it. However, queries that need to cover many partitions may perform worse than if no partitioning is used (as there may be more parts as a result of partitioning). The benefit of targeting a single partition will be even less pronounced, or non-existent, if the partitioning key is already an early entry in the primary key. Partitioning can also be used to optimize `GROUP BY` queries if values in each partition are unique. However, in general, users should ensure the primary key is optimized and only consider partitioning as a query optimization technique in exceptional cases where access patterns target a specific, predictable subset of the data, e.g. partitioning by day, with most queries hitting the last day.
Recommendations {#recommendations}
Users should consider partitioning to be a data management technique. It is ideal when data needs to be expired from the cluster when operating with time-series data, e.g. the oldest partition can simply be dropped.

Important: Ensure your partitioning key expression does not result in a high cardinality set, i.e. creating more than 100 partitions should be avoided. For example, do not partition your data by high cardinality columns such as client identifiers or names. Instead, make a client identifier or name the first column in the `ORDER BY` expression.
Internally, ClickHouse creates parts for inserted data. As more data is inserted, the number of parts increases. In order to prevent an excessively high number of parts, which would degrade query performance (because there are more files to read), parts are merged together in a background asynchronous process. If the number of parts exceeds a pre-configured limit, then ClickHouse will throw an exception on insert as a "too many parts" error. This should not happen under normal operation and only occurs if ClickHouse is misconfigured or used incorrectly, e.g. with many small inserts. Since parts are created per partition in isolation, increasing the number of partitions causes the number of parts to increase, i.e. it is a multiple of the number of partitions. High cardinality partitioning keys can, therefore, cause this error and should be avoided.
Materialized views vs projections {#materialized-views-vs-projections}
ClickHouse's concept of projections allows users to specify multiple `ORDER BY` clauses for a table.

In ClickHouse data modeling, we explore how materialized views can be used in ClickHouse to pre-compute aggregations, transform rows, and optimize queries for different access patterns. For the latter, we provided an example where the materialized view sends rows to a target table with a different ordering key to the original table receiving inserts.

For example, consider the following query:
```sql
SELECT avg(Score)
FROM comments
WHERE UserId = 8592047
┌──────────avg(Score)─┐
│ 0.18181818181818182 │
└─────────────────────┘
--highlight-next-line
1 row in set. Elapsed: 0.040 sec. Processed 90.38 million rows, 361.59 MB (2.25 billion rows/s., 9.01 GB/s.)
Peak memory usage: 201.93 MiB.
```
This query requires all 90m rows to be scanned (albeit quickly) as the `UserId` is not the ordering key. Previously, we solved this using a materialized view acting as a lookup for the `PostId`. The same problem can be solved with a projection. The command below adds a projection with `ORDER BY UserId`.
```sql
ALTER TABLE comments ADD PROJECTION comments_user_id (
SELECT * ORDER BY UserId
)
ALTER TABLE comments MATERIALIZE PROJECTION comments_user_id
```
Note that we have to first create the projection and then materialize it.
This latter command causes the data to be stored twice on disk in two different
orders. The projection can also be defined when the data is created, as shown below,
and will be automatically maintained as data is inserted.
```sql
CREATE TABLE comments
(
    `Id` UInt32,
    `PostId` UInt32,
    `Score` UInt16,
    `Text` String,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `UserDisplayName` LowCardinality(String),
    --highlight-begin
    PROJECTION comments_user_id
    (
        SELECT *
        ORDER BY UserId
    )
    --highlight-end
)
ENGINE = MergeTree
ORDER BY PostId
```
If the projection is created via an `ALTER` command, the creation is asynchronous when the `MATERIALIZE PROJECTION` command is issued. Users can confirm the progress of this operation with the following query, waiting for `is_done=1`.

```sql
SELECT
    parts_to_do,
    is_done,
    latest_fail_reason
FROM system.mutations
WHERE (`table` = 'comments') AND (command LIKE '%MATERIALIZE%')

┌─parts_to_do─┬─is_done─┬─latest_fail_reason─┐
│           1 │       0 │                    │
└─────────────┴─────────┴────────────────────┘

1 row in set. Elapsed: 0.003 sec.
```
If we repeat the above query, we can see performance has improved significantly
at the expense of additional storage.
```sql
SELECT avg(Score)
FROM comments
WHERE UserId = 8592047
┌──────────avg(Score)─┐
1. │ 0.18181818181818182 │
└─────────────────────┘
--highlight-next-line
1 row in set. Elapsed: 0.008 sec. Processed 16.36 thousand rows, 98.17 KB (2.15 million rows/s., 12.92 MB/s.)
Peak memory usage: 4.06 MiB.
```
With an
EXPLAIN
command
, we also confirm the projection was used to serve this query:
```sql
EXPLAIN indexes = 1
SELECT avg(Score)
FROM comments
WHERE UserId = 8592047
┌─explain─────────────────────────────────────────────┐
│ Expression ((Projection + Before ORDER BY)) │
│ Aggregating │
│ Filter │
│ ReadFromMergeTree (comments_user_id) │
│ Indexes: │
│ PrimaryKey │
│ Keys: │
│ UserId │
│ Condition: (UserId in [8592047, 8592047]) │
│ Parts: 2/2 │
│ Granules: 2/11360 │
└─────────────────────────────────────────────────────┘
11 rows in set. Elapsed: 0.004 sec.
```
When to use projections {#when-to-use-projections}
Projections are an appealing feature for new users as they are automatically maintained as data is inserted. Furthermore, queries can just be sent to a single table where the projections are exploited where possible to speed up the response time.

This is in contrast to materialized views, where the user has to select the appropriate optimized target table or rewrite their query, depending on the filters. This places greater emphasis on user applications and increases client-side complexity.

Despite these advantages, projections come with some inherent limitations which users should be aware of, and thus they should be deployed sparingly. For further details, see "materialized views versus projections".

We recommend using projections when:
- A complete reordering of the data is required. While the expression in the projection can, in theory, use a `GROUP BY`, materialized views are more effective for maintaining aggregates. The query optimizer is also more likely to exploit projections that use a simple reordering, i.e. `SELECT * ORDER BY x`. Users can select a subset of columns in this expression to reduce storage footprint.
- Users are comfortable with the associated increase in storage footprint and overhead of writing data twice. Test the impact on insertion speed and evaluate the storage overhead.
Rewriting BigQuery queries in ClickHouse {#rewriting-bigquery-queries-in-clickhouse}
The following provides example queries comparing BigQuery to ClickHouse. This list aims to demonstrate how to exploit ClickHouse features to significantly simplify queries. The examples here use the full Stack Overflow dataset (up to April 2024).
Users (with more than 10 questions) who receive the most views:
BigQuery
ClickHouse
```sql
SELECT
OwnerDisplayName,
sum(ViewCount) AS total_views
FROM stackoverflow.posts
WHERE (PostTypeId = 'Question') AND (OwnerDisplayName != '')
GROUP BY OwnerDisplayName
HAVING count() > 10
ORDER BY total_views DESC
LIMIT 5
┌─OwnerDisplayName─┬─total_views─┐
1. │ Joan Venge │ 25520387 │
2. │ Ray Vega │ 21576470 │
3. │ anon │ 19814224 │
4. │ Tim │ 19028260 │
5. │ John │ 17638812 │
└──────────────────┴─────────────┘
5 rows in set. Elapsed: 0.076 sec. Processed 24.35 million rows, 140.21 MB (320.82 million rows/s., 1.85 GB/s.)
Peak memory usage: 323.37 MiB.
```
Which tags receive the most views:
BigQuery
ClickHouse
```sql
-- ClickHouse
SELECT
arrayJoin(arrayFilter(t -> (t != ''), splitByChar('|', Tags))) AS tags,
sum(ViewCount) AS views
FROM stackoverflow.posts
GROUP BY tags
ORDER BY views DESC
LIMIT 5
┌─tags───────┬──────views─┐
1. │ javascript │ 8190916894 │
2. │ python │ 8175132834 │
3. │ java │ 7258379211 │
4. │ c# │ 5476932513 │
5. │ android │ 4258320338 │
└────────────┴────────────┘
5 rows in set. Elapsed: 0.318 sec. Processed 59.82 million rows, 1.45 GB (188.01 million rows/s., 4.54 GB/s.)
Peak memory usage: 567.41 MiB.
```
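The heart of the tags query is `arrayFilter(t -> t != '', splitByChar('|', Tags))`, which turns the pipe-delimited `Tags` string into an array of non-empty tags. Its logic translates directly:

```python
def extract_tags(tags_field: str) -> list[str]:
    # Python equivalent of arrayFilter(t -> t != '', splitByChar('|', Tags)):
    # split on '|' and drop the empty strings produced by leading/trailing pipes.
    return [t for t in tags_field.split("|") if t != ""]
```

`arrayJoin` then plays the role of flattening each row's tag array into one output row per tag, so the `GROUP BY` aggregates per tag.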
Aggregate functions {#aggregate-functions}
Where possible, users should exploit ClickHouse aggregate functions. Below, we show the use of the `argMax` function to compute the most viewed question of each year.
BigQuery
ClickHouse
```sql
-- ClickHouse
SELECT
toYear(CreationDate) AS Year,
argMax(Title, ViewCount) AS MostViewedQuestionTitle,
max(ViewCount) AS MaxViewCount
FROM stackoverflow.posts
WHERE PostTypeId = 'Question'
GROUP BY Year
ORDER BY Year ASC
FORMAT Vertical
Row 1:
──────
Year: 2008
MostViewedQuestionTitle: How to find the index for a given item in a list?
MaxViewCount: 6316987 | {"source_file": "02_migrating-to-clickhouse-cloud.md"} | [
Row 2:
──────
Year: 2009
MostViewedQuestionTitle: How do I undo the most recent local commits in Git?
MaxViewCount: 13962748
...
Row 16:
───────
Year: 2023
MostViewedQuestionTitle: How do I solve "error: externally-managed-environment" every time I use pip 3?
MaxViewCount: 506822
Row 17:
───────
Year: 2024
MostViewedQuestionTitle: Warning "Third-party cookie will be blocked. Learn more in the Issues tab"
MaxViewCount: 66975
17 rows in set. Elapsed: 0.225 sec. Processed 24.35 million rows, 1.86 GB (107.99 million rows/s., 8.26 GB/s.)
Peak memory usage: 377.26 MiB.
```
Conditionals and arrays {#conditionals-and-arrays}
Conditional and array functions make queries significantly simpler. The following query computes the tags (with more than 10000 occurrences) with the largest percentage increase from 2022 to 2023. Note how the following ClickHouse query is succinct thanks to conditionals, array functions, and the ability to reuse aliases in the `HAVING` and `SELECT` clauses.
BigQuery
ClickHouse
```sql
SELECT
arrayJoin(arrayFilter(t -> (t != ''), splitByChar('|', Tags))) AS tag,
countIf(toYear(CreationDate) = 2023) AS count_2023,
countIf(toYear(CreationDate) = 2022) AS count_2022,
((count_2023 - count_2022) / count_2022) * 100 AS percent_change
FROM stackoverflow.posts
WHERE toYear(CreationDate) IN (2022, 2023)
GROUP BY tag
HAVING (count_2022 > 10000) AND (count_2023 > 10000)
ORDER BY percent_change DESC
LIMIT 5
┌─tag─────────┬─count_2023─┬─count_2022─┬──────percent_change─┐
│ next.js │ 13788 │ 10520 │ 31.06463878326996 │
│ spring-boot │ 16573 │ 17721 │ -6.478189718413183 │
│ .net │ 11458 │ 12968 │ -11.644046884639112 │
│ azure │ 11996 │ 14049 │ -14.613139725247349 │
│ docker │ 13885 │ 16877 │ -17.72826924216389 │
└─────────────┴────────────┴────────────┴─────────────────────┘
5 rows in set. Elapsed: 0.096 sec. Processed 5.08 million rows, 155.73 MB (53.10 million rows/s., 1.63 GB/s.)
Peak memory usage: 410.37 MiB.
```
This concludes our basic guide for users migrating from BigQuery to ClickHouse. We recommend that users migrating from BigQuery read the guide for modeling data in ClickHouse to learn more about advanced ClickHouse features.
{"source_file": "02_migrating-to-clickhouse-cloud.md"}
slug: /migrations/bigquery
title: 'BigQuery'
pagination_prev: null
pagination_next: null
description: 'Landing page for the BigQuery migrations section'
keywords: ['BigQuery', 'migration']
doc_type: 'landing-page'
In this section of the docs, learn more about the similarities and differences between BigQuery and ClickHouse Cloud, as well as why you might want to migrate and how to do so.
| Page | Description |
|------|-------------|
| BigQuery vs ClickHouse Cloud | The way resources are organized in ClickHouse Cloud is similar to BigQuery's resource hierarchy. We describe the specific differences in this article. |
| Migrating from BigQuery to ClickHouse Cloud | Learn about why you might want to migrate from BigQuery to ClickHouse Cloud. |
| Loading Data | A guide showing you how to migrate data from BigQuery to ClickHouse. |
{"source_file": "index.md"}
sidebar_label: 'Overview'
slug: /migrations/redshift-overview
description: 'Migrating from Amazon Redshift to ClickHouse'
keywords: ['Redshift']
title: 'Comparing ClickHouse Cloud and Amazon Redshift'
doc_type: 'guide'
Amazon Redshift to ClickHouse migration
This document provides an introduction to migrating data from Amazon
Redshift to ClickHouse.
## Introduction {#introduction}
Amazon Redshift is a cloud data warehouse that provides reporting and
analytics capabilities for structured and semi-structured data. It was
designed to handle analytical workloads on big data sets using
column-oriented database principles similar to ClickHouse. As part of the
AWS offering, it is often the default solution AWS users turn to for their
analytical data needs.
While attractive to existing AWS users due to its tight integration with the
Amazon ecosystem, Redshift users that adopt it to power real-time analytics
applications find themselves in need of a more optimized solution for this
purpose. As a result, they increasingly turn to ClickHouse to benefit from
superior query performance and data compression, either as a replacement or
a "speed layer" deployed alongside existing Redshift workloads.
## ClickHouse vs Redshift {#clickhouse-vs-redshift}
For users heavily invested in the AWS ecosystem, Redshift represents a
natural choice when faced with data warehousing needs. Redshift differs from
ClickHouse in this important aspect – it optimizes its engine for data
warehousing workloads requiring complex reporting and analytical queries.
Across all deployment modes, the following two limitations make it difficult
to use Redshift for real-time analytical workloads:
* Redshift compiles code for each query execution plan, which adds significant
  overhead to first-time query execution. This overhead can be justified when
  query patterns are predictable and compiled execution plans can be stored in
  a query cache. However, this introduces challenges for interactive
  applications with variable queries. Even when Redshift is able to exploit
  this code compilation cache, ClickHouse is faster on most queries. See
  "ClickBench".
* Redshift limits concurrency to 50 across all queues, which (while adequate
  for BI) makes it inappropriate for highly concurrent analytical applications.
Conversely, while ClickHouse can also be utilized for complex analytical queries,
it is optimized for real-time analytical workloads, either powering applications
or acting as a warehouse acceleration layer. As a result, Redshift users typically
replace or augment Redshift with ClickHouse for the following reasons:
{"source_file": "01_overview.md"}
| Advantage | Description |
|-----------|-------------|
| Lower query latencies | ClickHouse achieves lower query latencies, including for varied query patterns, under high concurrency and while subjected to streaming inserts. Even when your query misses a cache, which is inevitable in interactive user-facing analytics, ClickHouse can still process it fast. |
| Higher concurrent query limits | ClickHouse places much higher limits on concurrent queries, which is vital for real-time application experiences. In ClickHouse, self-managed as well as cloud, you can scale up your compute allocation to achieve the concurrency your application needs for each service. The level of permitted query concurrency is configurable in ClickHouse, with ClickHouse Cloud defaulting to a value of 1000. |
| Superior data compression | ClickHouse offers superior data compression, which allows users to reduce their total storage (and thus cost) or persist more data at the same cost and derive more real-time insights from their data. See "ClickHouse vs Redshift Storage Efficiency" below. |
{"source_file": "01_overview.md"}
sidebar_label: 'Migration guide'
slug: /migrations/redshift/migration-guide
description: 'Migrating from Amazon Redshift to ClickHouse'
keywords: ['Redshift']
title: 'Amazon Redshift to ClickHouse migration guide'
doc_type: 'guide'
import MigrationGuide from '@site/docs/integrations/data-ingestion/redshift/_snippets/_migration_guide.md'
Amazon Redshift to ClickHouse migration guide
{"source_file": "02_migration_guide.md"}
sidebar_label: 'SQL translation reference'
slug: /migrations/redshift/sql-translation-reference
description: 'SQL translation reference for Amazon Redshift to ClickHouse'
keywords: ['Redshift']
title: 'Amazon Redshift SQL translation guide'
doc_type: 'reference'
Amazon Redshift SQL translation guide
## Data types {#data-types}
Users moving data between ClickHouse and Redshift will immediately notice
that ClickHouse offers a more extensive range of types, which are also less
restrictive. While Redshift requires users to specify possible string
lengths, even if variable, ClickHouse removes this restriction and burden
from the user by storing strings without encoding as bytes. The ClickHouse
String type thus has no limits or length specification requirements.
Furthermore, users can exploit Arrays, Tuples, and Enums - absent from
Redshift as first-class citizens (although Arrays/Structs can be imitated with
`SUPER`) and a common frustration of users. ClickHouse additionally allows the
persistence, either at query time or even in a table, of aggregation states.
This enables data to be pre-aggregated, typically using a materialized view,
and can dramatically improve query performance for common queries.
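As a rough sketch of this idea (the table and column names below are hypothetical, not taken from this guide), a materialized view can maintain aggregation states that are merged at query time:

```sql
-- Hypothetical example: maintain per-day unique-visitor state.
CREATE TABLE events (day Date, user_id UInt64) ENGINE = MergeTree ORDER BY day;

CREATE TABLE daily_uniques
(
    day Date,
    uniques AggregateFunction(uniq, UInt64)
)
ENGINE = AggregatingMergeTree ORDER BY day;

-- The materialized view writes partial aggregation states on every insert.
CREATE MATERIALIZED VIEW daily_uniques_mv TO daily_uniques AS
SELECT day, uniqState(user_id) AS uniques
FROM events
GROUP BY day;

-- At query time, merge the partial aggregation states.
SELECT day, uniqMerge(uniques) AS unique_users
FROM daily_uniques
GROUP BY day;
```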
Below we map the equivalent ClickHouse type for each Redshift type:
{"source_file": "03_sql_translation_reference.md"}
| Redshift | ClickHouse |
|----------|------------|
| SMALLINT | Int16* |
| INTEGER | Int32* |
| BIGINT | Int64* |
| DECIMAL | UInt128, UInt256, Int128, Int256, Decimal(P, S), Decimal32(S), Decimal64(S), Decimal128(S), Decimal256(S) (high precision and ranges possible) |
| REAL | Float32 |
{"source_file": "03_sql_translation_reference.md"}
| DOUBLE PRECISION | Float64 |
| BOOLEAN | Bool |
| CHAR | String, FixedString |
| VARCHAR** | String |
| DATE | Date32 |
| TIMESTAMP | DateTime, DateTime64 |
| TIMESTAMPTZ | DateTime, DateTime64 |
{"source_file": "03_sql_translation_reference.md"}
| GEOMETRY | Geo Data Types |
| GEOGRAPHY | Geo Data Types (less developed e.g. no coordinate systems - can be emulated with functions) |
| HLLSKETCH | AggregateFunction(uniqHLL12, X) |
| SUPER | Tuple, Nested, Array, JSON, Map |
| TIME | DateTime, DateTime64 |
| TIMETZ | DateTime, DateTime64 |
| VARBYTE** | String combined with Bit and Encoding functions |
{"source_file": "03_sql_translation_reference.md"}
\* ClickHouse additionally supports unsigned integers with extended ranges, i.e. UInt8, UInt16, UInt32 and UInt64.

\*\* ClickHouse's String type is unlimited by default but can be constrained to specific lengths using Constraints.
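As an illustration (the `users` table here is hypothetical), a length limit comparable to Redshift's `VARCHAR(255)` can be expressed with a constraint:

```sql
-- Hypothetical example: cap a String column at 255 bytes.
CREATE TABLE users
(
    name String,
    CONSTRAINT name_max_len CHECK length(name) <= 255
)
ENGINE = MergeTree ORDER BY name;
```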
## DDL syntax {#compression}

### Sorting keys {#sorting-keys}
Both ClickHouse and Redshift have the concept of a "sorting key", which defines
how data is sorted when being stored. Redshift defines the sorting key using the
`SORTKEY` clause:

```sql
CREATE TABLE some_table(...) SORTKEY (column1, column2)
```
Comparatively, ClickHouse uses an `ORDER BY` clause to specify the sort order:

```sql
CREATE TABLE some_table(...) ENGINE = MergeTree ORDER BY (column1, column2)
```
In most cases, you can use the same sorting key columns and order in ClickHouse
as Redshift, assuming you are using the default `COMPOUND` type. When data is
added to Redshift, you should run the `VACUUM` and `ANALYZE` commands to re-sort
newly added data and update the statistics for the query planner - otherwise, the
unsorted space grows. No such process is required for ClickHouse.
Redshift supports a couple of convenience features for sorting keys. The first is
automatic sorting keys (using `SORTKEY AUTO`). While this may be appropriate for
getting started, explicit sorting keys ensure the best performance and storage
efficiency when the sorting key is optimal. The second is the `INTERLEAVED` sort
key, which gives equal weight to a subset of columns in the sort key to improve
performance when a query uses one or more secondary sort columns. ClickHouse
supports explicit projections, which achieve the same end-result with a slightly
different setup.
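A minimal sketch of such a projection (the table and columns are hypothetical):

```sql
-- Hypothetical example: the base table is sorted by (user_id, timestamp);
-- the projection stores a copy sorted by url to serve url-filtered queries.
CREATE TABLE page_hits
(
    user_id UInt64,
    timestamp DateTime,
    url String,
    PROJECTION url_order
    (
        SELECT * ORDER BY url
    )
)
ENGINE = MergeTree ORDER BY (user_id, timestamp);
```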
Users should be aware that the "primary key" concept represents different things
in ClickHouse and Redshift. In Redshift, the primary key resembles the traditional
RDBMS concept intended to enforce constraints. However, it is not strictly
enforced in Redshift and instead acts as a hint for the query planner and data
distribution among nodes. In ClickHouse, the primary key denotes the columns used
to construct the sparse primary index, which ensures the data is ordered on disk,
maximizing compression while avoiding pollution of the primary index and wasted
memory.
{"source_file": "03_sql_translation_reference.md"}
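To illustrate the ClickHouse side (again with a hypothetical table), the primary key can be declared as a prefix of the sorting key, keeping the sparse index small:

```sql
-- Hypothetical example: data is ordered by three columns on disk, but only
-- the first two participate in the in-memory sparse primary index.
CREATE TABLE site_events
(
    site_id UInt32,
    event_date Date,
    user_id UInt64
)
ENGINE = MergeTree
ORDER BY (site_id, event_date, user_id)
PRIMARY KEY (site_id, event_date);
```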
sidebar_label: 'Using clickhouse-local'
keywords: ['clickhouse', 'migrate', 'migration', 'migrating', 'data', 'etl', 'elt', 'clickhouse-local', 'clickhouse-client']
slug: /cloud/migration/clickhouse-local
title: 'Migrating to ClickHouse using clickhouse-local'
description: 'Guide showing how to migrate to ClickHouse using clickhouse-local'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
import AddARemoteSystem from '@site/docs/_snippets/_add_remote_ip_access_list_detail.md';
import ch_local_01 from '@site/static/images/integrations/migration/ch-local-01.png';
import ch_local_02 from '@site/static/images/integrations/migration/ch-local-02.png';
import ch_local_03 from '@site/static/images/integrations/migration/ch-local-03.png';
import ch_local_04 from '@site/static/images/integrations/migration/ch-local-04.png';
Migrating to ClickHouse using clickhouse-local
You can use ClickHouse, or to be more specific, `clickhouse-local`, as an ETL tool for migrating data from your current database system to ClickHouse Cloud, as long as there is either a ClickHouse-provided integration engine or table function for your current database system, or a vendor-provided JDBC or ODBC driver available.

We sometimes call this migration method a "pivot" method, because it uses an intermediate pivot point or hop to move the data from the source database to the destination database. For example, this method may be required if only outbound connections are allowed from within a private or internal network due to security requirements, and therefore you need to pull the data from the source database with clickhouse-local, then push the data into a destination ClickHouse database, with clickhouse-local acting as the pivot point.
ClickHouse provides integration engines and table functions (that create integration engines on-the-fly) for MySQL, PostgreSQL, MongoDB and SQLite. For all other popular database systems, there is a JDBC or ODBC driver available from the vendor of the system.
## What is clickhouse-local? {#what-is-clickhouse-local}

Typically, ClickHouse is run in the form of a cluster, where several instances of the ClickHouse database engine are running in a distributed fashion on different servers. On a single server, the ClickHouse database engine is run as part of the `clickhouse-server` program. Database access (paths, users, security, ...) is configured with a server configuration file.

The `clickhouse-local` tool allows you to use the ClickHouse database engine isolated in a command-line utility fashion for blazing-fast SQL data processing on an ample amount of inputs and outputs, without having to configure and start a ClickHouse server.
{"source_file": "01_clickhouse-local-etl.md"}
## Installing clickhouse-local {#installing-clickhouse-local}
You need a host machine for `clickhouse-local` that has network access to both your current source database system and your ClickHouse Cloud target service.

On that host machine, download the appropriate build of `clickhouse-local` based on your computer's operating system. The simplest way to download it locally is to run the following command:

```bash
curl https://clickhouse.com/ | sh
```

Run `clickhouse-local` (it will just print its version):

```bash
./clickhouse-local
```

:::info Important
The examples throughout this guide use the Linux commands for running `clickhouse-local` (`./clickhouse-local`).
To run `clickhouse-local` on a Mac, use `./clickhouse local`.
:::
:::tip Add the remote system to your ClickHouse Cloud service IP Access List
In order for the `remoteSecure` function to connect to your ClickHouse Cloud service, the IP address of the remote system needs to be allowed by the IP Access List. Expand "Manage your IP Access List" below this tip for more information.
:::
## Example 1: Migrating from MySQL to ClickHouse Cloud with an Integration engine {#example-1-migrating-from-mysql-to-clickhouse-cloud-with-an-integration-engine}

We will use the integration table engine (created on-the-fly by the mysql table function) for reading data from the source MySQL database, and we will use the remoteSecure table function for writing the data into a destination table on your ClickHouse Cloud service.

### On the destination ClickHouse Cloud service: {#on-the-destination-clickhouse-cloud-service}

Create the destination database: {#create-the-destination-database}

```sql
CREATE DATABASE db
```

Create a destination table that has a schema equivalent to the MySQL table: {#create-a-destination-table-that-has-a-schema-equivalent-to-the-mysql-table}

```sql
CREATE TABLE db.table ...
```

:::note
The schema of the ClickHouse Cloud destination table and the schema of the source MySQL table must be aligned (the column names and order must be the same, and the column data types must be compatible).
:::

### On the clickhouse-local host machine: {#on-the-clickhouse-local-host-machine}

Run clickhouse-local with the migration query: {#run-clickhouse-local-with-the-migration-query}

```sql
./clickhouse-local --query "
INSERT INTO FUNCTION
remoteSecure('HOSTNAME.clickhouse.cloud:9440', 'db.table', 'default', 'PASS')
SELECT * FROM mysql('host:port', 'database', 'table', 'user', 'password');"
```

:::note
No data is stored locally on the `clickhouse-local` host machine. Instead, the data is read from the source MySQL table and then immediately written to the destination table on the ClickHouse Cloud service.
:::
{"source_file": "01_clickhouse-local-etl.md"}
## Example 2: Migrating from MySQL to ClickHouse Cloud with the JDBC bridge {#example-2-migrating-from-mysql-to-clickhouse-cloud-with-the-jdbc-bridge}
We will use the JDBC integration table engine (created on-the-fly by the jdbc table function) together with the ClickHouse JDBC Bridge and the MySQL JDBC driver for reading data from the source MySQL database, and we will use the remoteSecure table function for writing the data into a destination table on your ClickHouse Cloud service.

### On the destination ClickHouse Cloud service: {#on-the-destination-clickhouse-cloud-service-1}

Create the destination database: {#create-the-destination-database-1}

```sql
CREATE DATABASE db
```
{"source_file": "01_clickhouse-local-etl.md"}
title: 'Using object storage'
description: 'Moving data from object storage to ClickHouse Cloud'
keywords: ['object storage', 's3', 'azure blob', 'gcs', 'migration']
slug: /integrations/migration/object-storage-to-clickhouse
doc_type: 'guide'
import Image from '@theme/IdealImage';
import object_storage_01 from '@site/static/images/integrations/migration/object-storage-01.png';
Move data from cloud object storage to ClickHouse Cloud
If you use a Cloud Object Storage as a data lake and wish to import this data into ClickHouse Cloud, or if your current database system is able to directly offload data into a Cloud Object Storage, then you can use one of the table functions for migrating data stored in Cloud Object Storage into a ClickHouse Cloud table:

- s3 or s3Cluster
- gcs
- azureBlobStorage

If your current database system is not able to directly offload data into a Cloud Object Storage, you could use a third-party ETL/ELT tool or clickhouse-local for moving data from your current database system to Cloud Object Storage, in order to migrate that data in a second step into a ClickHouse Cloud table.

Although this is a two-step process (offload data into a Cloud Object Storage, then load into ClickHouse), the advantage is that it scales to petabytes thanks to ClickHouse Cloud's solid support of highly parallel reads from Cloud Object Storage. You can also leverage sophisticated and compressed formats like Parquet.

There is a blog article with concrete code examples showing how you can get data into ClickHouse Cloud using S3.
{"source_file": "03_object-storage-to-clickhouse.md"}
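A minimal sketch of such an import (the bucket path and table name are hypothetical, and the destination table is assumed to already exist):

```sql
-- Hypothetical example: load Parquet files from S3 into an existing table.
INSERT INTO db.table
SELECT *
FROM s3('https://my-bucket.s3.amazonaws.com/data/*.parquet', 'Parquet');
```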
sidebar_label: 'Using a third-party ETL tool'
keywords: ['clickhouse', 'migrate', 'migration', 'migrating', 'data', 'etl', 'elt', 'clickhouse-local', 'clickhouse-client']
slug: /cloud/migration/etl-tool-to-clickhouse
title: 'Using a third-party ETL Tool'
description: 'Page describing how to use a 3rd-party ETL tool with ClickHouse'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import third_party_01 from '@site/static/images/integrations/migration/third-party-01.png';
Using a 3rd-party ETL Tool
A great option for moving data from an external data source into ClickHouse is to use one of the many popular ETL and ELT tools. We have docs that cover the following:

- Airbyte
- dbt
- Vector

But there are many other ETL/ELT tools that integrate with ClickHouse, so check your favorite tool's documentation for details.
{"source_file": "02_etl-tool-to-clickhouse.md"}
sidebar_label: 'ClickHouse OSS'
slug: /cloud/migration/clickhouse-to-cloud
title: 'Migrating between self-managed ClickHouse and ClickHouse Cloud'
description: 'Page describing how to migrate between self-managed ClickHouse and ClickHouse Cloud'
doc_type: 'guide'
keywords: ['migration', 'ClickHouse Cloud', 'OSS', 'Migrate self-managed to Cloud']
import Image from '@theme/IdealImage';
import AddARemoteSystem from '@site/docs/_snippets/_add_remote_ip_access_list_detail.md';
import self_managed_01 from '@site/static/images/integrations/migration/self-managed-01.png';
import self_managed_02 from '@site/static/images/integrations/migration/self-managed-02.png';
import self_managed_03 from '@site/static/images/integrations/migration/self-managed-03.png';
import self_managed_04 from '@site/static/images/integrations/migration/self-managed-04.png';
import self_managed_05 from '@site/static/images/integrations/migration/self-managed-05.png';
import self_managed_06 from '@site/static/images/integrations/migration/self-managed-06.png';
Migrating between self-managed ClickHouse and ClickHouse Cloud
This guide will show how to migrate from a self-managed ClickHouse server to ClickHouse Cloud, and also how to migrate between ClickHouse Cloud services. The `remoteSecure` function is used in `SELECT` and `INSERT` queries to allow access to remote ClickHouse servers, which makes migrating tables as simple as writing an `INSERT INTO` query with an embedded `SELECT`.
## Migrating from Self-managed ClickHouse to ClickHouse Cloud {#migrating-from-self-managed-clickhouse-to-clickhouse-cloud}
:::note
Regardless of whether your source table is sharded and/or replicated, on ClickHouse Cloud you just create a destination table (you can leave out the Engine parameter for this table; it will automatically be a ReplicatedMergeTree table), and ClickHouse Cloud will automatically take care of vertical and horizontal scaling. There is no need for you to think about how to replicate and shard the table.
:::
In this example, the self-managed ClickHouse server is the *source* and the ClickHouse Cloud service is the *destination*.
## Overview {#overview}

The process is:

1. Add a read-only user to the source service
1. Duplicate the source table structure on the destination service
1. Pull the data from source to destination, or push the data from the source, depending on the network availability of the source
1. Remove the source server from the IP Access List on the destination (if applicable)
1. Remove the read-only user from the source service
Migration of tables from one system to another: {#migration-of-tables-from-one-system-to-another}
This example migrates one table from a self-managed ClickHouse server to ClickHouse Cloud.
{"source_file": "01_clickhouse-to-cloud.md"}
## On the source ClickHouse system (the system that currently hosts the data) {#on-the-source-clickhouse-system-the-system-that-currently-hosts-the-data}
Add a read-only user that can read the source table (`db.table` in this example):

```sql
CREATE USER exporter
IDENTIFIED WITH SHA256_PASSWORD BY 'password-here'
SETTINGS readonly = 1;
```

```sql
GRANT SELECT ON db.table TO exporter;
```

Copy the table definition:

```sql
SELECT create_table_query
FROM system.tables
WHERE database = 'db' AND table = 'table'
```
## On the destination ClickHouse Cloud system: {#on-the-destination-clickhouse-cloud-system}

Create the destination database:

```sql
CREATE DATABASE db
```

Using the CREATE TABLE statement from the source, create the destination.

:::tip
Change the ENGINE to ReplicatedMergeTree without any parameters when you run the CREATE statement. ClickHouse Cloud always replicates tables and provides the correct parameters. Keep the `ORDER BY`, `PRIMARY KEY`, `PARTITION BY`, `SAMPLE BY`, `TTL`, and `SETTINGS` clauses though.
:::

```sql
CREATE TABLE db.table ...
```
Use the `remoteSecure` function to pull the data from the self-managed source:

```sql
INSERT INTO db.table SELECT * FROM
remoteSecure('source-hostname', db, table, 'exporter', 'password-here')
```

:::note
If the source system is not available from outside networks, you can push the data rather than pulling it, as the `remoteSecure` function works for both selects and inserts. See the next option.
:::

Use the `remoteSecure` function to push the data to the ClickHouse Cloud service:

:::tip Add the remote system to your ClickHouse Cloud service IP Access List
In order for the `remoteSecure` function to connect to your ClickHouse Cloud service, the IP address of the remote system will need to be allowed by the IP Access List. Expand "Manage your IP Access List" below this tip for more information.
:::

```sql
INSERT INTO FUNCTION
remoteSecure('HOSTNAME.clickhouse.cloud:9440', 'db.table',
'default', 'PASS') SELECT * FROM db.table
```
## Migrating between ClickHouse Cloud services {#migrating-between-clickhouse-cloud-services}

Some example uses for migrating data between ClickHouse Cloud services:

- Migrating data from a restored backup
- Copying data from a development service to a staging service (or staging to production)

In this example there are two ClickHouse Cloud services, and they will be referred to as *source* and *destination*. The data will be pulled from the source to the destination. Although you could push if you like, pulling is shown as it uses a read-only user.
{"source_file": "01_clickhouse-to-cloud.md"}
-0.024999601766467094,
-0.09704513847827911,
-0.09696996212005615,
0.013442691415548325,
-0.04684655740857124,
-0.05906296148896217,
0.034073565155267715,
-0.05258218199014664,
-0.022303910925984383,
0.0913325622677803,
0.02423485927283764,
-0.04890173301100731,
0.12741538882255554,
-0.090... |
9d8d3602-a72c-40b6-b955-14a249311750 | There are a few steps in the migration:
1. Identify one ClickHouse Cloud service to be the source, and the other as the destination
1. Add a read-only user to the source service
1. Duplicate the source table structure on the destination service
1. Temporarily allow IP access to the source service
1. Copy the data from source to destination
1. Re-establish the IP Access List on the destination
1. Remove the read-only user from the source service
Add a read-only user to the source service {#add-a-read-only-user-to-the-source-service}
Add a read-only user that can read the source table (`db.table` in this example):

```sql
CREATE USER exporter
IDENTIFIED WITH SHA256_PASSWORD BY 'password-here'
SETTINGS readonly = 1;
```

```sql
GRANT SELECT ON db.table TO exporter;
```
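To confirm the new user is restricted as intended, you can inspect its grants before exporting (a quick sanity check using the `exporter` user created above):

```sql
SHOW GRANTS FOR exporter;
-- The only grant listed should be: GRANT SELECT ON db.table TO exporter
```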
Copy the table definition:

```sql
select create_table_query
from system.tables
where database = 'db' and table = 'table'
```
Duplicate the table structure on the destination service {#duplicate-the-table-structure-on-the-destination-service}
On the destination, create the database if it is not there already:

```sql
CREATE DATABASE db
```

Then, using the CREATE TABLE statement from the source, create the table on the destination with the output of the `select create_table_query...` from the source:

```sql
CREATE TABLE db.table ...
```
Allow remote access to the source service {#allow-remote-access-to-the-source-service}
In order to pull data from the source to the destination the source service must allow connections. Temporarily disable the "IP Access List" functionality on the source service.
:::tip
If you will continue to use the source ClickHouse Cloud service then export the existing IP Access list to a JSON file before switching to allow access from anywhere; this will allow you to import the access list after the data is migrated.
:::
Modify the allow list and allow access from Anywhere temporarily. See the IP Access List docs for details.
Copy the data from source to destination {#copy-the-data-from-source-to-destination}
Use the `remoteSecure` function to pull the data from the source ClickHouse Cloud service. Connect to the destination and run this command on the destination ClickHouse Cloud service:

```sql
INSERT INTO db.table SELECT * FROM
remoteSecure('source-hostname', db, table, 'exporter', 'password-here')
```
Verify the data in the destination service
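One simple verification is to compare row counts on both sides. The sketch below runs on the destination, reusing the `exporter` credentials from this guide:

```sql
SELECT
    (SELECT count() FROM db.table) AS destination_rows,
    (SELECT count()
     FROM remoteSecure('source-hostname', db, table, 'exporter', 'password-here')) AS source_rows;
-- The two counts should match once the copy has completed
```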
Re-establish the IP access list on the source {#re-establish-the-ip-access-list-on-the-source}
If you exported the access list earlier, then you can re-import it using Share, otherwise re-add your entries to the access list.
Remove the read-only `exporter` user {#remove-the-read-only-exporter-user}

```sql
DROP USER exporter
```
Switch the service IP Access List to limit access
sidebar_label: 'Overview'
slug: /migrations/elastic-overview
description: 'Migrating from Elasticsearch to ClickHouse'
keywords: ['Elasticsearch']
title: 'Migrate from Elasticsearch to ClickHouse'
show_related_blogs: true
doc_type: 'landing-page'
Elasticsearch to ClickHouse migration
For observability use cases, see the Elasticsearch to ClickStack migration docs.
sidebar_label: 'Overview'
slug: /migrations/snowflake-overview
description: 'Migrating from Snowflake to ClickHouse'
keywords: ['Snowflake']
title: 'Migrate from Snowflake to ClickHouse'
show_related_blogs: true
doc_type: 'guide'
import snowflake_architecture from '@site/static/images/cloud/onboard/discover/use_cases/snowflake_architecture.png';
import cloud_architecture from '@site/static/images/cloud/onboard/discover/use_cases/cloud_architecture.png';
import Image from '@theme/IdealImage';
Snowflake to ClickHouse migration
This document provides an introduction to migrating data from Snowflake to ClickHouse.
Snowflake is a cloud data warehouse primarily focused on migrating legacy on-premise
data warehousing workloads to the cloud. It is well-optimized for executing
long-running reports at scale. As datasets migrate to the cloud, data owners start
thinking about how else they can extract value from this data, including using
these datasets to power real-time applications for internal and external use cases.
When this happens, they often realize they need a database optimized for
powering real-time analytics, like ClickHouse.
Comparison {#comparison}
In this section, we'll compare the key features of ClickHouse and Snowflake.
Similarities {#similarities}
Snowflake is a cloud-based data warehousing platform that provides a scalable
and efficient solution for storing, processing, and analyzing large amounts of
data.
Like ClickHouse, Snowflake is not built on existing technologies but relies
on its own SQL query engine and custom architecture.
Snowflake’s architecture is described as a hybrid between a shared-storage (shared-disk) architecture and a shared-nothing architecture. A shared-storage architecture is one where data is accessible from all compute nodes using object stores such as S3. A shared-nothing architecture is one where each compute node stores a portion of the entire data set locally to respond to queries. This, in theory, delivers the best of both models: the simplicity of a shared-disk architecture and the scalability of a shared-nothing architecture.
This design fundamentally relies on object storage as the primary storage medium,
which scales almost infinitely under concurrent access while providing high
resilience and scalable throughput guarantees.
The image below from
docs.snowflake.com
shows this architecture:
Conversely, as an open-source and cloud-hosted product, ClickHouse can be deployed
in both shared-disk and shared-nothing architectures. The latter is typical for
self-managed deployments. While allowing for CPU and memory to be easily scaled,
shared-nothing configurations introduce classic data management challenges and
overhead of data replication, especially during membership changes.
For this reason, ClickHouse Cloud utilizes a shared-storage architecture that is
conceptually similar to Snowflake. Data is stored once in an object store
(single copy), such as S3 or GCS, providing virtually infinite storage with
strong redundancy guarantees. Each node has access to this single copy of the
data as well as its own local SSDs for cache purposes. Nodes can, in turn, be
scaled to provide additional CPU and memory resources as required. Like Snowflake,
S3’s scalability properties address the classic limitation of shared-disk
architectures (disk I/O and network bottlenecks) by ensuring the I/O throughput
available to current nodes in a cluster is not impacted as additional nodes are
added.
Differences {#differences}
Aside from the underlying storage formats and query engines, these architectures
differ in a few subtle ways:
Compute resources in Snowflake are provided through a concept of
warehouses
.
These consist of a number of nodes, each of a set size. While Snowflake
doesn't publish the specific architecture of their warehouses, it is
generally understood
that each node consists of 8 vCPUs, 16GiB, and 200GB of local storage (for cache).
The number of nodes depends on a t-shirt size, e.g. an x-small has one node,
a small 2, medium 4, large 8, etc. These warehouses are independent of the data
and can be used to query any database residing on object storage. When idle
and not subjected to query load, warehouses are paused - resuming when a query
is received. While storage costs are always reflected in billing, warehouses
are only charged when active.
ClickHouse Cloud utilizes a similar principle of nodes with local cache
storage. Rather than t-shirt sizes, users deploy a service with a total
amount of compute and available RAM. This, in turn, transparently
auto-scales (within defined limits) based on the query load - either
vertically by increasing (or decreasing) the resources for each node or
horizontally by raising/lowering the total number of nodes. ClickHouse
Cloud nodes currently have a 1:4 CPU-to-memory ratio, unlike Snowflake's 1:2.
While a looser coupling is possible, services are currently coupled to the
data, unlike Snowflake warehouses. Nodes will also pause if idle and
resume if subjected to queries. Users can also manually resize services if
needed.
ClickHouse Cloud's query cache is currently node specific, unlike
Snowflake's, which is delivered at a service layer independent of the
warehouse. Based on benchmarks, ClickHouse Cloud's node cache outperforms
Snowflake's.
Snowflake and ClickHouse Cloud take different approaches to scaling to
increase query concurrency. Snowflake addresses this through a feature
known as
multi-cluster warehouses
.
This feature allows users to add clusters to a warehouse. While this offers no
improvement to query latency, it does provide additional parallelization and
allows higher query concurrency. ClickHouse achieves this by adding more memory
and CPU to a service through vertical or horizontal scaling. We do not explore the
capabilities of these services to scale to higher concurrency in this blog,
focusing instead on latency, but acknowledge that this work should be done
for a complete comparison. However, we would expect ClickHouse to perform
well in any concurrency test, with Snowflake explicitly limiting the number
of concurrent queries allowed for a
warehouse to 8 by default
.
In comparison, ClickHouse Cloud allows up to 1000 queries to be executed per
node.
Snowflake's ability to switch compute size on a dataset, coupled with fast
resume times for warehouses, makes it an excellent experience for ad hoc
querying. For data warehouse and data lake use cases, this provides an
advantage over other systems.
Real-time analytics {#real-time-analytics}
Based on public
benchmark
data,
ClickHouse outperforms Snowflake for real-time analytics applications in the following areas:
Query latency
: Snowflake queries have a higher query latency even
when clustering is applied to tables to optimize performance. In our
testing, Snowflake requires over twice the compute to achieve equivalent
ClickHouse performance on queries where a filter is applied that is part
of the Snowflake clustering key or ClickHouse primary key. While
Snowflake's
persistent query cache
offsets some of these latency challenges, this is ineffective in cases
where the filter criteria are more diverse. This query cache effectiveness
can be further impacted by changes to the underlying data, with cache
entries invalidated when the table changes. While this is not the case in
the benchmark for our application, a real deployment would require the new,
more recent data to be inserted. Note that ClickHouse's query cache is node specific and not transactionally consistent, making it better suited to real-time analytics. Users also have granular control over its use with the ability to control its use on a per-query basis, its precise size, whether a query is cached (limits on duration or required number of executions), and whether it is only passively used.
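As an illustration of this per-query control, the settings below are from ClickHouse's query cache; the table and columns are hypothetical:

```sql
SELECT customer_id, count() AS orders
FROM orders
GROUP BY customer_id
SETTINGS
    use_query_cache = 1,            -- opt in to the cache for this query only
    query_cache_ttl = 60,           -- cache entry expires after 60 seconds
    query_cache_min_query_runs = 2; -- only cache after the query has run twice
```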
Lower cost
: Snowflake warehouses can be configured to suspend after
a period of query inactivity. Once suspended, charges are not incurred.
Practically, this inactivity check can
only be lowered to 60s
.
Warehouses will automatically resume, within several seconds, once a query
is received. With Snowflake only charging for resources when a warehouse
is under use, this behavior caters to workloads that often sit idle, like
ad-hoc querying.
However, many real-time analytics workloads require ongoing real-time data
ingestion and frequent querying that doesn't benefit from idling (like
customer-facing dashboards). This means warehouses must often be fully
active and incurring charges. This negates the cost-benefit of idling as
well as any performance advantage that may be associated with Snowflake's
ability to resume a responsive state faster than alternatives. This active
state requirement, when combined with ClickHouse Cloud's lower per-second
cost for an active state, results in ClickHouse Cloud offering a
significantly lower total cost for these kinds of workloads.
Predictable pricing of features:
Features such as materialized views
and clustering (equivalent to ClickHouse's ORDER BY) are required to reach
the highest levels of performance in real-time analytics use cases. These
features incur additional charges in Snowflake, requiring not only a
higher tier, which increases costs per credit by 1.5x, but also
unpredictable background costs. For instance, materialized views incur a
background maintenance cost, as does clustering, which is hard to predict
prior to use. In contrast, these features incur no additional cost in
ClickHouse Cloud, except additional CPU and memory usage at insert time,
typically negligible outside of high insert workload use cases. We have
observed in our benchmark that these differences, along with lower query
latencies and higher compression, result in significantly lower costs with
ClickHouse.
sidebar_label: 'Migration guide'
slug: /migrations/snowflake
description: 'Migrating from Snowflake to ClickHouse'
keywords: ['Snowflake']
title: 'Migrating from Snowflake to ClickHouse'
show_related_blogs: false
doc_type: 'guide'
import migrate_snowflake_clickhouse from '@site/static/images/migrations/migrate_snowflake_clickhouse.png';
import Image from '@theme/IdealImage';
Migrate from Snowflake to ClickHouse
This guide shows you how to migrate data from Snowflake to ClickHouse.
Migrating data between Snowflake and ClickHouse requires the use of an object store, such as S3, as intermediate storage for transfer. The migration process also relies on using the commands `COPY INTO` from Snowflake and `INSERT INTO SELECT` of ClickHouse.
Export data from Snowflake {#1-exporting-data-from-snowflake}
Exporting data from Snowflake requires the use of an external stage, as shown in the diagram above.
Let's say we want to export a Snowflake table with the following schema:
```sql
CREATE TABLE MYDATASET (
   timestamp TIMESTAMP,
   some_text varchar,
   some_file OBJECT,
   complex_data VARIANT
) DATA_RETENTION_TIME_IN_DAYS = 0;
```
To move this table's data to a ClickHouse database, we first need to copy this data to an external stage. When copying data, we recommend Parquet as the intermediate format as it allows type information to be shared, preserves precision, compresses well, and natively supports nested structures common in analytics.
In the example below, we create a named file format in Snowflake to represent Parquet and the desired file options. We then specify which bucket will contain our copied dataset. Finally, we copy the dataset to the bucket.
```sql
CREATE FILE FORMAT my_parquet_format TYPE = parquet;

-- Create the external stage that specifies the S3 bucket to copy into
CREATE OR REPLACE STAGE external_stage
URL='s3://mybucket/mydataset'
CREDENTIALS=(AWS_KEY_ID='<key>' AWS_SECRET_KEY='<secret>')
FILE_FORMAT = my_parquet_format;

-- Apply "mydataset" prefix to all files and specify a max file size of 150mb
-- The header=true parameter is required to get column names
COPY INTO @external_stage/mydataset from mydataset max_file_size=157286400 header=true;
```
For a dataset around 5TB of data with a maximum file size of 150MB, and using a 2X-Large Snowflake warehouse located in the same AWS us-east-1 region, copying data to the S3 bucket will take around 30 minutes.
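As a rough back-of-the-envelope check on the number of files such an export produces (runnable in any ClickHouse client; actual counts depend on compression):

```sql
-- ~5 TB capped at 150 MB per file, sizes expressed in MB
SELECT ceil((5 * 1024 * 1024) / 150) AS approx_file_count  -- on the order of 35,000 files
```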
Import to ClickHouse {#2-importing-to-clickhouse}
Once the data is staged in intermediary object storage, ClickHouse functions such as the `s3` table function can be used to insert the data into a table, as shown below.
This example uses the `s3` table function for AWS S3, but the `gcs` table function can be used for Google Cloud Storage and the `azureBlobStorage` table function can be used for Azure Blob Storage.
Assuming the following table target schema:

```sql
CREATE TABLE default.mydataset
(
    `timestamp` DateTime64(6),
    `some_text` String,
    `some_file` Tuple(filename String, version String),
    `complex_data` Tuple(name String, description String)
)
ENGINE = MergeTree
ORDER BY (timestamp)
```
We can then use the `INSERT INTO SELECT` command to insert the data from S3 into a ClickHouse table:

```sql
INSERT INTO mydataset
SELECT
    timestamp,
    some_text,
    JSONExtract(
        ifNull(some_file, '{}'),
        'Tuple(filename String, version String)'
    ) AS some_file,
    JSONExtract(
        ifNull(complex_data, '{}'),
        'Tuple(name String, description String)'
    ) AS complex_data
FROM s3('https://mybucket.s3.amazonaws.com/mydataset/mydataset*.parquet')
SETTINGS input_format_null_as_default = 1, -- Ensure columns are inserted as default if values are null
input_format_parquet_case_insensitive_column_matching = 1 -- Column matching between source data and target table should be case insensitive
```
:::note Note on nested column structures
The `VARIANT` and `OBJECT` columns in the original Snowflake table schema will be output as JSON strings by default, forcing us to cast these when inserting them into ClickHouse.
Nested structures such as `some_file` are converted to JSON strings on copy by Snowflake. Importing this data requires us to transform these structures to Tuples at insert time in ClickHouse, using the `JSONExtract` function as shown above.
:::
Test successful data export {#3-testing-successful-data-export}
To test whether your data was properly inserted, simply run a `SELECT` query on your new table:

```sql
SELECT * FROM mydataset LIMIT 10;
```

sidebar_label: 'SQL translation reference'
slug: /migrations/snowflake-translation-reference
description: 'SQL translation reference'
keywords: ['Snowflake']
title: 'Migrating from Snowflake to ClickHouse'
show_related_blogs: true
doc_type: 'guide'
Snowflake SQL translation guide
Data types {#data-types}
Numerics {#numerics}
Users moving data between ClickHouse and Snowflake will immediately notice that
ClickHouse offers more granular precision concerning declaring numerics. For example,
Snowflake offers the type Number for numerics. This requires the user to specify a
precision (total number of digits) and scale (digits to the right of the decimal place)
up to a total of 38. Integer declarations are synonymous with Number, and simply
define a fixed precision and scale where the range is the same. This convenience
is possible as modifying the precision (scale is 0 for integers) does not impact the
size of data on disk in Snowflake - the minimal required bytes are used for a
numeric range at write time at a micro partition level. The scale does, however,
impact storage space and is offset with compression. A
Float64
type offers a
wider range of values with a loss of precision.
Contrast this with ClickHouse, which offers multiple signed and unsigned
precision for floats and integers. With these, ClickHouse users can be explicit about
the precision required for integers to optimize storage and memory overhead. A
Decimal type, equivalent to Snowflake’s Number type, also offers twice the
precision and scale at 76 digits. In addition to a similar
Float64
value,
ClickHouse also provides a
Float32
for when precision is less critical and
compression paramount.
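To make this concrete, here is a sketch of how these granular types might be declared in ClickHouse (table and column names are illustrative):

```sql
CREATE TABLE numeric_examples
(
    small_counter UInt8,            -- 0..255, a single byte on disk
    event_id      UInt64,           -- full unsigned 64-bit range
    temperature   Float32,          -- narrower float when compression matters more than precision
    price         Decimal(76, 38)   -- twice the precision and scale of Snowflake's NUMBER(38)
)
ENGINE = MergeTree
ORDER BY event_id;
```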
Strings {#strings}
ClickHouse and Snowflake take contrasting approaches to the storage of string data. The `VARCHAR` in Snowflake holds Unicode characters in UTF-8, allowing the user to specify a maximum length. This length has no impact on storage or performance, with the minimum number of bytes always used to store a string, and rather provides only constraints useful for downstream tooling. Other types, such as `Text` and `NChar`, are simply aliases for this type. ClickHouse conversely stores all string data as raw bytes with a `String` type (no length specification required), deferring encoding to the user, with query time functions available for different encodings. We refer the reader to "Opaque data argument" for the motivation as to why. The ClickHouse `String` is thus more comparable to the Snowflake `BINARY` type in its implementation. Both Snowflake and ClickHouse support “collation”, allowing users to override how strings are sorted and compared.
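A short sketch of this query-time encoding in ClickHouse; because `String` stores raw bytes, byte length and UTF-8 character length are computed by different functions:

```sql
SELECT
    length('café')     AS bytes,      -- 5 bytes (é is two bytes in UTF-8)
    lengthUTF8('café') AS characters  -- 4 code points
```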
Semi-structured types {#semi-structured-data}
Snowflake supports the
VARIANT
,
OBJECT
and
ARRAY
types for semi-structured
data.
ClickHouse offers the equivalent `Variant`, `Object` (now deprecated in favor of the native `JSON` type) and `Array` types. Additionally, ClickHouse has the `JSON` type, which replaces the now deprecated `Object('json')` type and is particularly performant and storage efficient in comparison to other native JSON types.
ClickHouse also supports named `Tuple`s and arrays of Tuples via the `Nested` type, allowing users to explicitly map nested structures. This allows codecs and type optimizations to be applied throughout the hierarchy, unlike Snowflake, which requires the user to use the `OBJECT`, `VARIANT`, and `ARRAY` types for the outer object and does not allow explicit internal typing.
This internal typing also simplifies queries on nested numerics in ClickHouse, which do not need to be cast and can be used in index definitions.
In ClickHouse, codecs and optimized types can also be applied to substructures. This provides an added benefit that compression with nested structures remains excellent, and comparable, to flattened data. In contrast, as a result of the inability to apply specific types to substructures, Snowflake recommends flattening data to achieve optimal compression. Snowflake also imposes size restrictions for these data types.
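The explicit internal typing described above might look as follows in ClickHouse (a sketch; names are illustrative, and the `JSON` column assumes a version where that type is available):

```sql
CREATE TABLE events
(
    raw     JSON,                               -- column types inferred at insert time
    request Tuple(method String, path String),  -- explicitly typed substructure
    tags    Array(String)                       -- strongly typed array elements
)
ENGINE = MergeTree
ORDER BY tuple();
```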
Type reference {#type-reference} | {"source_file": "03_sql_translation_reference.md"} | [
-0.06494662910699844,
0.015277481637895107,
-0.012785237282514572,
0.028670255094766617,
0.005928645841777325,
-0.0860917717218399,
-0.04162480682134628,
0.03400587663054466,
-0.010501728393137455,
-0.06137973442673683,
-0.08451230078935623,
0.03174262493848801,
-0.04240535572171211,
0.041... |
| Snowflake | ClickHouse | Note |
|-----------|------------|------|
| `NUMBER` | `Decimal` | ClickHouse supports twice the precision and scale of Snowflake - 76 digits vs. 38. |
| `FLOAT`, `FLOAT4`, `FLOAT8` | `Float32`, `Float64` | All floats in Snowflake are 64 bit. |
| `VARCHAR` | `String` | |
| `BINARY` | `String` | |
| `BOOLEAN` | `Bool` | |
| `DATE` | `Date`, `Date32` | `DATE` in Snowflake offers a wider date range than ClickHouse e.g. min for `Date32` is `1900-01-01` and for `Date` `1970-01-01`. `Date` in ClickHouse provides more cost-efficient (two byte) storage. |
| `TIME(N)` | No direct equivalent but can be represented by `DateTime` and `DateTime64(N)`. | `DateTime64` uses the same concepts of precision. |
| `TIMESTAMP` - `TIMESTAMP_LTZ`, `TIMESTAMP_NTZ`, `TIMESTAMP_TZ` | `DateTime` and `DateTime64` | `DateTime` and `DateTime64` can optionally have a TZ parameter defined for the column. If not present, the server's timezone is used. Additionally a `--use_client_time_zone` parameter is available for the client. |
| `VARIANT` | `JSON`, `Tuple`, `Nested` | The `JSON` type is experimental in ClickHouse. This type infers the column types at insert time. `Tuple`, `Nested` and `Array` can also be used to build explicitly typed structures as an alternative. |
| `OBJECT` | `Tuple`, `Map`, `JSON` | Both `OBJECT` and `Map` are analogous to the `JSON` type in ClickHouse where the keys are a `String`. ClickHouse requires the value to be consistent and strongly typed whereas Snowflake uses `VARIANT`. This means the values of different keys can be a different type. If this is required in ClickHouse, explicitly define the hierarchy using `Tuple` or rely on the `JSON` type. |
| `ARRAY` | `Array`, `Nested` | `ARRAY` in Snowflake uses `VARIANT` for the elements - a super type. Conversely these are strongly typed in ClickHouse. |
| `GEOGRAPHY` | `Point`, `Ring`, `Polygon`, `MultiPolygon` | Snowflake imposes a coordinate system (WGS 84) while ClickHouse applies it at query time. |
| `GEOMETRY` | `Point`, `Ring`, `Polygon`, `MultiPolygon` | |
| ClickHouse Type | Description |
|-----------------|-------------|
| `IPv4` and `IPv6` | IP-specific types, potentially allowing more efficient storage than Snowflake. |
| `FixedString` | Allows a fixed length of bytes to be used, which is useful for hashes. |
| `LowCardinality` | Allows any type to be dictionary encoded. Useful for when the cardinality is expected to be < 100k. |
| `Enum` | Allows efficient encoding of named values in either 8 or 16-bit ranges. |
| `UUID` | For efficient storage of UUIDs. |
| `Array(Float32)` | Vectors can be represented as an `Array` of `Float32` with supported distance functions. |
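A sketch combining several of these ClickHouse-specific types in a single table definition (names are illustrative):

```sql
CREATE TABLE sessions
(
    id        UUID,
    client_ip IPv4,
    country   LowCardinality(String),             -- dictionary-encoded, cardinality well under 100k
    status    Enum8('active' = 1, 'expired' = 2),
    embedding Array(Float32)                      -- vector usable with distance functions such as L2Distance
)
ENGINE = MergeTree
ORDER BY id;
```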
Finally, ClickHouse offers the unique ability to store the intermediate state of aggregate functions. This state is implementation-specific, but allows the result of an aggregation to be stored and later queried (with corresponding merge functions). Typically, this feature is used via a materialized view and, as demonstrated below, offers the ability to improve performance of specific queries with minimal storage cost by storing the incremental result of queries over inserted data (more details here).
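A minimal sketch of this pattern, storing the intermediate state of `uniq` and merging it at query time (the source table and columns are hypothetical):

```sql
-- Target table holding partial aggregation states
CREATE TABLE daily_uniques
(
    day   Date,
    users AggregateFunction(uniq, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY day;

-- Materialized view that writes states on every insert into `events`
CREATE MATERIALIZED VIEW daily_uniques_mv TO daily_uniques AS
SELECT toDate(timestamp) AS day, uniqState(user_id) AS users
FROM events
GROUP BY day;

-- Query time: merge the stored states
SELECT day, uniqMerge(users) AS unique_users
FROM daily_uniques
GROUP BY day
ORDER BY day;
```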
-0.021269798278808594,
-0.009614133276045322,
-0.08939839899539948,
-0.047604966908693314,
0.01754305511713028,
-0.04713703319430351,
0.014947504736483097,
-0.010210483334958553,
-0.021265089511871338,
-0.05181436613202095,
-0.036388203501701355,
-0.0006737532676197588,
0.027137598022818565,... |
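To make the aggregate-state idea concrete, here is a minimal sketch that stores the intermediate state of a per-day average. Table and column names follow the Stack Overflow schema used in these guides; treat it as illustrative rather than a prescribed implementation:

```sql
-- Store the intermediate state of avg(Score) per day, not the final value.
CREATE TABLE posts_avg_by_day
(
    Day Date,
    AvgScoreState AggregateFunction(avg, Int32)
)
ENGINE = AggregatingMergeTree
ORDER BY Day;

-- The -State combinator writes the partial aggregate as rows are inserted.
CREATE MATERIALIZED VIEW posts_avg_by_day_mv TO posts_avg_by_day AS
SELECT
    toDate(CreationDate) AS Day,
    avgState(Score) AS AvgScoreState
FROM posts
GROUP BY Day;

-- The matching -Merge combinator finalizes the stored state at query time.
SELECT Day, avgMerge(AvgScoreState) AS AvgScore
FROM posts_avg_by_day
GROUP BY Day
ORDER BY Day;
```

Because only the compact per-day state is stored, the final query scans far fewer rows than re-aggregating the base table.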
54032b84-1923-449e-8016-9d1aff1d568e | slug: /migrations/postgresql/overview
title: 'Comparing PostgreSQL and ClickHouse'
description: 'A guide to migrating from PostgreSQL to ClickHouse'
keywords: ['postgres', 'postgresql', 'migrate', 'migration']
sidebar_label: 'Overview'
doc_type: 'guide'
Comparing ClickHouse and PostgreSQL
Why use ClickHouse over Postgres? {#why-use-clickhouse-over-postgres}
TLDR: Because ClickHouse is designed for fast analytics, specifically GROUP BY queries, as an OLAP database, whereas Postgres is an OLTP database designed for transactional workloads.
OLTP, or online transactional processing, databases are designed to manage transactional information. The primary objective of these databases, for which Postgres is the classic example, is to ensure that an engineer can submit a block of updates to the database and be sure that it will, in its entirety, either succeed or fail. These types of transactional guarantees with ACID properties are the main focus of OLTP databases and a huge strength of Postgres. Given these requirements, OLTP databases typically hit performance limitations when used for analytical queries over large datasets.
OLAP, or online analytical processing databases, are designed to meet those needs — to manage analytical workloads. The primary objective of these databases is to ensure that engineers can efficiently query and aggregate over vast datasets. Real-time OLAP systems like ClickHouse allow this analysis to happen as data is ingested in real time.
See here for a more in-depth comparison of ClickHouse and PostgreSQL.
To see the potential performance differences between ClickHouse and Postgres on analytical queries, see Rewriting PostgreSQL Queries in ClickHouse.
Migration strategies {#migration-strategies}
When migrating from PostgreSQL to ClickHouse, the right strategy depends on your use case, infrastructure, and data requirements. In general, real-time Change Data Capture (CDC) is the best approach for most modern use cases, while manual bulk loading followed by periodic updates is suitable for simpler scenarios or one-time migrations.
The section below describes the two main strategies for migration: Real-Time CDC and Manual Bulk Load + Periodic Updates.
Real-time replication (CDC) {#real-time-replication-cdc}
Change Data Capture (CDC) is the process by which tables are kept in sync between two databases. It is the most efficient approach for most migrations from PostgreSQL, but it is also more complex, as it handles inserts, updates, and deletes from PostgreSQL to ClickHouse in near real-time. It is ideal for use cases where real-time analytics are important. | {"source_file": "01_overview.md"} |
88d8e42d-06cc-4e52-b74f-f8888648a290 | Real-time Change Data Capture (CDC) can be implemented in ClickHouse using
ClickPipes, if you're using ClickHouse Cloud, or PeerDB if you're running ClickHouse on-prem. These solutions handle the complexities of real-time data synchronization, including the initial load, by capturing inserts, updates, and deletes from PostgreSQL and replicating them in ClickHouse. This approach ensures that the data in ClickHouse is always fresh and accurate without requiring manual intervention.
Manual bulk load + periodic updates {#manual-bulk-load-periodic-updates}
In some cases, a more straightforward approach like manual bulk loading followed by periodic updates may be sufficient. This strategy is ideal for one-time migrations or situations where real-time replication is not required. It involves loading data from PostgreSQL to ClickHouse in bulk, either through direct SQL INSERT commands or by exporting and importing CSV files. After the initial migration, you can periodically update the data in ClickHouse by syncing changes from PostgreSQL at regular intervals.
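As a sketch of the bulk-load path, ClickHouse's postgresql() table function can pull a table directly over the wire. Hostnames, credentials, database, table, and column names below are placeholders:

```sql
-- One-off bulk load: read rows from Postgres and insert them into ClickHouse.
INSERT INTO stackoverflow.posts
SELECT *
FROM postgresql('postgres-host:5432', 'mydb', 'posts', 'postgres_user', 'password');

-- Periodic updates can then re-sync recent changes, e.g. by a timestamp column
-- (assumes the table has a monotonically increasing LastActivityDate).
INSERT INTO stackoverflow.posts
SELECT *
FROM postgresql('postgres-host:5432', 'mydb', 'posts', 'postgres_user', 'password')
WHERE LastActivityDate > (SELECT max(LastActivityDate) FROM stackoverflow.posts);
```

Note that this incremental pattern captures inserts and updates but not deletes, which is one reason CDC is preferred for ongoing synchronization.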
Which strategy to choose? {#which-strategy-to-choose}
For most applications that require fresh, up-to-date data in ClickHouse, real-time CDC through ClickPipes is the recommended approach. It provides continuous data syncing with minimal setup and maintenance. On the other hand, manual bulk loading with periodic updates is a viable option for simpler, one-off migrations or workloads where real-time updates aren't critical.
Start the PostgreSQL migration guide here. | {"source_file": "01_overview.md"} |
5353ba51-0e03-41b7-bef1-bc3653a33b13 | slug: /migrations/postgresql
pagination_prev: null
pagination_next: null
title: 'PostgreSQL'
description: 'Landing page for the PostgreSQL migrations section'
doc_type: 'landing-page'
keywords: ['PostgreSQL migration', 'database migration', 'ClickHouse migration', 'CDC replication', 'data migration']
| Page | Description |
|----------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Overview | Introduction page for this section |
| Connecting to PostgreSQL | This page covers the following options for integrating PostgreSQL with ClickHouse: ClickPipes, PeerDB, PostgreSQL table engine, MaterializedPostgreSQL database engine. |
| Migrating data | Part 1 of a guide on migrating from PostgreSQL to ClickHouse. Using a practical example, it demonstrates how to efficiently carry out the migration with a real-time replication (CDC) approach. Many of the concepts covered are also applicable to manual bulk data transfers from PostgreSQL to ClickHouse. |
| Rewriting PostgreSQL Queries | Part 2 of a guide on migrating from PostgreSQL to ClickHouse. Using a practical example, it demonstrates how to efficiently carry out the migration with a real-time replication (CDC) approach. Many of the concepts covered are also applicable to manual bulk data transfers from PostgreSQL to ClickHouse. |
| Data modeling techniques | Part 3 of a guide on migrating from PostgreSQL to ClickHouse. Using a practical example, it demonstrates how to model data in ClickHouse if migrating from PostgreSQL. |
| Appendix | Additional information relevant to migrating from PostgreSQL |
| {"source_file": "index.md"} |
470d20f9-d170-4832-8e8e-9e234fd235f0 | slug: /migrations/postgresql/appendix
title: 'Appendix'
keywords: ['postgres', 'postgresql', 'data types', 'types']
description: 'Additional information relevant to migrating from PostgreSQL'
doc_type: 'reference'
import postgresReplicas from '@site/static/images/integrations/data-ingestion/dbms/postgres-replicas.png';
import Image from '@theme/IdealImage';
Postgres vs ClickHouse: Equivalent and different concepts {#postgres-vs-clickhouse-equivalent-and-different-concepts}
Users coming from OLTP systems who are used to ACID transactions should be aware that ClickHouse makes deliberate compromises in not fully providing these in exchange for performance. ClickHouse semantics can deliver high durability guarantees and high write throughput if well understood. We highlight some key concepts below that users should be familiar with prior to working with ClickHouse from Postgres.
Shards vs replicas {#shards-vs-replicas}
Sharding and replication are two strategies used for scaling beyond one Postgres instance when storage and/or compute become a bottleneck to performance. Sharding in Postgres involves splitting a large database into smaller, more manageable pieces across multiple nodes. However, Postgres does not support sharding natively. Instead, sharding can be achieved using extensions such as Citus, in which Postgres becomes a distributed database capable of scaling horizontally. This approach allows Postgres to handle higher transaction rates and larger datasets by spreading the load across several machines. Shards can be row or schema-based in order to provide flexibility for workload types, such as transactional or analytical. Sharding can introduce significant complexity in terms of data management and query execution, as it requires coordination across multiple machines and consistency guarantees.
Unlike shards, replicas are additional Postgres instances that contain all or some of the data from the primary node. Replicas are used for various reasons, including enhanced read performance and HA (High Availability) scenarios. Physical replication is a native feature of Postgres that involves copying the entire database or significant portions of it to another server, including all databases, tables, and indexes. This involves streaming WAL segments from the primary node to replicas over TCP/IP. In contrast, logical replication is a higher level of abstraction that streams changes based on INSERT, UPDATE, and DELETE operations. While it can deliver the same outcomes as physical replication, it enables greater flexibility for targeting specific tables and operations, as well as for data transformations and supporting different Postgres versions. | {"source_file": "appendix.md"} |
d46e2008-0f2c-4606-a621-dde470300737 | In contrast, ClickHouse shards and replicas are two key concepts related to data distribution and redundancy. ClickHouse replicas can be thought of as analogous to Postgres replicas, although replication is eventually consistent with no notion of a primary. Sharding, unlike Postgres, is supported natively.
A shard is a portion of your table data. You always have at least one shard. Sharding data across multiple servers can be used to divide the load if you exceed the capacity of a single server, with all shards used to run a query in parallel. Users can manually create shards for a table on different servers and insert data directly into them. Alternatively, a distributed table can be used with a sharding key defining to which shard data is routed. The sharding key can be random or the output of a hash function. Importantly, a shard can consist of multiple replicas.
A replica is a copy of your data. ClickHouse always has at least one copy of your data, and so the minimum number of replicas is one. Adding a second replica of your data provides fault tolerance and potentially additional compute for processing more queries (Parallel Replicas can also be used to distribute the compute for a single query, thus lowering latency). Replicas are achieved with the ReplicatedMergeTree table engine, which enables ClickHouse to keep multiple copies of data in sync across different servers. Replication is physical: only compressed parts are transferred between nodes, not queries.
In summary, a replica is a copy of data that provides redundancy and reliability (and potentially distributed processing), while a shard is a subset of data that allows for distributed processing and load balancing.
ClickHouse Cloud uses a single copy of data backed in S3 with multiple compute replicas. The data is available to each replica node, each of which has a local SSD cache. This relies on metadata replication only through ClickHouse Keeper.
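The native sharding described above can be sketched with a Distributed table. The cluster name, database, table, and columns below are hypothetical, and the sharding key shown is one reasonable choice, not the only one:

```sql
-- Local table that exists on every shard of the cluster.
CREATE TABLE events_local ON CLUSTER my_cluster
(
    UserId    UInt64,
    EventTime DateTime,
    Payload   String
)
ENGINE = ReplicatedMergeTree
ORDER BY (UserId, EventTime);

-- Distributed table routing each inserted row to a shard by a hash of UserId;
-- queries against it fan out to all shards and merge the results.
CREATE TABLE events ON CLUSTER my_cluster AS events_local
ENGINE = Distributed(my_cluster, default, events_local, cityHash64(UserId));
```

Hashing on UserId keeps all rows for one user on the same shard, which helps aggregations grouped by user.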
Eventual consistency {#eventual-consistency}
ClickHouse uses ClickHouse Keeper (a C++ implementation of ZooKeeper; ZooKeeper itself can also be used) for managing its internal replication mechanism, focusing primarily on metadata storage and ensuring eventual consistency. Keeper is used to assign unique sequential numbers for each insert within a distributed environment. This is crucial for maintaining order and consistency across operations. This framework also handles background operations such as merges and mutations, ensuring that the work for these is distributed while guaranteeing they are executed in the same order across all replicas. In addition to metadata, Keeper functions as a comprehensive control center for replication, including tracking checksums for stored data parts, and acts as a distributed notification system among replicas. | {"source_file": "appendix.md"} |
a3b8568c-0cac-42a2-83f4-005a8727c4ee | The replication process in ClickHouse (1) starts when data is inserted into any replica. This data, in its raw insert form, is (2) written to disk along with its checksums. Once written, the replica (3) attempts to register this new data part in Keeper by allocating a unique block number and logging the new part's details. Other replicas, upon (4) detecting new entries in the replication log, (5) download the corresponding data part via an internal HTTP protocol, verifying it against the checksums listed in ZooKeeper. This method ensures that all replicas eventually hold consistent and up-to-date data despite varying processing speeds or potential delays. Moreover, the system is capable of handling multiple operations concurrently, optimizing data management processes, and allowing for system scalability and robustness against hardware discrepancies.
Note that ClickHouse Cloud uses a cloud-optimized replication mechanism adapted to its separation of storage and compute architecture. By storing data in shared object storage, data is automatically available for all compute nodes without the need to physically replicate data between nodes. Instead, Keeper is used only to share metadata (which data exists where in object storage) between compute nodes.
PostgreSQL employs a different replication strategy compared to ClickHouse, primarily using streaming replication, which involves a primary replica model where data is continuously streamed from the primary to one or more replica nodes. This type of replication ensures near real-time consistency and is synchronous or asynchronous, giving administrators control over the balance between availability and consistency. Unlike ClickHouse, PostgreSQL relies on a WAL (Write-Ahead Logging) with logical replication and decoding to stream data objects and changes between nodes. This approach in PostgreSQL is more straightforward but might not offer the same level of scalability and fault tolerance in highly distributed environments that ClickHouse achieves through its complex use of Keeper for distributed operations coordination and eventual consistency.
User implications {#user-implications}
In ClickHouse, the possibility of dirty reads (where users can write data to one replica and then read potentially unreplicated data from another) arises from its eventually consistent replication model managed via Keeper. This model emphasizes performance and scalability across distributed systems, allowing replicas to operate independently and sync asynchronously. As a result, newly inserted data might not be immediately visible across all replicas, depending on the replication lag and the time it takes for changes to propagate through the system. | {"source_file": "appendix.md"} |
479ea928-93c0-42d1-9399-f43d33a1aefc | Conversely, PostgreSQL's streaming replication model typically can prevent dirty reads by employing synchronous replication options where the primary waits for at least one replica to confirm the receipt of data before committing transactions. This ensures that once a transaction is committed, a guarantee exists that the data is available in another replica. In the event of primary failure, the replica will ensure queries see the committed data, thereby maintaining a stricter level of consistency.
Recommendations {#recommendations}
Users new to ClickHouse should be aware of these differences, which will manifest themselves in replicated environments. Typically, eventual consistency is sufficient in analytics over billions, if not trillions, of data points - where metrics are either more stable or estimation is sufficient as new data is continuously being inserted at high rates.
Several options exist for increasing the consistency of reads should this be required. Both examples require either increased complexity or overhead - reducing query performance and making it more challenging to scale ClickHouse.
We advise these approaches only if absolutely required.
Consistent routing {#consistent-routing}
To overcome some of the limitations of eventual consistency, users can ensure clients are routed to the same replicas. This is useful in cases where multiple users are querying ClickHouse and results should be deterministic across requests. While results may differ as new data is inserted, querying the same replicas ensures a consistent view.
This can be achieved through several approaches depending on your architecture and whether you are using ClickHouse OSS or ClickHouse Cloud.
ClickHouse Cloud {#clickhouse-cloud}
ClickHouse Cloud uses a single copy of data backed in S3 with multiple compute replicas. The data is available to each replica node which has a local SSD cache. To ensure consistent results, users therefore need to only ensure consistent routing to the same node.
Communication to the nodes of a ClickHouse Cloud service occurs through a proxy. HTTP and native protocol connections will be routed to the same node for the period during which they are held open. For HTTP 1.1 connections from most clients, this depends on the Keep-Alive window, which can be configured on most clients, e.g. Node.js. A corresponding server-side setting, which should be higher than the client value, is set to 10s in ClickHouse Cloud.
To ensure consistent routing across connections e.g. if using a connection pool or if connections expire, users can either ensure the same connection is used (easier for native) or request the exposure of sticky endpoints. This provides a set of endpoints for each node in the cluster, thus allowing clients to ensure queries are deterministically routed.
Contact support for access to sticky endpoints.
ClickHouse OSS {#clickhouse-oss} | {"source_file": "appendix.md"} |
844c55e9-0f89-4894-94a0-aebb14d6f7cc | Contact support for access to sticky endpoints.
ClickHouse OSS {#clickhouse-oss}
Achieving this behavior in OSS depends on your shard and replica topology, and whether you are using a Distributed table for querying.
When you have only one shard and multiple replicas (common, since ClickHouse scales vertically), users select a node at the client layer and query a replica directly, ensuring it is deterministically selected.
While topologies with multiple shards and replicas are possible without a distributed table, these advanced deployments typically have their own routing infrastructure. We therefore assume deployments with more than one shard are using a Distributed table (distributed tables can be used with single-shard deployments but are usually unnecessary).
In this case, users should ensure consistent node routing is performed based on a property, e.g. session_id or user_id. The settings prefer_localhost_replica=0 and load_balancing=in_order should be set in the query. This ensures local replicas of shards are not automatically preferred and that replicas are selected in the order listed in the configuration, provided they have the same number of errors; failover occurs with random selection if error counts are higher. load_balancing=nearest_hostname can also be used as an alternative for deterministic shard selection.
When creating a Distributed table, users will specify a cluster. This cluster definition, specified in config.xml, will list the shards (and their replicas) - thus allowing users to control the order in which they are used from each node. Using this, users can ensure selection is deterministic.
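For example, a query applying these settings against a hypothetical distributed table might look like this:

```sql
-- Select replicas in the order they appear in the cluster configuration,
-- rather than preferring the local replica or choosing randomly.
SELECT count()
FROM events_distributed
SETTINGS prefer_localhost_replica = 0, load_balancing = 'in_order'
```

Because every client that sets these settings walks the replica list in the same order, repeated queries land on the same nodes, giving the deterministic routing described above.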
Sequential consistency {#sequential-consistency}
In exceptional cases, users may need sequential consistency.
Sequential consistency in databases is where the operations on a database appear to be executed in some sequential order, and this order is consistent across all processes interacting with the database. This means that every operation appears to take effect instantaneously between its invocation and completion, and there is a single, agreed-upon order in which all operations are observed by any process.
From a user's perspective this typically manifests itself as the need to write data into ClickHouse and when reading data, to guarantee that the latest inserted rows are returned.
This can be achieved in several ways (in order of preference):
Read/Write to the same node - If you are using the native protocol, or a session to do your write/read via HTTP, you should be connected to the same replica: in this scenario you are reading directly from the node where you are writing, so your read will always be consistent.
Sync replicas manually - If you write to one replica and read from another, you can issue SYSTEM SYNC REPLICA LIGHTWEIGHT prior to reading. | {"source_file": "appendix.md"} |
54aeef43-bf16-4812-809d-87b1951b55ec | Sync replicas manually - If you write to one replica and read from another, you can issue SYSTEM SYNC REPLICA LIGHTWEIGHT prior to reading.
Enable sequential consistency - via the query setting select_sequential_consistency = 1. In OSS, the setting insert_quorum = 'auto' must also be specified.
See here for further details on enabling these settings.
Use of sequential consistency will place a greater load on ClickHouse Keeper. The result can mean slower inserts and reads. With SharedMergeTree, used in ClickHouse Cloud as the main table engine, sequential consistency incurs less overhead and will scale better. OSS users should use this approach cautiously and measure Keeper load.
Transactional (ACID) support {#transactional-acid-support}
Users migrating from PostgreSQL may be used to its robust support for ACID (Atomicity, Consistency, Isolation, Durability) properties, making it a reliable choice for transactional databases. Atomicity in PostgreSQL ensures that each transaction is treated as a single unit, which either completely succeeds or is entirely rolled back, preventing partial updates. Consistency is maintained by enforcing constraints, triggers, and rules that guarantee that all database transactions lead to a valid state. Isolation levels, from Read Committed to Serializable, are supported in PostgreSQL, allowing fine-tuned control over the visibility of changes made by concurrent transactions. Lastly, Durability is achieved through write-ahead logging (WAL), ensuring that once a transaction is committed, it remains so even in the event of a system failure.
These properties are common for OLTP databases that act as a source of truth.
While powerful, this comes with inherent limitations and makes petabyte scale challenging. ClickHouse compromises on these properties in order to provide fast analytical queries at scale while sustaining high write throughput.
ClickHouse provides ACID properties under limited configurations - most simply when using a non-replicated instance of the MergeTree table engine with one partition. Users should not expect these properties outside of these cases and ensure these are not a requirement.
Compression {#compression}
ClickHouse's column-oriented storage means compression will often be significantly better than in Postgres. The following illustrates this, comparing the storage requirements for all Stack Overflow tables in both databases:
```sql title="Query (Postgres)"
SELECT
    schemaname,
    tablename,
    pg_total_relation_size(schemaname || '.' || tablename) AS total_size_bytes,
    pg_total_relation_size(schemaname || '.' || tablename) / (1024 * 1024 * 1024) AS total_size_gb
FROM
    pg_tables s
WHERE
    schemaname = 'public';
```

```sql title="Query (ClickHouse)"
SELECT
    `table`,
    formatReadableSize(sum(data_compressed_bytes)) AS compressed_size
FROM system.parts
WHERE (database = 'stackoverflow') AND active
GROUP BY `table`
```
| {"source_file": "appendix.md"} |
107cc846-6bad-47f4-8f54-42f65bb420d2 |
```response title="Response"
┌─table───────┬─compressed_size─┐
│ posts       │ 25.17 GiB       │
│ users       │ 846.57 MiB      │
│ badges      │ 513.13 MiB      │
│ comments    │ 7.11 GiB        │
│ votes       │ 1.28 GiB        │
│ posthistory │ 40.44 GiB       │
│ postlinks   │ 79.22 MiB       │
└─────────────┴─────────────────┘
```
Further details on optimizing and measuring compression can be found here.
Data type mappings {#data-type-mappings}
The following table shows the equivalent ClickHouse data types for Postgres.
| Postgres Data Type | ClickHouse Type |
| --- | --- |
| DATE | Date |
| TIMESTAMP | DateTime |
| REAL | Float32 |
| DOUBLE | Float64 |
| DECIMAL, NUMERIC | Decimal |
| SMALLINT | Int16 |
| INTEGER | Int32 |
| BIGINT | Int64 |
| SERIAL | UInt32 |
| BIGSERIAL | UInt64 |
| TEXT, CHAR, BPCHAR | String |
| INTEGER | Nullable(Int32) |
| ARRAY | Array |
| FLOAT4 | Float32 |
| BOOLEAN | Bool |
| VARCHAR | String |
| BIT | String |
| BIT VARYING | String |
| BYTEA | String |
| NUMERIC | Decimal |
| GEOGRAPHY | Point, Ring, Polygon, MultiPolygon |
| GEOMETRY | Point, Ring, Polygon, MultiPolygon |
| INET | IPv4, IPv6 |
| MACADDR | String |
| CIDR | String |
| HSTORE | Map(K, V), Map(K, Variant) |
| UUID | UUID |
| ARRAY<T> | ARRAY(T) |
| JSON* | String, Variant, Nested, Tuple |
| JSONB | String |
* Production support for JSON in ClickHouse is under development. Currently, users can either map JSON as String and use JSON functions, or map the JSON directly to Tuples and Nested if the structure is predictable. Read more about JSON here. | {"source_file": "appendix.md"} |
379bc8ef-8c74-4c51-94c2-488600adc351 | slug: /migrations/postgresql/rewriting-queries
title: 'Rewriting PostgreSQL Queries'
keywords: ['postgres', 'postgresql', 'rewriting queries']
description: 'Part 2 of a guide on migrating from PostgreSQL to ClickHouse'
sidebar_label: 'Part 2'
doc_type: 'guide'
This is Part 2 of a guide on migrating from PostgreSQL to ClickHouse. Using a practical example, it demonstrates how to efficiently carry out the migration with a real-time replication (CDC) approach. Many of the concepts covered are also applicable to manual bulk data transfers from PostgreSQL to ClickHouse.
Most SQL queries from your PostgreSQL setup should run in ClickHouse without modification and will likely execute faster.
Deduplication using CDC {#deduplication-cdc}
When using real-time replication with CDC, keep in mind that updates and deletes may result in duplicate rows. To manage this, you can use techniques involving Views and Refreshable Materialized Views.
Refer to this guide to learn how to migrate your application from PostgreSQL to ClickHouse with minimal friction when using real-time replication with CDC.
Optimize queries in ClickHouse {#optimize-queries-in-clickhouse}
While it is possible to migrate with minimal query rewriting, it is recommended to leverage ClickHouse features to significantly simplify queries and further improve query performance.
The examples here cover common query patterns and show how to optimize them with ClickHouse. They use the full Stack Overflow dataset (up to April 2024) on equivalent resources in PostgreSQL and ClickHouse (8 cores, 32GiB RAM).
For simplicity, the queries below omit the use of techniques to deduplicate the data.
Counts here will differ slightly, as the Postgres data only contains rows which satisfy the referential integrity of the foreign keys. ClickHouse imposes no such constraints and thus has the full dataset, e.g. including anonymous users.
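One simple, if costlier, deduplication technique is querying with FINAL. This sketch assumes the CDC pipeline created the table with the ReplacingMergeTree engine (as CDC tooling for Postgres typically does):

```sql
-- FINAL merges rows with the same sorting key at query time, keeping only the
-- latest version of each row; correct in the presence of CDC updates, but
-- slower than querying without it.
SELECT count()
FROM stackoverflow.posts FINAL
WHERE PostTypeId = 'Question'
```

The views and Refreshable Materialized Views mentioned above avoid paying this merge cost on every query.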
Users (with more than 10 questions) which receive the most views:
```sql
-- ClickHouse
SELECT OwnerDisplayName, sum(ViewCount) AS total_views
FROM stackoverflow.posts
WHERE (PostTypeId = 'Question') AND (OwnerDisplayName != '')
GROUP BY OwnerDisplayName
HAVING count() > 10
ORDER BY total_views DESC
LIMIT 5
┌─OwnerDisplayName────────┬─total_views─┐
│ Joan Venge │ 25520387 │
│ Ray Vega │ 21576470 │
│ anon │ 19814224 │
│ Tim │ 19028260 │
│ John │ 17638812 │
└─────────────────────────┴─────────────┘
5 rows in set. Elapsed: 0.360 sec. Processed 24.37 million rows, 140.45 MB (67.73 million rows/s., 390.38 MB/s.)
Peak memory usage: 510.71 MiB.
```
```sql
--Postgres
SELECT OwnerDisplayName, SUM(ViewCount) AS total_views
FROM public.posts
WHERE (PostTypeId = 1) AND (OwnerDisplayName != '')
GROUP BY OwnerDisplayName
HAVING COUNT(*) > 10
ORDER BY total_views DESC
LIMIT 5;
ownerdisplayname | total_views | {"source_file": "02_migration_guide_part2.md"} |
f95ec39d-03ee-4aa5-b754-b622d6da433c | ownerdisplayname | total_views
-------------------------+-------------
Joan Venge | 25520387
Ray Vega | 21576470
Tim | 18283579
J. Pablo Fernández | 12446818
Matt | 12298764
Time: 107620.508 ms (01:47.621)
```
Which tags receive the most views:
```sql
--ClickHouse
SELECT arrayJoin(arrayFilter(t -> (t != ''), splitByChar('|', Tags))) AS tags,
sum(ViewCount) AS views
FROM posts
GROUP BY tags
ORDER BY views DESC
LIMIT 5
┌─tags───────┬──────views─┐
│ javascript │ 8190916894 │
│ python │ 8175132834 │
│ java │ 7258379211 │
│ c# │ 5476932513 │
│ android │ 4258320338 │
└────────────┴────────────┘
5 rows in set. Elapsed: 0.908 sec. Processed 59.82 million rows, 1.45 GB (65.87 million rows/s., 1.59 GB/s.)
```
```sql
--Postgres
WITH tags_exploded AS (
SELECT
unnest(string_to_array(Tags, '|')) AS tag,
ViewCount
FROM public.posts
),
filtered_tags AS (
SELECT
tag,
ViewCount
FROM tags_exploded
WHERE tag <> ''
)
SELECT tag AS tags,
SUM(ViewCount) AS views
FROM filtered_tags
GROUP BY tag
ORDER BY views DESC
LIMIT 5;
tags | views
------------+------------
javascript | 7974880378
python | 7972340763
java | 7064073461
c# | 5308656277
android | 4186216900
(5 rows)
Time: 112508.083 ms (01:52.508)
```
Aggregate functions
Where possible, users should exploit ClickHouse aggregate functions. Below we show the use of the
argMax
function to compute the most viewed question of each year.
```sql
--ClickHouse
SELECT toYear(CreationDate) AS Year,
argMax(Title, ViewCount) AS MostViewedQuestionTitle,
max(ViewCount) AS MaxViewCount
FROM stackoverflow.posts
WHERE PostTypeId = 'Question'
GROUP BY Year
ORDER BY Year ASC
FORMAT Vertical
Row 1:
──────
Year: 2008
MostViewedQuestionTitle: How to find the index for a given item in a list?
MaxViewCount: 6316987
Row 2:
──────
Year: 2009
MostViewedQuestionTitle: How do I undo the most recent local commits in Git?
MaxViewCount: 13962748
...
Row 16:
───────
Year: 2023
MostViewedQuestionTitle: How do I solve "error: externally-managed-environment" every time I use pip 3?
MaxViewCount: 506822
Row 17:
───────
Year: 2024
MostViewedQuestionTitle: Warning "Third-party cookie will be blocked. Learn more in the Issues tab"
MaxViewCount: 66975
17 rows in set. Elapsed: 0.677 sec. Processed 24.37 million rows, 1.86 GB (36.01 million rows/s., 2.75 GB/s.)
Peak memory usage: 554.31 MiB.
```
This is significantly simpler (and faster) than the equivalent Postgres query:
```sql
--Postgres
WITH yearly_views AS (
SELECT
EXTRACT(YEAR FROM CreationDate) AS Year,
Title,
ViewCount,
ROW_NUMBER() OVER (PARTITION BY EXTRACT(YEAR FROM CreationDate) ORDER BY ViewCount DESC) AS rn
FROM public.posts
WHERE PostTypeId = 1
)
SELECT
Year,
Title AS MostViewedQuestionTitle,
ViewCount AS MaxViewCount
FROM yearly_views
WHERE rn = 1
ORDER BY Year;
year | mostviewedquestiontitle | maxviewcount
------+-----------------------------------------------------------------------------------------------------------------------+--------------
2008 | How to find the index for a given item in a list? | 6316987
2009 | How do I undo the most recent local commits in Git? | 13962748
...
2023 | How do I solve "error: externally-managed-environment" every time I use pip 3? | 506822
2024 | Warning "Third-party cookie will be blocked. Learn more in the Issues tab" | 66975
(17 rows)
Time: 125822.015 ms (02:05.822)
```
Conditionals and Arrays
Conditional and array functions make queries significantly simpler. The following query computes the tags (with more than 10000 occurrences) with the largest percentage increase from 2022 to 2023. Note how the following ClickHouse query is succinct thanks to the conditionals, array functions, and the ability to reuse aliases in the HAVING and SELECT clauses.
```sql
--ClickHouse
SELECT arrayJoin(arrayFilter(t -> (t != ''), splitByChar('|', Tags))) AS tag,
countIf(toYear(CreationDate) = 2023) AS count_2023,
countIf(toYear(CreationDate) = 2022) AS count_2022,
((count_2023 - count_2022) / count_2022) * 100 AS percent_change
FROM stackoverflow.posts
WHERE toYear(CreationDate) IN (2022, 2023)
GROUP BY tag
HAVING (count_2022 > 10000) AND (count_2023 > 10000)
ORDER BY percent_change DESC
LIMIT 5
┌─tag─────────┬─count_2023─┬─count_2022─┬──────percent_change─┐
│ next.js │ 13788 │ 10520 │ 31.06463878326996 │
│ spring-boot │ 16573 │ 17721 │ -6.478189718413183 │
│ .net │ 11458 │ 12968 │ -11.644046884639112 │
│ azure │ 11996 │ 14049 │ -14.613139725247349 │
│ docker │ 13885 │ 16877 │ -17.72826924216389 │
└─────────────┴────────────┴────────────┴─────────────────────┘
5 rows in set. Elapsed: 0.247 sec. Processed 5.08 million rows, 155.73 MB (20.58 million rows/s., 630.61 MB/s.)
Peak memory usage: 403.04 MiB.
```
```sql
--Postgres
SELECT
tag,
SUM(CASE WHEN year = 2023 THEN count ELSE 0 END) AS count_2023,
SUM(CASE WHEN year = 2022 THEN count ELSE 0 END) AS count_2022,
((SUM(CASE WHEN year = 2023 THEN count ELSE 0 END) - SUM(CASE WHEN year = 2022 THEN count ELSE 0 END))
/ SUM(CASE WHEN year = 2022 THEN count ELSE 0 END)::float) * 100 AS percent_change
FROM (
SELECT
unnest(string_to_array(Tags, '|')) AS tag,
EXTRACT(YEAR FROM CreationDate) AS year,
COUNT(*) AS count
FROM public.posts
WHERE EXTRACT(YEAR FROM CreationDate) IN (2022, 2023)
AND Tags <> ''
GROUP BY tag, year
) AS yearly_counts
GROUP BY tag
HAVING SUM(CASE WHEN year = 2022 THEN count ELSE 0 END) > 10000
AND SUM(CASE WHEN year = 2023 THEN count ELSE 0 END) > 10000
ORDER BY percent_change DESC
LIMIT 5;
tag | count_2023 | count_2022 | percent_change
-------------+------------+------------+---------------------
next.js | 13712 | 10370 | 32.22757955641273
spring-boot | 16482 | 17474 | -5.677005837243905
.net | 11376 | 12750 | -10.776470588235295
azure | 11938 | 13966 | -14.520979521695546
docker | 13832 | 16701 | -17.178612059158134
(5 rows)
Time: 116750.131 ms (01:56.750)
```
Click here for Part 3
slug: /migrations/postgresql/data-modeling-techniques
title: 'Data modeling techniques'
description: 'Part 3 of a guide on migrating from PostgreSQL to ClickHouse'
keywords: ['postgres', 'postgresql']
show_related_blogs: true
sidebar_label: 'Part 3'
doc_type: 'guide'
import postgres_b_tree from '@site/static/images/migrations/postgres-b-tree.png';
import postgres_sparse_index from '@site/static/images/migrations/postgres-sparse-index.png';
import postgres_partitions from '@site/static/images/migrations/postgres-partitions.png';
import postgres_projections from '@site/static/images/migrations/postgres-projections.png';
import Image from '@theme/IdealImage';
This is
Part 3
of a guide on migrating from PostgreSQL to ClickHouse. Using a practical example, it demonstrates how to model data in ClickHouse if migrating from PostgreSQL.
We recommend users migrating from Postgres read
the guide for modeling data in ClickHouse
. This guide uses the same Stack Overflow dataset and explores multiple approaches using ClickHouse features.
Primary (Ordering) Keys in ClickHouse {#primary-ordering-keys-in-clickhouse}
Users coming from OLTP databases often look for the equivalent concept in ClickHouse. On noticing that ClickHouse supports a
PRIMARY KEY
syntax, users might be tempted to define their table schema using the same keys as their source OLTP database. This is not appropriate.
How are ClickHouse Primary keys different? {#how-are-clickhouse-primary-keys-different}
To understand why using your OLTP primary key in ClickHouse is not appropriate, users should understand the basics of ClickHouse indexing. We use Postgres as an example comparison, but these general concepts apply to other OLTP databases.
Postgres primary keys are, by definition, unique per row. The use of
B-tree structures
allows the efficient lookup of single rows by this key. While ClickHouse can be optimized for the lookup of a single row value, analytics workloads will typically require the reading of a few columns but for many rows. Filters will more often need to identify
a subset of rows
on which an aggregation will be performed.
Memory and disk efficiency are paramount to the scale at which ClickHouse is often used. Data is written to ClickHouse tables in chunks known as parts, with rules applied for merging the parts in the background. In ClickHouse, each part has its own primary index. When parts are merged, the merged part's primary indexes are also merged. Unlike Postgres, these indexes are not built for each row. Instead, the primary index for a part has one index entry per group of rows - this technique is called
sparse indexing
.
Sparse indexing
is possible because ClickHouse stores the rows for a part on disk ordered by a specified key. Instead of directly locating single rows (like a B-Tree-based index), the sparse primary index allows it to quickly (via a binary search over index entries) identify groups of rows that could possibly match the query. The located groups of potentially matching rows are then, in parallel, streamed into the ClickHouse engine in order to find the matches. This index design allows for the primary index to be small (it completely fits into the main memory) whilst still significantly speeding up query execution times, especially for range queries that are typical in data analytics use cases.
For more details, we recommend this
in-depth guide
.
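A quick way to observe the sparse index at work is `EXPLAIN indexes = 1`, which reports how many granules (groups of rows) a filter selects. The following is an illustrative sketch against the posts table used throughout this guide; the filter value is arbitrary and the selected granule counts depend on your data:

```sql
-- Illustrative: show which granules the sparse primary index selects
-- for a range filter (assumes the stackoverflow.posts table from this guide).
EXPLAIN indexes = 1
SELECT count()
FROM stackoverflow.posts
WHERE CreationDate >= '2024-01-01'
```

A small `Granules: n/total` ratio in the output indicates the index is pruning effectively.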
The selected key in ClickHouse will determine not only the index but also the order in which data is written on disk. Because of this, it can dramatically impact compression levels, which can, in turn, affect query performance. An ordering key that causes the values of most columns to be written in a contiguous order will allow the selected compression algorithm (and codecs) to compress the data more effectively.
All columns in a table will be sorted based on the value of the specified ordering key, regardless of whether they are included in the key itself. For instance, if
CreationDate
is used as the key, the order of values in all other columns will correspond to the order of values in the
CreationDate
column. Multiple ordering keys can be specified - this will order with the same semantics as an
ORDER BY
clause in a
SELECT
query.
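As a minimal sketch (columns borrowed from the Stack Overflow posts table, the table name is illustrative), a compound ordering key is declared with `ORDER BY`; lower-cardinality columns typically come first so that contiguous values compress well:

```sql
-- Illustrative: rows are sorted on disk by this key; all other columns
-- are stored in the same row order, which aids compression.
CREATE TABLE posts_example
(
    `Id` Int32,
    `PostTypeId` UInt8,
    `CreationDate` DateTime,
    `ViewCount` UInt32
)
ENGINE = MergeTree
ORDER BY (PostTypeId, toDate(CreationDate), CreationDate)
```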
Choosing an ordering key {#choosing-an-ordering-key}
For the considerations and steps in choosing an ordering key, using the posts table as an example, see
here
.
When using real-time replication with CDC, there are additional constraints to take into account; refer to this
documentation
for techniques on how to customize ordering keys with CDC.
Partitions {#partitions}
Postgres users will be familiar with the concept of table partitioning for enhancing the performance and manageability of large databases by dividing tables into smaller, more manageable pieces called partitions. This partitioning can be achieved using either a range on a specified column (e.g., dates), defined lists, or a hash on a key. This allows administrators to organize data based on specific criteria like date ranges or geographical locations. Partitioning helps improve query performance by enabling faster data access through partition pruning and more efficient indexing. It also helps maintenance tasks such as backups and data purges by allowing operations on individual partitions rather than the entire table. Additionally, partitioning can significantly improve the scalability of PostgreSQL databases by distributing the load across multiple partitions.
In ClickHouse, partitioning is specified on a table when it is initially defined via the
PARTITION BY
clause. This clause can contain a SQL expression on any columns, the results of which will define which partition a row is sent to.
The data parts are logically associated with each partition on disk and can be queried in isolation. For the example below, we partition the
posts
table by year using the expression
toYear(CreationDate)
. As rows are inserted into ClickHouse, this expression is evaluated against each row, and the row is routed to the resulting partition if it exists (if the row is the first for a year, the partition will be created).
```sql
CREATE TABLE posts
(
    `Id` Int32 CODEC(Delta(4), ZSTD(1)),
    `PostTypeId` Enum8('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime64(3, 'UTC'),
    ...
    `ClosedDate` DateTime64(3, 'UTC')
)
ENGINE = MergeTree
ORDER BY (PostTypeId, toDate(CreationDate), CreationDate)
PARTITION BY toYear(CreationDate)
```
For a full description of partitioning see
"Table partitions"
.
Applications of partitions {#applications-of-partitions}
Partitioning in ClickHouse has similar applications as in Postgres but with some subtle differences. More specifically:
Data management
- In ClickHouse, users should principally consider partitioning to be a data management feature, not a query optimization technique. By separating data logically based on a key, each partition can be operated on independently, e.g., deleted. This allows users to efficiently move partitions, and thus data subsets, between
storage tiers
based on time, or to
expire data/efficiently delete from the cluster
. In the example below, we remove posts from 2008.
```sql
SELECT DISTINCT partition
FROM system.parts
WHERE `table` = 'posts'
┌─partition─┐
│ 2008 │
│ 2009 │
│ 2010 │
│ 2011 │
│ 2012 │
│ 2013 │
│ 2014 │
│ 2015 │
│ 2016 │
│ 2017 │
│ 2018 │
│ 2019 │
│ 2020 │
│ 2021 │
│ 2022 │
│ 2023 │
│ 2024 │
└───────────┘
17 rows in set. Elapsed: 0.002 sec.
ALTER TABLE posts
(DROP PARTITION '2008')
Ok.
0 rows in set. Elapsed: 0.103 sec.
```
Query optimization
- While partitions can assist with query performance, this depends heavily on the access patterns. If queries target only a few partitions (ideally one), performance can potentially improve. This is typically only useful if the partitioning key is not in the primary key and you are filtering by it. However, queries that need to cover many partitions may perform worse than if no partitioning is used (as there may be more parts as a result of partitioning). The benefit of targeting a single partition will be even less pronounced, or even non-existent, if the partitioning key is already an early entry in the primary key. Partitioning can also be used to
optimize GROUP BY queries
if values in each partition are unique. However, in general, users should ensure the primary key is optimized and only consider partitioning as a query optimization technique in exceptional cases where access patterns hit a specific, predictable subset of the data, e.g., partitioning by day, with most queries targeting the last day.
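As an illustrative sketch, with the posts table partitioned by `toYear(CreationDate)`, a filter on the partitioning expression lets ClickHouse skip all parts belonging to other partitions:

```sql
-- Illustrative: with PARTITION BY toYear(CreationDate), this filter
-- allows ClickHouse to read only the parts of the 2023 partition.
SELECT count()
FROM posts
WHERE toYear(CreationDate) = 2023
```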
Recommendations for partitions {#recommendations-for-partitions}
Users should consider partitioning a data management technique. It is ideal when data needs to be expired from the cluster when operating with time series data e.g. the oldest partition can
simply be dropped
.
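Beyond dropping partitions manually, a table TTL can expire old rows automatically. A sketch, assuming an illustrative five-year retention period:

```sql
-- Illustrative: expire rows five years after creation. With partitioning
-- by year, whole partitions can be dropped cheaply once fully expired.
ALTER TABLE posts
    MODIFY TTL CreationDate + INTERVAL 5 YEAR
```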
Important:
Ensure your partitioning key expression does not result in a high-cardinality set, i.e., creating more than 100 partitions should be avoided. For example, do not partition your data by high-cardinality columns such as client identifiers or names. Instead, make the client identifier or name the first column in the ORDER BY expression.
Internally, ClickHouse
creates parts
for inserted data. As more data is inserted, the number of parts increases. In order to prevent an excessively high number of parts, which will degrade query performance (more files to read), parts are merged together in a background asynchronous process. If the number of parts exceeds a pre-configured limit, then ClickHouse will throw an exception on insert - as a "too many parts" error. This should not happen under normal operation and only occurs if ClickHouse is misconfigured or used incorrectly e.g. many small inserts.
Since parts are created per partition in isolation, increasing the number of partitions causes the number of parts to increase i.e. it is a multiple of the number of partitions. High cardinality partitioning keys can, therefore, cause this error and should be avoided.
Materialized views vs projections {#materialized-views-vs-projections} | {"source_file": "03_migration_guide_part3.md"} | [
-0.032774269580841064,
0.052155300974845886,
0.06310296058654785,
0.020761046558618546,
0.05213630571961403,
-0.0119780283421278,
0.02997194416821003,
0.04057624191045761,
0.03360392898321152,
-0.05052465200424194,
-0.06777766346931458,
-0.013319541700184345,
-0.006501175928860903,
-0.0129... |
1dfb4772-7405-462e-98ce-a1376f44e8d0 | Materialized views vs projections {#materialized-views-vs-projections}
Postgres allows for the creation of multiple indices on a single table, enabling optimization for a variety of access patterns. This flexibility allows administrators and developers to tailor database performance to specific queries and operational needs. ClickHouse's concept of projections, while not fully analogous to this, allows users to specify multiple
ORDER BY
clauses for a table.
In ClickHouse
data modeling docs
, we explore how materialized views can be used in ClickHouse to pre-compute aggregations, transform rows, and optimize queries for different access patterns.
For the latter of these, we provided
an example
where the materialized view sends rows to a target table with a different ordering key than the original table receiving inserts.
For example, consider the following query:
```sql
SELECT avg(Score)
FROM comments
WHERE UserId = 8592047
┌──────────avg(Score)─┐
1. │ 0.18181818181818182 │
└─────────────────────┘
1 row in set. Elapsed: 0.040 sec. Processed 90.38 million rows, 361.59 MB (2.25 billion rows/s., 9.01 GB/s.)
Peak memory usage: 201.93 MiB.
```
This query requires all 90m rows to be scanned (admittedly quickly) as the
UserId
is not the ordering key.
Previously, we solved this using a materialized view acting as a lookup for the
PostId
. The same problem can be solved
with a
projection
. The command below adds a
projection for the
ORDER BY UserId
.
```sql
ALTER TABLE comments ADD PROJECTION comments_user_id (
SELECT * ORDER BY UserId
)
ALTER TABLE comments MATERIALIZE PROJECTION comments_user_id
```
Note that we have to first create the projection and then materialize it. This latter command causes the data to be stored twice on disk in two different orders. The projection can also be defined when the data is created, as shown below, and will be automatically maintained as data is inserted.
```sql
CREATE TABLE comments
(
    `Id` UInt32,
    `PostId` UInt32,
    `Score` UInt16,
    `Text` String,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `UserDisplayName` LowCardinality(String),
    PROJECTION comments_user_id
    (
        SELECT *
        ORDER BY UserId
    )
)
ENGINE = MergeTree
ORDER BY PostId
```
If the projection is created via an
ALTER
, the creation is asynchronous when the
MATERIALIZE PROJECTION
command is issued. Users can confirm the progress of this operation with the following query, waiting for
is_done=1
.
```sql
SELECT
    parts_to_do,
    is_done,
    latest_fail_reason
FROM system.mutations
WHERE (`table` = 'comments') AND (command LIKE '%MATERIALIZE%')
┌─parts_to_do─┬─is_done─┬─latest_fail_reason─┐
1. │ 1 │ 0 │ │
└─────────────┴─────────┴────────────────────┘
1 row in set. Elapsed: 0.003 sec.
```
If we repeat the above query, we can see performance has improved significantly at the expense of additional storage.
```sql
SELECT avg(Score)
FROM comments
WHERE UserId = 8592047
┌──────────avg(Score)─┐
1. │ 0.18181818181818182 │
└─────────────────────┘
1 row in set. Elapsed: 0.008 sec. Processed 16.36 thousand rows, 98.17 KB (2.15 million rows/s., 12.92 MB/s.)
Peak memory usage: 4.06 MiB.
```
With an
EXPLAIN
command, we also confirm the projection was used to serve this query:
```sql
EXPLAIN indexes = 1
SELECT avg(Score)
FROM comments
WHERE UserId = 8592047
┌─explain─────────────────────────────────────────────┐
│ Expression ((Projection + Before ORDER BY)) │
│ Aggregating │
│ Filter │
│ ReadFromMergeTree (comments_user_id) │
│ Indexes: │
│ PrimaryKey │
│ Keys: │
│ UserId │
│ Condition: (UserId in [8592047, 8592047]) │
│ Parts: 2/2 │
│ Granules: 2/11360 │
└─────────────────────────────────────────────────────┘
11 rows in set. Elapsed: 0.004 sec.
```
When to use projections {#when-to-use-projections}
Projections are an appealing feature for new users as they are automatically
maintained as data is inserted. Furthermore, queries can just be sent to a single
table where the projections are exploited where possible to speed up the response
time.
This is in contrast to materialized views, where the user has to select the
appropriate optimized target table or rewrite their query, depending on the filters.
This places greater emphasis on user applications and increases client-side complexity.
Despite these advantages, projections come with some
inherent limitations
of which users should be aware, and thus they should be deployed sparingly.
We recommend using projections when:
A complete reordering of the data is required. While the expression in the
projection can, in theory, use a
GROUP BY,
materialized views are more
effective for maintaining aggregates. The query optimizer is also more likely
to exploit projections that use a simple reordering, i.e.,
SELECT * ORDER BY x
.
Users can select a subset of columns in this expression to reduce storage footprint.
Users are comfortable with the associated increase in storage footprint and
overhead of writing data twice. Test the impact on insertion speed and
evaluate the storage overhead
.
:::note
Since version 25.5, ClickHouse supports the virtual column
_part_offset
in
projections. This unlocks a more space-efficient way to store projections.
For more details see
"Projections"
:::
Denormalization {#denormalization}
Since Postgres is a relational database, its data model is heavily
normalized
, often involving hundreds of tables. In ClickHouse, denormalization can be beneficial at times to optimize JOIN performance.
You can refer to this
guide
that shows the benefits of denormalizing the Stack Overflow dataset in ClickHouse.
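As a sketch of what denormalization can look like (the table and Nested column here are illustrative, not part of the migrated schema), comments could be embedded directly in a posts table so that per-post comment statistics need no JOIN at query time:

```sql
-- Illustrative denormalized table: each post row carries its comments,
-- trading extra storage and write-time work for JOIN-free reads.
CREATE TABLE posts_with_comments
(
    `Id` Int32,
    `Title` String,
    `Comments` Nested(Id UInt32, Score UInt16, Text String)
)
ENGINE = MergeTree
ORDER BY Id
```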
This concludes our basic guide for users migrating from Postgres to ClickHouse. We recommend users migrating from Postgres read the
guide for modeling data in ClickHouse
to learn more about advanced ClickHouse features.
slug: /migrations/postgresql/dataset
title: 'Migrating data'
description: 'Dataset example to migrate from PostgreSQL to ClickHouse'
keywords: ['Postgres']
show_related_blogs: true
sidebar_label: 'Part 1'
doc_type: 'guide'
import postgres_stackoverflow_schema from '@site/static/images/migrations/postgres-stackoverflow-schema.png';
import Image from '@theme/IdealImage';
This is
Part 1
of a guide on migrating from PostgreSQL to ClickHouse. Using a practical example, it demonstrates how to efficiently carry out the migration with a real-time replication (CDC) approach. Many of the concepts covered are also applicable to manual bulk data transfers from PostgreSQL to ClickHouse.
Dataset {#dataset}
As an example dataset to show a typical migration from Postgres to ClickHouse, we use the Stack Overflow dataset documented
here
. This contains every
post
,
vote
,
user
,
comment
, and
badge
that has occurred on Stack Overflow from 2008 to Apr 2024. The PostgreSQL schema for this data is shown below:
DDL commands to create the tables in PostgreSQL are available
here
.
This schema, while not necessarily the most optimal, exploits a number of popular PostgreSQL features, including primary keys, foreign keys, partitioning, and indexes.
We will migrate each of these concepts to their ClickHouse equivalents.
For those users who wish to populate this dataset into a PostgreSQL instance to test migration steps, we have provided the data in
pg_dump
format for download. The DDL and subsequent data load commands are shown below:
```bash
# users
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/pdump/2024/users.sql.gz
gzip -d users.sql.gz
psql < users.sql

# posts
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/pdump/2024/posts.sql.gz
gzip -d posts.sql.gz
psql < posts.sql

# posthistory
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/pdump/2024/posthistory.sql.gz
gzip -d posthistory.sql.gz
psql < posthistory.sql

# comments
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/pdump/2024/comments.sql.gz
gzip -d comments.sql.gz
psql < comments.sql

# votes
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/pdump/2024/votes.sql.gz
gzip -d votes.sql.gz
psql < votes.sql

# badges
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/pdump/2024/badges.sql.gz
gzip -d badges.sql.gz
psql < badges.sql

# postlinks
wget https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/pdump/2024/postlinks.sql.gz
gzip -d postlinks.sql.gz
psql < postlinks.sql
```
While small for ClickHouse, this dataset is substantial for Postgres. The above represents a subset covering the first three months of 2024.
While our example results use the full dataset to show performance differences between Postgres and ClickHouse, all steps documented below are functionally identical with the smaller subset. Users wanting to load the full dataset into Postgres can see
here
. Due to the foreign key constraints imposed by the above schema, the full dataset for PostgreSQL only contains rows that satisfy referential integrity. A
Parquet version
, with no such constraints, can be easily loaded directly into ClickHouse if needed.
Migrating data {#migrating-data}
Real-time replication (CDC) {#real-time-replication-or-cdc}
Refer to this
guide
to set up ClickPipes for PostgreSQL. The guide covers many different types of source Postgres instances.
With the CDC approach using ClickPipes or PeerDB, each table in the PostgreSQL database is automatically replicated in ClickHouse.
To handle updates and deletes in near real-time, ClickPipes maps Postgres tables to ClickHouse using the
ReplacingMergeTree
engine, specifically designed to handle updates and deletes in ClickHouse. You can find more information on how the data gets replicated to ClickHouse using ClickPipes
here
. It is important to note that replication using CDC creates duplicate rows in ClickHouse when replicating update or delete operations.
See techniques
using the
FINAL
modifier for handling those in ClickHouse.
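A minimal sketch of querying such a CDC-replicated table with the FINAL modifier, which collapses duplicate versions at query time (the id value here is illustrative):

```sql
-- Illustrative: FINAL merges the duplicate versions produced by CDC updates;
-- _peerdb_is_deleted = 0 excludes rows deleted in the source Postgres.
SELECT id, displayname
FROM users FINAL
WHERE (id = 2936759) AND (_peerdb_is_deleted = 0)
```

Note that FINAL adds query-time merge work, so it trades some read performance for correctness on deduplicated results.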
Let's have a look at how the table
users
is created in ClickHouse using ClickPipes.
```sql
CREATE TABLE users
(
    `id` Int32,
    `reputation` String,
    `creationdate` DateTime64(6),
    `displayname` String,
    `lastaccessdate` DateTime64(6),
    `aboutme` String,
    `views` Int32,
    `upvotes` Int32,
    `downvotes` Int32,
    `websiteurl` String,
    `location` String,
    `accountid` Int32,
    `_peerdb_synced_at` DateTime64(9) DEFAULT now64(),
    `_peerdb_is_deleted` Int8,
    `_peerdb_version` Int64
)
ENGINE = ReplacingMergeTree(_peerdb_version)
PRIMARY KEY id
ORDER BY id;
```
Once set up, ClickPipes starts migrating all data from PostgreSQL to ClickHouse. Depending on the network and size of the deployments, this should take only a few minutes for the Stack Overflow dataset.
Manual bulk load with periodic updates {#initial-bulk-load-with-periodic-updates}
Using a manual approach, the initial bulk load of the dataset can be achieved via:
Table functions
- Using the
Postgres table function
in ClickHouse to
SELECT
data from Postgres and
INSERT
it into a ClickHouse table. This is suitable for bulk loads of datasets up to several hundred GB.
Exports
- Exporting to intermediary formats such as CSV or SQL script file. These files can then be loaded into ClickHouse from either the client via the
INSERT FROM INFILE
clause or using object storage and their associated functions i.e. s3, gcs. | {"source_file": "01_migration_guide_part1.md"} | [
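As a sketch of the export route, assuming a `posts.csv` file previously exported from Postgres with a header row (the file path is illustrative):

```sql
-- Run from clickhouse-client: load a local CSV export into the target table
INSERT INTO stackoverflow.posts
FROM INFILE 'posts.csv'
FORMAT CSVWithNames;
```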
When loading data manually from PostgreSQL, you need to first create the tables in ClickHouse. Refer to this Data Modeling documentation, which also uses the Stack Overflow dataset, to optimize the table schema in ClickHouse.
Data types between PostgreSQL and ClickHouse might differ. To establish the equivalent types for each of the table columns, we can use the `DESCRIBE` command with the Postgres table function. The following command describes the table `posts` in PostgreSQL; modify it according to your environment:
```sql title="Query"
DESCRIBE TABLE postgresql('<host>:<port>', 'postgres', 'posts', '<username>', '<password>')
SETTINGS describe_compact_output = 1
```
For an overview of data type mapping between PostgreSQL and ClickHouse, refer to the appendix documentation.
The steps for optimizing the types for this schema are identical to those used when the data has been loaded from other sources, e.g. Parquet on S3. Applying the process described in this alternate guide using Parquet results in the following schema:
```sql title="Query"
CREATE TABLE stackoverflow.posts
(
    `Id` Int32,
    `PostTypeId` Enum('Question' = 1, 'Answer' = 2, 'Wiki' = 3, 'TagWikiExcerpt' = 4, 'TagWiki' = 5, 'ModeratorNomination' = 6, 'WikiPlaceholder' = 7, 'PrivilegeWiki' = 8),
    `AcceptedAnswerId` UInt32,
    `CreationDate` DateTime,
    `Score` Int32,
    `ViewCount` UInt32,
    `Body` String,
    `OwnerUserId` Int32,
    `OwnerDisplayName` String,
    `LastEditorUserId` Int32,
    `LastEditorDisplayName` String,
    `LastEditDate` DateTime,
    `LastActivityDate` DateTime,
    `Title` String,
    `Tags` String,
    `AnswerCount` UInt16,
    `CommentCount` UInt8,
    `FavoriteCount` UInt8,
    `ContentLicense` LowCardinality(String),
    `ParentId` String,
    `CommunityOwnedDate` DateTime,
    `ClosedDate` DateTime
)
ENGINE = MergeTree
ORDER BY tuple()
COMMENT 'Optimized types'
```
We can populate this with a simple `INSERT INTO SELECT`, reading the data from PostgreSQL and inserting it into ClickHouse:
```sql title="Query"
INSERT INTO stackoverflow.posts SELECT * FROM postgresql('<host>:<port>', 'postgres', 'posts', '<username>', '<password>')
```
```response
0 rows in set. Elapsed: 146.471 sec. Processed 59.82 million rows, 83.82 GB (408.40 thousand rows/s., 572.25 MB/s.)
```
Incremental loads can, in turn, be scheduled. If the Postgres table only receives inserts and an incrementing id or timestamp exists, users can use the above table function approach to load increments, i.e. a `WHERE` clause can be applied to the `SELECT`. This approach may also be used to support updates if these are guaranteed to update the same column. Supporting deletes will, however, require a complete reload, which may be difficult to achieve as the table grows.
We demonstrate an initial load and incremental load using the `CreationDate` (we assume this gets updated if rows are updated).
```sql
-- initial load
INSERT INTO stackoverflow.posts SELECT * FROM postgresql('<host>:<port>', 'postgres', 'posts', 'postgres', '<password>')

-- incremental load
INSERT INTO stackoverflow.posts SELECT * FROM postgresql('<host>:<port>', 'postgres', 'posts', 'postgres', '<password>') WHERE CreationDate > (SELECT max(CreationDate) FROM stackoverflow.posts)
```
ClickHouse will push down simple `WHERE` clauses such as `=`, `!=`, `>`, `>=`, `<`, `<=`, and `IN` to the PostgreSQL server. Incremental loads can thus be made more efficient by ensuring an index exists on columns used to identify the change set.
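For example, an index on the watermark column on the Postgres side keeps the pushed-down predicate from scanning the whole table (the index name is illustrative):

```sql
-- Run against PostgreSQL, not ClickHouse: index the column used to identify the change set
CREATE INDEX IF NOT EXISTS idx_posts_creationdate ON posts (creationdate);
```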
A possible method to detect UPDATE operations when using query replication is using the
XMIN
system column
(transaction IDs) as a watermark - a change in this column is indicative of a change and therefore can be applied to the destination table. Users employing this approach should be aware that
XMIN
values can wrap around and comparisons require a full table scan, making tracking changes more complex.
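A hedged sketch of such a watermark query, run directly against Postgres (the literal transaction ID is illustrative, and this simple cast does not account for wraparound):

```sql
-- Run against PostgreSQL: select rows changed since the last observed transaction ID
SELECT * FROM posts WHERE xmin::text::bigint > 123456;
```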
Click here for Part 2 | {"source_file": "01_migration_guide_part1.md"} | [
slug: /cloud/get-started/cloud/use-cases/real-time-analytics
title: 'Real-time analytics'
description: 'Learn how to build real-time analytics applications with ClickHouse Cloud for instant insights and data-driven decision making'
keywords: ['use cases', 'real-time analytics']
sidebar_label: 'Real-time analytics'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import rta_0 from '@site/static/images/cloud/onboard/discover/use_cases/0_rta.png';
import rta_1 from '@site/static/images/cloud/onboard/discover/use_cases/1_rta.png';
import rta_2 from '@site/static/images/cloud/onboard/discover/use_cases/2_rta.png';
import rta_3 from '@site/static/images/cloud/onboard/discover/use_cases/3_rta.png';
What is real-time analytics? {#what-is-real-time-analytics}
Real-time analytics refers to data processing that delivers insights to end users
and customers as soon as the data is generated. It differs from traditional or
batch analytics, where data is collected in batches and processed, often a long
time after it was generated.
Real-time analytics systems are built on top of event streams, which consist of
a series of events ordered in time. An event is something that’s already happened.
It could be the addition of an item to the shopping cart on an e-commerce website,
the emission of a reading from an Internet of Things (IoT) sensor, or a shot on
goal in a football (soccer) match.
An event (from an imaginary IoT sensor) is shown below, as an example:
```json
{
  "deviceId": "sensor-001",
  "timestamp": "2023-10-05T14:30:00Z",
  "eventType": "temperatureAlert",
  "data": {
    "temperature": 28.5,
    "unit": "Celsius",
    "thresholdExceeded": true
  }
}
```
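To make this concrete, a sketch of how such events might be stored in ClickHouse for analysis (the table and column names are illustrative, with the nested `data` object flattened):

```sql
-- Hypothetical table for the sensor events shown above
CREATE TABLE sensor_events
(
    `deviceId` LowCardinality(String),
    `timestamp` DateTime('UTC'),
    `eventType` LowCardinality(String),
    `temperature` Float64,
    `unit` LowCardinality(String),
    `thresholdExceeded` Bool
)
ENGINE = MergeTree
ORDER BY (deviceId, timestamp);
```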
Organizations can discover insights about their customers by aggregating and
analyzing events like this. This has traditionally been done using batch analytics,
and in the next section, we’ll compare batch and real-time analytics.
Real-Time analytics vs batch analytics {#real-time-analytics-vs-batch-analytics}
The diagram below shows what a typical batch analytics system would look like
from the perspective of an individual event:
You can see that there’s quite a big gap from when the event happens until we
process and gain some insight from it. Traditionally, this was the only means of
data analysis, and we’d need to create artificial time boundaries to process
the data in batches. For example, we might process all the data collected at the
end of a day. This worked for many use cases, but for others, it’s sub-optimal
because we’re working with stale data, and it doesn’t allow us to react to the
data quickly enough.
By contrast, in real-time analytics systems, we react to an event as soon as it
happens, as shown in the following diagram:
We can now derive insights from events almost as soon as they’re generated. But
why is this useful?
Benefits of real-time analytics {#benefits-of-real-time-analytics} | {"source_file": "01_real-time-analytics.md"} | [
In today's fast-paced world, organizations rely on real-time analytics to stay
agile and responsive to ever-changing conditions. A real-time analytics system
can benefit a business in many ways.
Better decision-making {#better-decision-making}
Decision-making can be improved by having access to actionable insights via
real-time analytics. When business operators can see events as they’re happening,
it makes it much easier to make timely interventions.
For example, if we make changes to an application and want to know whether it’s
having a detrimental effect on the user experience, we want to know this as
quickly as possible so that we can revert the changes if necessary. With a less
real-time approach, we might have to wait until the next day to do this
analysis, by which time we'll have a lot of unhappy users.
New products and revenue streams {#new-products-and-revenue-streams}
Real-time analytics can help businesses generate new revenue streams. Organizations
can develop new data-centered products and services that give users access to
analytical querying capabilities. These products are often compelling enough for
users to pay for access.
In addition, existing applications can be made stickier, increasing user
engagement and retention. This will result in more application use, creating more
revenue for the organization.
Improved customer experience {#improved-customer-experience}
With real-time analytics, businesses can gain instant insights into customer
behavior, preferences, and needs. This lets businesses offer timely assistance,
personalize interactions, and create more engaging experiences that keep
customers returning.
Real-time analytics use cases {#real-time-analytics-use-cases}
The actual value of real-time analytics becomes evident when we consider its
practical applications. Let’s examine some of them.
Fraud detection {#fraud-detection}
Fraud detection is about detecting fraudulent patterns, ranging from fake accounts
to payment fraud. We want to detect this fraud as quickly as possible, flagging
suspicious activities, blocking transactions, and disabling accounts when necessary.
This use case stretches across industries: healthcare, digital banking, financial
services, retail, and more.
Instacart is North America's leading online grocery company, with millions of active customers and shoppers. It uses ClickHouse as part of Yoda, its fraud detection platform. In addition to the general types of fraud described above, it also tries to detect collusion between customers and shoppers.
They identified the following characteristics of ClickHouse that enable real-time
fraud detection: | {"source_file": "01_real-time-analytics.md"} | [
- ClickHouse supports LSM-tree based MergeTree family engines. These are optimized for writing, which is suitable for ingesting large amounts of data in real-time.
- ClickHouse is designed and optimized explicitly for analytical queries. This fits perfectly with the needs of applications where data is continuously analyzed for patterns that might indicate fraud.
Time-sensitive decision making {#time-sensitive-decision-making}
Time-sensitive decision-making refers to situations where users or organizations
need to make informed choices quickly based on the most current information
available. Real-time analytics empowers users to make informed choices in
dynamic environments, whether they're traders reacting to market fluctuations,
consumers making purchasing decisions, or professionals adapting to real-time
operational changes.
Coinhall provides its users with real-time insights into price movements over
time via a candlestick chart, which shows the open, high, low, and close prices
for each trading period. They needed to be able to run these types of queries
quickly and with a large number of concurrent users.
In terms of performance, ClickHouse was the clear winner, executing candlestick queries in 20 milliseconds, compared
to 400 milliseconds or more for the other databases. It ran latest-price queries in 8 milliseconds, outpacing the
next-best performance (SingleStore) which came in at 45 milliseconds. Finally, it handled ASOF JOIN queries in
50 milliseconds, while Snowflake took 20 minutes and Rockset timed out. | {"source_file": "01_real-time-analytics.md"} | [
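A candlestick aggregation of this kind can be sketched in ClickHouse SQL as follows (the `trades` table and its columns are illustrative):

```sql
-- Open/high/low/close per one-minute bucket, using argMin/argMax to pick
-- the first and last price within each bucket
SELECT
    toStartOfMinute(ts) AS bucket,
    argMin(price, ts) AS open,
    max(price) AS high,
    min(price) AS low,
    argMax(price, ts) AS close
FROM trades
GROUP BY bucket
ORDER BY bucket;
```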
slug: /cloud/get-started/cloud/use-cases/data_lake_and_warehouse
title: 'Data Lakehouse'
description: 'Build modern data warehousing architectures with ClickHouse Cloud combining the flexibility of data lakes with database performance'
keywords: ['use cases', 'data lake and warehouse']
sidebar_label: 'Data warehousing'
doc_type: 'guide'
import Image from '@theme/IdealImage';
import datalakehouse_01 from '@site/static/images/cloud/onboard/discover/use_cases/datalakehouse_01.png';
The data lakehouse is a convergent architecture that applies database principles
to data lake infrastructure while maintaining the flexibility and scale of cloud storage systems.
The lakehouse is not just taking a database apart but building database-like
capabilities onto a fundamentally different foundation (cloud object storage)
that focuses on supporting traditional analytics and modern AI/ML workloads in
a unified platform.
What are the components of the data lakehouse? {#components-of-the-data-lakehouse}
The modern data lakehouse architecture represents a convergence of data warehouse
and data lake technologies, combining the best aspects of both approaches. This
architecture comprises several distinct but interconnected layers providing a
flexible, robust data storage, management, and analysis platform.
Understanding these components is essential for organizations looking to
implement or optimize their data lakehouse strategy. The layered approach allows
for component substitution and independent evolution of each layer, providing
architectural flexibility and future-proofing.
Let's explore the core building blocks of a typical data lakehouse architecture
and how they interact to create a cohesive data management platform. | {"source_file": "03_data_warehousing.md"} | [
| Component | Description |
|-----------|-------------|
| **Data sources** | Lakehouse data sources include operational databases, streaming platforms, IoT devices, application logs, and external providers. |
| **Query engine** | Processes analytical queries against the data stored in the object storage, leveraging the metadata and optimizations provided by the table format layer. Supports SQL and potentially other query languages to analyze large volumes of data efficiently. |
| **Metadata catalog** | The data catalog acts as a central repository for metadata, storing and managing table definitions and schemas, partitioning information, and access control policies. Enables data discovery, lineage tracking, and governance across the lakehouse. |
| **Table format layer** | The table format layer manages the logical organization of data files into tables, providing database-like features such as ACID transactions, schema enforcement and evolution, time travel capabilities, and performance optimizations like data skipping and clustering. |
| **Object storage** | This layer provides scalable, durable, cost-effective storage for all data files and metadata. It handles the physical persistence of data in an open format, enabling direct access from multiple tools and systems. |
| **Client applications** | Various tools and applications that connect to the lakehouse to query data, visualize insights, or build data products. These can include BI tools, data science notebooks, custom applications, and ETL/ELT tools. |
What are the benefits of the data lakehouse? {#benefits-of-the-data-lakehouse}
The data lakehouse architecture offers several significant advantages when compared
directly to both traditional data warehouses and data lakes:
Compared to traditional data warehouses {#compared-to-traditional-data-warehouses} | {"source_file": "03_data_warehousing.md"} | [
| # | Benefit | Description |
|---|---------|-------------|
| 1 | **Cost efficiency** | Lakehouses leverage inexpensive object storage rather than proprietary storage formats, significantly reducing storage costs compared to data warehouses that charge premium prices for their integrated storage. |
| 2 | **Component flexibility and interchangeability** | The lakehouse architecture allows organizations to substitute different components. Traditional systems require wholesale replacement when requirements change or technology advances, while lakehouses enable incremental evolution by swapping out individual components like query engines or table formats. This flexibility reduces vendor lock-in and allows organizations to adapt to changing needs without disruptive migrations. |
| 3 | **Open format support** | Lakehouses store data in open file formats like Parquet, allowing direct access from various tools without vendor lock-in, unlike proprietary data warehouse formats that restrict access to their ecosystem. |
| 4 | **AI/ML integration** | Lakehouses provide direct access to data for machine learning frameworks and Python/R libraries, whereas data warehouses typically require extracting data before using it for advanced analytics. |
| 5 | **Independent scaling** | Lakehouses separate storage from compute, allowing each to scale independently based on actual needs, unlike many data warehouses, where they scale together. |
Compared to data lakes {#compared-to-data-lakes}
| # | Benefit | Description |
|---|---------|-------------|
| 1 | **Query performance** | Lakehouses implement indexing, statistics, and data layout optimizations that enable SQL queries to run at speeds comparable to data warehouses, overcoming the poor performance of raw data lakes. |
| 2 | **Data consistency** | Through ACID transaction support, lakehouses ensure consistency during concurrent operations, solving a major limitation of traditional data lakes, where file conflicts can corrupt data. |
| 3 | **Schema management** | Lakehouses enforce schema validation and track schema evolution, preventing the "data swamp" problem common in data lakes where data becomes unusable due to schema inconsistencies. |
| 4 | **Governance capabilities** | Lakehouses provide fine-grained access control and auditing features at row/column levels, addressing the limited security controls in basic data lakes. |
| 5 | **BI Tool support** | Lakehouses offer SQL interfaces and optimizations that make them compatible with standard BI tools, unlike raw data lakes that require additional processing layers before visualization. |
Where does ClickHouse fit in the data lakehouse architecture? {#where-does-clickhouse-fit-in-the-data-lakehouse-architecture}
ClickHouse is a powerful analytical query engine within the modern data lakehouse
ecosystem. It offers organizations a high-performance option for analyzing data
at scale. ClickHouse is a compelling choice due to its exceptional query speed and
efficiency.
Within the lakehouse architecture, ClickHouse functions as a specialized
processing layer that can flexibly interact with the underlying data. It can
directly query Parquet files stored in cloud object storage systems like S3,
Azure Blob Storage, or Google Cloud Storage, leveraging its optimized columnar
processing capabilities to deliver rapid results even on massive datasets.
This direct query capability allows organizations to analyze their lake data
without complex data movement or transformation processes. | {"source_file": "03_data_warehousing.md"} | [
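For illustration, a direct query over Parquet files in object storage might look like this (the bucket and path are placeholders; credentials can be supplied as additional arguments for private buckets):

```sql
-- Count rows across all Parquet files under a prefix, straight from S3
SELECT count()
FROM s3('https://mybucket.s3.amazonaws.com/data/*.parquet', 'Parquet');
```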
ClickHouse integrates with open table formats such as Apache Iceberg, Delta Lake,
or Apache Hudi for more sophisticated data management needs. This integration
enables ClickHouse to take advantage of these formats' advanced features, while
still delivering the exceptional query performance it's known for. Organizations
can integrate these table formats directly or connect through metadata catalogs
like AWS Glue, Unity, or other catalog services.
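As a sketch of the direct-integration route, ClickHouse's `iceberg` table function can read an Iceberg table in place (the URL is a placeholder; credentials are typically supplied as additional arguments or via named collections):

```sql
-- Read an Apache Iceberg table stored in S3 without copying the data
SELECT *
FROM iceberg('https://mybucket.s3.amazonaws.com/iceberg_table/')
LIMIT 10;
```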
By incorporating ClickHouse as a query engine in their lakehouse architecture,
organizations can run lightning-fast analytical queries against their data lake
while maintaining the flexibility and openness that define the lakehouse approach.
This combination delivers the performance characteristics of a specialized
analytical database without sacrificing the core benefits of the lakehouse model,
including component interchangeability, open formats, and unified data management.
Hybrid architecture: The best of both worlds {#hybrid-architecture-the-best-of-both-worlds}
While ClickHouse excels at querying lakehouse components, its highly optimized
storage engine offers an additional advantage. For use cases demanding ultra-low
latency queries - such as real-time dashboards, operational analytics, or
interactive user experiences - organizations can selectively store
performance-critical data directly in ClickHouse's native format. This hybrid
approach delivers the best of both worlds: the unmatched query speed of
ClickHouse's specialized storage for time-sensitive analytics and the flexibility
to query the broader data lakehouse when needed.
This dual capability allows organizations to implement tiered data strategies
where hot, frequently accessed data resides in ClickHouse's optimized storage
for sub-second query responses, while maintaining seamless access to the complete
data history in the lakehouse. Teams can make architectural decisions based on
performance requirements rather than technical limitations, using ClickHouse as
a lightning-fast analytical database for critical workloads and a flexible query
engine for the broader data ecosystem. | {"source_file": "03_data_warehousing.md"} | [
slug: /cloud/get-started/cloud/use-cases/observability
title: 'Observability'
description: 'Use ClickHouse Cloud for observability, monitoring, logging, and system performance analysis in distributed applications'
keywords: ['use cases', 'observability']
sidebar_label: 'Observability'
doc_type: 'guide'
Modern software systems are complex. Microservices, cloud infrastructure, and
distributed systems have made it increasingly difficult to understand what's
happening inside our applications. When something goes wrong, teams need to know
where and why quickly.
This is where observability comes in. It's evolved from simple system monitoring
into a comprehensive approach to understanding system behavior. However,
implementing effective observability isn't straightforward - it requires
understanding technical concepts and organizational challenges.
What is Observability? {#what-is-observability}
Observability is understanding a system's internal state by examining its outputs.
In software systems, this means understanding what's happening inside your
applications and infrastructure through the data they generate.
This field has evolved significantly and can be understood through two distinct
generations of observability approaches.
The first generation, often called Observability 1.0, was built around the
traditional "three pillars" approach of metrics, logs, and traces. This approach
required multiple tools and data stores for different types of telemetry. It
often forced engineers to pre-define what they wanted to measure, making it
costly and complex to maintain multiple systems.
Modern observability, or Observability 2.0, takes a fundamentally different
approach. It's based on collecting wide, structured events for each unit of work
(e.g., an HTTP request and response) in our system. This approach captures
high-cardinality data, such as user IDs, request IDs, Git commit hashes,
instance IDs, Kubernetes pod names, specific route parameters, and vendor
transaction IDs. A rule of thumb is adding a piece of metadata if it could help
us understand how the system behaves.
This rich data collection enables dynamic slicing and dicing of data without
pre-defining metrics. Teams can derive metrics, traces, and other visualizations
from this base data, allowing them to answer complex questions about system
behavior that weren't anticipated when the instrumentation was first added.
However, implementing modern observability capabilities presents its challenges.
Organizations need reliable ways to collect, process, and export this rich
telemetry data across diverse systems and technologies. While modern approaches
have evolved beyond traditional boundaries, understanding the fundamental
building blocks of observability remains crucial.
The three pillars of observability {#three-pillars-of-observability} | {"source_file": "02_observability.md"} | [
To better understand how observability has evolved and works in practice, let's
examine the three pillars of observability - logs, metrics, and traces.
While modern observability has moved beyond treating these as separate concerns,
they remain fundamental concepts for understanding different aspects of system
behavior.
- **Logs** - Text-based records of discrete events that occur within a system. These provide detailed context about specific occurrences, errors, and state changes.
- **Metrics** - Numerical measurements collected over time. These include counters, gauges, and histograms that help track system performance, resource usage, and business KPIs.
- **Traces** - Records that track the journey of requests as they flow through distributed systems. These help understand the relationships between services and identify performance bottlenecks.
These pillars enable teams to monitor, troubleshoot, and optimize their systems.
However, the real power comes from understanding how to effectively collect,
analyze, and correlate data across all three pillars to gain meaningful insights
into system behavior.
The benefits of observability {#the-benefits-of-observability}
While the technical aspects of observability - logs, metrics, and traces - are
well understood, the business benefits are equally important to consider.
In their book
"Observability Engineering"
(O'Reilly, 2022), Charity Majors, Liz Fong-Jones, and George Miranda draw from
industry research and anecdotal feedback to identify four key business benefits
that organizations can expect from implementing proper observability practices.
Let's examine these benefits:
Higher incremental revenue {#higher-incremental-revenue}
The authors note that observability tools that help teams improve uptime and
performance can lead to increased incremental revenue through improved code quality.
This manifests in several ways:
- Improved customer experience: Fast problem resolution and prevention of service degradation leads to higher customer satisfaction and retention
- Increased system reliability: Better uptime means more successful transactions and fewer lost business opportunities
- Enhanced performance: The ability to identify and optimize performance bottlenecks helps maintain responsive services that keep customers engaged
- Competitive advantage: Organizations that can maintain high service quality through comprehensive monitoring and quick issue resolution often gain an edge over competitors
Cost savings from faster incident response {#cost-savings-from-faster-incident-response}
One of the most immediate benefits of observability is reduced labor costs
through faster detection and resolution of issues. This comes from:
- Reduced Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR)
- Improved query response times, enabling faster investigation
- Quicker identification of performance bottlenecks
- Reduced time spent on-call
- Fewer resources wasted on unnecessary rollbacks
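As a rough illustration, MTTD and MTTR reduce to simple averages over incident timestamps. The incident fields and time units below are assumptions for the sketch, not a standard format:

```python
# Each incident records when it started, was detected, and was resolved
# (minutes here; any consistent time unit works).

def mean(xs):
    return sum(xs) / len(xs)

def mttd(incidents):
    """Mean Time to Detect: average of (detected - started)."""
    return mean([i["detected"] - i["started"] for i in incidents])

def mttr(incidents):
    """Mean Time to Resolve: average of (resolved - started)."""
    return mean([i["resolved"] - i["started"] for i in incidents])

incidents = [
    {"started": 0,  "detected": 5,  "resolved": 35},
    {"started": 10, "detected": 13, "resolved": 25},
]
print(mttd(incidents))  # 4.0 minutes
print(mttr(incidents))  # 25.0 minutes
```

Tracking these two averages over time is a common way to quantify whether an observability investment is actually shortening incident response.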
We see this in practice - trip.com built their observability system with ClickHouse and achieved query speeds 4-30x faster than their previous solution, with 90% of queries completing in under 300ms, enabling rapid issue investigation.
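That 90%-under-300ms figure is a p90 latency. As an illustration (using a simple nearest-rank method and made-up sample latencies; large-scale systems typically use approximate quantile sketches instead), it can be computed like this:

```python
def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of samples <= it."""
    s = sorted(values)
    rank = -(-p * len(s) // 100)  # ceil(p/100 * n) without floats
    return s[max(0, rank - 1)]

# Hypothetical query latencies in milliseconds
latencies_ms = [120, 80, 300, 250, 90, 110, 60, 280, 150, 95]
print(percentile(latencies_ms, 90))  # 280 -> "90% of queries complete in <= 280 ms"
```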
Cost savings from incidents avoided {#cost-savings-from-incidents-avoided}
Observability doesn't just help resolve issues faster - it helps prevent them entirely.
The authors emphasize how teams can prevent critical issues by:
- Identifying potential problems before they become critical
- Analyzing patterns to prevent recurring issues
- Understanding system behavior under different conditions
- Proactively addressing performance bottlenecks
- Making data-driven decisions about system improvements
ClickHouse's own observability platform, LogHouse, demonstrates this. It enables our core engineers to search historical patterns across all clusters, helping prevent recurring issues.
Cost savings from decreased employee churn {#cost-savings-from-decreased-employee-churn}
One of the most overlooked benefits is the impact on team satisfaction and retention.
The authors highlight how observability leads to:
- Improved job satisfaction through better tooling
- Decreased developer burnout from fewer unresolved issues
- Reduced alert fatigue through better signal-to-noise ratio
- Lower on-call stress due to better incident management
- Increased team confidence in system reliability
We see this in practice - when Fastly migrated to ClickHouse, their engineers were amazed by the improvement in query performance, noting:
"I couldn't believe it. I actually had to go back a couple of times just to
make sure that I was querying it properly... this is coming back too fast.
This doesn't make sense."
As the authors emphasize, while the specific measures of these benefits may vary
depending on the tools and implementation, these fundamental improvements can be
expected across organizations that adopt robust observability practices. The key
is choosing and implementing the right tools effectively to maximize these benefits.
Achieving these benefits requires overcoming several significant hurdles. Even
organizations that understand the value of observability often find that
implementation presents unexpected complexities and challenges that must be
carefully navigated.
Challenges in implementing observability {#challenges-in-implementing-observability} | {"source_file": "02_observability.md"} | [
Implementing observability within an organization is a transformative step toward
gaining deeper insights into system performance and reliability. However, this
journey is not without its challenges. As organizations strive to harness the
full potential of observability, they encounter various obstacles that can impede
progress. Let’s go through some of them.
Data volume and scalability {#data-volume-and-scalability}
One of the primary hurdles in implementing observability is managing the sheer
volume and scalability of telemetry data generated by modern systems. As
organizations grow, so does the data they need to monitor, necessitating
solutions that efficiently handle large-scale data ingestion and
real-time analytics.
Integration with existing systems {#integration-with-existing-systems}
Integration with existing systems poses another significant challenge. Many
organizations operate in heterogeneous environments with diverse technologies,
making it essential for observability tools to seamlessly integrate with current
infrastructure. Open standards are crucial in facilitating this integration,
ensuring interoperability and reducing the complexity of deploying observability
solutions across varied tech stacks.
Skill gaps {#skill-gaps}
Skill gaps can also impede the successful implementation of observability. The
transition to advanced observability solutions often requires specialized
knowledge of data analytics and specific tools. Teams may need to invest in
training or hiring to bridge these gaps and fully leverage the capabilities of
their observability platforms.
Cost management {#cost-management}
Cost management is critical, as observability solutions can become expensive,
particularly at scale. Organizations must balance the costs of these tools with
the value they provide, seeking cost-effective solutions that offer significant
savings compared to traditional approaches.
Data retention and storage {#data-retention-and-storage}
Data retention and storage management present additional challenges. Deciding
how long to retain observability data without compromising performance or
insights requires careful planning and efficient storage solutions that reduce
storage requirements while maintaining data accessibility.
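As a back-of-the-envelope sketch of the retention trade-off (all figures below are illustrative assumptions, not benchmarks), storage needs grow linearly with retention length and shrink with the compression ratio the storage engine achieves:

```python
def retention_storage_gb(daily_ingest_gb, retention_days, compression_ratio):
    """Disk required, after compression, to keep the full retention window."""
    return daily_ingest_gb * retention_days / compression_ratio

# e.g. 500 GB/day of raw telemetry, 30-day retention, assuming ~10x compression
print(retention_storage_gb(500, 30, 10))  # 1500.0 GB
```

Running this kind of estimate for several candidate retention windows is a simple way to ground the "how long to retain" decision in concrete storage costs.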
Standardization and vendor lock-in {#standardization-and-vendor-lock-in}
Ensuring standardization and avoiding vendor lock-in are vital for maintaining
flexibility and adaptability in observability solutions. By adhering to open
standards, organizations can prevent being tied to specific vendors and ensure
their observability stack can evolve with their needs.
Security and compliance {#security-and-compliance} | {"source_file": "02_observability.md"} | [
Security and compliance considerations remain crucial, especially when handling
sensitive data within observability systems. Organizations must ensure that their
observability solutions adhere to relevant regulations and effectively protect
sensitive information.
These challenges underscore the importance of strategic planning and informed
decision-making in implementing observability solutions that effectively meet
organizational needs.
To address these challenges, organizations need a well-structured approach to
implementing observability. The standard observability pipeline has evolved to
provide a framework for effectively collecting, processing, and analyzing
telemetry data. One of the earliest and most influential examples of this
evolution comes from Twitter's experience in 2013. | {"source_file": "02_observability.md"} | [
slug: /cloud/get-started/cloud/use-cases/overview
title: 'Building on ClickHouse Cloud'
description: 'Explore ClickHouse Cloud use cases including real-time analytics, observability, data lake & warehouse, and machine learning applications'
keywords: ['use cases', 'Cloud']
sidebar_label: 'Overview'
doc_type: 'landing-page'
ClickHouse Cloud is suitable for use as both a primary data store and as an analytics layer.
ClickHouse's columnar architecture, vectorized processing, and cloud-native design
make it uniquely suited for analytical workloads that require both speed and scale.
Broadly, the most common use cases for ClickHouse Cloud are:
| Use case | Description |
|----------|-------------|
| Real-Time analytics | ClickHouse Cloud excels at real-time analytics by delivering sub-second query responses on billions of rows through its columnar storage architecture and vectorized execution engine. The platform handles high-throughput data ingestion of millions of events per second while enabling direct queries on raw data without requiring pre-aggregation. Materialized Views provide real-time aggregations and pre-computed results, while approximate functions for quantiles and counts deliver instant insights perfect for interactive dashboards and real-time decision making. |
| Observability | ClickHouse Cloud is well suited for observability workloads, featuring specialized engines and functions optimized for time-series data that can ingest and query terabytes of logs, metrics, and traces with ease. Through ClickStack, ClickHouse's comprehensive observability solution, organizations can break down the traditional three silos of logs, metrics, and traces by unifying all observability data in a single platform, enabling correlated analysis and eliminating the complexity of managing separate systems. This unified approach makes it ideal for application performance monitoring, infrastructure monitoring, and security event analysis at enterprise scale, with ClickStack providing the tools and integrations needed for complete observability workflows without data silos. |
| Data warehousing | ClickHouse's data warehousing ecosystem connectivity allows users to get set up in a few clicks and easily move their data into ClickHouse. With excellent support for historical data analysis, data lakes, query federation, and JSON as a native data type, it enables users to store their data cost-efficiently at scale. |
| Machine Learning and Artificial Intelligence | ClickHouse Cloud can be used across the ML value chain, from exploration and preparation through to training, testing, and inference. Tools like clickhouse-local, clickhouse-server, and chDB can be used for data exploration, discovery, and transformation, while ClickHouse can serve as a feature store, vector store, or MLOps observability store. Furthermore, it enables agentic analytics through built-in tools such as a fully managed remote MCP server, inline text completion for queries, AI-powered chart configuration, and in-product Ask AI. |
slug: /cloud/get-started/cloud/use-cases/AI_ML
title: 'Machine learning'
description: 'Learn how ClickHouse powers machine learning applications across the ML pipeline.'
keywords: ['use cases', 'Machine Learning', 'Generative AI']
sidebar_label: 'Machine learning'
doc_type: 'guide'
import machine_learning_data_layer from '@site/static/images/cloud/onboard/discover/use_cases/ml_data_layer.png'
import online_feature_store from '@site/static/images/cloud/onboard/discover/use_cases/ml_data_layer.png'
import Image from '@theme/IdealImage';
The machine learning data layer {#machine-learning-data-layer}
You’ve probably heard the lore that 80% of a machine learning practitioner's time is spent cleaning data.
Regardless of whether this myth holds true or not, what does remain true is that data is at the heart of the machine learning problem, from start to finish.
Whether you’re building RAG pipelines, fine-tuning, training your own model, or evaluating model performance, data is the root of each problem.
Managing data can be tricky, and as a byproduct, the space has experienced a proliferation of tools that are designed to boost productivity by solving a specific slice of a machine learning data problem.
Oftentimes, this takes shape as a layer of abstraction around a more general-purpose solution with an opinionated interface that, on the surface, makes it easier to apply to the specific subproblem at hand.
In effect, this reduces the flexibility that exists with a general-purpose solution in favor of ease-of-use and simplicity of a specific task.
There are several drawbacks to this approach.
A cascading suite of specialized tools, products, and services, in contrast with a general-purpose solution coupled with supporting application code, presents the risk of greater architectural complexity and data costs than necessary.
It’s easy to accidentally find yourself with an endless list of tools and services, each used for just a single step.
There are two common dimensions to these risks:
- **Learning, maintenance, and switching costs**: Machine learning architectures can become so cluttered with various tools and components that they create a fragmented environment that is challenging to learn and manage, with increased points of failure and expense creep.
- **Data duplication and transfer costs**: Using several discrete yet overlapping data systems in a machine learning pipeline may introduce an unnecessary, and often costly, overhead of shipping data from one system to another.