---
slug: /intro
sidebar_label: 'What is ClickHouse?'
description: 'ClickHouse® is a column-oriented SQL database management system (DBMS) for online analytical processing (OLAP). It is available as both open-source software and a cloud offering.'
title: 'What is ClickHouse?'
keywords: ['ClickHouse', 'columnar database', 'OLAP database', 'analytical database', 'high-performance database']
doc_type: 'guide'
---
import column_example from '@site/static/images/column-oriented-example-query.png';
import row_orientated from '@site/static/images/row-oriented.gif';
import column_orientated from '@site/static/images/column-oriented.gif';
import Image from '@theme/IdealImage';
ClickHouse® is a high-performance, column-oriented SQL database management system (DBMS) for online analytical processing (OLAP). It is available as both open-source software and a cloud offering.
## What are analytics? {#what-are-analytics}
Analytics, also known as OLAP (Online Analytical Processing), refers to SQL queries with complex calculations (e.g., aggregations, string processing, arithmetic) over massive datasets.
Unlike transactional queries (OLTP, Online Transaction Processing), which read and write just a few rows per query and therefore complete in milliseconds, analytics queries routinely process billions or trillions of rows.

In many use cases, analytics queries must be "real-time", i.e., return a result in less than one second.
## Row-oriented vs. column-oriented storage {#row-oriented-vs-column-oriented-storage}
Such a level of performance can only be achieved with the right data "orientation". Databases store data either row-oriented or column-oriented.
In a row-oriented database, consecutive table rows are stored sequentially, one after the other. This layout allows rows to be retrieved quickly, because the column values of each row are stored together.
ClickHouse is a column-oriented database. In such systems, tables are stored as a collection of columns, i.e., the values of each column are stored sequentially, one after the other. This layout makes it harder to restore single rows (as there are now gaps between the row values), but column operations such as filters or aggregations become much faster than in a row-oriented database.
The difference is best explained with an example query running over 100 million rows of real-world anonymized web analytics data:
```sql
SELECT MobilePhoneModel, COUNT() AS c
FROM metrica.hits
WHERE
      RegionID = 229
  AND EventDate >= '2013-07-01'
  AND EventDate <= '2013-07-31'
  AND MobilePhone != 0
  AND MobilePhoneModel NOT IN ['', 'iPad']
GROUP BY MobilePhoneModel
ORDER BY c DESC
LIMIT 8;
```
You can run this query on the ClickHouse SQL Playground; it selects and filters just a few out of the over 100 existing columns, returning the result within milliseconds:
As you can see in the stats section of the diagram above, the query processed 100 million rows in 92 milliseconds: a throughput of over 1 billion rows per second, or just under 7 GB of data transferred per second.
### Row-oriented DBMS
In a row-oriented database, even though the query above processes only a few of the existing columns, the system still needs to load the data from the other columns from disk into memory. The reason is that data is stored on disk in chunks called blocks (usually of fixed size, e.g., 4 KB or 8 KB). Blocks are the smallest units of data read from disk into memory. When an application or database requests data, the operating system's disk I/O subsystem reads the required blocks from disk. Even if only part of a block is needed, the entire block is read into memory (this is due to disk and file system design):
### Column-oriented DBMS
Because the values of each column are stored sequentially, one after the other on disk, no unnecessary data is loaded when the query above is run. Because block-wise storage and transfer from disk to memory is aligned with the data access pattern of analytical queries, only the columns required for a query are read from disk, avoiding unnecessary I/O for unused data. This is much faster compared to row-based storage, where entire rows (including irrelevant columns) are read:
## Data replication and integrity {#data-replication-and-integrity}
ClickHouse uses an asynchronous multi-master replication scheme to ensure that data is stored redundantly on multiple nodes. After being written to any available replica, all the remaining replicas retrieve their copy in the background. The system maintains identical data on different replicas. Recovery after most failures is performed automatically, or semi-automatically in complex cases.
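As an illustration (not part of the original page), replication is configured per table through the `ReplicatedMergeTree` engine family; the coordination path and replica macro below are placeholder values that would normally come from the server configuration:

```sql
-- Sketch: a replicated table; '/clickhouse/tables/{shard}/events' and
-- '{replica}' are placeholder macros, and the table itself is hypothetical.
CREATE TABLE events
(
    `event_time` DateTime,
    `user_id` UInt64,
    `payload` String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events', '{replica}')
ORDER BY (user_id, event_time);
```

Writes accepted by any replica of such a table are fetched by the remaining replicas in the background, as described above.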
## Role-Based Access Control {#role-based-access-control}

ClickHouse implements user account management using SQL queries and allows for role-based access control configuration, similar to what can be found in the ANSI SQL standard and popular relational database management systems.
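A minimal sketch of that SQL-driven access control (the role, user, and database names here are hypothetical):

```sql
-- Create a role, grant it read access, and assign it to a user.
CREATE ROLE analyst;
GRANT SELECT ON metrica.* TO analyst;

CREATE USER jane IDENTIFIED WITH sha256_password BY 'change_me';
GRANT analyst TO jane;
```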
## SQL support {#sql-support}

ClickHouse supports a declarative query language based on SQL that is identical to the ANSI SQL standard in many cases. Supported query clauses include `GROUP BY`, `ORDER BY`, subqueries in `FROM`, the `JOIN` clause, the `IN` operator, window functions, and scalar subqueries.
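A sketch (not from the original page) combining several of these clauses, namely a subquery in `FROM`, `GROUP BY`, and a window function, over a hypothetical `orders` table:

```sql
-- Rank customers by total spend; the `orders` table is illustrative only.
SELECT
    customer_id,
    total,
    rank() OVER (ORDER BY total DESC) AS spend_rank
FROM
(
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id
)
ORDER BY spend_rank
LIMIT 10;
```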
## Approximate calculation {#approximate-calculation}
ClickHouse provides ways to trade accuracy for performance. For example, some of its aggregate functions calculate the distinct value count, the median, and quantiles approximately. Also, queries can be run on a sample of the data to compute an approximate result quickly. Finally, aggregations can be run with a limited number of keys instead of for all keys. Depending on how skewed the distribution of the keys is, this can provide a reasonably accurate result that uses far fewer resources than an exact calculation.
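As a hedged sketch of the first two trade-offs, using the web analytics table from earlier on this page (the `uniq`, `uniqExact`, and `quantile` aggregate functions are ClickHouse built-ins; the column names are assumed from the standard hits dataset):

```sql
SELECT
    uniq(UserID)             AS approx_distinct_users,  -- approximate, fast, low memory
    uniqExact(UserID)        AS exact_distinct_users,   -- exact, more memory
    quantile(0.5)(EventTime) AS approx_median_time      -- approximate median
FROM metrica.hits;
```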
## Adaptive join algorithms {#adaptive-join-algorithms}
ClickHouse chooses the join algorithm adaptively: it starts with fast hash joins and falls back to merge joins if there's more than one large table.
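The choice can also be pinned explicitly via the `join_algorithm` setting, as in this sketch (the table names are hypothetical):

```sql
-- Force a specific join algorithm instead of the adaptive default.
SELECT t1.id, t2.value
FROM big_table_1 AS t1
JOIN big_table_2 AS t2 ON t1.id = t2.id
SETTINGS join_algorithm = 'partial_merge';
```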
## Superior query performance {#superior-query-performance}

ClickHouse is well known for having extremely fast query performance. To learn why ClickHouse is so fast, see the "Why is ClickHouse fast?" guide.
---
slug: /deployment-modes
sidebar_label: 'Deployment modes'
description: 'ClickHouse offers four deployment options that all use the same powerful database engine, just packaged differently to suit your specific needs.'
title: 'Deployment modes'
keywords: ['Deployment Modes', 'chDB']
show_related_blogs: true
doc_type: 'guide'
---
import chServer from '@site/static/images/deployment-modes/ch-server.png';
import chCloud from '@site/static/images/deployment-modes/ch-cloud.png';
import chLocal from '@site/static/images/deployment-modes/ch-local.png';
import chDB from '@site/static/images/deployment-modes/chdb.png';
import Image from '@theme/IdealImage';
ClickHouse is a versatile database system that can be deployed in several different ways depending on your needs. At its core, all deployment options use the same powerful ClickHouse database engine; what differs is how you interact with it and where it runs.
Whether you're running large-scale analytics in production, doing local data analysis, or building applications, there's a deployment option designed for your use case. The consistency of the underlying engine means you get the same high performance and SQL compatibility across all deployment modes.
This guide explores the four main ways to deploy and use ClickHouse:

- ClickHouse Server for traditional client/server deployments
- ClickHouse Cloud for fully managed database operations
- clickhouse-local for command-line data processing
- chDB for embedding ClickHouse directly in applications

Each deployment mode has its own strengths and ideal use cases, which we'll explore in detail below.
## ClickHouse Server {#clickhouse-server}
ClickHouse Server represents the traditional client/server architecture and is ideal for production deployments. This deployment mode provides the full OLAP database capabilities, with the high-throughput, low-latency queries that ClickHouse is known for.
When it comes to deployment flexibility, ClickHouse Server can be installed on your local machine for development or testing, deployed to major cloud providers like AWS, GCP, or Azure for cloud-based operations, or set up on your own on-premises hardware. For larger scale operations, it can be configured as a distributed cluster to handle increased load and provide high availability.
This deployment mode is the go-to choice for production environments where reliability, performance, and full feature access are crucial.
## ClickHouse Cloud {#clickhouse-cloud}

ClickHouse Cloud is a fully managed version of ClickHouse that removes the operational overhead of running your own deployment. While it maintains all the core capabilities of ClickHouse Server, it enhances the experience with additional features designed to streamline development and operations.
A key advantage of ClickHouse Cloud is its integrated tooling. ClickPipes provides a robust data ingestion framework, allowing you to easily connect and stream data from various sources without managing complex ETL pipelines. The platform also offers a dedicated querying API, making it significantly easier to build applications.
The SQL Console in ClickHouse Cloud includes a powerful dashboarding feature that lets you transform your queries into interactive visualizations. You can create and share dashboards built from your saved queries, with the ability to add interactive elements through query parameters. These dashboards can be made dynamic using global filters, allowing users to explore data through customizable views. Note, though, that users need at least read access to the underlying saved queries to view the visualizations.
For monitoring and optimization, ClickHouse Cloud includes built-in charts and query insights. These tools provide deep visibility into your cluster's performance, helping you understand query patterns, resource utilization, and potential optimization opportunities. This level of observability is particularly valuable for teams that need to maintain high-performance analytics operations without dedicating resources to infrastructure management.
The managed nature of the service means you don't need to worry about updates, backups, scaling, or security patches; these are all handled automatically. This makes it an ideal choice for organizations that want to focus on their data and applications rather than database administration.
## clickhouse-local {#clickhouse-local}

clickhouse-local is a powerful command-line tool that provides the complete functionality of ClickHouse in a standalone executable. It's essentially the same database as ClickHouse Server, but packaged in a way that lets you harness all of ClickHouse's capabilities directly from the command line without running a server instance.
This tool excels at ad-hoc data analysis, particularly when working with local files or data stored in cloud storage services. You can directly query files in various formats (CSV, JSON, Parquet, etc.) using ClickHouse's SQL dialect, making it an excellent choice for quick data exploration or one-off analysis tasks.
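For instance (a sketch, assuming a hypothetical `sales.csv` in the current directory), a local file can be queried directly with the `file()` table function:

```bash
# Query a local CSV file without starting a server; the file name is illustrative.
clickhouse-local --query "
    SELECT town, avg(price) AS avg_price
    FROM file('sales.csv', 'CSVWithNames')
    GROUP BY town
    ORDER BY avg_price DESC
    LIMIT 5"
```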
Since clickhouse-local includes all of ClickHouse's functionality, you can use it for data transformations, format conversions, or any other database operations you'd normally do with ClickHouse Server. While primarily used for temporary operations, it can also persist data using the same storage engine as ClickHouse Server when needed. | {"source_file": "deployment-modes.md"} | [
The combination of remote table functions and access to the local file system makes clickhouse-local particularly useful for scenarios where you need to join data between a ClickHouse Server and files on your local machine. This is especially valuable when working with sensitive or temporary local data that you don't want to upload to a server.
## chDB {#chdb}

chDB is ClickHouse embedded as an in-process database engine, with Python being the primary implementation, though it's also available for Go, Rust, NodeJS, and Bun. This deployment option brings ClickHouse's powerful OLAP capabilities directly into your application's process, eliminating the need for a separate database installation.
chDB provides seamless integration with your application's ecosystem. In Python, for example, it's optimized to work efficiently with common data science tools like Pandas and Arrow, minimizing data copying overhead through Python memoryview. This makes it particularly valuable for data scientists and analysts who want to leverage ClickHouse's query performance within their existing workflows.
chDB can also connect to databases created with clickhouse-local, providing flexibility in how you work with your data. This means you can seamlessly transition between local development, data exploration in Python, and more permanent storage solutions without changing your data access patterns.
---
slug: /introduction-clickhouse
title: 'Introduction'
description: 'Landing page for Introduction'
pagination_next: null
doc_type: 'landing-page'
keywords: ['ClickHouse introduction', 'getting started', 'what is ClickHouse', 'quick start', 'installation', 'deployment', 'tutorial']
---
Welcome to ClickHouse! Check out the pages below to learn how to get up and running with ClickHouse, the fastest and most resource-efficient real-time data warehouse and open-source database.
| Page                | Description                                                          |
|---------------------|----------------------------------------------------------------------|
| What is ClickHouse? | Learn more about what ClickHouse is.                                 |
| Quick Start         | Quick start guide to get you up and running in no time.              |
| Advanced Tutorial   | Comfortable with the basics? Let's do something more interesting.    |
| Install             | Learn about the various ways you can install ClickHouse.             |
| Deployment modes    | This guide explores the four main ways to deploy and use ClickHouse. |
---
slug: /tutorial
sidebar_label: 'Advanced tutorial'
title: 'Advanced tutorial'
description: 'Learn how to ingest and query data in ClickHouse using a New York City taxi example dataset.'
sidebar_position: 0.5
keywords: ['clickhouse', 'install', 'tutorial', 'dictionary', 'dictionaries', 'example', 'advanced', 'taxi', 'new york', 'nyc']
show_related_blogs: true
doc_type: 'guide'
---
# Advanced Tutorial

## Overview {#overview}

Learn how to ingest and query data in ClickHouse using the New York City taxi example dataset.
## Prerequisites {#prerequisites}

You need access to a running ClickHouse service to complete this tutorial. For instructions, see the Quick Start guide.
## Create a new table {#create-a-new-table}

The New York City taxi dataset contains details about millions of taxi rides, with columns including tip amount, tolls, payment type, and more. Create a table to store this data.

Connect to the SQL console:

- For ClickHouse Cloud, select a service from the dropdown menu and then select **SQL Console** from the left navigation menu.
- For self-managed ClickHouse, connect to the SQL console at `https://_hostname_:8443/play`. Check with your ClickHouse administrator for the details.
Create the following `trips` table in the `default` database:

```sql
CREATE TABLE trips
(
    `trip_id` UInt32,
    `vendor_id` Enum8('1' = 1, '2' = 2, '3' = 3, '4' = 4, 'CMT' = 5, 'VTS' = 6, 'DDS' = 7, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14, '' = 15),
    `pickup_date` Date,
    `pickup_datetime` DateTime,
    `dropoff_date` Date,
    `dropoff_datetime` DateTime,
    `store_and_fwd_flag` UInt8,
    `rate_code_id` UInt8,
    `pickup_longitude` Float64,
    `pickup_latitude` Float64,
    `dropoff_longitude` Float64,
    `dropoff_latitude` Float64,
    `passenger_count` UInt8,
    `trip_distance` Float64,
    `fare_amount` Float32,
    `extra` Float32,
    `mta_tax` Float32,
    `tip_amount` Float32,
    `tolls_amount` Float32,
    `ehail_fee` Float32,
    `improvement_surcharge` Float32,
    `total_amount` Float32,
    `payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4),
    `trip_type` UInt8,
    `pickup` FixedString(25),
    `dropoff` FixedString(25),
    `cab_type` Enum8('yellow' = 1, 'green' = 2, 'uber' = 3),
    `pickup_nyct2010_gid` Int8,
    `pickup_ctlabel` Float32,
    `pickup_borocode` Int8,
    `pickup_ct2010` String,
    `pickup_boroct2010` String,
    `pickup_cdeligibil` String,
    `pickup_ntacode` FixedString(4),
    `pickup_ntaname` String,
    `pickup_puma` UInt16,
    `dropoff_nyct2010_gid` UInt8,
    `dropoff_ctlabel` Float32,
    `dropoff_borocode` UInt8,
    `dropoff_ct2010` String,
    `dropoff_boroct2010` String,
    `dropoff_cdeligibil` String,
    `dropoff_ntacode` FixedString(4),
    `dropoff_ntaname` String,
    `dropoff_puma` UInt16
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(pickup_date)
ORDER BY pickup_datetime;
```
## Add the dataset {#add-the-dataset}

Now that you've created a table, add the New York City taxi data from CSV files in S3. The following command inserts ~2,000,000 rows into your `trips` table from two different files in S3, `trips_1.tsv.gz` and `trips_2.tsv.gz`:
```sql
INSERT INTO trips
SELECT * FROM s3(
    'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/trips_{1..2}.gz',
    'TabSeparatedWithNames', "
    `trip_id` UInt32,
    `vendor_id` Enum8('1' = 1, '2' = 2, '3' = 3, '4' = 4, 'CMT' = 5, 'VTS' = 6, 'DDS' = 7, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14, '' = 15),
    `pickup_date` Date,
    `pickup_datetime` DateTime,
    `dropoff_date` Date,
    `dropoff_datetime` DateTime,
    `store_and_fwd_flag` UInt8,
    `rate_code_id` UInt8,
    `pickup_longitude` Float64,
    `pickup_latitude` Float64,
    `dropoff_longitude` Float64,
    `dropoff_latitude` Float64,
    `passenger_count` UInt8,
    `trip_distance` Float64,
    `fare_amount` Float32,
    `extra` Float32,
    `mta_tax` Float32,
    `tip_amount` Float32,
    `tolls_amount` Float32,
    `ehail_fee` Float32,
    `improvement_surcharge` Float32,
    `total_amount` Float32,
    `payment_type` Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4),
    `trip_type` UInt8,
    `pickup` FixedString(25),
    `dropoff` FixedString(25),
    `cab_type` Enum8('yellow' = 1, 'green' = 2, 'uber' = 3),
    `pickup_nyct2010_gid` Int8,
    `pickup_ctlabel` Float32,
    `pickup_borocode` Int8,
    `pickup_ct2010` String,
    `pickup_boroct2010` String,
    `pickup_cdeligibil` String,
    `pickup_ntacode` FixedString(4),
    `pickup_ntaname` String,
    `pickup_puma` UInt16,
    `dropoff_nyct2010_gid` UInt8,
    `dropoff_ctlabel` Float32,
    `dropoff_borocode` UInt8,
    `dropoff_ct2010` String,
    `dropoff_boroct2010` String,
    `dropoff_cdeligibil` String,
    `dropoff_ntacode` FixedString(4),
    `dropoff_ntaname` String,
    `dropoff_puma` UInt16
") SETTINGS input_format_try_infer_datetimes = 0
```
Wait for the `INSERT` to finish. It might take a moment for the 150 MB of data to be downloaded.

When the insert is finished, verify it worked:

```sql
SELECT count() FROM trips
```

This query should return 1,999,657 rows.
## Analyze the data {#analyze-the-data}

Run some queries to analyze the data. Explore the following examples or try your own SQL query.

Calculate the average tip amount:

```sql
SELECT round(avg(tip_amount), 2) FROM trips
```

Expected output:

```response
┌─round(avg(tip_amount), 2)─┐
│                      1.68 │
└───────────────────────────┘
```
Calculate the average cost based on the number of passengers:

```sql
SELECT
    passenger_count,
    ceil(avg(total_amount),2) AS average_total_amount
FROM trips
GROUP BY passenger_count
```

Expected output (the `passenger_count` ranges from 0 to 9):
```response
┌─passenger_count─┬─average_total_amount─┐
│               0 │                22.69 │
│               1 │                15.97 │
│               2 │                17.15 │
│               3 │                16.76 │
│               4 │                17.33 │
│               5 │                16.35 │
│               6 │                16.04 │
│               7 │                 59.8 │
│               8 │                36.41 │
│               9 │                 9.81 │
└─────────────────┴──────────────────────┘
```
Calculate the daily number of pickups per neighborhood:

```sql
SELECT
    pickup_date,
    pickup_ntaname,
    SUM(1) AS number_of_trips
FROM trips
GROUP BY pickup_date, pickup_ntaname
ORDER BY pickup_date ASC
```

Expected output:

```response
┌─pickup_date─┬─pickup_ntaname─────────────────────────┬─number_of_trips─┐
│  2015-07-01 │ Brooklyn Heights-Cobble Hill           │              13 │
│  2015-07-01 │ Old Astoria                            │               5 │
│  2015-07-01 │ Flushing                               │               1 │
│  2015-07-01 │ Yorkville                              │             378 │
│  2015-07-01 │ Gramercy                               │             344 │
│  2015-07-01 │ Fordham South                          │               2 │
│  2015-07-01 │ SoHo-TriBeCa-Civic Center-Little Italy │             621 │
│  2015-07-01 │ Park Slope-Gowanus                     │              29 │
│  2015-07-01 │ Bushwick South                         │               5 │
```
Calculate the length of each trip in minutes, then group the results by trip length:

```sql
SELECT
    avg(tip_amount) AS avg_tip,
    avg(fare_amount) AS avg_fare,
    avg(passenger_count) AS avg_passenger,
    count() AS count,
    truncate(date_diff('second', pickup_datetime, dropoff_datetime)/60) AS trip_minutes
FROM trips
WHERE trip_minutes > 0
GROUP BY trip_minutes
ORDER BY trip_minutes DESC
```

Expected output:
```response
┌────────────avg_tip─┬───────────avg_fare─┬──────avg_passenger─┬─count─┬─trip_minutes─┐
│ 1.9600000381469727 │                  8 │                  1 │     1 │        27511 │
│                  0 │                 12 │                  2 │     1 │        27500 │
│  0.542166673981895 │ 19.716666666666665 │ 1.9166666666666667 │    60 │         1439 │
│  0.902499997522682 │ 11.270625001192093 │            1.95625 │   160 │         1438 │
│ 0.9715789457909146 │ 13.646616541353383 │ 2.0526315789473686 │   133 │         1437 │
│ 0.9682692398245518 │ 14.134615384615385 │  2.076923076923077 │   104 │         1436 │
│ 1.1022105210705808 │ 13.778947368421052 │  2.042105263157895 │    95 │         1435 │
```
Show the number of pickups in each neighborhood, broken down by hour of the day:

```sql
SELECT
    pickup_ntaname,
    toHour(pickup_datetime) AS pickup_hour,
    SUM(1) AS pickups
FROM trips
WHERE pickup_ntaname != ''
GROUP BY pickup_ntaname, pickup_hour
ORDER BY pickup_ntaname, pickup_hour
```

Expected output:
```response
┌─pickup_ntaname─────────────────────────────┬─pickup_hour─┬─pickups─┐
│ Airport                                    │           0 │    3509 │
│ Airport                                    │           1 │    1184 │
│ Airport                                    │           2 │     401 │
│ Airport                                    │           3 │     152 │
│ Airport                                    │           4 │     213 │
│ Airport                                    │           5 │     955 │
│ Airport                                    │           6 │    2161 │
│ Airport                                    │           7 │    3013 │
│ Airport                                    │           8 │    3601 │
│ Airport                                    │           9 │    3792 │
│ Airport                                    │          10 │    4546 │
│ Airport                                    │          11 │    4659 │
│ Airport                                    │          12 │    4621 │
│ Airport                                    │          13 │    5348 │
│ Airport                                    │          14 │    5889 │
│ Airport                                    │          15 │    6505 │
│ Airport                                    │          16 │    6119 │
│ Airport                                    │          17 │    6341 │
│ Airport                                    │          18 │    6173 │
│ Airport                                    │          19 │    6329 │
│ Airport                                    │          20 │    6271 │
│ Airport                                    │          21 │    6649 │
│ Airport                                    │          22 │    6356 │
│ Airport                                    │          23 │    6016 │
│ Allerton-Pelham Gardens                    │           4 │       1 │
│ Allerton-Pelham Gardens                    │           6 │       1 │
│ Allerton-Pelham Gardens                    │           7 │       1 │
│ Allerton-Pelham Gardens                    │           9 │       5 │
│ Allerton-Pelham Gardens                    │          10 │       3 │
│ Allerton-Pelham Gardens                    │          15 │       1 │
│ Allerton-Pelham Gardens                    │          20 │       2 │
│ Allerton-Pelham Gardens                    │          23 │       1 │
│ Annadale-Huguenot-Prince's Bay-Eltingville │          23 │       1 │
│ Arden Heights                              │          11 │       1 │
```
Retrieve rides to LaGuardia or JFK airports:

```sql
SELECT
    pickup_datetime,
    dropoff_datetime,
    total_amount,
    pickup_nyct2010_gid,
    dropoff_nyct2010_gid,
    CASE
        WHEN dropoff_nyct2010_gid = 138 THEN 'LGA'
        WHEN dropoff_nyct2010_gid = 132 THEN 'JFK'
    END AS airport_code,
    EXTRACT(YEAR FROM pickup_datetime) AS year,
    EXTRACT(DAY FROM pickup_datetime) AS day,
    EXTRACT(HOUR FROM pickup_datetime) AS hour
FROM trips
WHERE dropoff_nyct2010_gid IN (132, 138)
ORDER BY pickup_datetime
```
Expected output:

```response
┌─────pickup_datetime─┬────dropoff_datetime─┬─total_amount─┬─pickup_nyct2010_gid─┬─dropoff_nyct2010_gid─┬─airport_code─┬─year─┬─day─┬─hour─┐
│ 2015-07-01 00:04:14 │ 2015-07-01 00:15:29 │         13.3 │                 -34 │                  132 │ JFK          │ 2015 │   1 │    0 │
│ 2015-07-01 00:09:42 │ 2015-07-01 00:12:55 │          6.8 │                  50 │                  138 │ LGA          │ 2015 │   1 │    0 │
│ 2015-07-01 00:23:04 │ 2015-07-01 00:24:39 │          4.8 │                -125 │                  132 │ JFK          │ 2015 │   1 │    0 │
│ 2015-07-01 00:27:51 │ 2015-07-01 00:39:02 │        14.72 │                -101 │                  138 │ LGA          │ 2015 │   1 │    0 │
│ 2015-07-01 00:32:03 │ 2015-07-01 00:55:39 │        39.34 │                  48 │                  138 │ LGA          │ 2015 │   1 │    0 │
│ 2015-07-01 00:34:12 │ 2015-07-01 00:40:48 │         9.95 │                 -93 │                  132 │ JFK          │ 2015 │   1 │    0 │
│ 2015-07-01 00:38:26 │ 2015-07-01 00:49:00 │         13.3 │                 -11 │                  138 │ LGA          │ 2015 │   1 │    0 │
│ 2015-07-01 00:41:48 │ 2015-07-01 00:44:45 │          6.3 │                 -94 │                  132 │ JFK          │ 2015 │   1 │    0 │
│ 2015-07-01 01:06:18 │ 2015-07-01 01:14:43 │        11.76 │                  37 │                  132 │ JFK          │ 2015 │   1 │    1 │
```
## Create a dictionary {#create-a-dictionary}

A dictionary is a mapping of key-value pairs stored in memory. For details, see Dictionaries.

Create a dictionary associated with a table in your ClickHouse service. The table and dictionary are based on a CSV file that contains a row for each neighborhood in New York City. The neighborhoods are mapped to the names of the five New York City boroughs (Bronx, Brooklyn, Manhattan, Queens, and Staten Island), as well as Newark Airport (EWR).

Here's an excerpt from the CSV file you're using, in table format. The `LocationID` column in the file maps to the `pickup_nyct2010_gid` and `dropoff_nyct2010_gid` columns in your `trips` table:
| LocationID | Borough       | Zone                    | service_zone |
|------------|---------------|-------------------------|--------------|
| 1          | EWR           | Newark Airport          | EWR          |
| 2          | Queens        | Jamaica Bay             | Boro Zone    |
| 3          | Bronx         | Allerton/Pelham Gardens | Boro Zone    |
| 4          | Manhattan     | Alphabet City           | Yellow Zone  |
| 5          | Staten Island | Arden Heights           | Boro Zone    |
Run the following SQL command, which creates a dictionary named `taxi_zone_dictionary` and populates it from the CSV file in S3. The URL for the file is `https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/taxi_zone_lookup.csv`.

```sql
CREATE DICTIONARY taxi_zone_dictionary
(
    `LocationID` UInt16 DEFAULT 0,
    `Borough` String,
    `Zone` String,
    `service_zone` String
)
PRIMARY KEY LocationID
SOURCE(HTTP(URL 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/nyc-taxi/taxi_zone_lookup.csv' FORMAT 'CSVWithNames'))
LIFETIME(MIN 0 MAX 0)
LAYOUT(HASHED_ARRAY())
```
:::note
Setting `LIFETIME` to 0 disables automatic updates to avoid unnecessary traffic to our S3 bucket. In other cases, you might configure it differently. For details, see Refreshing dictionary data using LIFETIME.
:::
Verify it worked. The following should return 265 rows, or one row for each neighborhood:

```sql
SELECT * FROM taxi_zone_dictionary
```
Use the `dictGet` function (or its variations) to retrieve a value from a dictionary. You pass in the name of the dictionary, the value you want, and the key (which in our example is the `LocationID` column of `taxi_zone_dictionary`).
For example, the following query returns the `Borough` whose `LocationID` is 132 (which corresponds to JFK Airport):
```sql
SELECT dictGet('taxi_zone_dictionary', 'Borough', 132)
```
JFK is in Queens. Notice the time to retrieve the value is essentially 0:
```response
┌─dictGet('taxi_zone_dictionary', 'Borough', 132)─┐
│ Queens                                          │
└─────────────────────────────────────────────────┘

1 rows in set. Elapsed: 0.004 sec.
```
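One of those variations, `dictGetOrDefault`, lets you supply a fallback value for keys that are missing from the dictionary; it is used later in this tutorial. A minimal sketch:

```sql
-- Returns 'Unknown' because 4567 is not a LocationID in the dictionary
SELECT dictGetOrDefault('taxi_zone_dictionary', 'Borough', toUInt64(4567), 'Unknown')
```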
Use the `dictHas` function to see if a key is present in the dictionary. For example, the following query returns `1` (which is "true" in ClickHouse):
```sql
SELECT dictHas('taxi_zone_dictionary', 132)
```
The following query returns 0 because 4567 is not a value of `LocationID` in the dictionary:
```sql
SELECT dictHas('taxi_zone_dictionary', 4567)
```
Use the `dictGet` function to retrieve a borough's name in a query. For example:
```sql
SELECT
    count(1) AS total,
    dictGetOrDefault('taxi_zone_dictionary','Borough', toUInt64(pickup_nyct2010_gid), 'Unknown') AS borough_name
FROM trips
WHERE dropoff_nyct2010_gid = 132 OR dropoff_nyct2010_gid = 138
GROUP BY borough_name
ORDER BY total DESC
```
This query sums up the number of taxi rides per borough that end at either LaGuardia or JFK airport. The result looks like the following; notice there are quite a few trips where the pickup neighborhood is unknown:
```response
┌─total─┬─borough_name──┐
│ 23683 │ Unknown       │
│  7053 │ Manhattan     │
│  6828 │ Brooklyn      │
│  4458 │ Queens        │
│  2670 │ Bronx         │
│   554 │ Staten Island │
│    53 │ EWR           │
└───────┴───────────────┘

7 rows in set. Elapsed: 0.019 sec. Processed 2.00 million rows, 4.00 MB (105.70 million rows/s., 211.40 MB/s.)
```
## Perform a join {#perform-a-join}
Write some queries that join the `taxi_zone_dictionary` with your `trips` table. Start with a simple `JOIN` that acts similarly to the previous airport query:
```sql
SELECT
    count(1) AS total,
    Borough
FROM trips
JOIN taxi_zone_dictionary ON toUInt64(trips.pickup_nyct2010_gid) = taxi_zone_dictionary.LocationID
WHERE dropoff_nyct2010_gid = 132 OR dropoff_nyct2010_gid = 138
GROUP BY Borough
ORDER BY total DESC
```
The response looks identical to the `dictGet` query:
```response
┌─total─┬─Borough───────┐
│  7053 │ Manhattan     │
│  6828 │ Brooklyn      │
│  4458 │ Queens        │
│  2670 │ Bronx         │
│   554 │ Staten Island │
│    53 │ EWR           │
└───────┴───────────────┘

6 rows in set. Elapsed: 0.034 sec. Processed 2.00 million rows, 4.00 MB (59.14 million rows/s., 118.29 MB/s.)
```
:::note
Notice the output of the above `JOIN` query is the same as the query before it that used `dictGetOrDefault` (except that the `Unknown` values are not included). Behind the scenes, ClickHouse is actually calling the `dictGet` function for the `taxi_zone_dictionary` dictionary, but the `JOIN` syntax is more familiar for SQL developers.
:::
This query returns rows for the 1000 trips with the highest tip amount, then performs an inner join of each row with the dictionary:
```sql
SELECT *
FROM trips
JOIN taxi_zone_dictionary
    ON trips.dropoff_nyct2010_gid = taxi_zone_dictionary.LocationID
WHERE tip_amount > 0
ORDER BY tip_amount DESC
LIMIT 1000
```
:::note
Generally, avoid using `SELECT *` in ClickHouse. You should retrieve only the columns you actually need.
:::
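For instance, the join above could list just the columns of interest instead of `SELECT *`; a sketch using columns already present in `trips` and the dictionary:

```sql
SELECT
    trips.tip_amount,
    taxi_zone_dictionary.Borough,
    taxi_zone_dictionary.Zone
FROM trips
JOIN taxi_zone_dictionary
    ON trips.dropoff_nyct2010_gid = taxi_zone_dictionary.LocationID
WHERE tip_amount > 0
ORDER BY tip_amount DESC
LIMIT 1000
```

Reading only the needed columns lets ClickHouse skip the rest of the table's column files entirely, which is where a column-oriented store gains its speed.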
## Next steps {#next-steps}
Learn more about ClickHouse with the following documentation:
- **Introduction to Primary Indexes in ClickHouse**: Learn how ClickHouse uses sparse primary indexes to efficiently locate relevant data during queries.
- **Integrate an external data source**: Review data source integration options, including files, Kafka, PostgreSQL, data pipelines, and many others.
- **Visualize data in ClickHouse**: Connect your favorite UI/BI tool to ClickHouse.
- **SQL Reference**: Browse the SQL functions available in ClickHouse for transforming, processing, and analyzing data.
slug: /managing-data/truncate
sidebar_label: 'Truncate table'
title: 'Truncate Table'
hide_title: false
description: 'Truncate allows the data in a table or database to be removed, while preserving their existence.'
doc_type: 'reference'
keywords: ['truncate', 'delete data', 'remove data', 'clear table', 'table maintenance']
Truncate removes the data in a table or database while preserving its existence. This is a lightweight operation that cannot be reversed.
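A minimal usage sketch (the table name `my_table` is a placeholder):

```sql
-- Removes all rows from my_table; the table definition itself remains
TRUNCATE TABLE IF EXISTS my_table;
```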
import Truncate from '@site/docs/sql-reference/statements/truncate.md';
slug: /architecture/introduction
sidebar_label: 'Introduction'
title: 'Introduction'
sidebar_position: 1
description: 'Page with deployment examples that are based on the advice provided to ClickHouse users by the ClickHouse Support and Services organization'
doc_type: 'guide'
keywords: ['deployment', 'architecture', 'replication', 'sharding', 'cluster setup']
import ReplicationShardingTerminology from '@site/docs/_snippets/_replication-sharding-terminology.md';
The deployment examples in this section are based on the advice provided to ClickHouse users by
the ClickHouse Support and Services organization. These are working examples, and
we recommend that you try them and then adjust them to suit your needs. You may find
an example here that fits your requirements exactly.
We offer 'recipes' for a number of different topologies in the example repo and recommend taking a look at them if the examples in this section do not fit your needs exactly.
slug: /deployment-guides/index
title: 'Deployment Guides Overview'
description: 'Landing page for the deployment and scaling section'
keywords: ['deployment guides', 'scaling', 'cluster deployment', 'replication', 'fault tolerance']
doc_type: 'landing-page'
Deployment and scaling
This section covers the following topics:
| Topic                           |
|---------------------------------|
| Introduction                    |
| Scaling Out                     |
| Replication for fault tolerance |
| Cluster deployment              |
slug: /materialized-view/refreshable-materialized-view
title: 'Refreshable materialized view'
description: 'How to use materialized views to speed up queries'
keywords: ['refreshable materialized view', 'refresh', 'materialized views', 'speed up queries', 'query optimization']
doc_type: 'guide'
import refreshableMaterializedViewDiagram from '@site/static/images/materialized-view/refreshable-materialized-view-diagram.png';
import Image from '@theme/IdealImage';
Refreshable materialized views are conceptually similar to materialized views in traditional OLTP databases, storing the result of a specified query for quick retrieval and reducing the need to repeatedly execute resource-intensive queries. Unlike ClickHouse's incremental materialized views, this requires the periodic execution of the query over the full dataset - the results of which are stored in a target table for querying. This result set should, in theory, be smaller than the original dataset, allowing the subsequent query to execute faster.
The diagram explains how Refreshable Materialized Views work:
You can also see the following video:
## When should refreshable materialized views be used? {#when-should-refreshable-materialized-views-be-used}
ClickHouse incremental materialized views are enormously powerful and typically scale much better than the approach used by refreshable materialized views, especially in cases where an aggregate over a single table needs to be performed. By only computing the aggregation over each block of data as it is inserted and merging the incremental states in the final table, the query only ever executes on a subset of the data. This method scales to potentially petabytes of data and is usually the preferred method.
However, there are use cases where this incremental process is not required or is not applicable. Some problems are either incompatible with an incremental approach or don't require real-time updates, with a periodic rebuild being more appropriate. For example, you may want to regularly perform a complete re-computation of a view over the full dataset because it uses a complex join, which is incompatible with an incremental approach.
Refreshable materialized views can run batch processes performing tasks such as denormalization. Dependencies can be created between refreshable materialized views such that one view depends on the results of another and only executes once it is complete. This can replace scheduled workflows or simple DAGs such as a dbt job. To find out more about how to set dependencies between refreshable materialized views, see the `Dependencies` section of `CREATE VIEW`.
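As a sketch of that dependency syntax, a view can be told to refresh only after another view's refresh completes via `DEPENDS ON` (the view and table names here are placeholders, not from this guide):

```sql
-- summary_mv waits for staging_mv to finish before each hourly refresh
CREATE MATERIALIZED VIEW summary_mv
REFRESH EVERY 1 HOUR DEPENDS ON staging_mv TO summary_table AS
SELECT ...
```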
## How do you refresh a refreshable materialized view? {#how-do-you-refresh-a-refreshable-materialized-view}
Refreshable materialized views are refreshed automatically on an interval that's defined during creation.
For example, the following materialized view is refreshed every minute:
```sql
CREATE MATERIALIZED VIEW table_name_mv
REFRESH EVERY 1 MINUTE TO table_name AS
...
```
If you want to force a refresh of a materialized view, you can use the `SYSTEM REFRESH VIEW` statement:
```sql
SYSTEM REFRESH VIEW table_name_mv;
```
You can also cancel, stop, or start a view. For more details, see the managing refreshable materialized views documentation.
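Assuming the same `table_name_mv` view, those operations look like this (a sketch; `CANCEL` aborts a refresh that is currently running, while `STOP`/`START` disable and re-enable the schedule):

```sql
SYSTEM CANCEL VIEW table_name_mv;  -- abort an in-progress refresh
SYSTEM STOP VIEW table_name_mv;    -- disable scheduled refreshes
SYSTEM START VIEW table_name_mv;   -- re-enable scheduled refreshes
```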
## When was a refreshable materialized view last refreshed? {#when-was-a-refreshable-materialized-view-last-refreshed}
To find out when a refreshable materialized view was last refreshed, you can query the `system.view_refreshes` system table, as shown below:
```sql
SELECT database, view, status,
       last_success_time, last_refresh_time, next_refresh_time,
       read_rows, written_rows
FROM system.view_refreshes;
```
```text
┌─database─┬─view──────────┬─status────┬───last_success_time─┬───last_refresh_time─┬───next_refresh_time─┬─read_rows─┬─written_rows─┐
│ database │ table_name_mv │ Scheduled │ 2024-11-11 12:10:00 │ 2024-11-11 12:10:00 │ 2024-11-11 12:11:00 │   5491132 │       817718 │
└──────────┴───────────────┴───────────┴─────────────────────┴─────────────────────┴─────────────────────┴───────────┴──────────────┘
```
## How can I change the refresh rate? {#how-can-i-change-the-refresh-rate}
To change the refresh rate of a refreshable materialized view, use the `ALTER TABLE ... MODIFY REFRESH` syntax:
```sql
ALTER TABLE table_name_mv
MODIFY REFRESH EVERY 30 SECONDS;
```
Once you've done that, you can use the query from "When was a refreshable materialized view last refreshed?" to check that the rate has been updated:
```text
┌─database─┬─view──────────┬─status────┬───last_success_time─┬───last_refresh_time─┬───next_refresh_time─┬─read_rows─┬─written_rows─┐
│ database │ table_name_mv │ Scheduled │ 2024-11-11 12:22:30 │ 2024-11-11 12:22:30 │ 2024-11-11 12:23:00 │   5491132 │       817718 │
└──────────┴───────────────┴───────────┴─────────────────────┴─────────────────────┴─────────────────────┴───────────┴──────────────┘
```
## Using `APPEND` to add new rows {#using-append-to-add-new-rows}
The `APPEND` functionality allows you to add new rows to the end of the table instead of replacing the whole view.
One use of this feature is to capture snapshots of values at a point in time. For example, let's imagine that we have an `events` table populated by a stream of messages from Kafka, Redpanda, or another streaming data platform.
```sql
SELECT *
FROM events
LIMIT 10

Query id: 7662bc39-aaf9-42bd-b6c7-bc94f2881036

┌──────────────────ts─┬─uuid─┬─count─┐
│ 2008-08-06 17:07:19 │ 0eb  │   547 │
│ 2008-08-06 17:07:19 │ 60b  │   148 │
│ 2008-08-06 17:07:19 │ 106  │   750 │
│ 2008-08-06 17:07:19 │ 398  │   875 │
│ 2008-08-06 17:07:19 │ ca0  │   318 │
│ 2008-08-06 17:07:19 │ 6ba  │   105 │
│ 2008-08-06 17:07:19 │ df9  │   422 │
│ 2008-08-06 17:07:19 │ a71  │   991 │
│ 2008-08-06 17:07:19 │ 3a2  │   495 │
│ 2008-08-06 17:07:19 │ 598  │   238 │
└─────────────────────┴──────┴───────┘
```
This dataset has 4096 values in the `uuid` column. We can write the following query to find the ones with the highest total count:
```sql
SELECT
    uuid,
    sum(count) AS count
FROM events
GROUP BY ALL
ORDER BY count DESC
LIMIT 10

┌─uuid─┬───count─┐
│ c6f  │ 5676468 │
│ 951  │ 5669731 │
│ 6a6  │ 5664552 │
│ b06  │ 5662036 │
│ 0ca  │ 5658580 │
│ 2cd  │ 5657182 │
│ 32a  │ 5656475 │
│ ffe  │ 5653952 │
│ f33  │ 5653783 │
│ c5b  │ 5649936 │
└──────┴─────────┘
```
Let's say we want to capture the count for each `uuid` every 10 seconds and store it in a new table called `events_snapshot`. The schema of `events_snapshot` would look like this:
```sql
CREATE TABLE events_snapshot (
    ts DateTime32,
    uuid String,
    count UInt64
)
ENGINE = MergeTree
ORDER BY uuid;
```
We could then create a refreshable materialized view to populate this table:
```sql
CREATE MATERIALIZED VIEW events_snapshot_mv
REFRESH EVERY 10 SECOND APPEND TO events_snapshot
AS SELECT
    now() AS ts,
    uuid,
    sum(count) AS count
FROM events
GROUP BY ALL;
```
We can then query `events_snapshot` to get the count over time for a specific `uuid`:
```sql
SELECT *
FROM events_snapshot
WHERE uuid = 'fff'
ORDER BY ts ASC
FORMAT PrettyCompactMonoBlock

┌──────────────────ts─┬─uuid─┬───count─┐
│ 2024-10-01 16:12:56 │ fff  │ 5424711 │
│ 2024-10-01 16:13:00 │ fff  │ 5424711 │
│ 2024-10-01 16:13:10 │ fff  │ 5424711 │
│ 2024-10-01 16:13:20 │ fff  │ 5424711 │
│ 2024-10-01 16:13:30 │ fff  │ 5674669 │
│ 2024-10-01 16:13:40 │ fff  │ 5947912 │
│ 2024-10-01 16:13:50 │ fff  │ 6203361 │
│ 2024-10-01 16:14:00 │ fff  │ 6501695 │
└─────────────────────┴──────┴─────────┘
```
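Because each refresh appends a full snapshot, per-interval growth can be derived from consecutive rows. A sketch using a window function (not part of the original guide; `lagInFrame` reads the previous row within the specified frame):

```sql
SELECT
    ts,
    uuid,
    count - lagInFrame(count) OVER (
        PARTITION BY uuid ORDER BY ts ASC
        ROWS BETWEEN 1 PRECEDING AND CURRENT ROW
    ) AS delta
FROM events_snapshot
WHERE uuid = 'fff'
ORDER BY ts ASC;
```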
## Examples {#examples}
Let's now have a look at how to use refreshable materialized views with some example datasets.
### Stack Overflow {#stack-overflow}
The denormalizing data guide shows various techniques for denormalizing data using a Stack Overflow dataset. We populate data into the following tables: `votes`, `users`, `badges`, `posts`, and `postlinks`.
In that guide, we showed how to denormalize the `postlinks` dataset onto the `posts` table with the following query:
```sql
SELECT
    posts.*,
    arrayMap(p -> (p.1, p.2), arrayFilter(p -> p.3 = 'Linked' AND p.2 != 0, Related)) AS LinkedPosts,
    arrayMap(p -> (p.1, p.2), arrayFilter(p -> p.3 = 'Duplicate' AND p.2 != 0, Related)) AS DuplicatePosts
FROM posts
LEFT JOIN (
    SELECT
        PostId,
        groupArray((CreationDate, RelatedPostId, LinkTypeId)) AS Related
    FROM postlinks
    GROUP BY PostId
) AS postlinks ON posts.Id = postlinks.PostId;
```
We then showed how to do a one-time insert of this data into the `posts_with_links` table, but in a production system, we'd want to run this operation periodically.
Both the `posts` and `postlinks` tables could potentially be updated. Therefore, rather than attempt to implement this join using incremental materialized views, it may be sufficient to simply schedule this query to run at a set interval, e.g., once every hour, storing the results in a `posts_with_links` table.
This is where a refreshable materialized view helps, and we can create one with the following query:
```sql
CREATE MATERIALIZED VIEW posts_with_links_mv
REFRESH EVERY 1 HOUR TO posts_with_links AS
SELECT
    posts.*,
    arrayMap(p -> (p.1, p.2), arrayFilter(p -> p.3 = 'Linked' AND p.2 != 0, Related)) AS LinkedPosts,
    arrayMap(p -> (p.1, p.2), arrayFilter(p -> p.3 = 'Duplicate' AND p.2 != 0, Related)) AS DuplicatePosts
FROM posts
LEFT JOIN (
    SELECT
        PostId,
        groupArray((CreationDate, RelatedPostId, LinkTypeId)) AS Related
    FROM postlinks
    GROUP BY PostId
) AS postlinks ON posts.Id = postlinks.PostId;
```
The view will execute immediately and every hour thereafter as configured to ensure updates to the source table are reflected. Importantly, when the query re-runs, the result set is atomically and transparently updated.
:::note
The syntax here is identical to an incremental materialized view, except we include a `REFRESH` clause.
:::
### IMDb {#imdb}
In the dbt and ClickHouse integration guide, we populated an IMDb dataset with the following tables: `actors`, `directors`, `genres`, `movie_directors`, `movies`, and `roles`.
We can then write the following query to compute a summary of each actor, ordered by the most movie appearances:
```sql
SELECT
    id, any(actor_name) AS name, uniqExact(movie_id) AS movies,
    round(avg(rank), 2) AS avg_rank, uniqExact(genre) AS genres,
    uniqExact(director_name) AS directors, max(created_at) AS updated_at
FROM (
    SELECT
        imdb.actors.id AS id,
        concat(imdb.actors.first_name, ' ', imdb.actors.last_name) AS actor_name,
        imdb.movies.id AS movie_id, imdb.movies.rank AS rank, genre,
        concat(imdb.directors.first_name, ' ', imdb.directors.last_name) AS director_name,
        created_at
    FROM imdb.actors
    INNER JOIN imdb.roles ON imdb.roles.actor_id = imdb.actors.id
    LEFT JOIN imdb.movies ON imdb.movies.id = imdb.roles.movie_id
    LEFT JOIN imdb.genres ON imdb.genres.movie_id = imdb.movies.id
    LEFT JOIN imdb.movie_directors ON imdb.movie_directors.movie_id = imdb.movies.id
    LEFT JOIN imdb.directors ON imdb.directors.id = imdb.movie_directors.director_id
)
GROUP BY id
ORDER BY movies DESC
LIMIT 5;
```
```text
┌─────id─┬─name─────────┬─num_movies─┬───────────avg_rank─┬─unique_genres─┬─uniq_directors─┬──────────updated_at─┐
│  45332 │ Mel Blanc    │        909 │ 5.7884792542982515 │            19 │            148 │ 2024-11-11 12:01:35 │
│ 621468 │ Bess Flowers │        672 │  5.540605094212635 │            20 │            301 │ 2024-11-11 12:01:35 │
│ 283127 │ Tom London   │        549 │ 2.8057034230202023 │            18 │            208 │ 2024-11-11 12:01:35 │
│ 356804 │ Bud Osborne  │        544 │ 1.9575342420755093 │            16 │            157 │ 2024-11-11 12:01:35 │
│  41669 │ Adoor Bhasi  │        544 │                  0 │             4 │            121 │ 2024-11-11 12:01:35 │
└────────┴──────────────┴────────────┴────────────────────┴───────────────┴────────────────┴─────────────────────┘

5 rows in set. Elapsed: 0.393 sec. Processed 5.45 million rows, 86.82 MB (13.87 million rows/s., 221.01 MB/s.)
Peak memory usage: 1.38 GiB.
```
It doesn't take too long to return a result, but let's say we want it to be even quicker and less computationally expensive.
Suppose that this dataset is also subject to constant updates - movies are constantly released with new actors and directors also emerging.
It's time for a refreshable materialized view, so let's first create a target table for the results:
```sql
CREATE TABLE imdb.actor_summary
(
    `id` UInt32,
    `name` String,
    `num_movies` UInt16,
    `avg_rank` Float32,
    `unique_genres` UInt16,
    `uniq_directors` UInt16,
    `updated_at` DateTime
)
ENGINE = MergeTree
ORDER BY num_movies
```
And now we can define the view:
```sql
CREATE MATERIALIZED VIEW imdb.actor_summary_mv
REFRESH EVERY 1 MINUTE TO imdb.actor_summary AS
SELECT
    id,
    any(actor_name) AS name,
    uniqExact(movie_id) AS num_movies,
    avg(rank) AS avg_rank,
    uniqExact(genre) AS unique_genres,
    uniqExact(director_name) AS uniq_directors,
    max(created_at) AS updated_at
FROM
(
    SELECT
        imdb.actors.id AS id,
        concat(imdb.actors.first_name, ' ', imdb.actors.last_name) AS actor_name,
        imdb.movies.id AS movie_id,
        imdb.movies.rank AS rank,
        genre,
        concat(imdb.directors.first_name, ' ', imdb.directors.last_name) AS director_name,
        created_at
    FROM imdb.actors
    INNER JOIN imdb.roles ON imdb.roles.actor_id = imdb.actors.id
    LEFT JOIN imdb.movies ON imdb.movies.id = imdb.roles.movie_id
    LEFT JOIN imdb.genres ON imdb.genres.movie_id = imdb.movies.id
    LEFT JOIN imdb.movie_directors ON imdb.movie_directors.movie_id = imdb.movies.id
    LEFT JOIN imdb.directors ON imdb.directors.id = imdb.movie_directors.director_id
)
GROUP BY id
ORDER BY num_movies DESC;
```
The view will execute immediately and every minute thereafter as configured to ensure updates to the source table are reflected. Our previous query to obtain a summary of actors becomes syntactically simpler and significantly faster!
```sql
SELECT *
FROM imdb.actor_summary
ORDER BY num_movies DESC
LIMIT 5
```
```text
┌─────id─┬─name─────────┬─num_movies─┬──avg_rank─┬─unique_genres─┬─uniq_directors─┬──────────updated_at─┐
│  45332 │ Mel Blanc    │        909 │ 5.7884793 │            19 │            148 │ 2024-11-11 12:01:35 │
│ 621468 │ Bess Flowers │        672 │  5.540605 │            20 │            301 │ 2024-11-11 12:01:35 │
│ 283127 │ Tom London   │        549 │ 2.8057034 │            18 │            208 │ 2024-11-11 12:01:35 │
│ 356804 │ Bud Osborne  │        544 │ 1.9575342 │            16 │            157 │ 2024-11-11 12:01:35 │
│  41669 │ Adoor Bhasi  │        544 │         0 │             4 │            121 │ 2024-11-11 12:01:35 │
└────────┴──────────────┴────────────┴───────────┴───────────────┴────────────────┴─────────────────────┘

5 rows in set. Elapsed: 0.007 sec.
```
Suppose we add a new actor, "Clicky McClickHouse", to our source data - an actor who happens to have appeared in a lot of films!
```sql
INSERT INTO imdb.actors VALUES (845466, 'Clicky', 'McClickHouse', 'M');
INSERT INTO imdb.roles SELECT
    845466 AS actor_id,
    id AS movie_id,
    'Himself' AS role,
    now() AS created_at
FROM imdb.movies
LIMIT 10000, 910;
```
Less than 60 seconds later, our target table is updated to reflect the prolific nature of Clicky's acting:
```sql
SELECT *
FROM imdb.actor_summary
ORDER BY num_movies DESC
LIMIT 5;
```
```text
┌─────id─┬─name────────────────┬─num_movies─┬──avg_rank─┬─unique_genres─┬─uniq_directors─┬──────────updated_at─┐
│ 845466 │ Clicky McClickHouse │        910 │ 1.4687939 │            21 │            662 │ 2024-11-11 12:53:51 │
│  45332 │ Mel Blanc           │        909 │ 5.7884793 │            19 │            148 │ 2024-11-11 12:01:35 │
│ 621468 │ Bess Flowers        │        672 │  5.540605 │            20 │            301 │ 2024-11-11 12:01:35 │
│ 283127 │ Tom London          │        549 │ 2.8057034 │            18 │            208 │ 2024-11-11 12:01:35 │
│  41669 │ Adoor Bhasi         │        544 │         0 │             4 │            121 │ 2024-11-11 12:01:35 │
└────────┴─────────────────────┴────────────┴───────────┴───────────────┴────────────────┴─────────────────────┘

5 rows in set. Elapsed: 0.006 sec.
```
0.0024795299395918846,
-0.14885695278644562,
-0.010569888167083263,
0.004331547766923904,
-0.021050546318292618,
0.07193291932344437,
0.07742641866207123,
-0.05051343888044357,
-0.015780305489897728,
0.022688651457428932,
0.052222806960344315,
-0.07049541175365448,
0.0151227917522192,
-0.0... |
slug: /materialized-view/incremental-materialized-view
title: 'Incremental materialized view'
description: 'How to use incremental materialized views to speed up queries'
keywords: ['incremental materialized views', 'speed up queries', 'query optimization']
score: 10000
doc_type: 'guide'
import materializedViewDiagram from '@site/static/images/materialized-view/materialized-view-diagram.png';
import Image from '@theme/IdealImage';
## Background {#background}
Incremental Materialized Views (Materialized Views) allow users to shift the cost of computation from query time to insert time, resulting in faster `SELECT` queries.
Unlike in transactional databases like Postgres, a ClickHouse materialized view is just a trigger that runs a query on blocks of data as they are inserted into a table. The result of this query is inserted into a second "target" table. Should more rows be inserted, results will again be sent to the target table where the intermediate results will be updated and merged. This merged result is the equivalent of running the query over all of the original data.
The principal motivation for Materialized Views is that the results inserted into the target table represent the results of an aggregation, filtering, or transformation on rows. These results will often be a smaller representation of the original data (a partial sketch in the case of aggregations). This, along with the resulting query for reading the results from the target table being simple, ensures query times are faster than if the same computation was performed on the original data, shifting computation (and thus query latency) from query time to insert time.
Materialized views in ClickHouse are updated in real time as data flows into the table they are based on, functioning more like continually updating indexes. This is in contrast to other databases where Materialized Views are typically static snapshots of a query that must be refreshed (similar to ClickHouse Refreshable Materialized Views).
## Example {#example}
For example purposes we'll use the Stack Overflow dataset documented in "Schema Design".
Suppose we want to obtain the number of up and down votes per day for a post.
```sql
CREATE TABLE votes
(
    `Id` UInt32,
    `PostId` Int32,
    `VoteTypeId` UInt8,
    `CreationDate` DateTime64(3, 'UTC'),
    `UserId` Int32,
    `BountyAmount` UInt8
)
ENGINE = MergeTree
ORDER BY (VoteTypeId, CreationDate, PostId)

INSERT INTO votes SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/*.parquet')

0 rows in set. Elapsed: 29.359 sec. Processed 238.98 million rows, 2.13 GB (8.14 million rows/s., 72.45 MB/s.)
```
This is a reasonably simple query in ClickHouse thanks to the `toStartOfDay` function:
```sql
SELECT toStartOfDay(CreationDate) AS day,
       countIf(VoteTypeId = 2) AS UpVotes,
       countIf(VoteTypeId = 3) AS DownVotes
FROM votes
GROUP BY day
ORDER BY day ASC
LIMIT 10

┌─────────────────day─┬─UpVotes─┬─DownVotes─┐
│ 2008-07-31 00:00:00 │       6 │         0 │
│ 2008-08-01 00:00:00 │     182 │        50 │
│ 2008-08-02 00:00:00 │     436 │       107 │
│ 2008-08-03 00:00:00 │     564 │       100 │
│ 2008-08-04 00:00:00 │    1306 │       259 │
│ 2008-08-05 00:00:00 │    1368 │       269 │
│ 2008-08-06 00:00:00 │    1701 │       211 │
│ 2008-08-07 00:00:00 │    1544 │       211 │
│ 2008-08-08 00:00:00 │    1241 │       212 │
│ 2008-08-09 00:00:00 │     576 │        46 │
└─────────────────────┴─────────┴───────────┘

10 rows in set. Elapsed: 0.133 sec. Processed 238.98 million rows, 2.15 GB (1.79 billion rows/s., 16.14 GB/s.)
Peak memory usage: 363.22 MiB.
```
This query is already fast thanks to ClickHouse, but can we do better?
If we want to compute this at insert time using a materialized view, we need a table to receive the results. This table should only keep 1 row per day. If an update is received for an existing day, the other columns should be merged into the existing day's row. For this merge of incremental states to happen, partial states must be stored for the other columns.
This requires a special engine type in ClickHouse: the `SummingMergeTree`. This replaces all the rows with the same ordering key with one row which contains summed values for the numeric columns. The following table will merge any rows with the same date, summing any numerical columns:
```sql
CREATE TABLE up_down_votes_per_day
(
    `Day` Date,
    `UpVotes` UInt32,
    `DownVotes` UInt32
)
ENGINE = SummingMergeTree
ORDER BY Day
```
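The merge behavior can be sketched in plain Python. This is an illustration of the semantics only, not ClickHouse internals: parts may temporarily hold several rows per ordering-key value, and a background merge collapses them by summing the numeric columns.

```python
# Illustrative sketch (not ClickHouse internals) of how a SummingMergeTree
# merge collapses rows: rows sharing the same ordering key (Day) are
# replaced by a single row whose numeric columns are summed.
from collections import defaultdict

def summing_merge(rows):
    """rows: (day, up_votes, down_votes) tuples; several inserts may have
    produced multiple rows for the same day."""
    totals = defaultdict(lambda: [0, 0])
    for day, up, down in rows:
        totals[day][0] += up
        totals[day][1] += down
    return sorted((day, up, down) for day, (up, down) in totals.items())

# Two inserts produced two partial rows for 2008-08-01:
parts = [
    ("2008-08-01", 100, 30),
    ("2008-08-01", 82, 20),
    ("2008-08-02", 436, 107),
]
print(summing_merge(parts))
# [('2008-08-01', 182, 50), ('2008-08-02', 436, 107)]
```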
To demonstrate our materialized view, assume our `votes` table is empty and has yet to receive any data. Our materialized view performs the above `SELECT` on data inserted into `votes`, with the results sent to `up_down_votes_per_day`:
```sql
CREATE MATERIALIZED VIEW up_down_votes_per_day_mv TO up_down_votes_per_day AS
SELECT toStartOfDay(CreationDate)::Date AS Day,
       countIf(VoteTypeId = 2) AS UpVotes,
       countIf(VoteTypeId = 3) AS DownVotes
FROM votes
GROUP BY Day
```
The `TO` clause here is key, denoting where results will be sent, i.e. `up_down_votes_per_day`.

We can repopulate our `votes` table from our earlier insert:
```sql
INSERT INTO votes SELECT toUInt32(Id) AS Id, toInt32(PostId) AS PostId, VoteTypeId, CreationDate, UserId, BountyAmount
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/votes/*.parquet')
0 rows in set. Elapsed: 111.964 sec. Processed 477.97 million rows, 3.89 GB (4.27 million rows/s., 34.71 MB/s.)
Peak memory usage: 283.49 MiB.
```
On completion, we can confirm the size of our `up_down_votes_per_day` table - we should have 1 row per day:
```sql
SELECT count()
FROM up_down_votes_per_day
FINAL
┌─count()─┐
│    5723 │
└─────────┘
```
We've effectively reduced the number of rows here from 238 million (in `votes`) to 5000 by storing the result of our query. What's key here, however, is that if new votes are inserted into the `votes` table, new values will be sent to `up_down_votes_per_day` for their respective day, where they will be automatically merged asynchronously in the background - keeping only one row per day. `up_down_votes_per_day` will thus always be both small and up-to-date.
Since the merging of rows is asynchronous, there may be more than one row per day when a user queries. To ensure any outstanding rows are merged at query time, we have two options:

- Use the `FINAL` modifier on the table name. We did this for the count query above.
- Aggregate by the ordering key used in our final table, i.e. `Day`, and sum the metrics. Typically this is more efficient and flexible (the table can be used for other things), but the former can be simpler for some queries. We show both below:
```sql
SELECT
Day,
UpVotes,
DownVotes
FROM up_down_votes_per_day
FINAL
ORDER BY Day ASC
LIMIT 10
10 rows in set. Elapsed: 0.004 sec. Processed 8.97 thousand rows, 89.68 KB (2.09 million rows/s., 20.89 MB/s.)
Peak memory usage: 289.75 KiB.
SELECT Day, sum(UpVotes) AS UpVotes, sum(DownVotes) AS DownVotes
FROM up_down_votes_per_day
GROUP BY Day
ORDER BY Day ASC
LIMIT 10
┌────────Day─┬─UpVotes─┬─DownVotes─┐
│ 2008-07-31 │       6 │         0 │
│ 2008-08-01 │     182 │        50 │
│ 2008-08-02 │     436 │       107 │
│ 2008-08-03 │     564 │       100 │
│ 2008-08-04 │    1306 │       259 │
│ 2008-08-05 │    1368 │       269 │
│ 2008-08-06 │    1701 │       211 │
│ 2008-08-07 │    1544 │       211 │
│ 2008-08-08 │    1241 │       212 │
│ 2008-08-09 │     576 │        46 │
└────────────┴─────────┴───────────┘
10 rows in set. Elapsed: 0.010 sec. Processed 8.97 thousand rows, 89.68 KB (907.32 thousand rows/s., 9.07 MB/s.)
Peak memory usage: 567.61 KiB.
```
This has sped up our query from 0.133s to 0.004s - an over 25x improvement!
:::important Important: `ORDER BY` = `GROUP BY`
In most cases, the columns used in the `GROUP BY` clause of the materialized view's transformation should be consistent with those used in the `ORDER BY` clause of the target table when using the `SummingMergeTree` or `AggregatingMergeTree` table engines. These engines rely on the `ORDER BY` columns to merge rows with identical values during background merge operations. Misalignment between `GROUP BY` and `ORDER BY` columns can lead to inefficient query performance, suboptimal merges, or even data discrepancies.
:::
## A more complex example {#a-more-complex-example}
The above example uses Materialized Views to compute and maintain two sums per day. Sums represent the simplest form of aggregation to maintain partial states for - we can just add new values to existing values when they arrive. However, ClickHouse Materialized Views can be used for any aggregation type.
Suppose we wish to compute some statistics for posts for each day: the 99.9th percentile of the `Score` and an average of the `CommentCount`. The query to compute this might look like:
```sql
SELECT
toStartOfDay(CreationDate) AS Day,
quantile(0.999)(Score) AS Score_99th,
avg(CommentCount) AS AvgCommentCount
FROM posts
GROUP BY Day
ORDER BY Day DESC
LIMIT 10
┌─────────────────Day─┬────────Score_99th─┬────AvgCommentCount─┐
│ 2024-03-31 00:00:00 │  5.23700000000008 │ 1.3429811866859624 │
│ 2024-03-30 00:00:00 │                 5 │ 1.3097158891616976 │
│ 2024-03-29 00:00:00 │  5.78899999999976 │ 1.2827635327635327 │
│ 2024-03-28 00:00:00 │                 7 │  1.277746158224246 │
│ 2024-03-27 00:00:00 │ 5.738999999999578 │ 1.2113264918282023 │
│ 2024-03-26 00:00:00 │                 6 │ 1.3097536945812809 │
│ 2024-03-25 00:00:00 │                 6 │ 1.2836721018539201 │
│ 2024-03-24 00:00:00 │ 5.278999999999996 │ 1.2931667891256429 │
│ 2024-03-23 00:00:00 │ 6.253000000000156 │  1.334061135371179 │
│ 2024-03-22 00:00:00 │ 9.310999999999694 │ 1.2388059701492538 │
└─────────────────────┴───────────────────┴────────────────────┘
10 rows in set. Elapsed: 0.113 sec. Processed 59.82 million rows, 777.65 MB (528.48 million rows/s., 6.87 GB/s.)
Peak memory usage: 658.84 MiB.
```
As before, we can create a materialized view which executes the above query as new posts are inserted into our `posts` table.

For the purposes of this example, and to avoid loading the posts data from S3, we will create a duplicate table `posts_null` with the same schema as `posts`. However, this table will not store any data and will simply be used by the materialized view when rows are inserted. To prevent storage of data, we can use the `Null` table engine type.

```sql
CREATE TABLE posts_null AS posts ENGINE = Null
```

The Null table engine is a powerful optimization - think of it as `/dev/null`. Our materialized view will compute and store our summary statistics when our `posts_null` table receives rows at insert time - it's just a trigger. However, the raw data will not be stored. While in our case we probably still want to store the original posts, this approach can be used to compute aggregates while avoiding storage overhead of the raw data.
The materialized view thus becomes:
```sql
CREATE MATERIALIZED VIEW post_stats_mv TO post_stats_per_day AS
SELECT toStartOfDay(CreationDate) AS Day,
       quantileState(0.999)(Score) AS Score_quantiles,
       avgState(CommentCount) AS AvgCommentCount
FROM posts_null
GROUP BY Day
```

Note how we append the suffix `State` to the end of our aggregate functions. This ensures the aggregate state of the function is returned instead of the final result. This state contains additional information that allows the partial state to merge with other states. For example, in the case of an average, it includes a count and sum of the column.
Partial aggregation states are necessary to compute correct results. For example, for computing an average, simply averaging the averages of sub-ranges produces incorrect results.
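To see why, here is a small Python illustration (assumed toy values, not ClickHouse code): averaging the per-part averages gives the wrong answer, while merging `(sum, count)` partial states - essentially what the average's aggregation state carries - gives the exact result.

```python
# Two data parts with very different sizes
part_a = [1, 2, 3, 4]   # avg = 2.5
part_b = [10]           # avg = 10.0

# Naive: average the averages -> wrong, because it ignores part sizes
naive = (sum(part_a) / len(part_a) + sum(part_b) / len(part_b)) / 2

# Partial states: an average's state is (sum, count); states merge by
# element-wise addition, and finalizing divides sum by count.
state_a = (sum(part_a), len(part_a))
state_b = (sum(part_b), len(part_b))
merged = (state_a[0] + state_b[0], state_a[1] + state_b[1])
exact = merged[0] / merged[1]

print(naive)  # 6.25  (wrong)
print(exact)  # 4.0   (matches the true average of all five values)
```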
We now create the target table for this view, `post_stats_per_day`, which stores these partial aggregate states:

```sql
CREATE TABLE post_stats_per_day
(
    `Day` Date,
    `Score_quantiles` AggregateFunction(quantile(0.999), Int32),
    `AvgCommentCount` AggregateFunction(avg, UInt8)
)
ENGINE = AggregatingMergeTree
ORDER BY Day
```
While earlier the `SummingMergeTree` was sufficient to store counts, we require a more advanced engine type for other functions: the `AggregatingMergeTree`.

To ensure ClickHouse knows that aggregate states will be stored, we define the `Score_quantiles` and `AvgCommentCount` columns as the type `AggregateFunction`, specifying the function source of the partial states and the type of their source columns. Like the `SummingMergeTree`, rows with the same `ORDER BY` key value will be merged (`Day` in the above example).
To populate our `post_stats_per_day` table via our materialized view, we can simply insert all rows from `posts` into `posts_null`:
```sql
INSERT INTO posts_null SELECT * FROM posts
0 rows in set. Elapsed: 13.329 sec. Processed 119.64 million rows, 76.99 GB (8.98 million rows/s., 5.78 GB/s.)
```
In production, you would likely attach the materialized view to the `posts` table. We have used `posts_null` here to demonstrate the Null table engine.

Our final query needs to utilize the `Merge` suffix for our functions (as the columns store partial aggregation states):

```sql
SELECT
    Day,
    quantileMerge(0.999)(Score_quantiles),
    avgMerge(AvgCommentCount)
FROM post_stats_per_day
GROUP BY Day
ORDER BY Day DESC
LIMIT 10
```

Note we use a `GROUP BY` here instead of using `FINAL`.
## Other applications {#other-applications}
The above focuses primarily on using Materialized Views to incrementally update partial aggregates of data, thus moving the computation from query to insert time. Beyond this common use case, Materialized Views have a number of other applications.
### Filtering and transformation {#filtering-and-transformation}
In some situations, we may wish to insert only a subset of the rows and columns on insertion. In this case, our `posts_null` table could receive inserts, with a `SELECT` query filtering rows prior to insertion into the `posts` table. For example, suppose we wished to transform the `Tags` column in our `posts` table. This contains a pipe-delimited list of tag names. By converting these into an array, we can more easily aggregate by individual tag values.

We could perform this transformation when running an `INSERT INTO SELECT`. The materialized view allows us to encapsulate this logic in ClickHouse DDL and keep our `INSERT` simple, with the transformation applied to any new rows.
Our materialized view for this transformation is shown below:
```sql
CREATE MATERIALIZED VIEW posts_mv TO posts AS
SELECT * EXCEPT Tags, arrayFilter(t -> (t != ''), splitByChar('|', Tags)) AS Tags FROM posts_null
```
### Lookup table {#lookup-table}
Users should consider their access patterns when choosing a ClickHouse ordering key. Columns which are frequently used in filter and aggregation clauses should be used. This can be restrictive for scenarios where users have more diverse access patterns which cannot be encapsulated in a single set of columns. For example, consider the following `comments` table:
```sql
CREATE TABLE comments
(
    `Id`              UInt32,
    `PostId`          UInt32,
    `Score`           UInt16,
    `Text`            String,
    `CreationDate`    DateTime64(3, 'UTC'),
    `UserId`          Int32,
    `UserDisplayName` LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY PostId

0 rows in set. Elapsed: 46.357 sec. Processed 90.38 million rows, 11.14 GB (1.95 million rows/s., 240.22 MB/s.)
```
The ordering key here optimizes the table for queries which filter by `PostId`.

Suppose a user wishes to filter on a specific `UserId` and compute their average `Score`:
```sql
SELECT avg(Score)
FROM comments
WHERE UserId = 8592047
┌──────────avg(Score)─┐
│ 0.18181818181818182 │
└─────────────────────┘
1 row in set. Elapsed: 0.778 sec. Processed 90.38 million rows, 361.59 MB (116.16 million rows/s., 464.74 MB/s.)
Peak memory usage: 217.08 MiB.
```
While fast (the data is small for ClickHouse), we can tell this requires a full table scan from the number of rows processed - 90.38 million. For larger datasets, we can use a materialized view to look up our ordering key values, `PostId`, for the filtering column `UserId`. These values can then be used to perform an efficient lookup.
In this example, our materialized view can be very simple, selecting only the `PostId` and `UserId` from `comments` on insert. These results are in turn sent to a table `comments_posts_users`, which is ordered by `UserId`. We create a null version of the `comments` table below and use this to populate our view and `comments_posts_users` table:
```sql
CREATE TABLE comments_posts_users (
PostId UInt32,
UserId Int32
) ENGINE = MergeTree ORDER BY UserId
CREATE TABLE comments_null AS comments
ENGINE = Null
CREATE MATERIALIZED VIEW comments_posts_users_mv TO comments_posts_users AS
SELECT PostId, UserId FROM comments_null
INSERT INTO comments_null SELECT * FROM comments
0 rows in set. Elapsed: 5.163 sec. Processed 90.38 million rows, 17.25 GB (17.51 million rows/s., 3.34 GB/s.)
```
We can now use this view in a subquery to accelerate our previous query:
```sql
SELECT avg(Score)
FROM comments
WHERE PostId IN (
SELECT PostId
FROM comments_posts_users
WHERE UserId = 8592047
) AND UserId = 8592047
┌──────────avg(Score)─┐
│ 0.18181818181818182 │
└─────────────────────┘
1 row in set. Elapsed: 0.012 sec. Processed 88.61 thousand rows, 771.37 KB (7.09 million rows/s., 61.73 MB/s.)
```
### Chaining / cascading materialized views {#chaining}

Materialized views can be chained (or cascaded), allowing complex workflows to be established. For more information, see the guide "Cascading materialized views".
## Materialized views and JOINs {#materialized-views-and-joins}
:::note Refreshable Materialized Views
The following applies to Incremental Materialized Views only. Refreshable Materialized Views execute their query periodically over the full target dataset and fully support JOINs. Consider using them for complex JOINs if a reduction in result freshness can be tolerated.
:::
Incremental materialized views in ClickHouse fully support `JOIN` operations, but with one crucial constraint: **the materialized view only triggers on inserts to the source table (the left-most table in the query).** Right-side tables in JOINs do not trigger updates, even if their data changes. This behavior is especially important when building incremental materialized views, where data is aggregated or transformed during insert time.
When an incremental materialized view is defined using a `JOIN`, the left-most table in the `SELECT` query acts as the source. When new rows are inserted into this table, ClickHouse executes the materialized view query *only* with those newly inserted rows. Right-side tables in the JOIN are read in full during this execution, but changes to them alone do not trigger the view.
This behavior makes JOINs in Materialized Views similar to a snapshot join against static dimension data.
This works well for enriching data with reference or dimension tables. However, any updates to the right-side tables (e.g., user metadata) will not retroactively update the materialized view. To see updated data, new inserts must arrive in the source table.
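A small Python sketch (hypothetical names, not ClickHouse code) of this trigger semantic: each insert into the left-side table is joined against the right-side table *as it exists at that moment*, and later changes to the right side never rewrite rows already written to the target.

```python
users = {2936484: "gingerwizard"}   # right-side dimension table
target = []                          # materialized view's target table

def on_insert(badge_rows):
    """Fires once per insert into the source (left-side) table; joins only
    the newly inserted block against the current state of `users`."""
    for user_id, badge_class in badge_rows:
        target.append((user_id, users.get(user_id), badge_class))

on_insert([(2936484, "Gold")])
users[23923286] = "brand_new_user"   # user row arrives AFTER the insert above
on_insert([(23923286, "Bronze")])    # only this later insert sees the new user

print(target)
# [(2936484, 'gingerwizard', 'Gold'), (23923286, 'brand_new_user', 'Bronze')]
```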
### Example {#materialized-views-and-joins-example}

Let's walk through a concrete example using the Stack Overflow dataset. We'll use a materialized view to compute daily badges per user, including the display name of the user from the `users` table.
As a reminder, our table schemas are:
```sql
CREATE TABLE badges
(
    `Id`       UInt32,
    `UserId`   Int32,
    `Name`     LowCardinality(String),
    `Date`     DateTime64(3, 'UTC'),
    `Class`    Enum8('Gold' = 1, 'Silver' = 2, 'Bronze' = 3),
    `TagBased` Bool
)
ENGINE = MergeTree
ORDER BY UserId;

CREATE TABLE users
(
    `Id`             Int32,
    `Reputation`     UInt32,
    `CreationDate`   DateTime64(3, 'UTC'),
    `DisplayName`    LowCardinality(String),
    `LastAccessDate` DateTime64(3, 'UTC'),
    `Location`       LowCardinality(String),
    `Views`          UInt32,
    `UpVotes`        UInt32,
    `DownVotes`      UInt32
)
ENGINE = MergeTree
ORDER BY Id;
```
We'll assume our `users` table is pre-populated:

```sql
INSERT INTO users
SELECT * FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/users.parquet');
```
The materialized view and its associated target table are defined as:
```sql
CREATE TABLE daily_badges_by_user
(
Day Date,
UserId Int32,
DisplayName LowCardinality(String),
Gold UInt32,
Silver UInt32,
Bronze UInt32
)
ENGINE = SummingMergeTree
ORDER BY (DisplayName, UserId, Day);
CREATE MATERIALIZED VIEW daily_badges_by_user_mv TO daily_badges_by_user AS
SELECT
toDate(Date) AS Day,
b.UserId,
u.DisplayName,
countIf(Class = 'Gold') AS Gold,
countIf(Class = 'Silver') AS Silver,
countIf(Class = 'Bronze') AS Bronze
FROM badges AS b
LEFT JOIN users AS u ON b.UserId = u.Id
GROUP BY Day, b.UserId, u.DisplayName;
```
:::note Grouping and Ordering Alignment
The `GROUP BY` clause in the materialized view must include `DisplayName`, `UserId`, and `Day` to match the `ORDER BY` in the `SummingMergeTree` target table. This ensures rows are correctly aggregated and merged. Omitting any of these can lead to incorrect results or inefficient merges.
:::
If we now populate the badges, the view will be triggered - populating our `daily_badges_by_user` table.
```sql
INSERT INTO badges SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/badges.parquet')
0 rows in set. Elapsed: 433.762 sec. Processed 1.16 billion rows, 28.50 GB (2.67 million rows/s., 65.70 MB/s.)
```
Suppose we wish to view the badges achieved by a specific user. We can write the following query:
```sql
SELECT *
FROM daily_badges_by_user
FINAL
WHERE DisplayName = 'gingerwizard'
┌────────Day─┬──UserId─┬─DisplayName──┬─Gold─┬─Silver─┬─Bronze─┐
│ 2023-02-27 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2023-02-28 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2013-10-30 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2024-03-04 │ 2936484 │ gingerwizard │    0 │      1 │      0 │
│ 2024-03-05 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2023-04-17 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2013-11-18 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2023-10-31 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
└────────────┴─────────┴──────────────┴──────┴────────┴────────┘
8 rows in set. Elapsed: 0.018 sec. Processed 32.77 thousand rows, 642.14 KB (1.86 million rows/s., 36.44 MB/s.)
```
Now, if this user receives a new badge and a row is inserted, our view will be updated:
```sql
INSERT INTO badges VALUES (53505058, 2936484, 'gingerwizard', now(), 'Gold', 0);
1 row in set. Elapsed: 7.517 sec.
SELECT *
FROM daily_badges_by_user
FINAL
WHERE DisplayName = 'gingerwizard'
┌────────Day─┬──UserId─┬─DisplayName──┬─Gold─┬─Silver─┬─Bronze─┐
│ 2013-10-30 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2013-11-18 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2023-02-27 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2023-02-28 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2023-04-17 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2023-10-31 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2024-03-04 │ 2936484 │ gingerwizard │    0 │      1 │      0 │
│ 2024-03-05 │ 2936484 │ gingerwizard │    0 │      0 │      1 │
│ 2025-04-13 │ 2936484 │ gingerwizard │    1 │      0 │      0 │
└────────────┴─────────┴──────────────┴──────┴────────┴────────┘
9 rows in set. Elapsed: 0.017 sec. Processed 32.77 thousand rows, 642.27 KB (1.96 million rows/s., 38.50 MB/s.)
```
:::warning
Notice the latency of the insert here. The inserted badge row is joined against the entire `users` table, significantly impacting insert performance. We propose approaches to address this below in "Using source table in filters and joins".
:::
Conversely, if we insert a badge for a new user, followed by the row for the user, our materialized view will fail to capture the user's metrics.

```sql
INSERT INTO badges VALUES (53505059, 23923286, 'Good Answer', now(), 'Bronze', 0);
INSERT INTO users VALUES (23923286, 1, now(), 'brand_new_user', now(), 'UK', 1, 1, 0);
```
```sql
SELECT *
FROM daily_badges_by_user
FINAL
WHERE DisplayName = 'brand_new_user';
0 rows in set. Elapsed: 0.017 sec. Processed 32.77 thousand rows, 644.32 KB (1.98 million rows/s., 38.94 MB/s.)
```
The view, in this case, only executes for the badge insert before the user row exists. If we insert another badge for the user, a row is inserted, as is expected:
```sql
INSERT INTO badges VALUES (53505060, 23923286, 'Teacher', now(), 'Bronze', 0);
SELECT *
FROM daily_badges_by_user
FINAL
WHERE DisplayName = 'brand_new_user'
┌────────Day─┬───UserId─┬─DisplayName────┬─Gold─┬─Silver─┬─Bronze─┐
│ 2025-04-13 │ 23923286 │ brand_new_user │    0 │      0 │      1 │
└────────────┴──────────┴────────────────┴──────┴────────┴────────┘
1 row in set. Elapsed: 0.018 sec. Processed 32.77 thousand rows, 644.48 KB (1.87 million rows/s., 36.72 MB/s.)
```
Note, however, that this result is incorrect: the first badge was processed before the user row existed, so it is not attributed to `brand_new_user`.
### Best practices for JOINs in materialized views {#join-best-practices}

- **Use the left-most table as the trigger.** Only the table on the left side of the `SELECT` statement triggers the materialized view. Changes to right-side tables will not trigger updates.
- **Pre-insert joined data.** Ensure that data in joined tables exists before inserting rows into the source table. The JOIN is evaluated at insert time, so missing data will result in unmatched rows or nulls.
- **Limit columns pulled from joins.** Select only the required columns from joined tables to minimize memory use and reduce insert-time latency (see below).
- **Evaluate insert-time performance.** JOINs increase the cost of inserts, especially with large right-side tables. Benchmark insert rates using representative production data.
- **Prefer dictionaries for simple lookups.** Use Dictionaries for key-value lookups (e.g., user ID to name) to avoid expensive JOIN operations.
- **Align `GROUP BY` and `ORDER BY` for merge efficiency.** When using `SummingMergeTree` or `AggregatingMergeTree`, ensure the `GROUP BY` matches the `ORDER BY` clause in the target table to allow efficient row merging.
- **Use explicit column aliases.** When tables have overlapping column names, use aliases to prevent ambiguity and ensure correct results in the target table.
- **Consider insert volume and frequency.** JOINs work well in moderate insert workloads. For high-throughput ingestion, consider using staging tables, pre-joins, or other approaches such as Dictionaries and Refreshable Materialized Views.
## Using source table in filters and joins {#using-source-table-in-filters-and-joins-in-materialized-views}
When working with Materialized Views in ClickHouse, it's important to understand how the source table is treated during the execution of the materialized view's query. Specifically, the source table in the materialized view's query is replaced with the inserted block of data. This behavior can lead to some unexpected results if not properly understood.
### Example scenario {#example-scenario}
Consider the following setup:
```sql
CREATE TABLE t0 (`c0` Int) ENGINE = Memory;
CREATE TABLE mvw1_inner (`c0` Int) ENGINE = Memory;
CREATE TABLE mvw2_inner (`c0` Int) ENGINE = Memory;

CREATE VIEW vt0 AS SELECT * FROM t0;

CREATE MATERIALIZED VIEW mvw1 TO mvw1_inner
AS SELECT count(*) AS c0
FROM t0
LEFT JOIN ( SELECT * FROM t0 ) AS x ON t0.c0 = x.c0;

CREATE MATERIALIZED VIEW mvw2 TO mvw2_inner
AS SELECT count(*) AS c0
FROM t0
LEFT JOIN vt0 ON t0.c0 = vt0.c0;

INSERT INTO t0 VALUES (1),(2),(3);
INSERT INTO t0 VALUES (1),(2),(3),(4),(5);

SELECT * FROM mvw1;
┌─c0─┐
│  3 │
│  5 │
└────┘

SELECT * FROM mvw2;
┌─c0─┐
│  3 │
│  8 │
└────┘
```
### Explanation {#explanation}

In the above example, we have two materialized views, `mvw1` and `mvw2`, that perform similar operations but differ slightly in how they reference the source table `t0`.
In `mvw1`, table `t0` is directly referenced inside a `(SELECT * FROM t0)` subquery on the right side of the JOIN. When data is inserted into `t0`, the materialized view's query is executed with the inserted block of data replacing `t0`. This means that the JOIN operation is performed only on the newly inserted rows, not the entire table.

In the second case, joining against `vt0`, the view reads all the data from `t0`. This ensures that the JOIN operation considers all rows in `t0`, not just the newly inserted block.

The key difference lies in how ClickHouse handles the source table in the materialized view's query. When a materialized view is triggered by an insert, the source table (`t0` in this case) is replaced by the inserted block of data. This behavior can be leveraged to optimize queries but also requires careful consideration to avoid unexpected results.
### Use cases and caveats {#use-cases-and-caveats}
In practice, you may use this behavior to optimize Materialized Views that only need to process a subset of the source table's data. For example, you can use a subquery to filter the source table before joining it with other tables. This can help reduce the amount of data processed by the materialized view and improve performance.
```sql
CREATE TABLE t0 (id UInt32, value String) ENGINE = MergeTree() ORDER BY id;
CREATE TABLE t1 (id UInt32, description String) ENGINE = MergeTree() ORDER BY id;
INSERT INTO t1 VALUES (1, 'A'), (2, 'B'), (3, 'C');
CREATE TABLE mvw1_target_table (id UInt32, value String, description String) ENGINE = MergeTree() ORDER BY id;
CREATE MATERIALIZED VIEW mvw1 TO mvw1_target_table AS
SELECT t0.id, t0.value, t1.description
FROM t0
JOIN (SELECT * FROM t1 WHERE t1.id IN (SELECT id FROM t0)) AS t1
ON t0.id = t1.id;
```
In this example, the set built from the `IN (SELECT id FROM t0)` subquery contains only the newly inserted rows, which can be used to filter `t1` before the join.
### Example with Stack Overflow {#example-with-stack-overflow}
Consider our earlier materialized view example to compute daily badges per user, including the user's display name from the `users` table.
```sql
CREATE MATERIALIZED VIEW daily_badges_by_user_mv TO daily_badges_by_user
AS SELECT
    toDate(Date) AS Day,
    b.UserId,
    u.DisplayName,
    countIf(Class = 'Gold') AS Gold,
    countIf(Class = 'Silver') AS Silver,
    countIf(Class = 'Bronze') AS Bronze
FROM badges AS b
LEFT JOIN users AS u ON b.UserId = u.Id
GROUP BY Day, b.UserId, u.DisplayName;
```
This view significantly impacted insert latency on the `badges` table, e.g.
```sql
INSERT INTO badges VALUES (53505058, 2936484, 'gingerwizard', now(), 'Gold', 0);
1 row in set. Elapsed: 7.517 sec.
```
Using the approach above, we can optimize this view. We'll add a filter to the `users` table using the user ids in the inserted badge rows:

```sql
CREATE MATERIALIZED VIEW daily_badges_by_user_mv TO daily_badges_by_user
AS SELECT
    toDate(Date) AS Day,
    b.UserId,
    u.DisplayName,
    countIf(Class = 'Gold') AS Gold,
    countIf(Class = 'Silver') AS Silver,
    countIf(Class = 'Bronze') AS Bronze
FROM badges AS b
LEFT JOIN
(
    SELECT
        Id,
        DisplayName
    FROM users
    WHERE Id IN (
        SELECT UserId
        FROM badges
    )
) AS u ON b.UserId = u.Id
GROUP BY
    Day,
    b.UserId,
    u.DisplayName
```
Not only does this speed up the initial badges insert:
```sql
INSERT INTO badges SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/badges.parquet')
0 rows in set. Elapsed: 132.118 sec. Processed 323.43 million rows, 4.69 GB (2.45 million rows/s., 35.49 MB/s.)
Peak memory usage: 1.99 GiB.
```
But also means future badge inserts are efficient:
```sql
INSERT INTO badges VALUES (53505058, 2936484, 'gingerwizard', now(), 'Gold', 0);
1 row in set. Elapsed: 0.583 sec.
```
In the above operation, only one row is retrieved from the `users` table for the user id `2936484`. This lookup is also optimized by the table's ordering key of `Id`.
## Materialized views and unions {#materialized-views-and-unions}

`UNION ALL` queries are commonly used to combine data from multiple source tables into a single result set.

While `UNION ALL` is not directly supported in incremental materialized views, you can achieve the same outcome by creating a separate materialized view for each `SELECT` branch and writing their results to a shared target table.
For our example, we'll use the Stack Overflow dataset. Consider the `badges` and `comments` tables below, which represent the badges earned by a user and the comments they make on posts:
```sql
CREATE TABLE stackoverflow.comments
(
    `Id`              UInt32,
    `PostId`          UInt32,
    `Score`           UInt16,
    `Text`            String,
    `CreationDate`    DateTime64(3, 'UTC'),
    `UserId`          Int32,
    `UserDisplayName` LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY CreationDate

CREATE TABLE stackoverflow.badges
(
    `Id`       UInt32,
    `UserId`   Int32,
    `Name`     LowCardinality(String),
    `Date`     DateTime64(3, 'UTC'),
    `Class`    Enum8('Gold' = 1, 'Silver' = 2, 'Bronze' = 3),
    `TagBased` Bool
)
ENGINE = MergeTree
ORDER BY UserId
```
These can be populated with the following `INSERT INTO` commands:

```sql
INSERT INTO stackoverflow.badges SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/badges.parquet')

INSERT INTO stackoverflow.comments SELECT *
FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/comments/*.parquet')
```
Suppose we want to create a unified view of user activity, showing the last activity by each user by combining these two tables:
```sql
SELECT
    UserId,
    argMax(description, event_time) AS last_description,
    argMax(activity_type, event_time) AS activity_type,
    max(event_time) AS last_activity
FROM
(
    SELECT
        UserId,
        CreationDate AS event_time,
        Text AS description,
        'comment' AS activity_type
    FROM stackoverflow.comments
    UNION ALL
    SELECT
        UserId,
        Date AS event_time,
        Name AS description,
        'badge' AS activity_type
    FROM stackoverflow.badges
)
GROUP BY UserId
ORDER BY last_activity DESC
LIMIT 10
```
Let's assume we have a target table to receive the results of this query. Note the use of the `AggregatingMergeTree` table engine and `AggregateFunction` to ensure results are merged correctly:
```sql
CREATE TABLE user_activity
(
    `UserId` String,
    `last_description` AggregateFunction(argMax, String, DateTime64(3, 'UTC')),
    `activity_type` AggregateFunction(argMax, String, DateTime64(3, 'UTC')),
    `last_activity` SimpleAggregateFunction(max, DateTime64(3, 'UTC'))
)
ENGINE = AggregatingMergeTree
ORDER BY UserId
```
Since we want this table to update as new rows are inserted into either `badges` or `comments`, a naive approach to this problem might be to create a materialized view with the previous union query:
```sql
CREATE MATERIALIZED VIEW user_activity_mv TO user_activity AS
SELECT
    UserId,
    argMaxState(description, event_time) AS last_description,
    argMaxState(activity_type, event_time) AS activity_type,
    max(event_time) AS last_activity
FROM
(
    SELECT
        UserId,
        CreationDate AS event_time,
        Text AS description,
        'comment' AS activity_type
    FROM stackoverflow.comments
    UNION ALL
    SELECT
        UserId,
        Date AS event_time,
        Name AS description,
        'badge' AS activity_type
    FROM stackoverflow.badges
)
GROUP BY UserId
ORDER BY last_activity DESC
```
While this is syntactically valid, it will produce unintended results - the view will only trigger for inserts into the `comments` table. For example:
```sql
INSERT INTO comments VALUES (99999999, 23121, 1, 'The answer is 42', now(), 2936484, 'gingerwizard');
SELECT
UserId,
argMaxMerge(last_description) AS description,
argMaxMerge(activity_type) AS activity_type,
max(last_activity) AS last_activity
FROM user_activity
WHERE UserId = '2936484'
GROUP BY UserId
โโUserIdโโโฌโdescriptionโโโโโโโฌโactivity_typeโโฌโโโโโโโโโโโlast_activityโโ
โ 2936484 โ The answer is 42 โ comment โ 2025-04-15 09:56:19.000 โ
โโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโ
1 row in set. Elapsed: 0.005 sec.
```
Inserts into the `badges` table will not trigger the view, causing `user_activity` to not receive updates:

```sql
INSERT INTO badges VALUES (53505058, 2936484, 'gingerwizard', now(), 'Gold', 0);

SELECT
    UserId,
    argMaxMerge(last_description) AS description,
    argMaxMerge(activity_type) AS activity_type,
    max(last_activity) AS last_activity
FROM user_activity
WHERE UserId = '2936484'
GROUP BY UserId;

┌─UserId──┬─description──────┬─activity_type─┬───────────last_activity─┐
│ 2936484 │ The answer is 42 │ comment       │ 2025-04-15 09:56:19.000 │
└─────────┴──────────────────┴───────────────┴─────────────────────────┘

1 row in set. Elapsed: 0.005 sec.
```
To solve this, we simply create a materialized view for each SELECT statement:
```sql
DROP TABLE user_activity_mv;
TRUNCATE TABLE user_activity;
CREATE MATERIALIZED VIEW comment_activity_mv TO user_activity AS
SELECT
UserId,
argMaxState(Text, CreationDate) AS last_description,
argMaxState('comment', CreationDate) AS activity_type,
max(CreationDate) AS last_activity
FROM stackoverflow.comments
GROUP BY UserId;
CREATE MATERIALIZED VIEW badges_activity_mv TO user_activity AS
SELECT
UserId,
argMaxState(Name, Date) AS last_description,
argMaxState('badge', Date) AS activity_type,
max(Date) AS last_activity
FROM stackoverflow.badges
GROUP BY UserId;
```
Inserting into either table now produces the correct results. For example, if we insert into the `comments` table:
```sql
INSERT INTO comments VALUES (99999999, 23121, 1, 'The answer is 42', now(), 2936484, 'gingerwizard');
SELECT
UserId,
argMaxMerge(last_description) AS description,
argMaxMerge(activity_type) AS activity_type,
max(last_activity) AS last_activity
FROM user_activity
WHERE UserId = '2936484'
GROUP BY UserId;
โโUserIdโโโฌโdescriptionโโโโโโโฌโactivity_typeโโฌโโโโโโโโโโโlast_activityโโ
โ 2936484 โ The answer is 42 โ comment โ 2025-04-15 10:18:47.000 โ
โโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโ
1 row in set. Elapsed: 0.006 sec.
```
Likewise, inserts into the `badges` table are reflected in the `user_activity` table:
```sql
INSERT INTO badges VALUES (53505058, 2936484, 'gingerwizard', now(), 'Gold', 0);
SELECT
UserId,
argMaxMerge(last_description) AS description,
argMaxMerge(activity_type) AS activity_type,
max(last_activity) AS last_activity
FROM user_activity
WHERE UserId = '2936484'
GROUP BY UserId
โโUserIdโโโฌโdescriptionโโโฌโactivity_typeโโฌโโโโโโโโโโโlast_activityโโ
โ 2936484 โ gingerwizard โ badge โ 2025-04-15 10:20:18.000 โ
โโโโโโโโโโโดโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโ
1 row in set. Elapsed: 0.006 sec.
```
## Parallel vs sequential processing {#materialized-views-parallel-vs-sequential}
As shown in the previous example, a table can act as the source for multiple Materialized Views. The order in which these are executed depends on the setting `parallel_view_processing`.

By default, this setting is equal to `0` (`false`), meaning Materialized Views are executed sequentially in `uuid` order.
For example, consider the following `source` table and 3 Materialized Views, each sending rows to a `target` table:
```sql
CREATE TABLE source
(
    `message` String
)
ENGINE = MergeTree
ORDER BY tuple();

CREATE TABLE target
(
    `message` String,
    `from` String,
    `now` DateTime64(9),
    `sleep` UInt8
)
ENGINE = MergeTree
ORDER BY tuple();

CREATE MATERIALIZED VIEW mv_2 TO target
AS SELECT
    message,
    'mv2' AS from,
    now64(9) AS now,
    sleep(1) AS sleep
FROM source;

CREATE MATERIALIZED VIEW mv_3 TO target
AS SELECT
    message,
    'mv3' AS from,
    now64(9) AS now,
    sleep(1) AS sleep
FROM source;

CREATE MATERIALIZED VIEW mv_1 TO target
AS SELECT
    message,
    'mv1' AS from,
    now64(9) AS now,
    sleep(1) AS sleep
FROM source;
```
Notice that each of the views pauses for 1 second prior to inserting its rows into the `target` table, while also including its name and insertion time.

Inserting a row into the table `source` takes ~3 seconds, with each view executing sequentially:
```sql
INSERT INTO source VALUES ('test')
1 row in set. Elapsed: 3.786 sec.
```
We can confirm the arrival of rows from each view with a `SELECT`:
```sql
SELECT
message,
from,
now
FROM target
ORDER BY now ASC
โโmessageโโฌโfromโโฌโโโโโโโโโโโโโโโโโโโโโโโโโโโnowโโ
โ test โ mv3 โ 2025-04-15 14:52:01.306162309 โ
โ test โ mv1 โ 2025-04-15 14:52:02.307693521 โ
โ test โ mv2 โ 2025-04-15 14:52:03.309250283 โ
โโโโโโโโโโโดโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
3 rows in set. Elapsed: 0.015 sec.
```
This aligns with the `uuid` order of the views:
```sql
SELECT
name,
uuid
FROM system.tables
WHERE name IN ('mv_1', 'mv_2', 'mv_3')
ORDER BY uuid ASC
โโnameโโฌโuuidโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ mv_3 โ ba5e36d0-fa9e-4fe8-8f8c-bc4f72324111 โ
โ mv_1 โ b961c3ac-5a0e-4117-ab71-baa585824d43 โ
โ mv_2 โ e611cc31-70e5-499b-adcc-53fb12b109f5 โ
โโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
3 rows in set. Elapsed: 0.004 sec.
```
Conversely, consider what happens if we insert a row with `parallel_view_processing=1` enabled. With this enabled, the views are executed in parallel, with no guarantees on the order in which rows arrive in the target table:
```sql
TRUNCATE target;
SET parallel_view_processing = 1;
INSERT INTO source VALUES ('test');
1 row in set. Elapsed: 1.588 sec.
SELECT
message,
from,
now
FROM target
ORDER BY now ASC
โโmessageโโฌโfromโโฌโโโโโโโโโโโโโโโโโโโโโโโโโโโnowโโ
โ test โ mv3 โ 2025-04-15 19:47:32.242937372 โ
โ test โ mv1 โ 2025-04-15 19:47:32.243058183 โ
โ test โ mv2 โ 2025-04-15 19:47:32.337921800 โ
โโโโโโโโโโโดโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | {"source_file": "incremental-materialized-view.md"} | [
3 rows in set. Elapsed: 0.004 sec.
```
Although the rows from each view happen to arrive in the same order here, this is not guaranteed - as illustrated by the similarity of each row's insert time. Note also the improved insert performance.
### When to use parallel processing {#materialized-views-when-to-use-parallel}
Enabling `parallel_view_processing=1` can significantly improve insert throughput, as shown above, especially when multiple Materialized Views are attached to a single table. However, it's important to understand the trade-offs:
- **Increased insert pressure**: All Materialized Views are executed simultaneously, increasing CPU and memory usage. If each view performs heavy computation or JOINs, this can overload the system.
- **Need for strict execution order**: In rare workflows where the order of view execution matters (e.g., chained dependencies), parallel execution may lead to an inconsistent state or race conditions. While it is possible to design around this, such setups are fragile and may break with future versions.
:::note Historical defaults and stability
Sequential execution was the default for a long time, in part due to error handling complexities. Historically, a failure in one materialized view could prevent others from executing. Newer versions have improved this by isolating failures per block, but sequential execution still provides clearer failure semantics.
:::
In general, enable `parallel_view_processing=1` when:

- You have multiple independent Materialized Views
- You're aiming to maximize insert performance
- You're aware of the system's capacity to handle concurrent view execution
Leave it disabled when:
- Materialized Views have dependencies on one another
- You require predictable, ordered execution
- You're debugging or auditing insert behavior and want deterministic replay
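If you want the parallel behavior for a single statement rather than the whole session, the setting can also be scoped to one insert via the `SETTINGS` clause. A minimal sketch, reusing the `source` table from the example above:

```sql
-- Applies only to this insert; other inserts keep the session default
INSERT INTO source SETTINGS parallel_view_processing = 1
VALUES ('test');
```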
## Materialized views and Common Table Expressions (CTE) {#materialized-views-common-table-expressions-ctes}
Non-recursive Common Table Expressions (CTEs) are supported in Materialized Views.
:::note Common Table Expressions are not materialized
ClickHouse does not materialize CTEs; instead, it substitutes the CTE definition directly into the query, which can lead to multiple evaluations of the same expression (if the CTE is used more than once).
:::
Consider the following example which computes daily activity for each post type.
```sql
CREATE TABLE daily_post_activity
(
Day Date,
PostType String,
PostsCreated SimpleAggregateFunction(sum, UInt64),
AvgScore AggregateFunction(avg, Int32),
TotalViews SimpleAggregateFunction(sum, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY (Day, PostType);
CREATE MATERIALIZED VIEW daily_post_activity_mv TO daily_post_activity AS
WITH filtered_posts AS (
SELECT
toDate(CreationDate) AS Day,
PostTypeId,
Score,
ViewCount
FROM posts
WHERE Score > 0 AND PostTypeId IN (1, 2) -- Question or Answer
)
SELECT
Day,
CASE PostTypeId
WHEN 1 THEN 'Question'
WHEN 2 THEN 'Answer'
END AS PostType,
count() AS PostsCreated,
avgState(Score) AS AvgScore,
sum(ViewCount) AS TotalViews
FROM filtered_posts
GROUP BY Day, PostTypeId;
```
While the CTE is strictly unnecessary here, included only for illustration, the view will work as expected:

```sql
INSERT INTO posts
SELECT *
FROM s3Cluster('default', 'https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow/parquet/posts/by_month/*.parquet')
```
```sql
SELECT
Day,
PostType,
avgMerge(AvgScore) AS AvgScore,
sum(PostsCreated) AS PostsCreated,
sum(TotalViews) AS TotalViews
FROM daily_post_activity
GROUP BY
Day,
PostType
ORDER BY Day DESC
LIMIT 10
โโโโโโโโโDayโโฌโPostTypeโโฌโโโโโโโโโโโAvgScoreโโฌโPostsCreatedโโฌโTotalViewsโโ
โ 2024-03-31 โ Question โ 1.3317757009345794 โ 214 โ 9728 โ
โ 2024-03-31 โ Answer โ 1.4747191011235956 โ 356 โ 0 โ
โ 2024-03-30 โ Answer โ 1.4587912087912087 โ 364 โ 0 โ
โ 2024-03-30 โ Question โ 1.2748815165876777 โ 211 โ 9606 โ
โ 2024-03-29 โ Question โ 1.2641509433962264 โ 318 โ 14552 โ
โ 2024-03-29 โ Answer โ 1.4706927175843694 โ 563 โ 0 โ
โ 2024-03-28 โ Answer โ 1.601637107776262 โ 733 โ 0 โ
โ 2024-03-28 โ Question โ 1.3530864197530865 โ 405 โ 24564 โ
โ 2024-03-27 โ Question โ 1.3225806451612903 โ 434 โ 21346 โ
โ 2024-03-27 โ Answer โ 1.4907539118065434 โ 703 โ 0 โ
โโโโโโโโโโโโโโดโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโดโโโโโโโโโโโโโโโดโโโโโโโโโโโโโ
10 rows in set. Elapsed: 0.013 sec. Processed 11.45 thousand rows, 663.87 KB (866.53 thousand rows/s., 50.26 MB/s.)
Peak memory usage: 989.53 KiB.
```
In ClickHouse, CTEs are inlined, which means they are effectively copy-pasted into the query during optimization and **not** materialized. This means:

- If your CTE references a different table from the source table (i.e., the one the materialized view is attached to), and is used in a `JOIN` or `IN` clause, it will behave like a subquery or join, not a trigger.
- The materialized view will still only trigger on inserts into the main source table, but the CTE will be re-executed on every insert, which may cause unnecessary overhead, especially if the referenced table is large.
For example:

```sql
WITH recent_users AS
(
    SELECT Id FROM stackoverflow.users WHERE CreationDate > now() - INTERVAL 7 DAY
)
SELECT * FROM stackoverflow.posts WHERE OwnerUserId IN (SELECT Id FROM recent_users)
```
In this case, the `users` CTE is re-evaluated on every insert into `posts`, and the materialized view will not update when new users are inserted - only when posts are.
Generally, use CTEs for logic that operates on the same source table the materialized view is attached to, or ensure that referenced tables are small and unlikely to cause performance bottlenecks. Alternatively, consider the same optimizations as JOINs with Materialized Views.
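Mirroring the JOIN optimization shown earlier, the CTE can be restricted to only the users referenced by the inserted rows; inside a materialized view's `SELECT`, a second reference to the source table (here `posts`) resolves to just the inserted block. This is only a sketch, reusing the hypothetical `recent_users` example, and the target table `posts_by_recent_users` is assumed to exist:

```sql
CREATE MATERIALIZED VIEW posts_by_recent_users_mv TO posts_by_recent_users
AS WITH recent_users AS
(
    SELECT Id
    FROM stackoverflow.users
    WHERE CreationDate > now() - INTERVAL 7 DAY
      -- Restrict the scan to users present in the inserted block of posts
      AND Id IN (SELECT OwnerUserId FROM stackoverflow.posts)
)
SELECT *
FROM stackoverflow.posts
WHERE OwnerUserId IN (SELECT Id FROM recent_users)
```

With this filter, the per-insert cost of the CTE scales with the size of the inserted block rather than the full `users` table.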
slug: /materialized-views
title: 'Materialized Views'
description: 'Index page for materialized views'
keywords: ['materialized views', 'speed up queries', 'query optimization', 'refreshable', 'incremental']
doc_type: 'landing-page'
| Page                          | Description |
|-------------------------------|-------------|
| Incremental materialized view | Allow users to shift the cost of computation from query time to insert time, resulting in faster `SELECT` queries. |
| Refreshable materialized view | Conceptually similar to incremental materialized views but require the periodic execution of the query over the full dataset - the results of which are stored in a target table for querying. |
sidebar_position: 1
slug: /tips-and-tricks/community-wisdom
sidebar_label: 'Community wisdom'
doc_type: 'landing-page'
keywords: [
'database tips',
'community wisdom',
'production troubleshooting',
'performance optimization',
'database debugging',
'clickhouse guides',
'real world examples',
'database best practices',
'meetup insights',
'production lessons',
'interactive tutorials',
'database solutions'
]
title: 'ClickHouse community wisdom'
description: 'Learn from the ClickHouse community with real world scenarios and lessons learned'
## ClickHouse community wisdom: tips and tricks from meetups {#community-wisdom}
These interactive guides represent collective wisdom from hundreds of production deployments. Each runnable example helps you understand ClickHouse patterns using real GitHub events data - practice these concepts to avoid common mistakes and accelerate your success.
Combine this collected knowledge with our Best Practices guide for an optimal ClickHouse experience.

## Problem-specific quick jumps {#problem-specific-quick-jumps}
| Issue | Document | Description |
|-------|----------|-------------|
| Production issue | Debugging insights | Community production debugging tips |
| Slow queries | Performance optimization | Optimize performance |
| Materialized views | MV double-edged sword | Avoid 10x storage instances |
| Too many parts | Too many parts | Addressing the 'Too Many Parts' error and performance slowdown |
| High costs | Cost optimization | Optimize cost |
| Success stories | Success stories | Examples of ClickHouse in successful use cases |
**Last updated:** Based on community meetup insights through 2024-2025

**Contributing:** Found a mistake or have a new lesson? Community contributions are welcome.
sidebar_position: 1
slug: /community-wisdom/cost-optimization
sidebar_label: 'Cost optimization'
doc_type: 'guide'
keywords: [
'cost optimization',
'storage costs',
'partition management',
'data retention',
'storage analysis',
'database optimization',
'clickhouse cost reduction',
'storage hot spots',
'ttl performance',
'disk usage',
'compression strategies',
'retention analysis'
]
title: 'Lessons - cost optimization'
description: 'Cost optimization strategies from ClickHouse community meetups with real production examples and verified techniques.'
## Cost optimization: strategies from the community {#cost-optimization}
This guide is part of a collection of findings gained from community meetups. The findings on this page cover community wisdom related to optimizing costs while using ClickHouse, based on what worked well for each company's specific experience and setup. For more real-world solutions and insights, you can browse by specific problem.

Learn about how ClickHouse Cloud can help manage operational costs.
## Compression strategy: LZ4 vs ZSTD in production {#compression-strategy}
When Microsoft Clarity needed to handle hundreds of terabytes of data, they discovered that compression choices have dramatic cost implications. At their scale, every bit of storage savings matters, and they faced a classic trade-off: performance versus storage costs. Microsoft Clarity handles massive volumesโtwo petabytes of uncompressed data per month across all accounts, processing around 60,000 queries per hour across eight nodes and serving billions of page views from millions of websites. At this scale, compression strategy becomes a critical cost factor.
They initially used ClickHouse's default `LZ4` compression but discovered significant cost savings were possible with `ZSTD`. While LZ4 is faster, ZSTD provides better compression at the cost of slightly slower performance. After testing both approaches, they made a strategic decision to prioritize storage savings. The results were significant: 50% storage savings on large tables with manageable performance impact on ingestion and queries.
Key results:
- 50% storage savings on large tables through ZSTD compression
- 2 petabytes monthly data processing capacity
- Manageable performance impact on ingestion and queries
- Significant cost reduction at hundreds of TB scale
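As a concrete sketch of such a switch (the table and column names here are illustrative, not from the talk), ClickHouse lets you choose the compression codec per column, so latency-sensitive columns can stay on LZ4 while bulky ones use ZSTD:

```sql
CREATE TABLE events
(
    `ts` DateTime CODEC(Delta, ZSTD(1)),   -- codec chaining often helps time-series columns
    `payload` String CODEC(ZSTD(1)),       -- better ratio than LZ4, slightly slower
    `level` LowCardinality(String)         -- default codec (LZ4) for hot, small columns
)
ENGINE = MergeTree
ORDER BY ts;
```

Higher ZSTD levels compress further but cost more CPU on insert; level 1 is a common starting point when testing the trade-off.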
## Column-based retention strategy {#column-retention}
One of the most powerful cost optimization techniques comes from analyzing which columns are actually being used. Microsoft Clarity implements sophisticated column-based retention strategies using ClickHouse's built-in telemetry capabilities. ClickHouse provides detailed metrics on storage usage by column as well as comprehensive query patterns: which columns are accessed, how frequently, query duration, and overall usage statistics.
This data-driven approach enables strategic decisions about retention policies and column lifecycle management. By analyzing this telemetry data, Microsoft can identify storage hot spots - columns that consume significant space but receive minimal queries. For these low-usage columns, they can implement aggressive retention policies, reducing storage time from 30 months to just one month, or delete the columns entirely if they're not queried at all. This selective retention strategy reduces storage costs without impacting user experience.
The strategy:
- Analyze column usage patterns using ClickHouse telemetry
- Identify high-storage, low-query columns
- Implement selective retention policies
- Monitor query patterns for data-driven decisions
Related docs
- Managing Data - Column Level TTL
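A minimal sketch of a column-level TTL (the table and column names are illustrative): after the interval elapses, the column's values are rewritten to defaults during merges, reclaiming their storage while the rest of the row is kept:

```sql
CREATE TABLE telemetry
(
    `event_date` Date,
    `event_type` LowCardinality(String),                    -- queried often: no TTL
    `user_agent` String TTL event_date + INTERVAL 1 MONTH   -- low-usage column: keep 1 month
)
ENGINE = MergeTree
ORDER BY event_date;
```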
## Partition-based data management {#partition-management}
Microsoft Clarity discovered that partitioning strategy impacts both performance and operational simplicity. Their approach: partition by date, order by hour. This strategy delivers multiple benefits beyond just cleanup efficiencyโit enables trivial data cleanup, simplifies billing calculations for their customer-facing service, and supports GDPR compliance requirements for row-based deletion.
Key benefits:
- Trivial data cleanup (drop partition vs row-by-row deletion)
- Simplified billing calculations
- Better query performance through partition elimination
- Easier operational management
Related docs
- Managing Data - Partitions
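A sketch of the cleanup this strategy enables, assuming a hypothetical table partitioned by date and ordered by hour:

```sql
CREATE TABLE page_views
(
    `event_date` Date,
    `event_hour` UInt8,
    `url` String
)
ENGINE = MergeTree
PARTITION BY event_date   -- partition by date...
ORDER BY event_hour;      -- ...order by hour

-- Dropping a whole day is a cheap metadata operation,
-- unlike row-by-row deletion
ALTER TABLE page_views DROP PARTITION '2024-03-01';
```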
## String-to-integer conversion strategy {#string-integer-conversion}
Analytics platforms often face a storage challenge with categorical data that appears repeatedly across millions of rows. Microsoft's engineering team encountered this problem with their search analytics data and developed an effective solution that achieved 60% storage reduction on affected datasets.
In Microsoft's web analytics system, search results trigger different types of answers - weather cards, sports information, news articles, and factual responses. Each query result was tagged with descriptive strings like "weather_answer," "sports_answer," or "factual_answer." With billions of search queries processed, these string values were being stored repeatedly in ClickHouse, consuming massive amounts of storage space and requiring expensive string comparisons during queries.
Microsoft implemented a string-to-integer mapping system using a separate MySQL database. Instead of storing the actual strings in ClickHouse, they store only integer IDs. When users run queries through the UI and request data for `weather_answer`, their query optimizer first consults the MySQL mapping table to get the corresponding integer ID, then converts the query to use that integer before sending it to ClickHouse.
This architecture preserves the user experience - people still see meaningful labels like `weather_answer` in their dashboards - while the backend storage and queries operate on much more efficient integers. The mapping system handles all translation transparently, requiring no changes to the user interface or user workflows.
Key benefits:
- 60% storage reduction on affected datasets
- Faster query performance on integer comparisons
- Reduced memory usage for joins and aggregations
- Lower network transfer costs for large result sets
:::note
This is an example specific to Microsoft Clarity's data scenario. If you have all your data in ClickHouse, or have no constraints against moving data to ClickHouse, try using dictionaries instead.
:::
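A sketch of the dictionary alternative (all table and dictionary names are hypothetical): store the ID-to-label mapping once in ClickHouse and resolve labels at query time with `dictGet`, avoiding the external MySQL round-trip:

```sql
-- Mapping kept in ClickHouse and loaded into an in-memory dictionary
CREATE DICTIONARY answer_type_dict
(
    id UInt64,
    label String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(TABLE 'answer_type_map'))
LIFETIME(MIN 300 MAX 600)
LAYOUT(FLAT());

-- Resolve integer IDs back to labels at query time
SELECT
    dictGet('answer_type_dict', 'label', toUInt64(answer_type_id)) AS answer_type,
    count()
FROM search_events
GROUP BY answer_type;
```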
## Video sources {#video-sources}
- Microsoft Clarity and ClickHouse - Microsoft Clarity Team
- ClickHouse journey in Contentsquare - Doron Hoffman & Guram Sigua (ContentSquare)

These community cost optimization insights represent strategies from companies processing hundreds of terabytes to petabytes of data, showing real-world approaches to reducing ClickHouse operational costs.
sidebar_position: 1
slug: /community-wisdom/creative-use-cases
sidebar_label: 'Success stories'
doc_type: 'guide'
keywords: [
'clickhouse creative use cases',
'clickhouse success stories',
'unconventional database uses',
'clickhouse rate limiting',
'analytics database applications',
'clickhouse mobile analytics',
'customer-facing analytics',
'database innovation',
'clickhouse real-time applications',
'alternative database solutions',
'breaking database conventions',
'production success stories'
]
title: 'Lessons - Creative Use Cases'
description: 'Success stories from the ClickHouse community, including creative and unconventional production use cases.'
## Success stories {#breaking-the-rules}
This guide is part of a collection of findings gained from community meetups. For more real-world solutions and insights, you can browse by specific problem.

Need tips on debugging an issue in prod? Check out the Debugging Insights community guide.

These stories showcase how companies found success by using ClickHouse for their use cases, some even challenging traditional database categories and proving that sometimes the "wrong" tool becomes exactly the right solution.
## ClickHouse as rate limiter {#clickhouse-rate-limiter}
When Craigslist needed to add tier-one rate limiting to protect their users, they faced the same decision every engineering team encounters - follow conventional wisdom and use Redis, or explore something different. Brad Lhotsky, working at Craigslist, knew Redis was the standard choice - virtually every rate limiting tutorial and example online uses Redis for good reason. It has rich primitives for rate limiting operations, well-established patterns, and proven track record. But Craigslist's experience with Redis wasn't matching the textbook examples.
"Our experience with Redis is not like what you've seen in the movies... there are a lot of weird maintenance issues that we've hit where we reboot a node in a Redis cluster and some latency spike hits the front end."
For a small team that values maintenance simplicity, these operational headaches were becoming a real problem.
So when Brad was approached with the rate limiting requirements, he took a different approach:
"I asked my boss, 'What do you think of this idea? Maybe I can try this with ClickHouse?'"
The idea was unconventional - using an analytical database for what's typically a caching-layer problem - but it addressed their core requirements: fail open, impose no latency penalties, and be maintenance-safe for a small team. The solution leveraged their existing infrastructure, where access logs were already flowing into ClickHouse via Kafka. Instead of maintaining a separate Redis cluster, they could analyze request patterns directly from the access log data and inject rate limiting rules into their existing ACL API. The approach meant slightly higher latency than Redis, which "is kind of cheating by instantiating that data set upfront" rather than doing real-time aggregate queries, but the queries still completed in under 100 milliseconds.
Key Results:
- Dramatic improvement over Redis infrastructure
- Built-in TTL for automatic cleanup eliminated maintenance overhead
- SQL flexibility enabled complex rate limiting rules beyond simple counters
- Leveraged existing data pipeline instead of requiring separate infrastructure
ClickHouse for customer analytics {#customer-analytics}
When ServiceNow needed to upgrade their mobile analytics platform, they faced a simple question:
"Why would we replace something that works?"
Amir Vaza from ServiceNow knew their existing system was reliable, but customer demands were outgrowing what it could handle.
"The motivation to replace an existing reliable model is actually from the product world,"
Amir explained. ServiceNow offered mobile analytics as part of their solution for web, mobile, and chatbots, but customers wanted analytical flexibility that went beyond pre-aggregated data.
Their previous system used about 30 different tables with pre-aggregated data segmented by fixed dimensions: application, app version, and platform. For custom propertiesโkey-value pairs that customers could sendโthey created separate counters for each group. This approach delivered fast dashboard performance but came with a major limitation.
"While this is great for quick value breakdown, I mentioned limitation leads to a lot of loss of analytical context,"
Amir noted. Customers couldn't perform complex customer journey analysis or ask questions like "how many sessions started with the search term 'research RSA token'" and then analyze what those users did next. The pre-aggregated structure destroyed the sequential context needed for multi-step analysis, and every new analytical dimension required engineering work to pre-aggregate and store.
So when the limitations became clear, ServiceNow moved to ClickHouse and eliminated these pre-computation constraints entirely. Instead of calculating every variable upfront, they broke metadata into data points and inserted everything directly into ClickHouse. They used ClickHouse's async insert queue, which Amir called
"actually amazing,"
to handle data ingestion efficiently. The approach meant customers could now create their own segments, slice data freely across any dimensions, and perform complex customer journey analysis that wasn't possible before.
Key Results:
- Dynamic segmentation across any dimensions without pre-computation
- Complex customer journey analysis became possible
- Customers could create their own segments and slice data freely
- No more engineering bottlenecks for new analytical requirements
Video sources {#video-sources}
Breaking the Rules - Building a Rate Limiter with ClickHouse
- Brad Lhotsky (Craigslist)
ClickHouse as an Analytical Solution in ServiceNow
- Amir Vaza (ServiceNow)
These stories demonstrate how questioning conventional database wisdom can lead to breakthrough solutions that redefine what's possible with analytical databases.
sidebar_position: 1
slug: /community-wisdom/debugging-insights
sidebar_label: 'Debugging insights'
doc_type: 'guide'
keywords: [
'clickhouse troubleshooting',
'clickhouse errors',
'slow queries',
'memory problems',
'connection issues',
'performance optimization',
'database errors',
'configuration problems',
'debug',
'solutions'
]
title: 'Lessons - debugging insights'
description: 'Find solutions to the most common ClickHouse problems including slow queries, memory errors, connection issues, and configuration problems.'
ClickHouse operations: community debugging insights {#clickhouse-operations-community-debugging-insights}
This guide is part of a collection of findings gained from community meetups. For more real-world solutions and insights, you can browse by specific problem.
Suffering from high operational costs? Check out the Cost Optimization community insights guide.
Essential system tables {#essential-system-tables}
These system tables are fundamental for production debugging:
system.errors {#system-errors}
Shows all active errors in your ClickHouse instance.
```sql
SELECT name, value, changed
FROM system.errors
WHERE value > 0
ORDER BY value DESC;
```
system.replicas {#system-replicas}
Contains replication lag and status information for monitoring cluster health.
```sql
SELECT database, table, replica_name, absolute_delay, queue_size, inserts_in_queue
FROM system.replicas
WHERE absolute_delay > 60
ORDER BY absolute_delay DESC;
```
system.replication_queue {#system-replication-queue}
Provides detailed information for diagnosing replication problems.
```sql
SELECT database, table, replica_name, position, type, create_time, last_exception
FROM system.replication_queue
WHERE last_exception != ''
ORDER BY create_time DESC;
```
system.merges {#system-merges}
Shows current merge operations and can identify stuck processes.
```sql
SELECT database, table, elapsed, progress, is_mutation, total_size_bytes_compressed
FROM system.merges
ORDER BY elapsed DESC;
```
system.parts {#system-parts}
Essential for monitoring part counts and identifying fragmentation issues.
```sql
SELECT database, table, count() as part_count
FROM system.parts
WHERE active = 1
GROUP BY database, table
ORDER BY part_count DESC;
```
Common production issues {#common-production-issues}
Disk space problems {#disk-space-problems}
Disk space exhaustion in replicated setups creates cascading problems. When one node runs out of space, other nodes continue trying to sync with it, causing network traffic spikes and confusing symptoms. One community member spent 4 hours debugging what was simply low disk space. Check out this query to monitor your disk storage on a particular cluster.
AWS users should be aware that default general purpose EBS volumes have a 16TB limit.
Too many parts error {#too-many-parts-error}
Small frequent inserts create performance problems. The community has identified that insert rates above 10 per second often trigger "too many parts" errors because ClickHouse cannot merge parts fast enough.
Solutions:
- Batch data using 30-second or 200MB thresholds
- Enable async_insert for automatic batching
- Use buffer tables for server-side batching
- Configure Kafka for controlled batch sizes
Official recommendation: minimum 1,000 rows per insert, ideally 10,000 to 100,000.
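As an illustration of the batching thresholds above, here is a minimal client-side batching sketch in Python. This is not a ClickHouse client API: `flush_fn` is a stand-in for whatever insert call your driver provides, and the thresholds mirror the 30-second / 200MB guidance.

```python
import time

class InsertBatcher:
    """Buffer rows client-side; flush when either the age or size threshold is hit.
    Thresholds mirror the community guidance above (~30 s or ~200 MB)."""

    def __init__(self, flush_fn, max_age_s=30, max_bytes=200 * 1024 * 1024):
        self.flush_fn = flush_fn          # stand-in for your client's insert call
        self.max_age_s = max_age_s
        self.max_bytes = max_bytes
        self.rows, self.size, self.started = [], 0, None

    def add(self, row: bytes):
        if self.started is None:
            self.started = time.monotonic()
        self.rows.append(row)
        self.size += len(row)
        if self.size >= self.max_bytes or time.monotonic() - self.started >= self.max_age_s:
            self.flush()

    def flush(self):
        if self.rows:
            # One INSERT with thousands of rows, not thousands of tiny INSERTs.
            self.flush_fn(self.rows)
        self.rows, self.size, self.started = [], 0, None

batches = []
b = InsertBatcher(batches.append, max_bytes=100)  # tiny threshold for demonstration
for i in range(30):
    b.add(b"row-%d" % i)                          # 5-6 bytes per row
b.flush()                                         # flush the remainder
```

Each element of `batches` here represents one insert that would reach the server, instead of thirty.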
Invalid timestamps issues {#data-quality-issues}
Applications that send data with arbitrary timestamps create partition problems. This leads to partitions with data from unrealistic dates (like 1998 or 2050), causing unexpected storage behavior.
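A sketch of the kind of guard that prevents this: validate timestamps into a plausible window before insert. The seven-day staleness window and five minutes of tolerated clock skew are illustrative assumptions, not ClickHouse defaults.

```python
from datetime import datetime, timedelta, timezone

# Reject (or redirect to a quarantine table) events whose timestamps fall
# outside a plausible window, so misbehaving clients cannot create
# partitions for 1998 or 2050. Both bounds are illustrative.
MAX_PAST = timedelta(days=7)       # how stale an event may be
MAX_FUTURE = timedelta(minutes=5)  # tolerated clock skew

def valid_timestamp(ts: datetime, now: datetime) -> bool:
    return now - MAX_PAST <= ts <= now + MAX_FUTURE

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
assert valid_timestamp(datetime(2024, 5, 30, tzinfo=timezone.utc), now)
assert not valid_timestamp(datetime(1998, 1, 1, tzinfo=timezone.utc), now)
assert not valid_timestamp(datetime(2050, 1, 1, tzinfo=timezone.utc), now)
```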
`ALTER` operation risks {#alter-operation-risks}
Large `ALTER` operations on multi-terabyte tables can consume significant resources and potentially lock databases. One community example involved changing an Integer to a Float on 14TB of data, which locked the entire database and required rebuilding from backups.
Monitor expensive mutations:
```sql
SELECT database, table, mutation_id, command, parts_to_do, is_done
FROM system.mutations
WHERE is_done = 0;
```
Test schema changes on smaller datasets first.
Memory and performance {#memory-and-performance}
External aggregation {#external-aggregation}
Enable external aggregation for memory-intensive operations. It's slower but prevents out-of-memory crashes by spilling to disk. You can do this with the `max_bytes_before_external_group_by` setting, which caps how much memory a large `GROUP BY` may use before spilling to disk. You can learn more about this setting here.
```sql
SELECT
    column1,
    column2,
    COUNT(*) as count,
    SUM(value) as total
FROM large_table
GROUP BY column1, column2
SETTINGS max_bytes_before_external_group_by = 1000000000; -- 1GB threshold
```
Async insert details {#async-insert-details}
Async insert automatically batches small inserts server-side to improve performance. You can configure whether to wait for data to be written to disk before returning acknowledgment - immediate return is faster but less durable. Modern versions support deduplication to handle duplicate data within batches.
Related docs
- Selecting an insert strategy
Distributed table configuration {#distributed-table-configuration}
By default, distributed tables use single-threaded inserts. Enable `insert_distributed_sync` for parallel processing and immediate data sending to shards.
Monitor temporary data accumulation when using distributed tables.
Performance monitoring thresholds {#performance-monitoring-thresholds}
Community-recommended monitoring thresholds:
- Parts per partition: preferably less than 100
- Delayed inserts: should stay at zero
- Insert rate: limit to about 1 per second for optimal performance
Related docs
- Custom partitioning key
Quick reference {#quick-reference}
| Issue | Detection | Solution |
|-------|-----------|----------|
| Disk Space | Check `system.parts` total bytes | Monitor usage, plan scaling |
| Too Many Parts | Count parts per table | Batch inserts, enable async_insert |
| Replication Lag | Check `system.replicas` delay | Monitor network, restart replicas |
| Bad Data | Validate partition dates | Implement timestamp validation |
| Stuck Mutations | Check `system.mutations` status | Test on small data first |
Video sources {#video-sources}
10 Lessons from Operating ClickHouse
Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse
sidebar_position: 1
slug: /tips-and-tricks/too-many-parts
sidebar_label: 'Too many parts'
doc_type: 'guide'
keywords: [
'clickhouse too many parts',
'too many parts error',
'clickhouse insert batching',
'part explosion problem',
'clickhouse merge performance',
'batch insert optimization',
'clickhouse async inserts',
'small insert problems',
'clickhouse parts management',
'insert performance optimization',
'clickhouse batching strategy',
'database insert patterns'
]
title: 'Lessons - Too Many Parts Problem'
description: 'Solutions and prevention of Too Many Parts'
The too many parts problem {#the-too-many-parts-problem}
This guide is part of a collection of findings gained from community meetups. For more real-world solutions and insights, you can browse by specific problem.
Need more performance optimization tips? Check out the Performance Optimization community insights guide.
Understanding the problem {#understanding-the-problem}
ClickHouse will throw a "Too many parts" error to prevent severe performance degradation. Small parts cause multiple issues:
- Poor query performance from reading and merging more files during queries
- Increased memory usage, since each part requires metadata in memory
- Reduced compression efficiency, as smaller data blocks compress less effectively
- Higher I/O overhead from more file handles and seek operations
- Slower background merges, since the merge scheduler has more work to do
Related Docs
- MergeTree Engine
- Parts
- Parts System Table
Recognize the problem early {#recognize-parts-problem}
This query monitors table fragmentation by analyzing part counts and sizes across all active tables. It identifies tables with excessive or undersized parts that may need merge optimization. Use this regularly to catch fragmentation issues before they impact query performance.
```sql runnable editable
-- Challenge: Replace with your actual database and table names for production use
-- Experiment: Adjust the part count thresholds (1000, 500, 100) based on your system
SELECT
    database,
    table,
    count() as total_parts,
    sum(rows) as total_rows,
    round(avg(rows), 0) as avg_rows_per_part,
    min(rows) as min_rows_per_part,
    max(rows) as max_rows_per_part,
    round(sum(bytes_on_disk) / 1024 / 1024, 2) as total_size_mb,
    CASE
        WHEN count() > 1000 THEN 'CRITICAL - Too many parts (>1000)'
        WHEN count() > 500 THEN 'WARNING - Many parts (>500)'
        WHEN count() > 100 THEN 'CAUTION - Getting many parts (>100)'
        ELSE 'OK - Reasonable part count'
    END as parts_assessment,
    CASE
        WHEN avg(rows) < 1000 THEN 'POOR - Very small parts'
        WHEN avg(rows) < 10000 THEN 'FAIR - Small parts'
        WHEN avg(rows) < 100000 THEN 'GOOD - Medium parts'
        ELSE 'EXCELLENT - Large parts'
    END as part_size_assessment
FROM system.parts
WHERE active = 1
    AND database NOT IN ('system', 'information_schema')
GROUP BY database, table
ORDER BY total_parts DESC
LIMIT 20;
```
Video Sources {#video-sources}
Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse
- ClickHouse team member explains async inserts and the too many parts problem
Production ClickHouse at Scale
- Real-world batching strategies from observability platforms
sidebar_position: 1
slug: /community-wisdom/performance-optimization
sidebar_label: 'Performance optimization'
doc_type: 'guide'
keywords: [
'performance optimization',
'query performance',
'database tuning',
'slow queries',
'memory optimization',
'cardinality analysis',
'indexing strategies',
'aggregation optimization',
'sampling techniques',
'database performance',
'query analysis',
'performance troubleshooting'
]
title: 'Lessons - performance optimization'
description: 'Real world examples of performance optimization strategies'
Performance optimization: community tested strategies {#performance-optimization}
This guide is part of a collection of findings gained from community meetups. For more real-world solutions and insights, you can browse by specific problem.
Having trouble with Materialized Views? Check out the Materialized Views community insights guide.
If you're experiencing slow queries and want more examples, we also have a Query Optimization guide.
Order by cardinality (lowest to highest) {#cardinality-ordering}
ClickHouse's primary index works best when low-cardinality columns come first, allowing it to skip large chunks of data efficiently. High-cardinality columns later in the key provide fine-grained sorting within those chunks. Start with columns that have few unique values (like status, category, country) and end with columns that have many unique values (like user_id, timestamp, session_id).
Check out more documentation on cardinality and primary indexes:
- Choosing a Primary Key
- Primary indexes
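The effect of this ordering can be sketched with a toy sparse index in Python. This is a simulation of the skipping principle, not of ClickHouse's actual index; `GRANULE` plays the role of `index_granularity`.

```python
# 4,000 rows: 4 statuses (low cardinality) x 1,000 user_ids (high cardinality).
rows = [(status, user) for user in range(1000) for status in ("a", "b", "c", "d")]
GRANULE = 100  # rows per index mark, standing in for index_granularity

def granules_for_status(sorted_rows, wanted="b"):
    # Count how many granules contain at least one matching row -
    # a granule with no match can be skipped entirely by a sparse index.
    return len({i // GRANULE for i, r in enumerate(sorted_rows) if r[0] == wanted})

low_card_first = sorted(rows)                               # ORDER BY (status, user_id)
high_card_first = sorted(rows, key=lambda r: (r[1], r[0]))  # ORDER BY (user_id, status)

print(granules_for_status(low_card_first))   # 10 of 40 - matches are contiguous, 30 skipped
print(granules_for_status(high_card_first))  # 40 of 40 - matches everywhere, nothing skipped
```

With the low-cardinality column first, all rows for one status are contiguous, so a status filter touches a quarter of the granules; with the high-cardinality column first, every granule contains a match and nothing can be skipped.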
Time granularity matters {#time-granularity}
When using timestamps in your ORDER BY clause, consider the cardinality vs precision trade-off. Microsecond-precision timestamps create very high cardinality (nearly one unique value per row), which reduces the effectiveness of ClickHouse's sparse primary index. Rounded timestamps create lower cardinality that enables better index skipping, but you lose precision for time-based queries.
```sql runnable editable
-- Challenge: Try different time functions like toStartOfMinute or toStartOfWeek
-- Experiment: Compare the cardinality differences with your own timestamp data
SELECT
    'Microsecond precision' as granularity,
    uniq(created_at) as unique_values,
    'Creates massive cardinality - bad for sort key' as impact
FROM github.github_events
WHERE created_at >= '2024-01-01'
UNION ALL
SELECT
    'Hour precision',
    uniq(toStartOfHour(created_at)),
    'Much better for sort key - enables skip indexing'
FROM github.github_events
WHERE created_at >= '2024-01-01'
UNION ALL
SELECT
    'Day precision',
    uniq(toStartOfDay(created_at)),
    'Best for reporting queries'
FROM github.github_events
WHERE created_at >= '2024-01-01';
```
Focus on individual queries, not averages {#focus-on-individual-queries-not-averages}
When debugging ClickHouse performance, don't rely on average query times or overall system metrics. Instead, identify why specific queries are slow. A system can have good average performance while individual queries suffer from memory exhaustion, poor filtering, or high cardinality operations.
According to Alexey Milovidov, CTO of ClickHouse:
"The right way is to ask yourself why this particular query was processed in five seconds... I don't care if median and other queries process quickly. I only care about my query"
When a query is slow, don't just look at averages. Ask "Why was THIS specific query slow?" and examine the actual resource usage patterns.
Memory and row scanning {#memory-and-row-scanning}
Sentry is a developer-first error tracking platform processing billions of events daily from 4+ million developers. Their key insight:
"The cardinality of the grouping key that's going to drive memory in this particular situation"
- High cardinality aggregations kill performance through memory exhaustion, not row scanning.
When queries fail, determine if it's a memory problem (too many groups) or scanning problem (too many rows).
A query like `GROUP BY user_id, error_message, url_path` creates a separate memory state for every unique combination of all three values together. With a higher load of users, error types, and URL paths, you could easily generate millions of aggregation states that must be held in memory simultaneously.
For extreme cases, Sentry uses deterministic sampling. A 10% sample reduces memory usage by 90% while maintaining roughly 5% accuracy for most aggregations:
```sql
WHERE cityHash64(user_id) % 10 = 0 -- Always same 10% of users
```
This ensures the same users appear in every query, providing consistent results across time periods. The key insight: `cityHash64()` produces consistent hash values for the same input, so `user_id = 12345` will always hash to the same value, ensuring that user either always appears in your 10% sample or never does - no flickering between queries.
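The same idea can be sketched outside ClickHouse. Here `zlib.crc32` stands in for `cityHash64` (an assumption: any stable hash gives each user a fixed bucket, which is all the technique needs).

```python
import zlib

# Deterministic sampling sketch: a user is either always in the 10% sample
# or never in it, because the hash of a given user_id never changes.
def in_sample(user_id: int) -> bool:
    return zlib.crc32(str(user_id).encode()) % 10 == 0

# Same user, same answer, on every query - no flicker:
assert in_sample(12345) == in_sample(12345)

# And roughly 10% of users land in the sample:
sampled = sum(in_sample(u) for u in range(100_000))
print(f"{sampled / 100_000:.1%} of users sampled")
```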
Sentry's bit mask optimization {#bit-mask-optimization}
When aggregating by high-cardinality columns (like URLs), each unique value creates a separate aggregation state in memory, leading to memory exhaustion. Sentry's solution: instead of grouping by the actual URL strings, group by boolean expressions that collapse into bit masks.
Here is a query that you can try on your own tables if this situation applies to you:
```sql
-- Memory-Efficient Aggregation Pattern: Each condition = one integer per group
-- Key insight: sumIf() creates bounded memory regardless of data volume
-- Memory per group: N integers (N * 8 bytes) where N = number of conditions
SELECT
your_grouping_column,
-- Each sumIf creates exactly one integer counter per group
-- Memory stays constant regardless of how many rows match each condition
sumIf(1, your_condition_1) as condition_1_count,
sumIf(1, your_condition_2) as condition_2_count,
sumIf(1, your_text_column LIKE '%pattern%') as pattern_matches,
sumIf(1, your_numeric_column > threshold_value) as above_threshold,
-- Complex multi-condition aggregations still use constant memory
sumIf(1, your_condition_1 AND your_text_column LIKE '%pattern%') as complex_condition_count,
-- Standard aggregations for context
count() as total_rows,
avg(your_numeric_column) as average_value,
max(your_timestamp_column) as latest_timestamp
FROM your_schema.your_table
WHERE your_timestamp_column >= 'start_date'
AND your_timestamp_column < 'end_date'
GROUP BY your_grouping_column
HAVING condition_1_count > minimum_threshold
OR condition_2_count > another_threshold
ORDER BY (condition_1_count + condition_2_count + pattern_matches) DESC
LIMIT 20
```
Instead of storing every unique string in memory, you're storing the answer to questions about those strings as integers. The aggregation state becomes bounded and tiny, regardless of data diversity.
From Sentry's engineering team: "These heavy queries are more than 10x faster and our memory usage is 100x lower (and, more importantly, bounded). Our largest customers no longer see errors when searching for replays and we can now support customers of arbitrary size without running out of memory."
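The bounded-memory claim can be illustrated with a small Python simulation - a sketch of the principle, not of ClickHouse's aggregation internals.

```python
from collections import defaultdict

# Naive approach: per group, remember every unique URL -> memory grows with
# data diversity. sumIf-style approach: per group, keep one integer per
# predefined condition -> memory is bounded at N counters per group.
events = [("grp", f"/checkout/{i}") for i in range(10_000)] + [("grp", "/login")] * 5

naive = defaultdict(set)
sumif = defaultdict(lambda: [0, 0])  # [checkout_count, login_count]

for group, url in events:
    naive[group].add(url)                           # unbounded: one entry per unique URL
    sumif[group][0] += url.startswith("/checkout")  # bounded: just a counter
    sumif[group][1] += url == "/login"

print(len(naive["grp"]))  # 10,001 distinct strings held in memory
print(sumif["grp"])       # two integers, regardless of URL diversity
```

However many distinct URLs arrive, the `sumif` state per group stays at two integers, which is exactly why the pattern's memory is bounded.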
Video sources {#video-sources}
Lost in the Haystack - Optimizing High Cardinality Aggregations
- Sentry's production lessons on memory optimization
ClickHouse Performance Analysis
- Alexey Milovidov on debugging methodology
ClickHouse Meetup: Query Optimization Techniques
- Community optimization strategies
Read Next:
- Query Optimization Guide
- Materialized Views Community Insights
sidebar_position: 1
slug: /tips-and-tricks/materialized-views
sidebar_label: 'Materialized views'
doc_type: 'guide'
keywords: [
'clickhouse materialized views',
'materialized view optimization',
'materialized view storage issues',
'materialized view best practices',
'database aggregation patterns',
'materialized view anti-patterns',
'storage explosion problems',
'materialized view performance',
'database view optimization',
'aggregation strategy',
'materialized view troubleshooting',
'view storage overhead'
]
title: 'Lessons - materialized views'
description: 'Real world examples of materialized views, problems and solutions'
Materialized views: how they can become a double edged sword {#materialized-views-the-double-edged-sword}
This guide is part of a collection of findings gained from community meetups. For more real-world solutions and insights, you can browse by specific problem.
Too many parts bogging your database down? Check out the Too Many Parts community insights guide.
Learn more about Materialized Views.
The 10x storage anti-pattern {#storage-antipattern}
Real production problem:
"We had a materialized view. The raw log table was around 20 gig but the view from that log table exploded to 190 gig, so almost 10x the size of the raw table. This happened because we were creating one row per attribute and each log can have 10 attributes."
Rule: If your `GROUP BY` creates more rows than it eliminates, you're building an expensive index, not a materialized view.
Production materialized view health validation {#mv-health-validation}
This query helps you predict whether a materialized view will compress or explode your data before you create it. Run it against your actual table and columns to avoid the "190GB explosion" scenario.
What it shows:
- Low aggregation ratio (<10%) = Good MV, significant compression
- High aggregation ratio (>70%) = Bad MV, storage explosion risk
- Storage multiplier = How much bigger/smaller your MV will be
```sql
-- Replace with your actual table and columns
SELECT
count() as total_rows,
uniq(your_group_by_columns) as unique_combinations,
round(uniq(your_group_by_columns) / count() * 100, 2) as aggregation_ratio
FROM your_table
WHERE your_filter_conditions;
-- If aggregation_ratio > 70%, reconsider your MV design
-- If aggregation_ratio < 10%, you'll get good compression
```
When materialized views become a problem {#mv-problems}
Warning signs to monitor:
- Insert latency increases (queries that took 10ms now take 100ms+)
- "Too many parts" errors appearing more frequently
- CPU spikes during insert operations
- Insert timeouts that didn't happen before
You can compare insert performance before and after adding MVs using `system.query_log` to track query duration trends.
Video sources {#video-sources}
ClickHouse at CommonRoom - Kirill Sapchuk
- Source of the "over enthusiastic about materialized views" and "20GB → 190GB explosion" case study
0.042950551956892014,
-0.11835603415966034,
-0.09452693164348602,
0.009738889522850513,
0.027451425790786743,
-0.006543392315506935,
-0.05492054671049118,
0.015310370363295078,
0.06687664240598679,
0.029045283794403076,
0.01585976593196392,
0.034756824374198914,
0.04753350093960762,
-0.016... |
slug: /whats-new/security-changelog
sidebar_position: 20
sidebar_label: 'Security changelog'
title: 'Security changelog'
description: 'Security changelog detailing security related updates and changes'
doc_type: 'changelog'
keywords: ['security', 'CVE', 'vulnerabilities', 'security fixes', 'patches']
Security changelog
Fixed in ClickHouse v25.1.5.5, 2025-01-05 {#fixed-in-clickhouse-release-2025-01-05}
CVE-2025-1385 {#CVE-2025-1385}
When the library bridge feature is enabled, the clickhouse-library-bridge exposes an HTTP API on localhost. This allows clickhouse-server to dynamically load a library from a specified path and execute it in an isolated process. Combined with the ClickHouse table engine functionality that permits file uploads to specific directories, a misconfigured server can be exploited by an attacker with privileges to access both table engines, allowing them to execute arbitrary code on the ClickHouse server.
Fix has been pushed to the following open-source versions: v24.3.18.6, v24.8.14.27, v24.11.5.34, v24.12.5.65, v25.1.5.5
ClickHouse Cloud is unaffected by this vulnerability.
Credits: Arseniy Dugin
Fixed in ClickHouse v24.5, 2024-08-01 {#fixed-in-clickhouse-release-2024-08-01}
CVE-2024-6873 {#CVE-2024-6873}
It is possible to redirect the execution flow of the ClickHouse server process from an unauthenticated vector by sending a specially crafted request to the ClickHouse server native interface. This redirection is limited to what is available within a 256-byte range of memory at the time of execution. This vulnerability was identified through our Bugbounty program and no known Proof of Concept Remote Code Execution (RCE) code has been produced or exploited.
Fix has been pushed to the following open-source versions: v23.8.15.35-lts, v24.3.4.147-lts, v24.4.2.141-stable, v24.5.1.1763, v24.6.1.4423-stable
ClickHouse Cloud uses different versioning and a fix for this vulnerability was applied to all instances running v24.2 onward.
Credits: malacupa (Independent researcher)
Fixed in ClickHouse v24.1, 2024-01-30 {#fixed-in-clickhouse-release-24-01-30}
CVE-2024-22412 {#CVE-2024-22412}
When toggling between user roles while using ClickHouse with query cache enabled, there is a risk of obtaining inaccurate data. ClickHouse advises users with vulnerable versions of ClickHouse not to use the query cache when their application dynamically switches between various roles.
Fix has been pushed to the following open-source versions: v24.1.1.2048, v24.1.8.22-stable, v23.12.6.19-stable, v23.8.12.13-lts, v23.3.22.3-lts
ClickHouse Cloud uses different versioning and a fix for this vulnerability was applied at v24.0.2.54535.
Credits: Evan Johnson and Alan Braithwaite from Runreveal team - More information can be found on their blog post.
Fixed in ClickHouse v23.10.5.20, 2023-11-26 {#fixed-in-clickhouse-release-23-10-5-20-2023-11-26}
CVE-2023-47118 {#CVE-2023-47118}
A heap buffer overflow vulnerability affecting the native interface running by default on port 9000/tcp. An attacker, by triggering a bug in the T64 compression codec, can cause the ClickHouse server process to crash. This vulnerability can be exploited without the need to authenticate.
Fix has been pushed to the following open-source versions: v23.10.2.13, v23.9.4.11, v23.8.6.16, v23.3.16.7
ClickHouse Cloud uses different versioning and a fix for this vulnerability was applied at v23.9.2.47475.
Credits: malacupa (Independent researcher)
CVE-2023-48298 {#CVE-2023-48298}
An integer underflow vulnerability in the FPC compressions codec. An attacker can use it to cause the ClickHouse server process to crash. This vulnerability can be exploited without the need to authenticate.
Fix has been pushed to the following open-source versions: v23.10.4.25, v23.9.5.29, v23.8.7.24, v23.3.17.13.
ClickHouse Cloud uses different versioning and a fix for this vulnerability was applied at v23.9.2.47475.
Credits: malacupa (Independent researcher)
CVE-2023-48704 {#CVE-2023-48704}
A heap buffer overflow vulnerability affecting the native interface running by default on port 9000/tcp. An attacker, by triggering a bug in the Gorilla codec, can cause the ClickHouse server process to crash. This vulnerability can be exploited without the need to authenticate.
Fix has been pushed to the following open-source versions: v23.10.5.20, v23.9.6.20, v23.8.8.20, v23.3.18.15.
ClickHouse Cloud uses different versioning and a fix for this vulnerability was applied at v23.9.2.47551.
Credits: malacupa (Independent researcher)
Fixed in ClickHouse 22.9.1.2603, 2022-09-22 {#fixed-in-clickhouse-release-22-9-1-2603-2022-9-22}
CVE-2022-44011 {#CVE-2022-44011}
A heap buffer overflow issue was discovered in ClickHouse server. A malicious user with the ability to load data into ClickHouse server could crash the ClickHouse server by inserting a malformed CapnProto object.
Fix has been pushed to version 22.9.1.2603, 22.8.2.11, 22.7.4.16, 22.6.6.16, 22.3.12.19
Credits: Kiojj (independent researcher)
CVE-2022-44010 {#CVE-2022-44010}
A heap buffer overflow issue was discovered in ClickHouse server. An attacker could send a specially crafted HTTP request to the HTTP Endpoint (listening on port 8123 by default), causing a heap-based buffer overflow that crashes the ClickHouse server process. This attack does not require authentication.
Fix has been pushed to version 22.9.1.2603, 22.8.2.11, 22.7.4.16, 22.6.6.16, 22.3.12.19
Credits: Kiojj (independent researcher)
Fixed in ClickHouse 21.10.2.15, 2021-10-18 {#fixed-in-clickhouse-release-21-10-2-215-2021-10-18}
CVE-2021-43304 {#cve-2021-43304} | {"source_file": "security-changelog.md"} | [
-0.0955076813697815, 0.0663544312119484, ... |
995b1255-5233-4cf8-96fb-8af340327aca | Credits: Kiojj (independent researcher)
Fixed in ClickHouse 21.10.2.15, 2021-10-18 {#fixed-in-clickhouse-release-21-10-2-215-2021-10-18}
CVE-2021-43304 {#cve-2021-43304}
Heap buffer overflow in ClickHouse's LZ4 compression codec when parsing a malicious query. There is no verification that the copy operations in the LZ4::decompressImpl loop, and especially the arbitrary copy operation wildCopy<copy_amount>(op, ip, copy_end), do not exceed the destination buffer's limits.
Credits: JFrog Security Research Team
CVE-2021-43305 {#cve-2021-43305}
Heap buffer overflow in ClickHouse's LZ4 compression codec when parsing a malicious query. There is no verification that the copy operations in the LZ4::decompressImpl loop, and especially the arbitrary copy operation wildCopy<copy_amount>(op, ip, copy_end), do not exceed the destination buffer's limits. This issue is very similar to CVE-2021-43304, but the vulnerable copy operation is in a different wildCopy call.
Credits: JFrog Security Research Team
CVE-2021-42387 {#cve-2021-42387}
Heap out-of-bounds read in ClickHouse's LZ4 compression codec when parsing a malicious query. As part of the LZ4::decompressImpl() loop, a 16-bit unsigned user-supplied value ('offset') is read from the compressed data. The offset is later used in the length of a copy operation, without checking the upper bounds of the source of the copy operation.
Credits: JFrog Security Research Team
CVE-2021-42388 {#cve-2021-42388}
Heap out-of-bounds read in ClickHouse's LZ4 compression codec when parsing a malicious query. As part of the LZ4::decompressImpl() loop, a 16-bit unsigned user-supplied value ('offset') is read from the compressed data. The offset is later used in the length of a copy operation, without checking the lower bounds of the source of the copy operation.
Credits: JFrog Security Research Team
CVE-2021-42389 {#cve-2021-42389}
Divide-by-zero in ClickHouse's Delta compression codec when parsing a malicious query. The first byte of the compressed buffer is used in a modulo operation without being checked for 0.
Credits: JFrog Security Research Team
CVE-2021-42390 {#cve-2021-42390}
Divide-by-zero in ClickHouse's DeltaDouble compression codec when parsing a malicious query. The first byte of the compressed buffer is used in a modulo operation without being checked for 0.
Credits: JFrog Security Research Team
CVE-2021-42391 {#cve-2021-42391}
Divide-by-zero in ClickHouse's Gorilla compression codec when parsing a malicious query. The first byte of the compressed buffer is used in a modulo operation without being checked for 0.
Credits: JFrog Security Research Team
Fixed in ClickHouse 21.4.3.21, 2021-04-12 {#fixed-in-clickhouse-release-21-4-3-21-2021-04-12}
CVE-2021-25263 {#cve-2021-25263}
An attacker that has CREATE DICTIONARY privilege can read arbitrary files outside the permitted directory. | {"source_file": "security-changelog.md"} | [
-0.07475423067808151, 0.013734914362430573, ... |
fab60510-10db-46dc-97c0-ab7b6105dd02 | CVE-2021-25263 {#cve-2021-25263}
An attacker that has CREATE DICTIONARY privilege can read arbitrary files outside the permitted directory.
Fix has been pushed to versions 20.8.18.32-lts, 21.1.9.41-stable, 21.2.9.41-stable, 21.3.6.55-lts, 21.4.3.21-stable and later.
Credits: Vyacheslav Egoshin
Fixed in ClickHouse Release 19.14.3.3, 2019-09-10 {#fixed-in-clickhouse-release-19-14-3-3-2019-09-10}
CVE-2019-15024 {#cve-2019-15024}
An attacker that has write access to ZooKeeper and who can run a custom server available from the network where ClickHouse runs, can create a custom-built malicious server that will act as a ClickHouse replica and register it in ZooKeeper. When another replica fetches a data part from the malicious replica, it can force clickhouse-server to write to an arbitrary path on the filesystem.
Credits: Eldar Zaitov of Yandex Information Security Team
CVE-2019-16535 {#cve-2019-16535}
An OOB read, OOB write, and integer underflow in decompression algorithms can be used to achieve RCE or DoS via the native protocol.
Credits: Eldar Zaitov of Yandex Information Security Team
CVE-2019-16536 {#cve-2019-16536}
Stack overflow leading to DoS can be triggered by a malicious authenticated client.
Credits: Eldar Zaitov of Yandex Information Security Team
Fixed in ClickHouse Release 19.13.6.1, 2019-09-20 {#fixed-in-clickhouse-release-19-13-6-1-2019-09-20}
CVE-2019-18657 {#cve-2019-18657}
The url table function had a vulnerability that allowed an attacker to inject arbitrary HTTP headers into the request.
Credits: Nikita Tikhomirov
Fixed in ClickHouse Release 18.12.13, 2018-09-10 {#fixed-in-clickhouse-release-18-12-13-2018-09-10}
CVE-2018-14672 {#cve-2018-14672}
Functions for loading CatBoost models allowed path traversal and reading arbitrary files through error messages.
Credits: Andrey Krasichkov of Yandex Information Security Team
Fixed in ClickHouse Release 18.10.3, 2018-08-13 {#fixed-in-clickhouse-release-18-10-3-2018-08-13}
CVE-2018-14671 {#cve-2018-14671}
unixODBC allowed loading arbitrary shared objects from the file system which led to a Remote Code Execution vulnerability.
Credits: Andrey Krasichkov and Evgeny Sidorov of Yandex Information Security Team
Fixed in ClickHouse Release 1.1.54388, 2018-06-28 {#fixed-in-clickhouse-release-1-1-54388-2018-06-28}
CVE-2018-14668 {#cve-2018-14668}
"remote" table function allowed arbitrary symbols in "user", "password" and "default_database" fields which led to Cross Protocol Request Forgery Attacks.
Credits: Andrey Krasichkov of Yandex Information Security Team
Fixed in ClickHouse Release 1.1.54390, 2018-07-06 {#fixed-in-clickhouse-release-1-1-54390-2018-07-06}
CVE-2018-14669 {#cve-2018-14669}
The ClickHouse MySQL client had "LOAD DATA LOCAL INFILE" functionality enabled, which allowed a malicious MySQL server to read arbitrary files from the connected ClickHouse server.
Credits: Andrey Krasichkov and Evgeny Sidorov of Yandex Information Security Team | {"source_file": "security-changelog.md"} | [
-0.13515174388885498, 0.030522922053933144, ... |
94808bd4-02e8-4191-b2d9-01bdb300d3a7 | Credits: Andrey Krasichkov and Evgeny Sidorov of Yandex Information Security Team
Fixed in ClickHouse Release 1.1.54131, 2017-01-10 {#fixed-in-clickhouse-release-1-1-54131-2017-01-10}
CVE-2018-14670 {#cve-2018-14670}
Incorrect configuration in deb package could lead to the unauthorized use of the database.
Credits: the UK's National Cyber Security Centre (NCSC) | {"source_file": "security-changelog.md"} | [
-0.05895031616091728, -0.08801126480102539, ... |
da163578-cb93-474c-8611-560c6468b333 | title: 'Roadmap'
slug: /whats-new/roadmap
sidebar_position: 50
description: 'Present and past ClickHouse road maps'
doc_type: 'landing-page'
keywords: ['roadmap', 'future features', 'development plans', 'upcoming releases', 'product direction']
Current roadmap {#current-roadmap}
The current roadmap is published for open discussion:
2025
Previous roadmaps {#previous-roadmaps}
2024
2023
2022
2021
2020
2019
2018 | {"source_file": "roadmap.md"} | [
0.028785238042473793, 0.006868173833936453, ... |
4d56c9fd-7526-466f-85c8-53cf527b769f | sidebar_position: 1
sidebar_label: 'Beta Features and Experimental'
title: 'Beta and Experimental Features'
description: 'ClickHouse has beta and experimental features. This documentation page discusses definition.'
slug: /beta-and-experimental-features
doc_type: 'reference'
Because ClickHouse is open-source, it receives many contributions not only from ClickHouse employees but also from the community. These contributions are often developed at different speeds; certain features may require a lengthy prototyping phase or more time for sufficient community feedback and iteration to be considered generally available (GA).
Due to the uncertainty of when features are classified as generally available, we delineate features into two categories: Beta and Experimental.
Beta features are officially supported by the ClickHouse team. Experimental features are early prototypes driven by either the ClickHouse team or the community and are not officially supported.
The sections below explicitly describe the properties of Beta and Experimental features:
Beta features {#beta-features}
Under active development to make them generally available (GA)
Main known issues can be tracked on GitHub
Functionality may change in the future
Possibly enabled in ClickHouse Cloud
The ClickHouse team supports beta features
Listed below are the features considered Beta in ClickHouse Cloud, which are available for use in your ClickHouse Cloud services.
Note: please be sure to use a current value of the ClickHouse compatibility setting when using a recently introduced feature.
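As an illustration, the compatibility setting can be adjusted per session; the release value shown here is purely illustrative, not a recommendation:

```sql
-- Make query semantics follow the defaults of a given ClickHouse release.
SET compatibility = '24.8';
```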
Experimental features {#experimental-features}
May never become GA
May be removed
Can introduce breaking changes
Functionality may change in the future
Need to be deliberately enabled
The ClickHouse team does not support experimental features
May lack important functionality and documentation
Cannot be enabled in the cloud
Please note: no additional experimental features are allowed to be enabled in ClickHouse Cloud other than those listed above as Beta.
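Experimental features are gated behind settings that must be switched on deliberately. A minimal sketch, using one of the flags from the settings tables in this document:

```sql
-- Experimental features are disabled by default and must be enabled
-- explicitly per session (or in a user profile) before use.
SET allow_experimental_full_text_index = 1;
```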
Beta settings {#beta-settings} | {"source_file": "beta-and-experimental-features.md"} | [
-0.03160494565963745, -0.07037194073200226, ... |
14ea5eab-93a3-4659-a201-e15ed604c331 | Please note: no additional experimental features are allowed to be enabled in ClickHouse Cloud other than those listed above as Beta.
Beta settings {#beta-settings}
| Name | Default |
|------|--------|
| `geotoh3_argument_order` | `lat_lon` |
| `enable_lightweight_update` | `1` |
| `allow_experimental_correlated_subqueries` | `1` |
| `allow_experimental_parallel_reading_from_replicas` | `0` |
| `parallel_replicas_mode` | `read_tasks` |
| `parallel_replicas_count` | `0` |
| `parallel_replica_offset` | `0` |
| `parallel_replicas_custom_key` | |
| [parallel_replicas_custom_key_range_lower](/operations/settings/settings#parallel_replicas_custom_key_range_lower) | `0` |
| [parallel_replicas_custom_key_range_upper](/operations/settings/settings#parallel_replicas_custom_key_range_upper) | `0` |
| [cluster_for_parallel_replicas](/operations/settings/settings#cluster_for_parallel_replicas) | |
| `parallel_replicas_allow_in_with_subquery` | `1` |
| `parallel_replicas_for_non_replicated_merge_tree` | `0` |
| `parallel_replicas_min_number_of_rows_per_replica` | `0` |
| `parallel_replicas_prefer_local_join` | `1` |
| `parallel_replicas_mark_segment_size` | `0` |
| `parallel_replicas_local_plan` | `1` |
| `parallel_replicas_index_analysis_only_on_coordinator` | `1` |
| `parallel_replicas_support_projection` | `1` |
| `parallel_replicas_only_with_analyzer` | `1` |
| `parallel_replicas_insert_select_local_pipeline` | `1` |
| `parallel_replicas_connect_timeout_ms` | `300` |
| `allow_experimental_database_iceberg` | `0` |
| `allow_experimental_database_unity_catalog` | `0` |
| `allow_experimental_database_glue_catalog` | `0` |
| `session_timezone` | |
| [low_priority_query_wait_time_ms](/operations/settings/settings#low_priority_query_wait_time_ms) | `1000` |
| [allow_experimental_delta_kernel_rs](/operations/settings/settings#allow_experimental_delta_kernel_rs) | `1` |
Experimental settings {#experimental-settings} | {"source_file": "beta-and-experimental-features.md"} | [
-0.008685342967510223, -0.11128418147563934, ... |
7142cd18-34f4-4c4a-a4aa-5e78e15ee822 | | Name | Default |
|------|--------|
| `allow_experimental_replacing_merge_with_cleanup` | `0` |
| `allow_experimental_reverse_key` | `0` |
| `allow_remote_fs_zero_copy_replication` | `0` |
| `enable_replacing_merge_with_cleanup_for_min_age_to_force_merge` | `0` |
| `force_read_through_cache_for_merges` | `0` |
| `merge_selector_algorithm` | `Simple` |
| `notify_newest_block_number` | `0` |
| `part_moves_between_shards_delay_seconds` | `30` |
| `part_moves_between_shards_enable` | `0` |
| `remote_fs_zero_copy_path_compatible_mode` | `0` |
| `remote_fs_zero_copy_zookeeper_path` | `/clickhouse/zero_copy` |
| `remove_rolled_back_parts_immediately` | `1` |
| `shared_merge_tree_activate_coordinated_merges_tasks` | `0` |
| `shared_merge_tree_enable_coordinated_merges` | `0` |
| `shared_merge_tree_enable_keeper_parts_extra_data` | `0` |
| `shared_merge_tree_merge_coordinator_election_check_period_ms` | `30000` |
| `shared_merge_tree_merge_coordinator_factor` | `1.1` |
| `shared_merge_tree_merge_coordinator_fetch_fresh_metadata_period_ms` | `10000` |
| `shared_merge_tree_merge_coordinator_max_merge_request_size` | `20` |
| `shared_merge_tree_merge_coordinator_max_period_ms` | `10000` |
| `shared_merge_tree_merge_coordinator_merges_prepare_count` | `100` |
| `shared_merge_tree_merge_coordinator_min_period_ms` | `1` |
| `shared_merge_tree_merge_worker_fast_timeout_ms` | `100` |
| `shared_merge_tree_merge_worker_regular_timeout_ms` | `10000` |
| `shared_merge_tree_virtual_parts_discovery_batch` | `1` |
| `allow_experimental_time_time64_type` | `0` |
| `allow_experimental_kafka_offsets_storage_in_keeper` | `0` |
| `allow_experimental_delta_lake_writes` | `0` |
| `allow_experimental_materialized_postgresql_table` | `0` |
| `allow_experimental_funnel_functions` | `0` |
| `allow_experimental_nlp_functions` | `0` |
| `allow_experimental_hash_functions` | `0` |
| `allow_experimental_object_type` | `0` |
| `allow_experimental_time_series_table` | `0` |
| `allow_experimental_codecs` | `0` |
| `throw_on_unsupported_query_inside_transaction` | `1` |
| `wait_changes_become_visible_after_commit_mode` | `wait_unknown` |
| `implicit_transaction` | `0` |
| `grace_hash_join_initial_buckets` | `1` |
| `grace_hash_join_max_buckets` | `1024` |
| `join_to_sort_minimum_perkey_rows` | `40` |
| `join_to_sort_maximum_table_rows` | `10000` |
| `allow_experimental_join_right_table_sorting` | `0` |
| `allow_statistics_optimize` | `0` |
| `allow_experimental_statistics` | `0` |
| `use_statistics_cache` | `0` |
| `allow_experimental_full_text_index` | `0` |
| `allow_experimental_window_view` | `0` |
| `window_view_clean_interval` | `60` |
| `window_view_heartbeat_interval` | `15` |
| `wait_for_window_view_fire_signal_timeout` | `10` |
| `stop_refreshable_materialized_views_on_startup` | `0` |
| `allow_experimental_database_materialized_postgresql` | `0` |
| `allow_experimental_qbit_type` | `0` |
| `allow_experimental_query_deduplication` | `0` |
| | {"source_file": "beta-and-experimental-features.md"} | [
0.023727264255285263, -0.027213020250201225, ... |
c3999fc8-d294-4bbf-826d-5347fe6b2505 | | `0` |
| `allow_experimental_database_materialized_postgresql` | `0` |
| `allow_experimental_qbit_type` | `0` |
| `allow_experimental_query_deduplication` | `0` |
| `allow_experimental_database_hms_catalog` | `0` |
| `allow_experimental_kusto_dialect` | `0` |
| `allow_experimental_prql_dialect` | `0` |
| `enable_adaptive_memory_spill_scheduler` | `0` |
| `allow_experimental_insert_into_iceberg` | `0` |
| `allow_experimental_iceberg_compaction` | `0` |
| `write_full_path_in_iceberg_metadata` | `0` |
| `iceberg_metadata_compression_method` | |
| [make_distributed_plan](/operations/settings/settings#make_distributed_plan) | `0` |
| [distributed_plan_execute_locally](/operations/settings/settings#distributed_plan_execute_locally) | `0` |
| [distributed_plan_default_shuffle_join_bucket_count](/operations/settings/settings#distributed_plan_default_shuffle_join_bucket_count) | `8` |
| [distributed_plan_default_reader_bucket_count](/operations/settings/settings#distributed_plan_default_reader_bucket_count) | `8` |
| [distributed_plan_force_exchange_kind](/operations/settings/settings#distributed_plan_force_exchange_kind) | |
| `distributed_plan_max_rows_to_broadcast` | `20000` |
| `allow_experimental_ytsaurus_table_engine` | `0` |
| `allow_experimental_ytsaurus_table_function` | `0` |
| `allow_experimental_ytsaurus_dictionary_source` | `0` |
| `distributed_plan_force_shuffle_aggregation` | `0` |
| `enable_join_runtime_filters` | `0` |
| `join_runtime_filter_exact_values_limit` | `10000` |
| `join_runtime_bloom_filter_bytes` | `524288` |
| `join_runtime_bloom_filter_hash_functions` | `3` |
| `rewrite_in_to_join` | `0` |
| `allow_experimental_time_series_aggregate_functions` | `0` |
| `promql_database` | |
| [promql_table](/operations/settings/settings#promql_table) | |
| `promql_evaluation_time` | `auto` |
| `allow_experimental_alias_table_engine` | `0` | | {"source_file": "beta-and-experimental-features.md"} | [
0.020686348900198936, -0.03134728968143463, ... |
a4ac9170-768c-4d83-9c4a-8590c8a09e9c | title: 'FAQ'
slug: /about-us/faq
description: 'Landing page'
doc_type: 'landing-page'
keywords: ['ClickHouse FAQ', 'frequently asked questions', 'common questions', 'help documentation', 'troubleshooting']
| FAQ |
|-----|
| What is a columnar database? |
| What does "ClickHouse" mean? |
| Integrating ClickHouse with other systems |
| How to import JSON into ClickHouse? |
| What if I have a problem with encodings when using Oracle via ODBC? |
| Is it possible to delete old records from a ClickHouse table? |
| Question about operating ClickHouse servers and clusters |
| Is it possible to deploy ClickHouse with separate storage and compute? |
| Questions about ClickHouse use cases |
| Can I use ClickHouse as a key-value storage? |
| Can I use ClickHouse as a time-series database? | | {"source_file": "about-faq-index.md"} | [
-0.03245869651436806, -0.03775516152381897, ... |
c36006d5-bea3-4f75-b3b3-dd0b0f70edd5 | slug: /about-us/history
sidebar_label: 'ClickHouse history'
sidebar_position: 40
description: 'History of ClickHouse development'
keywords: ['history','development','Metrica']
title: 'ClickHouse History'
doc_type: 'reference'
ClickHouse history {#clickhouse-history}
ClickHouse was initially developed to power Yandex.Metrica, the second largest web analytics platform in the world, and continues to be its core component. With more than 13 trillion records in the database and more than 20 billion events daily, ClickHouse allows generating custom reports on the fly directly from non-aggregated data. This article briefly covers the goals of ClickHouse in the early stages of its development.
Yandex.Metrica builds customized reports on the fly based on hits and sessions, with arbitrary segments defined by the user. Doing so often requires building complex aggregates, such as the number of unique users, with new data for building reports arriving in real-time.
As of April 2014, Yandex.Metrica was tracking about 12 billion events (page views and clicks) daily. All these events needed to be stored, in order to build custom reports. A single query may have required scanning millions of rows within a few hundred milliseconds, or hundreds of millions of rows in just a few seconds.
Usage in Yandex.Metrica and other Yandex services {#usage-in-yandex-metrica-and-other-yandex-services}
ClickHouse serves multiple purposes in Yandex.Metrica.
Its main task is to build reports in online mode using non-aggregated data. It uses a cluster of 374 servers, which store over 20.3 trillion rows in the database. The volume of compressed data is about 2 PB, without accounting for duplicates and replicas. The volume of uncompressed data (in TSV format) would be approximately 17 PB.
ClickHouse also plays a key role in the following processes:
Storing data for Session Replay from Yandex.Metrica.
Processing intermediate data.
Building global reports with Analytics.
Running queries for debugging the Yandex.Metrica engine.
Analyzing logs from the API and the user interface.
Nowadays, there are multiple dozen ClickHouse installations in other Yandex services and departments: search verticals, e-commerce, advertisement, business analytics, mobile development, personal services, and others.
Aggregated and non-aggregated data {#aggregated-and-non-aggregated-data}
There is a widespread opinion that to calculate statistics effectively, you must aggregate data since this reduces the volume of data.
However, data aggregation comes with many limitations:
You must have a pre-defined list of required reports.
The user can't make custom reports.
When aggregating over a large number of distinct keys, the data volume is barely reduced, so aggregation is useless.
For a large number of reports, there are too many aggregation variations (combinatorial explosion). | {"source_file": "history.md"} | [
-0.05821554735302925, -0.06573490053415298, ... |
2754be1b-7a3d-4afe-9077-0b48cc7dad1c | For a large number of reports, there are too many aggregation variations (combinatorial explosion).
When aggregating keys with high cardinality (such as URLs), the volume of data is not reduced by much (less than twofold).
For this reason, the volume of data with aggregation might grow instead of shrinking.
Users do not view all the reports we generate for them. A large portion of those calculations are useless.
The logical integrity of the data may be violated for various aggregations.
If we do not aggregate anything and work with non-aggregated data, this might reduce the volume of calculations.
However, with aggregation, a significant part of the work is taken offline and completed relatively calmly. In contrast, online calculations require calculating as fast as possible, since the user is waiting for the result.
Yandex.Metrica has a specialized system for aggregating data called Metrage, which was used for the majority of reports.
Starting in 2009, Yandex.Metrica also used a specialized OLAP database for non-aggregated data called OLAPServer, which was previously used for the report builder.
OLAPServer worked well for non-aggregated data, but it had many restrictions that did not allow it to be used for all reports as desired. These included a lack of support for data types (numbers only), and the inability to incrementally update data in real-time (it could only be done by rewriting data daily). OLAPServer is not a DBMS, but a specialized DB.
The initial goal for ClickHouse was to remove the limitations of OLAPServer and solve the problem of working with non-aggregated data for all reports, but over the years, it has grown into a general-purpose database management system suitable for a wide range of analytical tasks. | {"source_file": "history.md"} | [
-0.050706781446933746, 0.004162949044257402, ... |
3fd090be-10bc-4efd-89de-d9e3b772dd28 | slug: /about-us/distinctive-features
sidebar_label: 'Why is ClickHouse unique?'
sidebar_position: 50
description: 'Understand what makes ClickHouse stand apart from other database management systems'
title: 'Distinctive Features of ClickHouse'
keywords: ['compression', 'secondary-indexes','column-oriented']
doc_type: 'guide'
Distinctive features of ClickHouse
True column-oriented database management system {#true-column-oriented-database-management-system}
In a real column-oriented DBMS, no extra data is stored with the values. This means that constant-length values must be supported to avoid storing their length "number" next to the values. For example, a billion UInt8-type values should consume around 1 GB uncompressed; otherwise, this strongly affects CPU use. It is essential to store data compactly (without any "garbage") even when uncompressed, since the speed of decompression (CPU usage) depends mainly on the volume of uncompressed data.
This is in contrast to systems that can store values of different columns separately, but that cannot effectively process analytical queries due to their optimization for other scenarios, such as HBase, Bigtable, Cassandra, and Hypertable. You would get throughput of around a hundred thousand rows per second in these systems, but not hundreds of millions of rows per second.
Finally, ClickHouse is a database management system, not a single database. It allows creating tables and databases at runtime, loading data, and running queries without reconfiguring and restarting the server.
Data compression {#data-compression}
Some column-oriented DBMSs do not use data compression. However, data compression plays a key role in achieving excellent performance.
In addition to efficient general-purpose compression codecs with different trade-offs between disk space and CPU consumption, ClickHouse provides specialized codecs for specific kinds of data, which allows ClickHouse to compete with and outperform more niche databases, like time-series ones.
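As an illustrative sketch (table and column names are hypothetical), specialized codecs are declared per column:

```sql
-- DoubleDelta suits slowly increasing timestamps; Gorilla suits
-- slowly changing floating-point gauge values. A general-purpose
-- codec (LZ4, ZSTD) can be chained after the specialized one.
CREATE TABLE sensor_readings
(
    ts    DateTime CODEC(DoubleDelta, LZ4),
    value Float64  CODEC(Gorilla, ZSTD)
)
ENGINE = MergeTree
ORDER BY ts;
```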
Disk storage of data {#disk-storage-of-data}
Keeping data physically sorted by primary key makes it possible to extract data based on specific values or value ranges with low latency in less than a few dozen milliseconds. Some column-oriented DBMSs, such as SAP HANA and Google PowerDrill, can only work in RAM. This approach requires allocation of a larger hardware budget than necessary for real-time analysis.
ClickHouse is designed to work on regular hard drives, which means the cost per GB of data storage is low, but SSD and additional RAM are also fully used if available.
Parallel processing on multiple cores {#parallel-processing-on-multiple-cores}
Large queries are parallelized naturally, taking all the necessary resources available on the current server.
Distributed processing on multiple servers {#distributed-processing-on-multiple-servers}
Almost none of the columnar DBMSs mentioned above have support for distributed query processing. | {"source_file": "distinctive-features.md"} | [
0.013843139633536339, -0.0030099889263510704, ... |
25df7ff7-a677-4474-b983-6175184e7bde | Distributed processing on multiple servers {#distributed-processing-on-multiple-servers}
Almost none of the columnar DBMSs mentioned above have support for distributed query processing.
In ClickHouse, data can reside on different shards. Each shard can be a group of replicas used for fault tolerance. All shards are used to run a query in parallel, transparently for the user.
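A minimal sketch of this transparency (cluster, database, and table names are hypothetical): a Distributed table fans a query out to every shard and merges the results.

```sql
-- A Distributed table is a view over the local tables on each shard;
-- querying it runs the work on all shards in parallel.
CREATE TABLE hits_all AS hits_local
ENGINE = Distributed(my_cluster, default, hits_local, rand());

SELECT count() FROM hits_all;
```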
SQL support {#sql-support}
ClickHouse supports a declarative query language based on SQL that is mostly compatible with the ANSI SQL standard.
Supported queries include GROUP BY, ORDER BY, subqueries in FROM, the JOIN clause, the IN operator, window functions, and scalar subqueries.
Correlated (dependent) subqueries are not supported at the time of writing but might become available in the future.
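For instance, aggregation and a window function can be combined in a single query (the schema here is hypothetical):

```sql
-- Top pages by hit count, ranked with a window function over the
-- aggregated result.
SELECT
    url,
    count() AS hits,
    rank() OVER (ORDER BY count() DESC) AS hit_rank
FROM page_views
GROUP BY url
ORDER BY hit_rank
LIMIT 10;
```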
Vector computation engine {#vector-engine}
Data is not only stored by columns but is processed by vectors (parts of columns), which allows achieving high CPU efficiency.
Real-time data inserts {#real-time-data-updates}
ClickHouse supports tables with a primary key. To quickly perform queries on the range of the primary key, the data is sorted incrementally using the merge tree. Due to this, data can continually be added to the table. No locks are taken when new data is ingested.
Primary indexes {#primary-index}
Having data physically sorted by primary key makes it possible to extract data based on specific values or value ranges with low latency in less than a few dozen milliseconds.
## Secondary indexes {#secondary-indexes}

Unlike in other database management systems, secondary indexes in ClickHouse do not point to specific rows or row ranges. Instead, they let the database know in advance that all rows in some data parts would not match the query filtering conditions, so those parts are not read at all. For this reason they are called *data skipping indexes*.
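A sketch of adding one such index (the table and column names are hypothetical):

```sql
-- A minmax skipping index stores the min/max of `duration_ms` per block of
-- granules; blocks whose range cannot match the filter are skipped entirely.
ALTER TABLE page_views
    ADD INDEX duration_idx duration_ms TYPE minmax GRANULARITY 4;

-- Build the index for data that already exists in the table.
ALTER TABLE page_views MATERIALIZE INDEX duration_idx;
```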
## Suitable for online queries {#suitable-for-online-queries}
Most OLAP database management systems do not aim for online queries with sub-second latencies. In alternative systems, report building time of tens of seconds or even minutes is often considered acceptable. Sometimes it takes even more time, which forces systems to prepare reports offline (in advance or by responding with "come back later").
In ClickHouse, "low latency" means that queries can be processed without delay, without trying to prepare an answer in advance, right at the moment the user interface page is loading. In other words, *online*.
## Support for approximated calculations {#support-for-approximated-calculations}

ClickHouse provides various ways to trade accuracy for performance:

- Aggregate functions for approximated calculation of the number of distinct values, medians, and quantiles.
- Running a query based on a part (`SAMPLE`) of the data and getting an approximated result. In this case, proportionally less data is retrieved from disk.
- Running an aggregation for a limited number of random keys, instead of for all keys. Under certain conditions for key distribution in the data, this provides a reasonably accurate result while using fewer resources.
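A sketch of the first two approaches (the table name is hypothetical; `uniq` and `quantile` are ClickHouse's approximate aggregate functions):

```sql
-- Approximate distinct count and median instead of exact ones.
SELECT
    uniq(user_id)              AS approx_users,
    quantile(0.5)(duration_ms) AS approx_median
FROM page_views;

-- Run the query on a 10% sample of the data. This assumes the table was
-- created with a SAMPLE BY clause.
SELECT count() * 10 AS estimated_total
FROM page_views SAMPLE 1 / 10;
```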
## Adaptive join algorithm {#adaptive-join-algorithm}

ClickHouse adaptively chooses how to `JOIN` multiple tables, preferring the hash join algorithm and falling back to the merge join algorithm if there is more than one large table.
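The behavior can also be steered explicitly through the `join_algorithm` setting (a sketch with hypothetical table names; my understanding is that `'auto'` starts with a hash join and falls back when memory limits are exceeded):

```sql
SET join_algorithm = 'auto';

SELECT p.url, u.name
FROM page_views AS p
JOIN users AS u ON p.user_id = u.user_id;
```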
## Data replication and data integrity support {#data-replication-and-data-integrity-support}
ClickHouse uses asynchronous multi-master replication. After being written to any available replica, all the remaining replicas retrieve their copy in the background. The system maintains identical data on different replicas. Recovery after most failures is performed automatically, or semi-automatically in complex cases.
For more information, see the section Data replication.
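A sketch of a replicated table (the coordination path and the `{shard}`/`{replica}` macros are illustrative placeholders):

```sql
-- Each replica registers itself under the same coordination path; data
-- written to any replica is fetched asynchronously by the others.
CREATE TABLE page_views_replicated
(
    event_time DateTime,
    user_id    UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/page_views', '{replica}')
ORDER BY event_time;
```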
## Role-Based Access Control {#role-based-access-control}
ClickHouse implements user account management using SQL queries and allows for role-based access control configuration similar to what can be found in the ANSI SQL standard and popular relational database management systems.
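A brief sketch of this SQL-driven account management (user, role, and table names are hypothetical):

```sql
-- Create a role, grant it read access, and assign it to a user.
CREATE ROLE analyst;
GRANT SELECT ON default.page_views TO analyst;

CREATE USER alice IDENTIFIED WITH sha256_password BY 'change-me';
GRANT analyst TO alice;
SET DEFAULT ROLE analyst TO alice;
```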
## Features that can be considered disadvantages {#clickhouse-features-that-can-be-considered-disadvantages}

- No full-fledged transactions.
- Lack of ability to modify or delete already inserted data with a high rate and low latency. Batch deletes and updates are available to clean up or modify data, for example, to comply with GDPR.
- The sparse index makes ClickHouse less efficient for point queries that retrieve single rows by their keys.
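The batch deletes and updates mentioned above are issued as mutations, which rewrite the affected data parts asynchronously (a sketch with hypothetical names):

```sql
-- Mutations are heavyweight, asynchronous background operations: suitable
-- for occasional clean-up such as GDPR erasure, not for OLTP-style updates.
ALTER TABLE page_views DELETE WHERE user_id = 42;
ALTER TABLE page_views UPDATE url = '' WHERE user_id = 42;
```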
slug: /about-us/support
sidebar_label: 'Support'
title: 'ClickHouse Cloud support services'
sidebar_position: 30
description: 'Information on ClickHouse Cloud support services'
doc_type: 'reference'
keywords: ['support', 'help', 'customer service', 'technical support', 'service level agreement']
# ClickHouse Cloud support services
ClickHouse provides support services for our ClickHouse Cloud users and customers. Our objective is a support services team that represents the ClickHouse product: unparalleled performance, ease of use, and exceptionally fast, high-quality results. For details, visit our ClickHouse Support Program page.
Log in to the Cloud console and select **Help -> Support** from the menu options to open a new support case and view the status of your submitted cases.
You can also subscribe to our status page to get notified quickly about any incidents affecting our platform.
:::note
Please note that only subscription customers have a service level agreement on support incidents. If you are not currently a ClickHouse Cloud user, we will try to answer your question, but we encourage you to use one of our community resources instead:

- ClickHouse community Slack channel
- Other community options
::: | {"source_file": "support.md"} | [
slug: /about-us/cloud
sidebar_label: 'Cloud service'
sidebar_position: 10
description: 'ClickHouse Cloud'
title: 'ClickHouse Cloud'
keywords: ['ClickHouse Cloud', 'cloud database', 'managed ClickHouse', 'serverless database', 'cloud OLAP']
doc_type: 'reference'
# ClickHouse Cloud
ClickHouse Cloud is the cloud offering created by the original creators of the popular open-source OLAP database ClickHouse.
You can experience ClickHouse Cloud by starting a free trial.
## ClickHouse Cloud benefits {#clickhouse-cloud-benefits}
Some of the benefits of using ClickHouse Cloud are described below:
- **Fast time to value**: Start building instantly without having to size and scale your cluster.
- **Seamless scaling**: Automatic scaling adjusts to variable workloads so you don't have to over-provision for peak usage.
- **Serverless operations**: Sit back while we take care of sizing, scaling, security, reliability, and upgrades.
- **Transparent pricing**: Pay only for what you use, with resource reservations and scaling controls.
- **Total cost of ownership**: Best price / performance ratio and low administrative overhead.
- **Broad ecosystem**: Bring your favorite data connectors, visualization tools, SQL and language clients with you.
## What version of ClickHouse does ClickHouse Cloud use? {#what-version-of-clickhouse-does-clickhouse-cloud-use}
ClickHouse Cloud continuously upgrades your service to a newer version. After publishing a core database version in open source, we do additional validation in our cloud staging environment, which typically takes 6-8 weeks before rolling out to production. The rollout is phased by cloud service provider, service type, and region.

We offer a "Fast" release channel for subscribing to updates ahead of the regular release schedule. For more details, see "Fast Release Channel".

If you rely on functionality from an earlier version, you can, in some cases, revert to the previous behavior using your service's compatibility setting.
slug: /about
title: 'About ClickHouse'
description: 'Landing page for About ClickHouse'
doc_type: 'landing-page'
keywords: ['about', 'overview', 'introduction']
# About ClickHouse
In this section of the docs you'll find information about ClickHouse. Refer to
the table of contents below for a list of pages in this section of the docs.
| Page | Description |
|------|-------------|
| Adopters | A list of companies using ClickHouse and their success stories, assembled from public sources. |
| Support | An introduction to ClickHouse Cloud support services and their mission. |
| Beta features and experimental features | Learn how ClickHouse uses the "Beta" and "Experimental" labels to distinguish officially supported features from early-stage, unsupported ones whose development speed varies with community contributions. |
| Cloud service | Discover ClickHouse Cloud, a fully managed service that allows users to spin up open-source ClickHouse databases and offers benefits like fast time to value, seamless scaling, and serverless operations. |
| ClickHouse history | Learn more about the history of ClickHouse. |
slug: /about-us/adopters
sidebar_label: 'Adopters'
title: 'ClickHouse Adopters'
sidebar_position: 60
description: 'A list of companies using ClickHouse and their success stories'
keywords: ['ClickHouse adopters', 'success stories', 'case studies', 'company testimonials', 'ClickHouse users']
doc_type: 'reference'
The following list of companies using ClickHouse and their success stories is assembled from public sources, and thus might differ from current reality. We'd appreciate it if you share the story of adopting ClickHouse in your company and add it to the list, but please make sure you won't have any NDA issues by doing so. Providing updates with publications from other companies is also useful.
| Company | Industry | Use case | Reference | Cluster Size | (Un)Compressed data size |
|---------|----------|----------|-----------|--------------|--------------------------|
| [1Flow](https://1flow.ai/) | Feedback automation | — | ClickHouse Cloud user | — | — |
| [2gis](https://2gis.ru) | Maps | Monitoring | [Talk in Russian, July 2019](https://youtu.be/58sPkXfq6nw) | — | — |
| [3xpl](https://3xpl.com/) | Software & Technology | Blockchain Explorer | [Reddit, February 2023](https://www.reddit.com/r/ethereum/comments/1159pdg/new_ethereum_explorer_by_3xpl_no_ads_super_fast/) | — | — |
| [5CNetwork](https://www.5cnetwork.com/) | Software | Analytics | [Community Slack](https://clickhouse.com/slack) | — | — |
| [ABTasty](https://www.abtasty.com/) | Web Analytics | Analytics | [Paris Meetup, March 2024](https://www.meetup.com/clickhouse-france-user-group/events/298997115/) | — | — |
| [Arkhn](https://www.arkhn.com) | Healthcare | Data Warehouse | [Paris Meetup, March 2024](https://www.meetup.com/clickhouse-france-user-group/events/298997115/) | — | — |
| [ASO.dev](https://aso.dev/) | Software & Technology | App store optimisation | [Twitter, April 2023](https://twitter.com/gorniv/status/1642847791226445828) | — | — |
| [AdGreetz](https://www.adgreetz.com/) | Software & Technology | AdTech & MarTech | [Blog, April 2023](https://clickhouse.com/blog/adgreetz-processes-millions-of-daily-ad-impressions) | — | — |
| [AdGuard](https://adguard.com/) | Anti-Ads | AdGuard DNS | [Official Website, August 2022](https://adguard.com/en/blog/adguard-dns-2-0-goes-open-source.html) | — | 1,000,000 DNS requests per second from over 50 million users |
| [AdScribe](http://www.adscribe.tv/) | Ads | TV Analytics | [A quote from CTO](https://altinity.com/24x7-support/) | — | — |
| [Adapty](https://adapty.io/) | Subscription Analytics | Main product | [Twitter, November 2021](https://twitter.com/iwitaly/status/1462698148061659139) | — | — |
| [Adevinta](https://www.adevinta.com/) | Software & Technology | Online Classifieds | [Blog, April 2023](https://clickhouse.com/blog/serving-real-time-analytics-across-marketplaces-at-adevinta) | — | — |
| [Admiral](https://getadmiral.com/) | MarTech | Engagement Management | [Webinar Slides, June 2020](https://altinity.com/presentations/2020/06/16/big-data-in-real-time-how-clickhouse-powers-admirals-visitor-relationships-for-publishers) | — | — |
| [Admixer](https://admixer.com/) | Media & Entertainment | Ad Analytics | [Blog Post](https://clickhouse.com/blog/admixer-aggregates-over-1-billion-unique-users-a-day-using-clickhouse) | — | — |
| [Aggregations.io](https://aggregations.io/) | Real-time analytics | Main product | [Twitter](https://twitter.com/jsneedles/status/1734606200199889282) | — | — |
| [Ahrefs](https://ahrefs.com/) | SEO | Analytics | [Job listing](https://ahrefs.com/jobs/data-scientist-search) | Main cluster is 100k+ CPU cores, 800TB RAM. | 110PB NVME storage, uncompressed data size on main cluster is 1EB. |
| [Airfold](https://www.airfold.co/) | API platform | Main Product | [Documentation](https://docs.airfold.co/workspace/pipes) | — | — |
| [Aiven](https://aiven.io/) | Cloud data platform | Managed Service | [Blog post](https://aiven.io/blog/introduction-to-clickhouse) | — | — |
| [Akamai](https://www.akamai.com/) | Software & Technology | CDN | [LinkedIn](https://www.linkedin.com/in/david-piatek-bb27368/) | — | — |
| [Akvorado](https://demo.akvorado.net/) | Network Monitoring | Main Product | [Documentation](https://demo.akvorado.net/docs/intro) | — | — |
| [Alauda](https://alauda.io) | Software & Technology | Analytics, Logs | [Alauda, November 2024](https://www.alauda.io) | — | — |
| [AlgoNode](https://algonode.io/) | Software & Technology | Algorand Hosting | [Twitter, April 2023](https://twitter.com/AlgoNode_io/status/1650594948998213632) | — | — |
| [Alibaba Cloud](https://cn.aliyun.com/) | Cloud | E-MapReduce | [Official Website](https://help.aliyun.com/document_detail/212195.html) | — | — |
| [Alibaba Cloud](https://cn.aliyun.com/) | Cloud | Managed Service | [Official Website](https://help.aliyun.com/product/144466.html) | — | — |
| [Aloha Browser](https://alohabrowser.com/) | Mobile App | Browser backend | [Slides in Russian, May 2019](https://presentations.clickhouse.com/meetup22/aloha.pdf) | — | — |
| [Altinity](https://altinity.com/) | Cloud, SaaS | Main product | [Official Website](https://altinity.com/) | — | — |
| [Amadeus](https://amadeus.com/) | Travel | Analytics | [Press Release, April 2018](https://www.altinity.com/blog/2018/4/5/amadeus-technologies-launches-investment-and-insights-tool-based-on-machine-learning-and-strategy-algorithms) | — | — |
| [AMP](https://useamp.com/) | Software & Technology | e-Commerce Metrics | [Twitter Post, May 2024](https://x.com/pc_barnes/status/1793846059724357832) [Meetup Slides](https://github.com/ClickHouse/clickhouse-presentations/blob/master/2024-meetup-melbourne-2/Talk%20Track%201%20-%20AMP's%20data%20journey%20from%20OSS%20to%20ClickHouse%20Cloud%20-%20Chris%20Lawrence%20.pdf) | — | — |
| [Android Hub](https://bestforandroid.com/) | Blogging, Analytics, Advertising data | — | [Official Website](https://bestforandroid.com/) | — | — |
| [AnswerAI](https://www.answerai.co.uk/) | Software & Technology | AI Customer Support | [Twitter, May 2024](https://twitter.com/TomAnswerAi/status/1791062219678998880) | — | — |
| [Anton](https://anton.tools/) | Software & Technology | Blockchain Indexer | [GitHub](https://github.com/tonindexer/anton) | — | — |
| [Antrea](https://antrea.io/) | Software & Technology | Kubernetes Network Security | [Documentation](https://antrea.io/docs/main/docs/network-flow-visibility/) | — | — |
| [ApiRoad](https://apiroad.net/) | API marketplace | Analytics | [Blog post, November 2018, March 2020](https://pixeljets.com/blog/clickhouse-vs-elasticsearch/) | — | — |
| [Apitally](https://apitally.io/) | Software & Technology | API Monitoring | [Twitter, March 2024](https://twitter.com/simongurcke/status/1766005582971170926) | — | — |
| [Appsflyer](https://www.appsflyer.com) | Mobile analytics | Main product | [Talk in Russian, July 2019](https://www.youtube.com/watch?v=M3wbRlcpBbY) | — | — |
| [Aptabase](https://aptabase.com/) | Analytics | Privacy-first / open-source Firebase Analytics alternative | [GitHub Repository](https://github.com/aptabase/aptabase/tree/main/etc/clickhouse) | — | — |
| [ArenaData](https://arenadata.tech/) | Data Platform | Main product | [Slides in Russian, December 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup38/indexes.pdf) | — | — |
| [Argedor](https://www.argedor.com/en/clickhouse/) | ClickHouse support | — | [Official website](https://www.argedor.com/en/clickhouse/) | — | — |
| [Atani](https://atani.com/en/) | Software & Technology | Crypto Platform | [CTO LinkedIn](https://www.linkedin.com/in/fbadiola/) | — | — |
| [Attentive](https://www.attentive.com/) | Email Marketing | Main product | [Blog Post](https://clickhouse.com/blog/confoundingly-fast-inside-attentives-migration-to-clickhouse) | — | — |
| [Astronomer](https://www.astronomer.io/) | Software & Technology | Observability | [Slide Deck](https://github.com/ClickHouse/clickhouse-presentations/blob/master/2024-meetup-san-francisco/2024.12.09%20Clickhouse%20_%20Powering%20Astro%20Observe%20with%20Clickhouse.pdf) | — | — |
| [Autoblocks](https://autoblocks.ai) | Software & Technology | LLM Monitoring & Deployment | [Twitter, August 2023](https://twitter.com/nolte_adam/status/1690722237953794048) | — | — |
| [Aviso](https://www.aviso.com/) | AI Platform | Reporting | ClickHouse Cloud user | — | — |
| [Avito](https://avito.ru/) | Classifieds | Monitoring | [Meetup, April 2020](https://www.youtube.com/watch?v=n1tm4j4W8ZQ) | — | — |
| [Axis Communications](https://www.axis.com/en-ca) | Video surveillance | Main product | [Blog post](https://engineeringat.axis.com/schema-changes-clickhouse/) | — | — |
| [Azura](https://azura.xyz/) | Crypto | Analytics | [Meetup Video](https://youtu.be/S3uroekuYuQ) | — | — |
| [AzurePrice](https://azureprice.net/) | Analytics | Main Product | [Blog, November 2022](https://blog.devgenius.io/how-i-migrate-to-clickhouse-and-speedup-my-backend-7x-and-decrease-cost-by-6x-part-1-2553251a9059) | — | — |
| [AzurGames](https://azurgames.com/) | Gaming | Analytics | [AWS Blog, Aug 2024](https://aws.amazon.com/blogs/gametech/azur-games-migrates-all-game-analytics-data-to-clickhouse-cloud-on-aws/) | — | — |
| [B2Metric](https://b2metric.com/) | Marketing | Analytics | [ProductHunt, July 2023](https://www.producthunt.com/posts/b2metric-decision-intelligence?bc=1) | — | — |